ICLR
Title: Unrestricted Adversarial Examples via Semantic Manipulation

Abstract: Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable to adversarial examples, which are carefully crafted samples with small-magnitude perturbations. Such adversarial perturbations are usually restricted by bounding their Lp norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. In this paper, we instead introduce “unrestricted” perturbations that manipulate semantically meaningful image-based visual descriptors – color and texture – in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing, and adversarially trained models. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large-magnitude perturbations when compared to other attacks.

1 INTRODUCTION

Machine learning (ML) models, especially deep neural networks (DNNs), have achieved great success in various tasks, including image recognition (Krizhevsky et al., 2012; He et al., 2016), speech processing (Hinton et al., 2012) and robotics training (Levine et al., 2016). However, recent literature has shown that these widely deployed ML models are vulnerable to adversarial examples – carefully crafted perturbations aiming to mislead learning models (Carlini & Wagner, 2017; Kurakin et al., 2016; Xiao et al., 2018b). The fast growth of DNN-based solutions demands in-depth studies of adversarial examples to help better understand potential vulnerabilities of ML models and thereby improve their robustness. To date, a variety of approaches have been proposed to generate adversarial examples (Goodfellow et al., 2014b; Carlini & Wagner, 2017; Kurakin et al., 2016; Xiao et al., 2018a); many of these attacks search for perturbations within a bounded Lp norm in order to preserve photorealism.
However, it is known that the Lp norm distance is not an ideal perceptual similarity metric (Johnson et al., 2016; Isola et al., 2017). In addition, recent work shows that defenses trained on Lp-bounded perturbations are not robust at all against new types of unseen attacks (Kang et al., 2019). Therefore, exploring diverse adversarial examples, especially those with an “unrestricted” magnitude of perturbation, has attracted a lot of attention in both academia and industry (Brown et al., 2018). Recent work based on generative adversarial networks (GANs) (Goodfellow et al., 2014a) has introduced unrestricted attacks (Song et al., 2018). However, these attacks are limited to datasets like MNIST, CIFAR and CelebA, and are usually unable to scale up to bigger and more complex datasets such as ImageNet. Xiao et al. (2018b) directly manipulated the spatial pixel flow of an image to produce adversarial examples without Lp-bounded constraints on the perturbation. However, the attack does not explicitly control the visual semantic representation. More recently, Hosseini & Poovendran (2018) manipulated the hue and saturation of an image to create adversarial perturbations. However, these examples are easily distinguishable by humans and are also not scalable to complex datasets.

In this work, we propose unrestricted attack strategies that explicitly manipulate semantic visual representations to generate natural-looking adversarial examples that are “far” from the original image in terms of the Lp norm distance. In particular, we manipulate color (cAdv) and texture (tAdv) to create realistic adversarial examples (see Fig. 1). cAdv adaptively chooses locations in an image to change their colors, producing adversarial perturbations that are usually fairly substantial, while tAdv utilizes the texture from other images and adjusts the instance's texture field using style transfer. These semantic transformation-based adversarial perturbations shed light on what information is important for DNNs to make predictions. For instance, in one of our case studies, when the road is recolored from gray to blue, the image gets misclassified as tench (a fish) although a car remains evidently visible (Fig. 2b). This indicates that deep learning models can easily be fooled by certain large-scale patterns. In addition to image classifiers, the proposed attack methods can be generalized to other machine learning tasks such as image captioning (Karpathy & Fei-Fei, 2015). Our attacks can either change the entire caption to the target (Chen et al., 2017; Xu et al., 2019) or take on more challenging tasks like changing one or two specific target words in the caption. For example, in Fig. 1, “stop sign” in the original image caption is changed to “cat sitting” and “umbrella is” for cAdv and tAdv respectively. To ensure our “unrestricted” semantically manipulated images are natural, we conducted extensive user studies on Amazon Mechanical Turk. We also tested our proposed attacks against several state-of-the-art defenses. Rather than just showing that the attacks break these defenses (better defenses will keep emerging), we aim to show that cAdv and tAdv produce new types of adversarial examples. Experiments also show that our proposed attacks are more transferable given their large and structured perturbations (Papernot et al., 2016).
Our semantic adversarial attacks provide further insights into the vulnerabilities of ML models and therefore encourage new solutions to improve their robustness. In summary, our contributions are: 1) we propose two novel approaches to generate “unrestricted” adversarial examples via semantic transformation; 2) we conduct extensive experiments to attack both image classification and image captioning models on large-scale datasets (ImageNet and MSCOCO); 3) we show that our attacks are equipped with unique properties such as smooth cAdv perturbations and structured tAdv perturbations; 4) we perform comprehensive user studies to show that, when compared to other attacks, our generated adversarial examples appear more natural to humans despite their large perturbations; 5) we test different adversarial examples against several state-of-the-art defenses and show that the proposed attacks are more transferable and harder to defend against.

2 COLORIZATION ATTACK (cAdv)

Background. Image colorization is the task of giving natural colors to a grayscale image. This is an ill-posed problem, as there are multiple viable natural colorizations for a single grayscale image. Deshpande et al. (2017) showed that diverse image colorization can be achieved by using an architecture that combines a VAE (Kingma & Welling, 2013) and a Mixture Density Network, while Zhang et al. (2017) demonstrated improved and diverse image colorization by using input hints from users to guide the colorization process. Our goal is to adversarially color an image by leveraging a pretrained colorization model. We hypothesize that it is possible to find a natural colorization that is adversarial for a target model (e.g., a classifier or captioner) by searching in the color space. Since a colorization network learns to produce natural colors that conform to boundaries and respect short-range color consistency, we can use it to introduce smooth and consistent adversarial noise of large magnitude that still looks natural to humans. This differs from common adversarial attacks, which tend to introduce short-scale, high-frequency artifacts that are minimized to be invisible to human observers. We leverage the colorization model of Zhang et al. (2016; 2017) for our attack. In their work, they produce natural colorizations on ImageNet with input hints from the user. The inputs to their network consist of the L channel of the image in CIELAB color space $X_L \in \mathbb{R}^{H \times W \times 1}$, the sparse colored input hints $X_{ab} \in \mathbb{R}^{H \times W \times 2}$, and the binary mask $M \in \mathbb{B}^{H \times W \times 1}$ indicating the location of the hints.

cAdv Objectives. There are a few ways to leverage the colorization model to achieve adversarial objectives. We experimented with two main methods and achieved varied results.

Network weights. The straightforward approach to producing adversarial colors is to modify the Zhang et al. (2017) colorization network $\mathcal{C}$ directly. To do so, we simply update $\mathcal{C}$ by minimizing the adversarial loss objective $J_{adv}$, which in our case is the cross-entropy loss; here $t$ represents the target class and $\mathcal{F}$ the victim network:

$\theta^* = \operatorname*{argmin}_{\theta} J_{adv}(\mathcal{F}(\mathcal{C}(X_L, X_{ab}, M; \theta)), t)$  (1)

Hints and mask. We can also vary the input hints $X_{ab}$ and mask $M$ to produce adversarial colorizations. Hints provide the network with ground-truth color patches that guide the colorization, while the mask provides their spatial locations. By jointly varying both hints and mask, we are able to manipulate the output colorization.
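As a concrete illustration of this joint optimization, below is a minimal sketch in PyTorch. Here `colorizer`, `victim`, and `lab_to_rgb` are assumed placeholder interfaces for the pretrained colorization network, the target classifier, and a CIELAB-to-RGB conversion (none of these names come from the paper's released code), and the stopping rule follows the description in the next paragraph.

```python
import torch
import torch.nn.functional as F

def cadv_attack(colorizer, victim, lab_to_rgb, X_L, hints, mask, target,
                lr=1e-4, conf_delta=0.05, max_iters=500):
    """Sketch of the cAdv attack on hints and mask (Eq. 2).

    Assumed interfaces (placeholders, not from the released code):
      colorizer(X_L, hints, mask) -> predicted AB channels
      lab_to_rgb(lab_batch)       -> RGB batch for the classifier
      victim(rgb_batch)           -> class logits
    target: LongTensor of shape [1] holding the target class index.
    """
    # The binary mask is relaxed to continuous values so it can be optimized.
    hints = hints.clone().requires_grad_(True)
    mask = mask.clone().requires_grad_(True)
    opt = torch.optim.Adam([hints, mask], lr=lr)
    prev_conf = 0.0
    for _ in range(max_iters):
        ab = colorizer(X_L, hints, mask)
        rgb = lab_to_rgb(torch.cat([X_L, ab], dim=1))  # assumed helper
        logits = victim(rgb)
        loss = F.cross_entropy(logits, target)         # adversarial objective J_adv
        opt.zero_grad()
        loss.backward()
        opt.step()
        conf = F.softmax(logits, dim=1)[0, target.item()].item()
        # Stop once the target class is reached and confidence has stabilized.
        if logits.argmax(dim=1).item() == target.item() and abs(conf - prev_conf) < conf_delta:
            break
        prev_conf = conf
    return rgb.detach()
```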
Formally, we update the hints and mask as follows:

$M^*, X_{ab}^* = \operatorname*{argmin}_{M, X_{ab}} J_{adv}(\mathcal{F}(\mathcal{C}(X_L, X_{ab}, M; \theta)), t)$  (2)

cAdv Attack Methods. Attacking network weights allows the network to search the color space for adversarial colors with no constraints. This attack is the easiest to optimize, but the output colors are not realistic, as shown in Fig. 2. Our various strategies outlined below are ineffective here, as the model learns to generate the adversarial colors without taking color realism into account. However, the resulting colorizations often correlate with colors observed in the target class. This suggests that classifiers associate certain colors with certain classes, which we discuss further in our case study. Attacking input hints and mask jointly gives us natural results, as the pretrained network is not affected by our optimization. Attacking hints and mask separately also works but takes a long optimization time and gives slightly worse results. For our experiments, we use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of $10^{-4}$ in cAdv. We iteratively update hints and mask until our adversarial image reaches the target class and the confidence change between consecutive iterations does not exceed a threshold of 0.05.

Control over colorization. Current attack methods lack control over where the attack occurs, opting to attack all pixels indiscriminately. This lack of control is not important for most attacks, where the perturbation is small, but it is concerning in cAdv, where unstructured large changes can be jarring. To produce realistic colorizations, we need to avoid making large color changes at locations where colors are unambiguous (e.g., roads in general are gray) and focus on those where colors are ambiguous (e.g., an umbrella can have many different colors). To do so, we need to segment an image and determine which segments should be attacked or preserved. To segment the image into meaningful areas, we cluster the image's ground-truth AB space using K-Means. We first use a Gaussian filter of σ = 3 to smooth the AB channels and then cluster them into 8 clusters. Then, we have to determine which clusters' colors should be preserved. Fortunately, the Zhang et al. (2017) network outputs a per-pixel color distribution for a given image, which we use to calculate the entropy of each pixel. The entropy represents how confident the network is in assigning a color at that location, and the average entropy of each cluster represents how ambiguous its color is. We want to avoid making large changes to clusters with low entropy while allowing our attack to change clusters with high entropy. One way to enforce this behavior is through hints, which are sampled from the ground truth at locations belonging to clusters of low entropy. We sample hints from the k clusters with the lowest entropy, which we refer to as cAdv_k (e.g., cAdv_2 samples hints from the 2 lowest-entropy clusters).

Number of input hints. Network hints constrain our output to have colors similar to the ground truth, avoiding the possibility of unnatural colorization at the cost of color diversity. This trade-off is controlled by the number of hints given to the network as initialization (Fig. 4). Generally, providing more hints gives us colors similar to those observed in the original image. However, having too many hints is also problematic: too many hints make the optimization between drawing adversarial colors and matching local color hints difficult.
With the search space for adversarial colors constrained by more hints, we may instead generate unrealistic examples.

Number of clusters. The trade-off between color diversity and color realism is also controlled by the number of clusters we sample hints from, as shown in Fig. 3. Sampling from multiple clusters gives us realistic colors closer to the ground-truth image at the expense of color diversity. Empirically, we find that in terms of color diversity, realism, and robustness of attacks, using k = 4 and 50 hints gives us the best adversarial examples. For the rest of this paper, we fix 50 hints for all cAdv_k methods.

3 TEXTURE ATTACK (tAdv)

Background. Texture transfer extracts texture from one image and adds it to another. Transferring texture from a source image to a target image has been widely studied in computer vision (Efros & Freeman, 2001; Gatys et al., 2015). The convolutional neural network (CNN) based texture transfer of Gatys et al. (2015) led to a series of new ideas in the domain of artistic style transfer (Gatys et al., 2016; Huang & Belongie, 2017; Li et al., 2017; Yeh et al., 2019). More recently, Geirhos et al. (2018) showed that DNNs trained on ImageNet are biased towards texture when making predictions. Our goal is to generate adversarial examples by infusing texture from another image without explicit constraints on the Lp norm of the perturbation. To generate our tAdv examples, we use a pretrained VGG19 network (Simonyan & Zisserman, 2014) to extract textural features. We directly optimize our victim image ($I_v$) by adding texture from a target image ($I_t$). A natural strategy for transferring texture is to minimize within-layer feature correlation statistics (Gram matrices) between two images (Gatys et al., 2015; 2016). Following Yeh et al. (2019), we find that optimizing cross-layer Gram matrices instead of within-layer Gram matrices helps produce more natural-looking adversarial examples. The difference is that within-layer statistics are computed between features of the same layer, whereas cross-layer statistics are computed between two adjacent layers.

tAdv Objectives. tAdv directly attacks the image to create adversarial examples without modifying network parameters. Moreover, there is no additional content loss as used in style transfer methods (Gatys et al., 2016; Yeh et al., 2019). Our overall objective for the texture attack contains a texture transfer loss ($\mathcal{L}^A_t$) and a cross-entropy loss ($J_{adv}$):

$\mathcal{L}^A_{tAdv} = \alpha \mathcal{L}^A_t(I_v, I_t) + \beta J_{adv}(\mathcal{F}(I_v), t)$  (3)

Unlike style transfer methods, we do not want the adversarial examples to be artistically pleasing. Our goal is to infuse a reasonable texture from a target-class image into the victim image and fool a classifier or captioning network. To ensure a reasonable texture is added without perturbing the victim image too much, we introduce an additional constraint on the variation in the Gram matrices of the victim image. This constraint helps us control the image transformation procedure and prevents it from producing artistic images.
Let m and n denote two layers of a pretrained VGG-19 with decreasing spatial resolution, and let C be the number of filter maps in layer n. Our texture transfer loss is then given by

$\mathcal{L}^A_t(I_v, I_t) = \sum_{(m,n)\in\mathcal{L}} \frac{1}{C^2} \sum_{ij} \frac{\left\| G^{m,n}_{ij}(I_v) - G^{m,n}_{ij}(I_t) \right\|^2}{\operatorname{std}\left( G^{m,n}_{ij}(I_v) \right)}$  (4)

Let f denote feature maps and $Uf^n$ an upsampled $f^n$ that matches the spatial resolution of layer m. The cross-layer Gram matrix G for an image I is given by

$G^{m,n}_{ij}(I) = \sum_p \left[ f^m_{i,p}(I) \right] \left[ Uf^n_{j,p}(I) \right]^T$  (5)

Texture Transfer. To create tAdv adversarial examples, we need images to extract texture from, which we call the “texture source” ($T_s$). A naive strategy is to randomly select an image from the data bank as $T_s$. Though this strategy is successful, the resulting perturbations are clearly perceptible. Alternatively, we can randomly select $T_s$ from the adversarial target class. This strategy produces less perceptible perturbations than the random-$T_s$ method, as we extract a texture from the known target class. A better strategy for selecting $T_s$ is to find a target-class image that lies closest to the victim image in feature space using nearest neighbors. This strategy is sensible, as it ensures our victim image has feature statistics similar to our target image. Consequently, minimizing the Gram matrix differences is easier, and our attack generates more natural-looking images (see Fig. 5). For texture transfer, we extract the cross-layer statistics in Eq. 4 from layers R11, R21, R31, R41, and R51 of a pretrained VGG19. We optimize our objective (Eq. 3) using an L-BFGS (Liu & Nocedal, 1989) optimizer. tAdv attacks are sensitive, and if not controlled well, images get transformed into artistic ones. Since we do not have any constraints on the perturbation norm, it is necessary to decide when to stop the texture transfer procedure. For a successful attack (images look realistic), we limit our L-BFGS to a fixed number of small steps and perform two sets of experiments: one with a single iteration (round) of L-BFGS for 14 steps, and another with three iterations of 14 steps. In the three-iteration setup, after every iteration we look at the confidence of our target class and stop if it is greater than 0.9.

Texture and Cross-Entropy Weights. Empirically, we found that setting α in the range [150, 1000] and β in the range $[10^{-4}, 10^{-3}]$ is successful and also produces less perceptible tAdv examples. The additional cross-entropy based adversarial objective $J_{adv}$ helps our optimization. We ensure that the bulk of the gradient flow comes from the texture loss and that it is sufficiently larger than the adversarial cross-entropy objective. The adversarial objective also helps transform the victim image into an adversarial one without stylizing the image. All our tabulated results are shown for one iteration, α = 250 and β = $10^{-3}$, unless otherwise stated. We use the notation $tAdv^{iter}_{\alpha}$ for the rest of the paper to denote the texture method being used.

Control over Texture. The amount of texture added to the victim image is controlled by the texture weight coefficient (α). Increasing the texture weight improves the attack success rate at the cost of more noticeable perturbations. Compared to within-layer statistics, the cross-layer statistics we use are not only better at extracting texture but also make the texture weight easier to control.
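To make Eqs. 4-5 concrete, the sketch below computes cross-layer Gram matrices between adjacent VGG feature maps and the normalized texture loss. The feature lists stand in for the R11-R51 activations, and normalizing each squared difference by the scalar standard deviation of the victim's Gram matrix is our reading of Eq. 4, not a detail confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def cross_layer_gram(f_m, f_n):
    """Cross-layer Gram matrix (Eq. 5) between two adjacent VGG layers.
    f_m: [C_m, H_m, W_m] features; f_n: [C_n, H_n, W_n] with H_n <= H_m."""
    # Upsample the deeper layer to match the spatial size of the shallower one.
    up = F.interpolate(f_n.unsqueeze(0), size=f_m.shape[-2:],
                       mode='bilinear', align_corners=False).squeeze(0)
    fm = f_m.flatten(1)          # [C_m, H*W]
    un = up.flatten(1)           # [C_n, H*W]
    return fm @ un.t()           # [C_m, C_n]

def texture_loss(feats_v, feats_t):
    """Normalized cross-layer texture loss (Eq. 4) over adjacent layer pairs.
    feats_v / feats_t: lists of VGG-19 feature maps (e.g., R11..R51)
    for the victim and texture-source images."""
    loss = 0.0
    for (vm, vn), (tm, tn) in zip(zip(feats_v, feats_v[1:]),
                                  zip(feats_t, feats_t[1:])):
        g_v = cross_layer_gram(vm, vn)
        g_t = cross_layer_gram(tm, tn)
        # Normalize by the victim statistics and by the matrix size
        # (numel approximates the 1/C^2 factor of Eq. 4).
        loss = loss + ((g_v - g_t) ** 2 / (g_v.std() + 1e-8)).sum() / g_v.numel()
    return loss
```

In the full attack, this term would be weighted by α, combined with the β-weighted cross-entropy of Eq. 3, and minimized over the victim image with L-BFGS.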
4 EXPERIMENTAL RESULTS

In this section, we evaluate the two proposed attack methods both quantitatively, via attack success rates under different settings, and qualitatively, through case studies. We conduct our experiments on ImageNet (Deng et al., 2009) by randomly selecting correctly predicted images from 10 sufficiently different classes for the classification attack. We use a pretrained ResNet50 classifier (He et al., 2016) for all our methods. DenseNet121 and VGG19 (Huang et al., 2017; Simonyan & Zisserman, 2014) are used for our transferability analysis.

4.1 cAdv ATTACK

cAdv achieves a high targeted attack success rate by adding realistic color perturbations. Our numbers in Table 1 and Table 2 also reveal that cAdv examples with larger color changes (and consequently more color diversity) are more transferable and more robust against adversarial defenses. However, our user study finds these big changes to be slightly less realistic (Table 2, Table 4).

Smooth cAdv perturbations. Fig. 8 in our Appendix shows interesting properties of the adversarial colors. We observe that cAdv perturbations are locally smooth and relatively low-frequency. This is different from most adversarial attacks, which generate high-frequency, noise-like perturbations. This phenomenon can be explained by the observation that colors are usually smooth within object boundaries. The pretrained colorization model will thus produce smooth, low-frequency adversarial colors that conform to object boundaries.

Importance of color in classification. From Fig. 2, we can compare how different target classes affect our colorization results when we relax our constraints on colors (cAdv on network weights, 0 hints). In many cases, the images contain strong colors that are related to the target class. In the case of golf-cart, we get a green tint over the entire image. This can push the target classifier to misclassify the image, as green grass is usually overabundant in benign golf-cart images. Fig. 2b shows our attack on an image of a car with target class tench (a type of fish). We observe that the gray road turns blue and that the colors are tinted. We hypothesize that the blue colors and the tint fool the classifier into thinking the image is a tench in the sea. The colorization model is originally trained to produce natural colorizations that conform to object boundaries. By adjusting its parameters, we are able to produce large and abnormal color changes that are impossible with our attack on hints and mask. These colors, however, give us some evidence that colors play a stronger role in classification than previously thought. We reserve the exploration of this observation for future work. While this effect (strong color correlation with the target class) is less pronounced for our attack on hints and mask, we observe isoluminant color blobs for all cAdv methods. Isoluminant colors are characterized by a change in color without a corresponding change in luminance. As most color changes in natural images occur along edges, it is likely that classifiers trained on ImageNet have never seen isoluminant colors. This suggests that cAdv might be exploiting isoluminant colors to fool classifiers.

4.2 tAdv ATTACK

tAdv successfully fools the classifiers with a very small weighted adversarial cross-entropy objective (β) when combined with the texture loss, while remaining realistic to humans.
As shown in Table 1, our attacks achieve highly successful white-box results on three different models with the nearest-neighbor texture transfer approach. We also show that our attacks transfer better to other models. In the Appendix, we show ablation results for tAdv attacks along with other strategies we used for generating tAdv adversarial examples.

Structured tAdv Perturbations. Since we extract features across different layers of VGG, the tAdv perturbations follow a textural pattern: they are more structured and organized when compared to other attacks. tAdv perturbations are large in Lp norm compared with existing attack methods, yet they are high-frequency and barely perceptible (see Fig. 1 and Fig. 8).

Importance of Texture in Classification. Textures are crucial descriptors for image classification, and ImageNet-trained models can be exploited by altering texture, as also shown in recent work by Geirhos et al. (2018). Our results further show that even a small or invisible change in the texture field can break current state-of-the-art classifiers.

4.3 DEFENSE AND TRANSFERABILITY ANALYSIS

We test all our attacks and other existing methods with images attacked on ResNet50. We evaluate them against three defenses – JPEG defense (Das et al., 2017), feature squeezing (Xu et al., 2017), and adversarial training. By leveraging JPEG compression and decompression, adversarial noise may be removed; we test our methods against JPEG compression at quality 75. Feature squeezing is a family of simple but surprisingly effective strategies, including reducing color bit depth and spatial smoothing. Adversarial training has been shown to be an effective but costly method of defending against adversarial attacks: mixing adversarial samples into the training data of a classifier improves its robustness without affecting overall accuracy. We obtained an adversarially pretrained ResNet152 model on the ImageNet dataset and hence tested our ResNet50-attacked images against this model.

Robustness. In general, our attacks are more robust to the considered defenses and more transferable for targeted attacks. For cAdv, there is a trade-off between more realistic colors (using more hints and sampling from more clusters) and attack robustness. Tables 1 and 2 show that as we progressively use more clusters, our transferability and defense numbers drop; a similar trend is observed as the number of hints changes. cAdv is robust to the JPEG defense and adversarial training because of its large and spatially smooth perturbations. For tAdv, increasing the texture weight (α) does not necessarily help against the defenses even though it increases the attack success rate, but increasing the texture flow with more iterations improves the attack's robustness against defenses.

5 HUMAN PERCEPTUAL STUDIES

To quantify how realistic tAdv and cAdv examples are, we conducted a user study on Amazon Mechanical Turk (AMT), following the procedure described in Zhang et al. (2016) and Xiao et al. (2018b). For each attack, we choose the same 200 adversarial images and their corresponding benign ones. During each trial, one random adversarial-benign pair appears for three seconds, and workers are given five minutes to identify the realistic one. Each attack has 600 unique pairs of images, and each pair is evaluated by at least 10 unique workers.
We limit bias in this process by allowing each unique user at most 5 rounds of trials, and we discard responses from users who complete the study in less than 30 seconds. In total, 598 unique workers completed at least one round of our user study. For each image, we calculate the user preference score as the number of times it is chosen divided by the number of times it is displayed; a score of 0.5 means users are unable to distinguish the adversarial image from the benign one. For cAdv and tAdv, user preference averages at 0.476 and 0.433 respectively, indicating that workers have a hard time distinguishing them. The user preferences for all attacks are summarized in Table 2, and their comparison with Lp norms is in Table 4 and Table 5.

6 ATTACKING CAPTIONING MODEL

Our methods are general and can easily be adapted to other learning tasks. As a proof of concept, we test our attacks on the image captioning task. Image captioning is the task of generating a sequence of words describing an image. A popular architecture for captioning is a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) based model (Karpathy & Fei-Fei, 2015; Wang et al., 2017). Recently, Aneja et al. (2018) proposed a convolutional captioning model for fast and accurate caption generation. This convolutional approach does not suffer from the commonly known problems of vanishing gradients and overly confident predictions in LSTM networks. Therefore, we choose to attack the current state-of-the-art convolutional captioning model. We randomly selected images from MSCOCO (Lin et al., 2014) for the image captioning attack. Attacking captioning models is harder than attacking classifiers when the goal is to change exactly one word in the benign image's caption, unlike pixel-based attacks (Chen et al., 2017; Xu et al., 2019). We show that our attacks are successful and have no visible artifacts even for this challenging task. In Fig. 6, we change the second word of the caption to “dog” while keeping the rest of the caption the same. This is a challenging targeted attack because, in many untargeted attacks, the resulting captions do not make sense. More examples are in our Appendix.

Adversarial Cross-Entropy Objective for Captioning. Let t be the target caption, w the word position in the caption, $\mathcal{F}$ the captioning model, $I_v$ the victim image, and $J_{adv}$ the cross-entropy loss:

$\mathcal{L}^A_{capt} = \sum_w J_{adv}\left( (\mathcal{F}(I_v))_w, t_w \right)$  (6)

For cAdv, we give all color hints and optimize them to obtain an adversarially colored image that produces the target caption. For tAdv, we add Eq. 6 to Eq. 4 to optimize the image. We select $T_s$ as the nearest neighbor of the victim image among ImageNet images of the adversarial target class. We stop our attack once we reach the target caption and the caption does not change over consecutive iterations. Note that we do not change the network weights; we only optimize the hints and mask (for cAdv) or the victim image (for tAdv) to achieve our target caption.

7 RELATED WORK

Here we briefly summarize existing unrestricted and semantic adversarial attacks. Xiao et al. (2018b) proposed geometric or spatial distortion of pixels in an image to create adversarial examples. They distort the input image by optimizing pixel flow instead of pixel values. While this attack leads to “natural”-looking adversarial examples with large L∞ norm, it does not take image semantics into account. Song et al. (2018) and Dunn et al. (2019) considered GANs for adversarial attacks.
These attacks are unrestricted in Lp norm but are limited to simple datasets, as they involve training GANs, which are known to be unstable and computationally intensive for complex datasets like ImageNet (Karras et al., 2017; Brock et al., 2018). Hosseini & Poovendran (2018) randomly change the hue and saturation of an image to create adversarial examples. This is similar to cAdv in that both involve changing colors; however, their search space is limited to two dimensions and their images are unrealistic (Fig. 10 in the Appendix). Also, while this method has a non-trivial untargeted attack success rate, it performs extremely poorly for targeted attacks (1.20% success rate in our own experiments on ImageNet). Our work is also related to Joshi et al. (2019) and Qiu et al. (2019), who manipulate images conditioned on face-dataset attributes like glasses or beards for their attacks. These works focus on changing a single visual attribute of an image and are conditioned on such attributes. Our work instead changes visual semantic descriptors to misclassify images and is not conditioned on any semantic attributes.

8 CONCLUSION

Our two novel unrestricted semantic attacks shed light on the role of texture and color fields in influencing DNN predictions. They not only consistently fool human subjects but are in general harder to defend against. By presenting our methods, we hope to encourage future studies on unbounded adversarial attacks, better metrics for measuring perturbations, and more sophisticated defenses.

ACKNOWLEDGEMENTS

We thank Chaowei Xiao for sharing their code to compare our methods with Xiao et al. (2018b) and for helping us set up the user study. We also thank Tianyuan Zhang for providing the AdvRes152 pretrained model. This work was supported by NSF Grant No. 1718221 and ONR MURI Award N00014-16-1-2007.

A APPENDIX

A.1 OTHER DETAILS ON HUMAN STUDY

We also chose BIM (Kurakin et al., 2016) and CW (Carlini & Wagner, 2017) for comparing our perturbations. Since these attacks are known to have low Lp norms, we designed aggressive versions of BIM by relaxing its L∞ bound to match the norm of our attacks. We settled on two aggressive versions of BIM with average L∞ = {0.21, 0.347}, which we refer to as BIM0.21 and BIM0.34. The average user preference for BIM drops drastically from 0.497 to 0.332 when we relax the norm to BIM0.34; the decrease in user preference for tAdv (0.433 to 0.406) and cAdv (0.476 to 0.437) is not significant. In Fig. 7, we plot a density plot of L∞ vs. user preference scores.

A.2 ADDITIONAL RESULTS

Table 6: Ablation studies.

(a) Whitebox targeted attack success rate (%). Our attacks are highly successful on different models across all strategies. tAdv results are for α = 250, β = $10^{-3}$, and iter = 1.

                      ResNet50   Dense121   VGG19
Model accuracy        76.15      74.65      74.24
Random Ts             99.67      99.72      96.16
Random Target Ts      99.72      99.89      99.94
Nearest Target Ts     97.99      99.72      99.50
cAdv4 25 hints        99.78      99.83      99.93
cAdv4 50 hints        99.78      99.83      100.00
cAdv4 100 hints       99.44      99.50      99.93

(b) tAdv ablation study: whitebox targeted attack success rate (%) with nearest target Ts (texture source), for increasing texture weight α (columns) and increasing adversarial cross-entropy weight β (rows). All attacks are on ResNet50.

β \ α      250      500      750      1000
0          25.00    99.61    98.55    95.92
10^-4      99.88    99.61    98.55    95.92
10^-3      97.99    99.27    99.66    99.50
10^-2      96.26    95.42    96.32    96.59

Figure 9 (panels: GT, k=1, k=2, k=4, k=6, k=8): Additional qualitative examples for controlling cAdv.
We show a comparison of sampling 50 color hints from k clusters with low entropy. All images are attacked toward the golf-cart class. Even-numbered rows visualize our cluster segments, with darker colors representing higher mean entropy and red dots representing the locations we sample hints from. Sampling hints across more clusters gives less color variety.
1. What are the novel aspects introduced by the paper in adversarial attacks? 2. How do the proposed approaches of attacking color and textures differ from traditional pixel value perturbations? 3. Can you provide more details about the experimental results, particularly the whitebox and blackbox attacks? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. What are the limitations or weaknesses of the paper regarding its contributions and comparisons with other works?
Review
Review
This paper introduces two new adversarial attacks: one generates adversarial examples by colouring the original images, and the other by changing the textures of the original images. Specifically, the former minimises the cross-entropy between the output of the classifier and the target label through the network weights of a pre-trained colourisation network, while the latter minimises the cross-entropy as well as a loss that captures texture differences. I think the general idea of going beyond perturbations of pixel values in this paper is interesting, and the proposed approaches of attacking colour and texture are intuitive and reasonable. The results seem promising, with comprehensive experiments including whitebox attacks, blackbox attacks by transfer, and attacks on defences. The paper overall is well-written and easy to follow. But I think the part on attacking captioning is a bit of a distraction, and there is no comparison with other methods on this task; I expect existing pixel-based attacks could also perform this task.
ICLR
Title: Zero-Resource Multilingual Model Transfer: Learning What to Share

Abstract: Modern natural language processing and understanding applications have enjoyed a great boost from neural network models. However, this is not the case for most languages, especially low-resource ones with insufficient annotated training data. Cross-lingual transfer learning methods improve the performance on a low-resource target language by leveraging labeled data from other (source) languages, typically with the help of cross-lingual resources such as parallel corpora. In this work, we propose a zero-resource multilingual transfer learning model[1] that can utilize training data in multiple source languages, while requiring neither target language training data nor cross-lingual supervision. Unlike most existing methods that rely only on language-invariant features for cross-lingual transfer, our approach utilizes both language-invariant and language-specific features in a coherent way. Our model leverages adversarial networks to learn language-invariant features and mixture-of-experts models to dynamically exploit the relation between the target language and each individual source language. This enables our model to learn effectively what to share between various languages in the multilingual setup. It results in significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks, including a large-scale real-world industry dataset.

1 INTRODUCTION

The recent deep learning revolution has enabled a wide variety of NLP models to achieve impressive performance, thanks in part to large-scale annotated datasets. However, such an advantage is not available to most of the world's languages, since only a handful of them have the labeled data necessary for training deep neural nets. As it is prohibitive to obtain training data for all languages of interest, cross-lingual transfer learning (CLTL) comes to the rescue, enabling the learning of models for a target language using annotated data from other languages (source languages) (Yarowsky et al., 2001). In this paper, we study the more challenging unsupervised CLTL setting, where no target language labeled data is used for training.[2] In this setting, most previous work relies on cross-lingual resources in one form or another in order to transfer models across languages, such as bilingual lexica (Mihalcea et al., 2007), parallel corpora (Yarowsky et al., 2001), or machine translation systems (Wan, 2009). In contrast, this work proposes a zero-resource CLTL framework that relies on no cross-lingual resources whatsoever. In addition, we focus on the multi-source CLTL scenario, also known as multilingual transfer learning (MLTL), which can leverage labeled data in multiple source languages simultaneously to improve performance on the low-resource target language. Distinct from other transfer learning tasks such as domain adaptation, a unique difficulty faced by cross-lingual transfer learning is the disparate input space problem: the languages have disjoint vocabularies, which cripples the use of traditional feature representations. The CLTL problem therefore typically consists of two parts: cross-lingual language representation, and model transfer. Fortunately, recent research on unsupervised learning of cross-lingual word embeddings (Lample et al., 2018) provides a viable solution to the language representation problem without the need for parallel corpora.
We hence focus on the model transfer problem in this work.

[1] The code will be available at http://[url redacted for anonymity].
[2] In contrast, supervised CLTL assumes the availability of annotations in the target language.

The most straightforward method for cross-lingual model transfer is weight sharing, namely directly applying the model trained on the source language to the target language. However, as shown in previous work (Chen et al., 2016), the feature distributions of different languages extracted by the same neural net are still dissimilar, and weight sharing is not sufficient for learning language-invariant features that generalize well across languages. Existing work therefore typically relies on language-adversarial training (Chen et al., 2016; Kim et al., 2017) to extract features that are invariant with respect to the shift in language, using only unlabeled text from each language. Nonetheless, in the MLTL setting, where multiple source languages exist, language-adversarial training will use for model transfer only those features that are common among all source languages and the target, which may be too restrictive in many cases. For example, when transferring from English, Spanish and Chinese to German, language-adversarial training will retain only features that are invariant across all four languages, which can be too sparse to be informative. On the other hand, the fact that German is more similar to English than to Chinese is neglected, and the transferred model is unable to utilize features that are shared only between English and German. To address these shortcomings, we propose a new model that not only exploits language-invariant features, but also allows the target language to dynamically and selectively leverage language-specific features through a probabilistic attention-style mixture-of-experts mechanism (see Section 3). This allows our model to learn what to share between various languages. On multiple CLTL tasks ranging from text classification to sequence tagging, including a real-world large-scale industry dataset, our model beats all baseline models trained, like ours, without cross-lingual supervision. More strikingly, it can in many cases match or outperform state-of-the-art models that have access to strong cross-lingual supervision (e.g., commercial machine translation systems or millions of parallel sentences).

2 RELATED WORK

The diversity of human languages is a critical challenge for natural language processing. In order to alleviate the need for obtaining annotated data for each task in each language, cross-lingual transfer learning (CLTL) has long been studied (Yarowsky et al., 2001; Bel et al., 2003, inter alia). One CLTL direction is the supervised setting, where training data is available in the target language, and the goal is to further boost performance by resorting to labeled data in additional languages. In the presence of target language training data, recent work in deep learning is able to perform CLTL without relying on additional cross-lingual resources (Kim et al., 2017; Yang et al., 2017). On the other hand, an arguably more challenging setting is the unsupervised setting, where no target language training data is available. Traditionally, research focused on resource-based methods, where general-purpose cross-lingual resources such as MT systems or parallel corpora are utilized to replace task-specific annotated data (Wan, 2009; Prettenhofer & Stein, 2010). Zhang et al.
(2016) could use as few as ten word translation pairs for CLTL, but their method is restricted to the part-of-speech tagging task. With the advent of deep learning, especially adversarial neural networks (Goodfellow et al., 2014; Ganin et al., 2016), progress has been made towards model-based CLTL methods. Chen et al. (2016) propose language-adversarial training that does not directly depend on parallel corpora, but instead only requires a set of bilingual word embeddings (BWEs). However, the BWEs used in their work were still trained using a parallel corpus. Another important direction for CLTL is to learn cross-lingual word representations (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013). Recently, there have been several notable works on learning fully unsupervised cross-lingual word embeddings, both for the bilingual (Zhang et al., 2017; Lample et al., 2018; Artetxe et al., 2018) and the multilingual case (Chen & Cardie, 2018b). These efforts pave the way for performing CLTL without cross-lingual resources.

3 MODEL

One commonly adopted paradigm for neural CLTL models is the shared-private model (Bousmalis et al., 2016; Kim et al., 2017), where the features are divided into two parts: shared (language-invariant) features and private (language-specific) features. As mentioned before, the shared features are enforced to be language-invariant via language-adversarial training, by attempting to fool a language discriminator. Furthermore, Chen & Cardie (2018a) propose a generalized shared-private model for the multi-source setting, where a multinomial adversarial network (MAN) is adopted to extract common features shared by all source languages as well as the target. The private features, on the other hand, are learned by separate feature extractors, one for each source language, capturing the remaining features outside the shared ones. During training, the labeled samples from a certain source language go through the corresponding private feature extractor for that particular language. At test time, there is no private feature extractor for the target language; only the shared features are used for cross-lingual transfer. As mentioned in Section 1, using only the shared features for model transfer imposes an overly strong constraint, and many useful features may be wiped out by adversarial training if they are shared only between the target language and a subset of source languages. Therefore, we propose to use a mixture-of-experts (MoE) model (Shazeer et al., 2017; Gu et al., 2018) to learn the private features. The high-level idea is to have a set of language expert networks, one per source language, each responsible for learning language-specific features for that source language during training. However, instead of hard-switching between the experts, each sample uses a convex combination of all experts, dictated by an expert gate. Thus, at test time, the trained expert gate can decide what combination to use for the unseen target language based on its similarity to the source languages. Figure 1 shows an overview of our full MAN-MoE model for multilingual model transfer. The boxes illustrate the various components of the MAN-MoE model (§3.1), while the arrows depict the training flow. (More training details can be found in Appendix A.)
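For readers unfamiliar with language-adversarial training, the following is a minimal sketch of the common gradient-reversal construction (Ganin et al., 2016) that lets a feature extractor be trained to fool a language discriminator; the module names are illustrative and not taken from this paper's implementation.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; flips the gradient sign on backward,
    so the feature extractor is updated to *fool* the discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def language_adversarial_loss(features, language_ids, discriminator, lam=1.0):
    """One adversarial loss term: the discriminator tries to predict the
    language of each sample, while the reversed gradients push the shared
    feature extractor toward language-invariant representations."""
    reversed_feats = GradReverse.apply(features, lam)
    logits = discriminator(reversed_feats)
    return torch.nn.functional.cross_entropy(logits, language_ids)
```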
3.1 MODEL ARCHITECTURE

Figure 1 portrays an abstract view of the MAN-MoE model with four major components: the Multilingual Word Representation, the Shared Feature Extractor Fs (together with the Language Discriminator D), the MoE Private Feature Extractor Fp, and finally the MoE Predictor C. Depending on the actual task (e.g., sequence tagging, text classification, sequence-to-sequence, etc.), different architectures may be adopted, as explained below.

Multilingual Word Representation embeds words from all languages into a single semantic space so that words with similar meanings are close to each other regardless of language. In this work, we mainly rely on the MUSE embeddings (Lample et al., 2018), which are fully unsupervised. We map all other languages into English to obtain a multilingual embedding space. However, in certain experiments MUSE yields 0 accuracy on one or more language pairs (Søgaard et al., 2018), in which case the VecMap embeddings (Artetxe et al., 2017) are used. VecMap uses identical strings as supervision, which requires neither parallel corpora nor human annotations. In addition, for tasks where morphological features are important, one can add character-level word embeddings (Dos Santos & Zadrozny, 2014) that capture subword information. When character embeddings are used, we add a single CharCNN that is shared across all languages, and the final word representation is the concatenation of the word embedding and the character-level embedding. The CharCNN can then be trained end-to-end with the rest of the model.

Shared Feature Extractor Fs is a multinomial adversarial network (Chen & Cardie, 2018a): an adversarial pair of a feature extractor (e.g., LSTM or CNN) and a Language Discriminator D. D is a text classifier (Kim, 2014) that takes the shared features (extracted by Fs) of an input sequence and predicts which language it comes from. Fs, on the other hand, strives to fool D so that it cannot identify the language of a sample. The hypothesis is that if D cannot recognize the language of the input, the shared features contain no language information and are hence language-invariant. Note that D is trained using only unlabeled corpora, and can therefore be trained on all languages, including the target language with no labeled data.

MoE Private Feature Extractor Fp, shown in Figure 2, is a key difference between our model and previous work. The figure shows the Mixture-of-Experts (Shazeer et al., 2017) model with three source languages: English, German and Spanish. Fp has a shared BiLSTM at the bottom that extracts contextualized word representations for each token w in the input sentence. The LSTM hidden representation hw is then fed into the MoE module, where each source language has a separate expert network (an MLP). In addition, the expert gate G is a linear transformation that takes hw as input and outputs a softmax score αi for each expert. The final private feature vector is a mixture of all expert outputs, dictated by the expert gate weights α. During training, a gate loss Jg is used to encourage samples from a certain source language to use the correct expert (see Appendix A for more details), so each expert learns language-specific features for that language. At test time, the trained expert gate examines the hidden representation of a token and produces the optimal expert weights, as sketched below.
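Below is a minimal sketch of this token-level mixture-of-experts computation, with one MLP expert per source language and a linear expert gate; the dimensions and module structure are illustrative choices of ours, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MoEPrivateExtractor(nn.Module):
    """Token-level MoE over per-source-language experts (illustrative)."""
    def __init__(self, hidden_dim, feat_dim, num_sources):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, feat_dim))
            for _ in range(num_sources)])
        self.gate = nn.Linear(hidden_dim, num_sources)  # expert gate G

    def forward(self, h):
        # h: [batch, seq_len, hidden_dim] BiLSTM token representations
        alpha = torch.softmax(self.gate(h), dim=-1)           # [B, T, E]
        outs = torch.stack([e(h) for e in self.experts], -1)  # [B, T, F, E]
        # Convex combination of expert outputs, computed per token.
        private = (outs * alpha.unsqueeze(2)).sum(-1)         # [B, T, F]
        return private, alpha
```

During training, the gate loss Jg supervises α with the sample's source language; at test time α is produced freely for the unseen target language.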
Therefore, Fp is able to dynamically determine what knowledge to use at a token level, serving as a much more flexible and powerful feature extractor for those features that are not shared across all languages.

MoE Task-Specific Predictor C is the final module that makes predictions for the end task, and may take different forms depending on the task. For instance, Figure 3 shows the MoE predictor for sequence tagging, where one output label is predicted for each input token. It is straightforward to adapt C to other tasks. For example, for text classification, a pooling layer such as dot-product attention (Luong et al., 2015) can be added at the bottom to fuse token-level features into a single sentence feature vector. C first concatenates the shared and private features to form a single feature vector for each token. It then uses another Mixture-of-Experts module that outputs a softmax probability over all labels for each token. The idea is that it may be favorable to weight the language-invariant and language-specific features differently for different target languages. Again consider the example of English, German, Spanish and Chinese. When transferring to Chinese from the other three, the source languages are similar to each other while all being rather distant from Chinese; the adversarially learned shared features might therefore be more important. On the other hand, when transferring to German, which is much more similar to English than to Chinese, we might want to pay more attention to the MoE private features. We therefore adopt a MoE module in C, which provides more flexibility than using a single MLP.[3]

[3] We also experimented with an attention mechanism between the shared and private features in C, and with a gating mechanism to modulate each feature channel, but adding another MoE in C gave the best results.

4 EXPERIMENTS

In this section, we present an extensive set of experiments across three datasets. The first experiment is on a large-scale real-world multilingual slot filling (sequence tagging) dataset, whose data is used in a commercial personal virtual assistant. In addition, we conduct experiments on two public academic datasets: the CoNLL 2002/2003 Multilingual Named Entity Recognition (sequence tagging) dataset (Sang, 2002; Sang & Meulder, 2003), and the Multilingual Amazon Reviews (text classification) dataset (Prettenhofer & Stein, 2010).

4.1 CROSS-LINGUAL SLOT FILLING FOR VIRTUAL ASSISTANTS

As shown in Table 1, we collect data for four languages: English, German, Spanish, and Chinese, over three domains: Navigation, Calendar, and Files. Each domain has a set of pre-determined slots (the slots are the same across languages), and the user utterances in each language and domain are annotated by crowd workers with the correct slots (see the examples in Table 1). We employ the standard BIO tagging scheme to formulate the slot filling problem as a sequence tagging task (an illustrative example follows below). For each domain and language, the data is divided into a training, a validation, and a test set, with the corresponding number of samples in each split shown in Table 1. There is a natural imbalance in the amount of available data for each language, which further motivates cross-lingual transfer learning. In our experiments, we treat each domain as a separate experiment, and consider each of German, Spanish and Chinese as the target language with the remaining three as source languages, resulting in a total of 9 experiments.
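For concreteness, a hypothetical Calendar-domain utterance annotated under the BIO scheme might look as follows (the slot names are illustrative, not taken from the actual dataset):

```python
# Each token receives a tag: B- opens a slot span, I- continues it,
# and O marks tokens outside any slot.
tokens = ["set", "a", "meeting", "with", "Anna",       "at", "3",      "pm"]
tags   = ["O",   "O", "O",       "O",    "B-attendee", "O",  "B-time", "I-time"]
```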
4.1.1 RESULTS

In Table 2, we report the performance of MAN-MoE compared to a number of baseline systems. All systems adopt the same base architecture: a multi-layer BiLSTM sequence tagger (İrsoy & Cardie, 2014) with a token-level MLP on top (no CRFs used).

MT baselines employ machine translation (MT) for cross-lingual transfer. In particular, the train-on-trans(lation) method translates the entire English training set into each target language, and the translations are in turn used to train a supervised system on the target language. The test-on-trans(lation) method, on the other hand, trains an English sequence tagger and utilizes MT to translate the test set of each target language into English in order to make predictions. In this work, we adopt the Microsoft Translator[4], a strong commercial MT system. Note that for an MT system to work for sequence tagging tasks, word alignment information must be available in order to project word-level annotations across languages. This rules out many MT systems, such as Google Translate, since they do not provide word alignment information through their APIs.

[4] https://azure.microsoft.com/en-us/services/cognitive-services/translator-text-api/

BWE baselines rely on Bilingual Word Embeddings (BWEs) and weight sharing for CLTL. Namely, the sequence tagger trained on the source language(s) is directly applied to the target language, in the hope that the BWEs can bridge the language gap. This simple method has been shown to yield strong results in recent work (Upadhyay et al., 2018). The MUSE (Lample et al., 2018) BWEs are used by all systems in this experiment. 1-to-1 indicates that we only transfer from English, while 3-to-1 means the training data from all three other languages is leveraged.[5] The final baseline is the MAN model (Chen & Cardie, 2018a), described before our MAN-MoE approach.

As shown in Table 2, MAN-MoE substantially outperforms all baseline systems that do not employ cross-lingual supervision on almost all domains and languages. Another interesting observation is that MAN performs strongly on Chinese while being much worse on German and Spanish compared to the BWE baseline. This corroborates our hypothesis that MAN leverages for CLTL only those features that are invariant across all languages, though it learns such features better than weight sharing. Therefore, when transferring to German or Spanish, which are similar to a subset of the source languages, the performance of MAN degrades significantly. On the other hand, when Chinese serves as the target language, where all source languages are rather distant from it, MAN has merit in extracting language-invariant features that generalize to Chinese. With MAN-MoE, this trade-off between close and distant language pairs is well addressed by the combination of MAN and MoE. By utilizing both language-invariant and language-specific features for transfer, MAN-MoE outperforms all cross-lingually unsupervised baselines on all languages. Furthermore, even when compared with the MT baselines, which have access to hundreds of millions of parallel sentences, MAN-MoE performs competitively on German and Spanish. It even significantly beats both MT baselines on German, as MT sometimes fails to provide word alignments for German. On Chinese, where the unsupervised BWEs are much less accurate (BWE baselines only achieve 20% F1), MAN-MoE greatly improves over the BWE and MAN baselines and shows promising results for fully unsupervised CLTL even between distant language pairs.
4.1.2 FEATURE ABLATION

In this section, we take a closer look at the various modules of MAN-MoE and their impact on performance. When the tagger (C) MoE is removed, a moderate decrease is observed on all languages. The performance degrades the most on Chinese, suggesting that using a single MLP in C is not ideal when the target language is not similar to the sources. When removing the private MoE, the MoE in C no longer makes much sense, as C only has access to the shared features, and the performance is even slightly worse than removing both MoEs. With both MoE modules removed, the model reduces to MAN, and we see a significant drop on German and Spanish. Finally, when removing MAN while keeping MoE, so that the shared features are simply learned via weight sharing, we see a slight drop on German and Spanish, but a rather large one on Chinese. The ablation results support our hypotheses and validate the merit of MAN-MoE.

4.2 CROSS-LINGUAL NAMED ENTITY RECOGNITION

In this section, we present experiments on the CoNLL 2002/2003 multilingual named entity recognition (NER) dataset (Sang, 2002; Sang & Meulder, 2003), with four languages: English, German, Spanish and Dutch. The task is also formulated as a sequence tagging problem, with four types of tags: PER, LOC, ORG, and MISC. The results are summarized in Table 4. We observe that using only word embeddings does not yield satisfactory results, since the out-of-vocabulary problem is rather severe, and morphological features such as capitalization are crucial for NER. We hence add character-level word embeddings for this task (§3.1) to capture subword features and alleviate the OOV problem. For German, however, all nouns are capitalized, and the capitalization features learned on the other three languages would lead to poor results. Therefore, for German only, we lowercase all characters in systems that adopt CharCNN.

Table 4 also shows state-of-the-art models in the literature. Note that most of these systems are specifically designed for the NER task, and exploit many task-specific resources, such as multilingual gazetteers, or the metadata in Freebase or Wikipedia (such as entity categories). Among these, Täckström et al. (2012) rely on parallel corpora to learn cross-lingual word clusters that serve as features. Nothman et al. (2013) and Tsai et al. (2016) both leverage the structured and unstructured information in external knowledge bases such as Wikipedia to learn useful features for cross-lingual NER. Ni et al. (2017) employ noisy parallel corpora (aligned sentence pairs that are not always translations) as well as bilingual dictionaries (5k words for each language pair) for model transfer. They further add external features such as entity types learned from Wikipedia for improved performance. Finally, Mayhew et al. (2017) propose a multi-source framework that utilizes large cross-lingual lexica. Despite not using any of these resources, general or task-specific, MAN-MoE nonetheless outperforms all these methods. The only exception is German, where task-specific resources may still be helpful, due to its high OOV rate and unique capitalization rules. In contemporaneous work, Xie et al. (2018) propose a cross-lingual NER model using a BiLSTM-CRF that achieves performance similar to MAN-MoE+CharCNN. However, our architecture is not specialized to the NER task, and we did not add task-specific modules such as a CRF decoding layer.
Last but not least, we replace the MUSE embeddings with the recently proposed unsupervised multilingual word embeddings (Chen & Cardie, 2018b), which further boosts the performance, achieving a new state of the art.

4.3 CROSS-LINGUAL TEXT CLASSIFICATION ON AMAZON REVIEWS

Finally, we report results on a multilingual text classification dataset (Prettenhofer & Stein, 2010). The task is binary classification, where each review is labeled with positive or negative sentiment, and the dataset covers four languages: English, German, French and Japanese. As shown in Table 5, MT-BOW uses machine translation to translate the bag of words of a target sentence into the source language, while CL-SCL learns a cross-lingual feature space via structural correspondence learning (Prettenhofer & Stein, 2010). CR-RL (Xiao & Guo, 2013) learns bilingual word representations where part of the word vector is shared among languages. Bi-PV (Pham et al., 2015) extracts bilingual paragraph vectors by sharing the representation between parallel documents. UMM (Xu & Wan, 2017) is a multilingual framework that can utilize parallel corpora between multiple language pairs, and pivot as needed when direct bitexts are not available for a specific source-target pair. Finally, CLDFA (Xu & Yang, 2017) proposes cross-lingual distillation on parallel corpora for CLTL. Unlike the other works listed, however, they adopt a task-specific parallel corpus (translated Amazon reviews) that is difficult to obtain in practice, making their numbers not directly comparable to others.

Among these methods, UMM is the only one that does not require a direct parallel corpus between all source-target pairs. It can instead utilize pivot languages (e.g. English) to connect multiple languages. MAN-MoE, however, goes a step further by completely removing the need for parallel corpora, while achieving similar results on German and French compared to UMM. On Japanese, the performance of MAN-MoE is again limited by the quality of the BWEs (the BWE baselines are barely better than random). Nevertheless, MAN-MoE remains highly effective, and its performance is only a few points below most methods with cross-lingual supervision.

4.4 VISUALIZATION OF EXPERT GATE WEIGHTS

In Figure 4, we visualize the average expert gate weights for each of the three target languages in the Amazon dataset. For each sample, we first compute a sentence-level aggregation by averaging over the expert gate weights of all its tokens. These sentence-level expert gate weights are then further averaged across all samples in the validation sets of all three domains (books, dvd, music), which forms a final language-level average expert gate weight for each target language (a sketch of this aggregation is given below). The visualization further corroborates our hypothesis that our model makes informed decisions when selecting what features to share with the target language. It can be seen that when transferring to German or French (from the remaining three), the Japanese expert is less utilized than the European-language experts. On the other hand, it is interesting that when transferring to Japanese, the French and English experts are used more than the German one, and the exact reason remains to be investigated. This phenomenon might be of less significance, however, since the private features may not play a very important role when transferring to Japanese: according to the ablation study in §4.1.2, the model is probably focusing more on the shared features.
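As an illustration of this two-step averaging, the following sketch assumes one per-token gate-weight tensor per validation sample; the function name and data layout are our own, not the paper's code.

```python
import torch

def language_level_gate_weights(per_sample_gates):
    """Two-step averaging used for the Figure 4 visualization: average
    token-level expert gate weights within each sample, then across all
    validation samples. per_sample_gates: a list of (seq_len, n_experts)
    tensors, one per sample (hypothetical layout)."""
    sentence_level = [g.mean(dim=0) for g in per_sample_gates]  # per sample
    return torch.stack(sentence_level).mean(dim=0)              # per expert
```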
5 CONCLUSION

In this paper, we propose a zero-resource multilingual model transfer approach that requires neither target language training data nor general-purpose cross-lingual resources. Our MAN-MoE method exploits both language-invariant (shared) features and language-specific (private) features for CLTL, which departs from previous models that could only make use of shared features. Following earlier work, the shared features are learned via language-adversarial training (Chen et al., 2016). The key difference, however, is that we employ a Mixture-of-Experts (MoE) module to extract the private features, which is able to dynamically capture the relation between the target language and each source language at the token level. This is extremely helpful when the target language is similar to a subset of source languages, in which case traditional models that rely solely on shared features would perform poorly. Our claim is supported by a wide range of experiments over multiple text classification and sequence tagging tasks, including a large-scale real-world industry dataset. MAN-MoE significantly outperforms all cross-lingually unsupervised baselines regardless of task or language. Furthermore, even compared to methods with strong cross-lingual supervision (e.g. commercial machine translation systems or millions of parallel sentences), MAN-MoE is able to match or outperform these models on closer language pairs. On distant language pairs such as English-Chinese or English-Japanese, where the quality of cross-lingual word embeddings is unsatisfactory, MAN-MoE remains highly effective and substantially narrows the performance gap to methods with cross-lingual supervision.

APPENDIX A MODEL TRAINING

Denote the set of all N source languages as S, where |S| = N. Denote the target language as T, and let ∆ = S ∪ {T} be the set of all languages. Denote the annotated corpus for a source language l ∈ S as Xl, where (x, y) ∼ Xl is a sample drawn from Xl. In addition, unlabeled data is required for all languages to facilitate the MAN training. We hence denote as Ul′ the unlabeled texts from a language l′ ∈ ∆.

Algorithm 1 MAN-MoE Training
Require: labeled corpora X; unlabeled corpora U; hyperparameters λ1, λ2 > 0, k ∈ N
 1: repeat
 2:     ▷ D iterations
 3:     for diter = 1 to k do
 4:         lD = 0
 5:         for all l ∈ ∆ do                       ▷ For all languages
 6:             Sample a mini-batch x ∼ Ul
 7:             fs = Fs(x)                         ▷ Shared features
 8:             lD += LD(D(fs); l)                 ▷ D loss
 9:         Update D parameters using ∇lD
10:     ▷ Main iteration
11:     loss = 0
12:     for all l ∈ S do                           ▷ For all source languages
13:         Sample a mini-batch (x, y) ∼ Xl
14:         fs = Fs(x)                             ▷ Shared features
15:         fp, g1 = Fp(x)                         ▷ Private features and gate outputs
16:         ŷ, g2 = C(fs, fp)
17:         loss += LC(ŷ; y) + λ2(Lg(g1; l) + Lg(g2; l))    ▷ C loss and gate loss
18:     for all l ∈ ∆ do                           ▷ For all languages
19:         Sample a mini-batch x ∼ Ul
20:         fs = Fs(x)                             ▷ Shared features
21:         loss += −λ1 · LD(D(fs); l)             ▷ Language loss to confuse D
22:     Update Fs, Fp, C parameters using ∇loss
23: until convergence

The overall training flow of the various components is illustrated in Figure 1, while the training algorithm is given in Algorithm 1. Similar to MAN, there are two separate optimizers to train MAN-MoE: one updates the parameters of D (red arrows), while the other updates the parameters of all other modules (green arrows). In Algorithm 1, LC, LD and Lg are the loss functions for the predictor C, the language discriminator D, and the expert gates in Fp and C, respectively. A runnable sketch of one training iteration is given below.
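The following is a minimal PyTorch sketch of one outer iteration of Algorithm 1. The module interfaces (F_s, F_p, C, D), the batch dictionaries, and the token_nll/gate_nll helpers are hypothetical simplifications of the components described above (sequence tagging setting), not the authors' released code. D is assumed to return log-probabilities over languages.

```python
import torch
import torch.nn.functional as F

def token_nll(log_probs, targets):
    """Token-level NLL, Eq. (2): log_probs is (batch, seq, n_classes),
    targets is (batch, seq)."""
    return F.nll_loss(log_probs.transpose(1, 2), targets, reduction='sum')

def gate_nll(gate_probs, lang_idx):
    """Expert-gate loss L_g, Eq. (5): NLL of selecting the correct language
    expert for every token. gate_probs is (batch, seq, n_experts)."""
    target = torch.full(gate_probs.shape[:2], lang_idx, dtype=torch.long,
                        device=gate_probs.device)
    return F.nll_loss(gate_probs.clamp_min(1e-9).log().transpose(1, 2),
                      target, reduction='sum')

def man_moe_step(F_s, F_p, C, D, opt_D, opt_main,
                 labeled, unlabeled, lambda1, lambda2, k):
    """One outer iteration of Algorithm 1. `labeled` maps each source
    language index to an (x, y) mini-batch; `unlabeled` maps every language
    index (sources and target) to an unlabeled mini-batch x."""
    # D iterations: train the language discriminator on all languages.
    # F_s output is detached so only D's parameters receive gradients here.
    for _ in range(k):
        opt_D.zero_grad()
        l_D = sum(F.nll_loss(D(F_s(x).detach()),
                             torch.full((x.size(0),), lang, dtype=torch.long,
                                        device=x.device))
                  for lang, x in unlabeled.items())
        l_D.backward()
        opt_D.step()

    # Main iteration: train F_s, F_p and C (D stays fixed in this step).
    opt_main.zero_grad()
    loss = 0.0
    for lang, (x, y) in labeled.items():            # source languages only
        f_p, g1 = F_p(x)
        y_hat, g2 = C(F_s(x), f_p)
        loss = loss + token_nll(y_hat, y) \
                    + lambda2 * (gate_nll(g1, lang) + gate_nll(g2, lang))
    for lang, x in unlabeled.items():               # all languages: confuse D
        lang_target = torch.full((x.size(0),), lang, dtype=torch.long,
                                 device=x.device)
        loss = loss - lambda1 * F.nll_loss(D(F_s(x)), lang_target)
    loss.backward()
    opt_main.step()
```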
In practice, we adopt the NLL loss for LC for text classification, and a token-level NLL loss for sequence tagging:

$\mathcal{L}_{\mathrm{NLL}}(\hat{y}; y) = -\log P(\hat{y} = y)$    (1)

$\mathcal{L}_{\mathrm{T\text{-}NLL}}(\hat{\mathbf{y}}; \mathbf{y}) = -\log P(\hat{\mathbf{y}} = \mathbf{y}) = -\sum_i \log P(\hat{y}_i = y_i)$    (2)

where y in (1) is a scalar class label and y in (2) is a vector of token labels. LC is hence interpreted as the negative log-likelihood of predicting the correct task label. Similarly, D adopts the NLL loss in (1) for predicting the correct language of a sample. Finally, the expert gates G use the token-level NLL loss in (2), which translates to the negative log-likelihood of using the correct language expert for each token in a sample. Therefore, the objectives that C, D and G minimize are, respectively:

$J_C = \sum_{l \in S} \mathbb{E}_{(x, y) \sim X_l}\left[\mathcal{L}_C(C(F_s(x), F_p(x)); y)\right]$    (3)

$J_D = \sum_{l \in \Delta} \mathbb{E}_{x \sim U_l}\left[\mathcal{L}_D(D(F_s(x)); l)\right]$    (4)

$J_G = \sum_{l \in S} \mathbb{E}_{x \sim X_l}\left[\sum_{w \in x} \mathcal{L}_G(G(h_w); l)\right]$    (5)

where h_w in (5) is the BiLSTM hidden representation in Fp, as shown in Figure 2. In addition, note that D is trained using unlabeled corpora over all languages (∆), while the training of Fp and C (and hence G) only takes place on the source languages (S). Finally, the overall objective function is:

$J = J_C - \lambda_1 J_D + \lambda_2 \left(J_G^{(1)} + J_G^{(2)}\right)$    (6)

where $J_G^{(1)}$ and $J_G^{(2)}$ correspond to the two expert gates in Fp and C, respectively.

APPENDIX B IMPLEMENTATION DETAILS

In all experiments, Adam (Kingma & Ba, 2015) is used for both optimizers (the main optimizer and the D optimizer), with learning rate 0.001 and weight decay $10^{-8}$. The batch size is 64 for the slot filling experiment and 16 for the NER and Amazon Reviews experiments, selected mainly due to memory constraints: CharCNN increases GPU memory usage, so NER could only use a batch size of 16 to fit in 12GB of GPU memory, while the Amazon experiment does not employ character embeddings but has much longer documents, hence also the smaller batch size. All embeddings are fixed during training. Dropout (Srivastava et al., 2014) with p = 0.5 is applied in all components. Unless otherwise mentioned, ReLU is used as the non-linear activation.

Bidirectional LSTMs are used in the feature extractors for all experiments. In particular, Fs is a two-layer BiLSTM of hidden size 128 (64 for each direction), and Fp is a two-layer BiLSTM of hidden size 128 stacked with a MoE module (see Figure 2). Each expert network in the MoE module of Fp is a two-layer MLP, again of hidden size 128. The final layer in the MLP has a tanh activation instead of ReLU to match the LSTM-extracted shared features (which have tanh activations). The expert gate is a linear transformation (matrix) of size 128 × N, where N is the number of source languages.

On the other hand, the architecture of the task-specific predictor C depends on the task. For the sequence tagging experiments, the structure of C is shown in Figure 3, where each expert in the MoE module is a token-level two-layer MLP with a softmax layer on top for making token label predictions. For the text classification tasks, a dot-product attention mechanism (Luong et al., 2015) is added after the shared and private features are concatenated. It has a length-256 weight vector that attends to the feature vectors of each token and computes a softmax mixture that pools the token-level feature vectors into a single sentence-level feature vector. The rest of C remains the same for text classification. For the language discriminator D, a CNN text classifier (Kim, 2014) is adopted in all experiments.
It takes as input the shared feature vectors of each token, and employs a CNN with max-pooling to pool them into a single fixed-length feature vector, which is then fed into an MLP for classifying the language of the input sequence (sketched below). The CNN has 200 kernels with kernel sizes 3, 4, and 5, and the MLP has one hidden layer of size 128. The MUSE and VecMap embeddings are trained from the monolingual 300d fastText Wikipedia embeddings (Bojanowski et al., 2017). When character-level word embeddings are used, a CharCNN is added that takes randomly initialized character embeddings for each character in a word, and passes them through a CNN with 200 kernels and kernel sizes 3, 4, and 5. The character features are then max-pooled and fed into a single fully-connected layer to form a 128-dimensional character-level word embedding, which is concatenated with the pre-trained cross-lingual word embedding to form the final representation of that word. The remaining hyperparameters, such as λ1, λ2 and k (see Algorithm 1), are tuned for each individual experiment, as shown in Table 6.
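For concreteness, the following sketch implements a discriminator in the spirit of the description above (a Kim-style CNN with 200 kernels of sizes 3, 4 and 5, and an MLP with one hidden layer of size 128). Padding, masking, and other details are simplifying assumptions on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageDiscriminator(nn.Module):
    """CNN text classifier over the shared per-token features, returning
    log-probabilities over languages."""

    def __init__(self, feat_dim=128, n_langs=4, n_kernels=200,
                 kernel_sizes=(3, 4, 5), mlp_hidden=128):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(feat_dim, n_kernels, k) for k in kernel_sizes])
        self.mlp = nn.Sequential(
            nn.Linear(n_kernels * len(kernel_sizes), mlp_hidden), nn.ReLU(),
            nn.Linear(mlp_hidden, n_langs))

    def forward(self, f_shared):                  # (batch, seq, feat_dim)
        x = f_shared.transpose(1, 2)              # Conv1d wants (batch, C, seq)
        # Max-pool each convolution's feature maps over the time dimension.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return F.log_softmax(self.mlp(torch.cat(pooled, dim=1)), dim=-1)
```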
1. What is the focus of the paper regarding multilingual NLP models?
2. What are the strengths of the proposed approach, particularly in its novelty and performance?
3. What are the weaknesses of the paper, especially regarding the lack of evidence and analysis?
4. Do you have any questions about the methodology used in the paper, such as the training process for the CharCNN embeddings?
Review
Review This paper presents a multilingual NLP model which performs very well on a target language without leveraging any labeled data in that language. The authors evaluated their framework on three different tasks: slot filling, named entity recognition and text classification. Overall, the results look very promising.
- Strengths:
 + The proposed idea is novel.
 + The results are very good for all three tasks.
- Weaknesses:
 + The authors claimed that their model knows what to share. However, they did not provide any evidence proving this hypothesis. The experimental results alone are not enough.
 + The paper also lacks an analysis showing to some extent what the model learned, e.g. the attention weights or the values of the gate. Is there any correlation between the similarity among languages (source and target) and the attention weights?
- What is not clear:
 + It is not clear to me what exactly has been done with the CharCNN embeddings in Section 4.2. How did the authors train the embeddings (only with the source languages or also with the target language)? It seems to me that the proposed model did not work well in this case.
ICLR
Title Zero-Resource Multilingual Model Transfer: Learning What to Share

Abstract Modern natural language processing and understanding applications have enjoyed a great boost from neural network models. However, this is not the case for most languages, especially low-resource ones with insufficient annotated training data. Cross-lingual transfer learning methods improve the performance on a low-resource target language by leveraging labeled data from other (source) languages, typically with the help of cross-lingual resources such as parallel corpora. In this work, we propose a zero-resource multilingual transfer learning model1 that can utilize training data in multiple source languages, while requiring neither target language training data nor cross-lingual supervision. Unlike most existing methods that only rely on language-invariant features for cross-lingual transfer, our approach utilizes both language-invariant and language-specific features in a coherent way. Our model leverages adversarial networks to learn language-invariant features and mixture-of-experts models to dynamically exploit the relation between the target language and each individual source language. This enables our model to learn effectively what to share between various languages in the multilingual setup. It results in significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks, including a large-scale real-world industry dataset.

1 INTRODUCTION

The recent deep learning revolution enables a wide variety of NLP models to achieve impressive performance, thanks in part to large-scale annotated datasets. However, such an advantage is not available to most of the world's languages, since only a handful of them have the labeled data necessary for training deep neural nets. As it is prohibitive to obtain training data for all languages of interest, cross-lingual transfer learning (CLTL) comes to the rescue, enabling learning models for a target language using annotated data from other languages (source languages) (Yarowsky et al., 2001). In this paper, we study the more challenging unsupervised CLTL setting, where no target language labeled data is used for training.2 In this setting, most previous work relies on cross-lingual resources in one form or another in order to transfer models across languages, such as bilingual lexica (Mihalcea et al., 2007), parallel corpora (Yarowsky et al., 2001), or machine translation systems (Wan, 2009). In contrast, this work proposes a zero-resource CLTL framework that relies on no cross-lingual resources whatsoever. In addition, we focus on a multi-source CLTL scenario, also known as multilingual transfer learning (MLTL), which can leverage labeled data in multiple source languages simultaneously to improve performance on the low-resource target language.

Distinct from other transfer learning tasks such as domain adaptation, one unique difficulty faced by cross-lingual transfer learning is the disparate input space problem: languages have disjoint vocabularies, which cripples the use of traditional feature representations. Therefore, the CLTL problem typically consists of two parts: cross-lingual language representation and model transfer. Fortunately, recent research in unsupervised learning of cross-lingual word embeddings (Lample et al., 2018) provides a viable solution for the language representation problem without the need for parallel corpora.
We hence focus on the model transfer problem in this work.

1The code will be available at http://[url redacted for anonymity].
2In contrast, supervised CLTL assumes the availability of annotations in the target language.

The most straightforward method for cross-lingual model transfer is weight sharing, namely directly applying the model trained on the source language to the target language. However, as shown in previous work (Chen et al., 2016), the feature distributions of different languages extracted by the same neural net are still dissimilar, and weight sharing is not sufficient for learning language-invariant features that generalize well across languages. Existing work therefore typically relies on language-adversarial training (Chen et al., 2016; Kim et al., 2017) to extract features that are invariant with respect to the shift in language, using only unlabeled texts from each language.

Nonetheless, in the MLTL setting, where multiple source languages exist, language-adversarial training will only use for model transfer the features that are common among all source languages and the target, which may be too restrictive in many cases. For example, when transferring from English, Spanish and Chinese to German, language-adversarial training will retain only features that are invariant across all four languages, which can be too sparse to be informative. Meanwhile, the fact that German is more similar to English than to Chinese is neglected, and the transferred model is unable to utilize features that are shared only between English and German. To address these shortcomings, we propose a new model that not only exploits language-invariant features, but also allows the target language to dynamically and selectively leverage language-specific features through a probabilistic attention-style mixture-of-experts mechanism (see Section 3). This allows our model to learn what to share between various languages.

For multiple CLTL tasks ranging from text classification to sequence tagging, including a real-world large-scale industry dataset, our model beats all baseline models trained, like ours, without cross-lingual supervision. More strikingly, it can in many cases match or outperform state-of-the-art models that have access to strong cross-lingual supervision (e.g. commercial machine translation systems or millions of parallel sentences).

2 RELATED WORK

The diversity of human languages is a critical challenge for natural language processing. In order to alleviate the need for obtaining annotated data for each task in each language, cross-lingual transfer learning (CLTL) has long been studied (Yarowsky et al., 2001; Bel et al., 2003, inter alia).

One CLTL direction is the supervised setting, where training data is available in the target language, and the goal is to further boost the performance by resorting to labeled data in additional languages. In the presence of target language training data, recent work in deep learning is able to perform CLTL without relying on additional cross-lingual resources (Kim et al., 2017; Yang et al., 2017). On the other hand, an arguably more challenging setting is the unsupervised setting, where no target language training data is available. Traditionally, research focused on resource-based methods, where general-purpose cross-lingual resources such as MT systems or parallel corpora are utilized to replace task-specific annotated data (Wan, 2009; Prettenhofer & Stein, 2010). Zhang et al.
(2016) could use as few as ten word translation pairs for CLTL, but their method is restricted to the part-of-speech tagging task. With the advent of deep learning, especially adversarial neural networks (Goodfellow et al., 2014; Ganin et al., 2016), progress has been made towards model-based CLTL methods. Chen et al. (2016) propose language-adversarial training that does not directly depend on parallel corpora, but instead only requires a set of bilingual word embeddings (BWEs). However, the BWEs used in their work were still trained on a parallel corpus.

Another important direction for CLTL is to learn cross-lingual word representations (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013). Recently, there have been several notable works on learning fully unsupervised cross-lingual word embeddings, both for the bilingual (Zhang et al., 2017; Lample et al., 2018; Artetxe et al., 2018) and the multilingual case (Chen & Cardie, 2018b). These efforts pave the road for performing CLTL without cross-lingual resources.

3 MODEL

One commonly adopted paradigm for neural CLTL models is the shared-private model (Bousmalis et al., 2016; Kim et al., 2017), where the features are divided into two parts: shared (language-invariant) features and private (language-specific) features. As mentioned before, the shared features are enforced to be language-invariant via language-adversarial training, by attempting to fool a language discriminator. Furthermore, Chen & Cardie (2018a) propose a generalized shared-private model for the multi-source setting, where a multinomial adversarial network (MAN) is adopted to extract common features shared by all source languages as well as the target. The private features, on the other hand, are learned by separate feature extractors, one for each source language, capturing the remaining features outside the shared ones. During training, the labeled samples from a certain source language go through the corresponding private feature extractor for that particular language. At test time, there is no private feature extractor for the target language; only the shared features are used for cross-lingual transfer.

As mentioned in Section 1, using only the shared features for model transfer imposes an overly strong constraint, and many useful features may be wiped out by adversarial training if they are shared only between the target language and a subset of source languages. Therefore, we propose to use a mixture-of-experts (MoE) model (Shazeer et al., 2017; Gu et al., 2018) to learn the private features. The high-level idea is to have a set of language expert networks, one per source language, each responsible for learning language-specific features for that source language during training. However, instead of hard-switching between the experts, each sample uses a convex combination of all experts, dictated by an expert gate. Thus, at test time, the trained expert gate can decide what combination to use for the unseen target language based on its similarity to the source languages. Figure 1 shows an overview of our full MAN-MoE model for multilingual model transfer. The boxes illustrate the various components of the MAN-MoE model (§3.1), while the arrows depict the training flow. (More training details can be found in Appendix A.)
3.1 MODEL ARCHITECTURE

Figure 1 portrays an abstract view of the MAN-MoE model with four major components: the Multilingual Word Representation, the Shared Feature Extractor Fs (together with the Language Discriminator D), the MoE Private Feature Extractor Fp, and finally the MoE Predictor C. Depending on the actual task (e.g. sequence tagging, text classification, sequence to sequence, etc.), different architectures may be adopted, as explained below.

Multilingual Word Representation embeds words from all languages into a single semantic space so that words with similar meanings are close to each other regardless of language. In this work, we mainly rely on the MUSE embeddings (Lample et al., 2018), which are fully unsupervised. We map all other languages into English to obtain a multilingual embedding space. However, in certain experiments MUSE yields 0 accuracy on one or more language pairs (Søgaard et al., 2018), in which case the VecMap embeddings (Artetxe et al., 2017) are used. VecMap uses identical strings as supervision, which does not require parallel corpora or human annotations to train. In addition, for tasks where morphological features are important, one can add character-level word embeddings (Dos Santos & Zadrozny, 2014) that capture sub-word information. When character embeddings are used, we add a single CharCNN that is shared across all languages, and the final word representation is the concatenation of the word embedding and the character-level embedding. The CharCNN can then be trained end to end with the rest of the model.

Shared Feature Extractor Fs is a multinomial adversarial network (Chen & Cardie, 2018a), an adversarial pair of a feature extractor (e.g. LSTM or CNN) and a Language Discriminator D. D is a text classifier (Kim, 2014) that takes the shared features (extracted by Fs) of an input sequence and predicts which language it comes from. On the other hand, Fs strives to fool D so that it cannot identify the language of a sample. The hypothesis is that if D cannot recognize the language of the input, the shared features contain no language information and are hence language-invariant. Note that D is trained only on unlabeled corpora, and can therefore be trained on all languages, including the target language with no labeled data.

MoE Private Feature Extractor Fp is a key difference between our model and previous work, and is shown in Figure 2. The figure shows the Mixture-of-Experts (Shazeer et al., 2017) model with three source languages: English, German and Spanish. Fp has a shared BiLSTM at the bottom that extracts contextualized word representations for each token w in the input sentence. The LSTM hidden representation h_w is then fed into the MoE module, where each source language has a separate expert network (an MLP). In addition, the expert gate G is a linear transformation that takes h_w as input and outputs a softmax score αi for each expert. The final private feature vector is a mixture of all expert outputs, dictated by the expert gate weights α. During training, the gate loss Jg is used to encourage samples from a certain source language to use the correct expert (see Appendix A for more details), so each expert learns language-specific features for that language. At test time, the trained expert gate examines the hidden representation of a token and produces the optimal expert weights; a minimal sketch of this module follows.
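Below is a minimal PyTorch sketch of Fp and its MoE module. Hidden sizes follow Appendix B where stated (two-layer BiLSTM of hidden size 128, two-layer expert MLPs with a tanh final activation); the class names and remaining details are our own simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """One expert MLP per source language plus a softmax expert gate G that
    mixes the expert outputs per token."""

    def __init__(self, dim=128, n_experts=3):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                          nn.Linear(dim, dim), nn.Tanh())
            for _ in range(n_experts)])
        self.gate = nn.Linear(dim, n_experts)        # expert gate G

    def forward(self, h):                            # h: (batch, seq, dim)
        alpha = F.softmax(self.gate(h), dim=-1)      # gate weights per token
        out = torch.stack([e(h) for e in self.experts], dim=-2)
        # Convex combination of expert outputs, dictated by the gate.
        return (alpha.unsqueeze(-1) * out).sum(dim=-2), alpha

class PrivateFeatureExtractor(nn.Module):
    """F_p: a shared two-layer BiLSTM (hidden size 128, i.e. 64 per
    direction) topped with the MoE module above."""

    def __init__(self, emb_dim, dim=128, n_experts=3):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, dim // 2, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.moe = MoELayer(dim, n_experts)

    def forward(self, emb):                          # emb: (batch, seq, emb_dim)
        h, _ = self.lstm(emb)                        # h_w for every token w
        return self.moe(h)                           # (f_p, gate outputs g1)
```

The predictor C reuses the same mixture pattern, with each expert ending in a softmax over labels instead of a tanh layer.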
1. What is the novelty of the paper's approach compared to prior works?
2. How does the proposed method differ from existing zero-shot learning models?
3. Are there any limitations to the claim of proposing the first zero-resource multilingual transfer learning model?
4. What additional analyses or visualizations could be provided to better understand the network's performance?
5. How do the results compare to other baselines such as simple adversarial training and GradNorm?
6. Would it be beneficial to include experiments with actual low-resource languages?
7. Could the authors provide more context or references for tasks with data in more languages, such as POS tagging, morphological analysis, or machine translation?
Review
Review My main reservation with this paper is the limited novelty. The approach seems to be a rather direct application of a subset of the sluice network architecture in [0] (available on arXiv since 2017) with MUSE pre-trained embeddings. In particular, I don't think the claim that the authors "propose the first zero-resource multilingual transfer learning model" is warranted; it is way too strong a claim. Training an LSTM on English data with MUSE/vecmap embeddings is pretty standard by now, and this does not require any target language training data or cross-lingual supervision either. See the zero-shot scenarios in [1-2], for example.
Apart from that, I think the write-up is nice, the approach makes a lot of sense, and the results are impressive. I would have liked to see a bit more analysis. In particular, the fact that you learn gate values makes it easy to analyze/visualize what and how your networks learn to share. I think there are a few baselines in between BWE and MAN, e.g., simple adversarial training and adversarial training with GradNorm [3], that would put your results in perspective. Finally, I would like to encourage the authors to run experiments with actual low-resource languages: a cross-lingual transfer literature that experiments only with languages such as German, Spanish, and Japanese could end up being heavily biased. For tasks with data in more languages, consider, for example, POS tagging [4], morphological analysis [5], or machine translation [6].
[0] https://arxiv.org/abs/1705.08142
[1] http://aclweb.org/anthology/P18-1074
[2] http://aclweb.org/anthology/P18-2063
[3] https://arxiv.org/abs/1711.02257
[4] http://universaldependencies.org/
[5] http://unimorph.org/
[6] http://christos-c.com/bible/
ICLR
Title Zero-Resource Multilingual Model Transfer: Learning What to Share Abstract Modern natural language processing and understanding applications have enjoyed a great boost utilizing neural networks models. However, this is not the case for most languages especially low-resource ones with insufficient annotated training data. Cross-lingual transfer learning methods improve the performance on a lowresource target language by leveraging labeled data from other (source) languages, typically with the help of cross-lingual resources such as parallel corpora. In this work, we propose a zero-resource multilingual transfer learning model1 that can utilize training data in multiple source languages, while not requiring target language training data nor cross-lingual supervision. Unlike most existing methods that only rely on language-invariant features for cross-lingual transfer, our approach utilizes both language-invariant and language-specific features in a coherent way. Our model leverages adversarial networks to learn language-invariant features and mixture-of-experts models to dynamically exploit the relation between the target language and each individual source language. This enables our model to learn effectively what to share between various languages in the multilingual setup. It results in significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks including a large-scale real-world industry dataset. 1 INTRODUCTION The recent deep learning revolution enables a wide variety of NLP models to achieve impressive performance, thanks in part to large-scale annotated datasets. However, such an advantage is not available to most of the world languages since only a handful of them have the labeled data necessary for training deep neural nets. As it is prohibitive to obtain training data for all languages of interest, cross-lingual transfer learning (CLTL) comes to the rescue to enable learning models for a target language using annotated data from other languages (source languages) (Yarowsky et al., 2001). In this paper, we study the more challenging unsupervised CLTL setting, where no target language labeled data is used for training.2 In this setting, most previous work relies on cross-lingual resources in one form or another in order to transfer models across languages, such as bilingual lexica (Mihalcea et al., 2007), parallel corpora (Yarowsky et al., 2001), or machine translation systems (Wan, 2009). In contrast, this work proposes a zero-resource CLTL framework that relies on no cross-lingual resources whatsoever. In addition, we focus on a multi-source CLTL, also known as multilingual transfer learning (MLTL) scenario that can leverage labeled data in multiple source languages simultaneously to improve the performance on the low-resource target language. Distinct from other transfer learning tasks such as domain adaptation, one unique difficulty faced by cross-lingual transfer learning is the disparate input space problem represented by disjoint sets of vocabulary which cripples the use of traditional feature representations. Therefore, the CLTL problem typically consists of two parts: cross-lingual language representation, and model transfer. Fortunately, recent research in unsupervised learning of cross-lingual word embeddings (Lample et al., 2018) provides a viable solution for the language representation problem without the need for parallel corpora. 
We hence focus on the model transfer problem in this work.
¹The code will be available at http://[url redacted for anonymity].
²In contrast, supervised CLTL assumes the availability of annotations in the target language.
The most straightforward method for cross-lingual model transfer is weight sharing, namely directly applying the model trained on the source language to the target language. However, as shown in previous work (Chen et al., 2016), the feature distributions of different languages extracted by the same neural net are still dissimilar, and weight sharing is not sufficient for learning language-invariant features that generalize well across languages. Existing work therefore typically relies on language-adversarial training (Chen et al., 2016; Kim et al., 2017) to extract features that are invariant with respect to the shift in language, using only unlabeled text from each language. Nonetheless, in the MLTL setting, where multiple source languages exist, language-adversarial training will use for model transfer only the features that are common among all source languages and the target, which may be too restrictive in many cases. For example, when transferring from English, Spanish and Chinese to German, language-adversarial training will retain only features that are invariant across all four languages, which can be too sparse to be informative. On the other hand, the fact that German is more similar to English than to Chinese is neglected, and the transferred model is unable to utilize features that are shared only between English and German. To address these shortcomings, we propose a new model that not only exploits language-invariant features, but also allows the target language to dynamically and selectively leverage language-specific features through a probabilistic, attention-style mixture-of-experts mechanism (see Section 3). This allows our model to learn what to share between various languages. For multiple CLTL tasks ranging from text classification to sequence tagging, including a real-world large-scale industry dataset, our model beats all baseline models trained, like ours, without cross-lingual supervision. More strikingly, it can in many cases match or outperform state-of-the-art models that have access to strong cross-lingual supervision (e.g. commercial machine translation systems or millions of parallel sentences).

2 RELATED WORK The diversity of human languages is a critical challenge for natural language processing. In order to alleviate the need for obtaining annotated data for each task in each language, cross-lingual transfer learning (CLTL) has long been studied (Yarowsky et al., 2001; Bel et al., 2003, inter alia). One CLTL direction is the supervised setting, where training data is available in the target language, and the goal is to further boost the performance by resorting to labeled data in additional languages. In the presence of target language training data, recent work in deep learning is able to perform CLTL without relying on additional cross-lingual resources (Kim et al., 2017; Yang et al., 2017). On the other hand, an arguably more challenging setting is the unsupervised setting, where no target language training data is available. Traditionally, research focused on resource-based methods, where general-purpose cross-lingual resources such as MT systems or parallel corpora are utilized to replace task-specific annotated data (Wan, 2009; Prettenhofer & Stein, 2010). Zhang et al.
(2016) use as few as ten word translation pairs for CLTL, but their approach is restricted to the part-of-speech tagging task. With the advent of deep learning, especially adversarial neural networks (Goodfellow et al., 2014; Ganin et al., 2016), progress has been made towards model-based CLTL methods. Chen et al. (2016) propose language-adversarial training, which does not directly depend on parallel corpora but instead only requires a set of bilingual word embeddings (BWEs). However, the BWEs used in their work were still trained using a parallel corpus. Another important direction for CLTL is to learn cross-lingual word representations (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013). Recently, there has been notable work on learning fully unsupervised cross-lingual word embeddings, both for the bilingual (Zhang et al., 2017; Lample et al., 2018; Artetxe et al., 2018) and the multilingual case (Chen & Cardie, 2018b). These efforts pave the road for performing CLTL without cross-lingual resources.

3 MODEL One commonly adopted paradigm for neural CLTL models is the shared-private model (Bousmalis et al., 2016; Kim et al., 2017), where the features are divided into two parts: shared (language-invariant) features and private (language-specific) features. As mentioned before, the shared features are enforced to be language-invariant via language-adversarial training, by attempting to fool a language discriminator. Furthermore, Chen & Cardie (2018a) propose a generalized shared-private model for the multi-source setting, where a multinomial adversarial network (MAN) is adopted to extract common features shared by all source languages as well as the target. On the other hand, the private features are learned by separate feature extractors, one for each source language, capturing the remaining features outside the shared ones. During training, the labeled samples from a certain source language go through the corresponding private feature extractor for that particular language. At test time, there is no private feature extractor for the target language; only the shared features are used for cross-lingual transfer. As mentioned in Section 1, using only the shared features for model transfer imposes an overly strong constraint, and many useful features may be wiped out by adversarial training if they are shared only between the target language and a subset of source languages. Therefore, we propose to use a mixture-of-experts (MoE) model (Shazeer et al., 2017; Gu et al., 2018) to learn the private features. The high-level idea is to have a set of language expert networks, one per source language, each responsible for learning language-specific features for that source language during training. However, instead of hard-switching between the experts, each sample uses a convex combination of all experts, dictated by an expert gate. Thus, at test time, the trained expert gate can decide what combination to use for the unseen target language based on its similarity to the source languages. Figure 1 shows an overview of our full MAN-MoE model for multilingual model transfer. The boxes illustrate the various components of the MAN-MoE model (§3.1), while the arrows depict the training flow. (More training details can be found in Appendix A.)
3.1 MODEL ARCHITECTURE Figure 1 portrays an abstract view of the MAN-MoE model with four major components: the Multilingual Word Representation, the Shared Feature Extractor F_s (together with the Language Discriminator D), the MoE Private Feature Extractor F_p, and finally the MoE Predictor C. Based on the actual task (e.g. sequence tagging, text classification, sequence to sequence, etc.), different architectures may be adopted, as explained below.

Multilingual Word Representation embeds words from all languages into a single semantic space so that words with similar meanings are close to each other regardless of language. In this work, we mainly rely on the MUSE embeddings (Lample et al., 2018), which are fully unsupervised. We map all other languages into English to obtain a multilingual embedding space. However, in certain experiments MUSE yields 0 accuracy on one or more language pairs (Søgaard et al., 2018), in which case the VecMap embeddings (Artetxe et al., 2017) are used. VecMap uses identical strings as supervision, which requires neither parallel corpora nor human annotations. In addition, for tasks where morphological features are important, one can add character-level word embeddings (Dos Santos & Zadrozny, 2014) that capture sub-word information. When character embeddings are used, we add a single CharCNN that is shared across all languages, and the final word representation is the concatenation of the word embedding and the character-level embedding. The CharCNN can then be trained end to end with the rest of the model.

Shared Feature Extractor F_s is a multinomial adversarial network (Chen & Cardie, 2018a), which is an adversarial pair of a feature extractor (e.g. LSTM or CNN) and a Language Discriminator D. D is a text classifier (Kim, 2014) that takes the shared features (extracted by F_s) of an input sequence and predicts which language it comes from. On the other hand, F_s strives to fool D so that it cannot identify the language of a sample. The hypothesis is that if D cannot recognize the language of the input, the shared features contain no language information and are hence language-invariant. Note that D is trained using only unlabeled corpora, and can therefore be trained on all languages, including the target language, with no labeled data.

MoE Private Feature Extractor F_p is a key difference of our model from previous work, and is shown in Figure 2. The figure shows the Mixture-of-Experts (Shazeer et al., 2017) model with three source languages: English, German and Spanish. F_p has a shared BiLSTM at the bottom that extracts a contextualized word representation for each token w in the input sentence. The LSTM hidden representation h_w is then fed into the MoE module, where each source language has a separate expert network (an MLP). In addition, the expert gate G is a linear transformation that takes h_w as input and outputs a softmax score α_i for each expert. The final private feature vector is a mixture of all expert outputs, dictated by the expert gate weights α. During training, the gate loss J_g is used to encourage samples from a certain source language to use the correct expert (see Appendix A for more details), so each expert learns language-specific features for that language. At test time, the trained expert gate examines the hidden representation of a token and produces the optimal expert weights. A minimal code sketch of this mechanism follows.
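The sketch below is our own illustrative PyTorch-style reconstruction of F_p, not the authors' released code; class and variable names are hypothetical, and the layer sizes follow Appendix B:

```python
import torch
import torch.nn as nn

class MoEPrivateExtractor(nn.Module):
    # Sketch of F_p: a shared BiLSTM followed by one expert MLP per
    # source language, mixed by a softmax expert gate G.
    def __init__(self, emb_dim=300, hidden=128, n_experts=3):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden // 2, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.Tanh())
            for _ in range(n_experts)])
        self.gate = nn.Linear(hidden, n_experts)  # expert gate G

    def forward(self, x):                  # x: (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)                # contextualised token reps h_w
        alpha = torch.softmax(self.gate(h), dim=-1)        # (B, T, E)
        outs = torch.stack([e(h) for e in self.experts], dim=-1)  # (B, T, H, E)
        f_p = (outs * alpha.unsqueeze(2)).sum(-1)  # convex combination
        return f_p, alpha                  # alpha feeds the gate loss J_g
```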
F_p is therefore able to dynamically determine what knowledge to use at the token level, serving as a much more flexible and powerful feature extractor for those features that are not shared across all languages.

MoE Task-Specific Predictor C is the final module that makes predictions for the end task, and it may take different forms depending on the task. For instance, Figure 3 shows the MoE predictor for sequence tagging, where one output label is predicted for each input token. It is straightforward to adapt C to other tasks. For example, for text classification, a pooling layer such as dot-product attention (Luong et al., 2015) can be added at the bottom to fuse token-level features into a single sentence feature vector. C first concatenates the shared and private features to form a single feature vector for each token. It then has another Mixture-of-Experts module that outputs a softmax probability over all labels for each token. The idea is that it may be favorable to weigh the language-invariant and language-specific features differently for different target languages. Again consider the example of English, German, Spanish and Chinese. When transferring to Chinese from the other three, the source languages are similar to each other while all being rather distant from Chinese; therefore, the adversarially learned shared features might be more important in this case. On the other hand, when transferring to German, which is much more similar to English than to Chinese, we might want to pay more attention to the MoE private features. Therefore, we adopt an MoE module in C, which provides more flexibility than using a single MLP³ (a sketch follows below).
³We also experimented with an attention mechanism between the shared and private features in C, and with adding a gating mechanism to modulate each feature channel, but adding another MoE in C gave the best results.
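For concreteness, here is a hypothetical sketch of C for sequence tagging, continuing the reconstruction above (dimensions and names are again our own assumptions):

```python
class MoESequenceTagger(nn.Module):
    # Sketch of the predictor C (Figure 3): shared and private features
    # are concatenated per token, then a second MoE mixes the per-token
    # label distributions produced by one expert MLP per source language.
    def __init__(self, feat=128, n_labels=9, n_experts=3):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * feat, feat), nn.ReLU(),
                          nn.Linear(feat, n_labels))
            for _ in range(n_experts)])
        self.gate = nn.Linear(2 * feat, n_experts)

    def forward(self, f_s, f_p):           # both (batch, seq_len, feat)
        f = torch.cat([f_s, f_p], dim=-1)
        alpha = torch.softmax(self.gate(f), dim=-1)
        probs = torch.stack([torch.softmax(e(f), dim=-1)
                             for e in self.experts], dim=-1)
        probs = (probs * alpha.unsqueeze(2)).sum(-1)  # mixture of softmaxes
        return probs, alpha
```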
4 EXPERIMENTS In this section, we present an extensive set of experiments across three datasets. The first experiment is on a large-scale real-world multilingual slot filling (sequence tagging) dataset, where the data is used in a commercial personal virtual assistant. In addition, we conduct experiments on two public academic datasets, namely the CoNLL 2002/2003 Multilingual Named Entity Recognition (sequence tagging) dataset (Sang, 2002; Sang & Meulder, 2003) and the Multilingual Amazon Reviews (text classification) dataset (Prettenhofer & Stein, 2010).

4.1 CROSS-LINGUAL SLOT FILLING FOR VIRTUAL ASSISTANTS As shown in Table 1, we collect data for four languages (English, German, Spanish, and Chinese) over three domains (Navigation, Calendar, and Files). Each domain has a set of pre-determined slots (the slots are the same across languages), and the user utterances in each language and domain are annotated by crowd workers with the correct slots (see the examples in Table 1). We employ the standard BIO tagging scheme to formulate slot filling as a sequence tagging task. For each domain and language, the data is divided into a training, a validation, and a test set, with the corresponding number of samples in each split shown in Table 1. We can see that there is a natural imbalance in the amount of available data for each language, which further motivates cross-lingual transfer learning. In our experiments, we treat each domain as a separate experiment and consider each of German, Spanish and Chinese as the target language, with the remaining three languages as sources, which results in a total of 9 experiments.

4.1.1 RESULTS In Table 2, we report the performance of MAN-MoE compared to a number of baseline systems. All systems adopt the same base architecture: a multi-layer BiLSTM sequence tagger (İrsoy & Cardie, 2014) with a token-level MLP on top (no CRFs used). MT baselines employ machine translation (MT) for cross-lingual transfer. In particular, the train-on-trans(lation) method translates the entire English training set into each target language, and the translations are in turn used to train a supervised system in the target language. On the other hand, the test-on-trans(lation) method trains an English sequence tagger and utilizes MT to translate the test set of each target language into English in order to make predictions. In this work, we adopt the Microsoft Translator⁴, a strong commercial MT system. Note that for an MT system to work for sequence tagging tasks, word alignment information must be available in order to project word-level annotations across languages. This rules out many MT systems, such as Google Translate, since they do not provide word alignment information through their APIs. BWE baselines rely on Bilingual Word Embeddings (BWEs) and weight sharing for CLTL. Namely, the sequence tagger trained on the source language(s) is directly applied to the target language, in the hope that the BWEs can bridge the language gap. This simple method has been shown to yield strong results in recent work (Upadhyay et al., 2018). The MUSE (Lample et al., 2018) BWEs are used by all systems in this experiment. 1-to-1 indicates that we transfer only from English, while 3-to-1 means the training data from all other three languages is leveraged.⁵ The final baseline is the MAN model (Chen & Cardie, 2018a), described before our MAN-MoE approach.
⁴https://azure.microsoft.com/en-us/services/cognitive-services/translator-text-api/
⁵MAN and MAN-MoE results are always 3-to-1 in this paper.
As shown in Table 2, MAN-MoE substantially outperforms all baseline systems that do not employ cross-lingual supervision on almost all domains and languages. Another interesting observation is that MAN performs strongly on Chinese while being much worse on German and Spanish compared to the BWE baseline. This corroborates our hypothesis that MAN leverages only features that are invariant across all languages for CLTL, and that it learns such features better than weight sharing. Therefore, when transferring to German or Spanish, which are similar to a subset of the source languages, the performance of MAN degrades significantly. On the other hand, when Chinese serves as the target language, where all source languages are rather distant from it, MAN has its merit in extracting language-invariant features that generalize to Chinese. With MAN-MoE, however, this trade-off between close and distant language pairs is well addressed by the combination of MAN and MoE. By utilizing both language-invariant and language-specific features for transfer, MAN-MoE outperforms all cross-lingually unsupervised baselines on all languages. Furthermore, even when compared with the MT baselines, which have access to hundreds of millions of parallel sentences, MAN-MoE performs competitively on German and Spanish. It even significantly beats both MT systems on German, as MT sometimes fails to provide word alignment for German. On Chinese, where the unsupervised BWEs are much less accurate (the BWE baselines only achieve 20% F1), MAN-MoE greatly improves over the BWE and MAN baselines and shows promising results for fully unsupervised CLTL even between distant language pairs.
4.1.2 FEATURE ABLATION In this section, we take a closer look at the various modules of MAN-MoE and their impact on performance. When the tagger (C) MoE is removed, a moderate decrease is observed on all languages. The performance degrades the most on Chinese, suggesting that using a single MLP in C is not ideal when the target language is not similar to the sources. When removing the private MoE, the MoE in C no longer makes much sense, as C only has access to the shared features, and the performance is even slightly worse than removing both MoEs. With both MoE modules removed, the model reduces to MAN, and we see a significant drop on German and Spanish. Finally, when removing MAN while keeping MoE, so that the shared features are simply learned via weight sharing, we see a slight drop on German and Spanish, but a rather large one on Chinese. The ablation results support our hypotheses and validate the merit of MAN-MoE.

4.2 CROSS-LINGUAL NAMED ENTITY RECOGNITION In this section, we present experiments on the CoNLL 2002/2003 multilingual named entity recognition (NER) dataset (Sang, 2002; Sang & Meulder, 2003), with four languages: English, German, Spanish and Dutch. The task is also formulated as a sequence tagging problem, with four types of tags: PER, LOC, ORG, and MISC. The results are summarized in Table 4. We observe that using only word embeddings does not yield satisfactory results, since the out-of-vocabulary (OOV) problem is rather severe and morphological features such as capitalization are crucial for NER. We hence add character-level word embeddings for this task (§3.1) to capture subword features and alleviate the OOV problem. For German, however, all nouns are capitalized, and the capitalization features learned on the other three languages would lead to poor results. Therefore, for German only, we lowercase all characters in systems that adopt the CharCNN. Table 4 also shows state-of-the-art models from the literature. Note that most of these systems are specifically designed for the NER task and exploit many task-specific resources, such as multilingual gazetteers or metadata in Freebase or Wikipedia (such as entity categories). Among these, Täckström et al. (2012) rely on parallel corpora to learn cross-lingual word clusters that serve as features. Nothman et al. (2013) and Tsai et al. (2016) both leverage the structured and unstructured information in external knowledge bases such as Wikipedia to learn useful features for cross-lingual NER. Ni et al. (2017) employ noisy parallel corpora (aligned sentence pairs that are not always translations) as well as bilingual dictionaries (5k words for each language pair) for model transfer; they further add external features, such as entity types learned from Wikipedia, for improved performance. Finally, Mayhew et al. (2017) propose a multi-source framework that utilizes large cross-lingual lexica. Despite not using any of these resources, general or task-specific, MAN-MoE nonetheless outperforms all these methods. The only exception is German, where task-specific resources may still be helpful due to its high OOV rate and unique capitalization rules. In contemporaneous work, Xie et al. (2018) propose a cross-lingual NER model using a BiLSTM-CRF that achieves performance similar to MAN-MoE+CharCNN. However, our architecture is not specialized to the NER task, and we did not add task-specific modules such as a CRF decoding layer.
Last but not least, we replace the MUSE embeddings with the recently proposed unsupervised multilingual word embeddings (Chen & Cardie, 2018b), which further boosts the performance, achieving a new state of the art.

4.3 CROSS-LINGUAL TEXT CLASSIFICATION ON AMAZON REVIEWS Finally, we report results on a multilingual text classification dataset (Prettenhofer & Stein, 2010). It is a binary classification dataset where each review is classified as positive or negative sentiment, with four languages: English, German, French and Japanese. As shown in Table 5, MT-BOW uses machine translation to translate the bag of words of a target sentence into the source language, while CL-SCL learns a cross-lingual feature space via structural correspondence learning (Prettenhofer & Stein, 2010). CR-RL (Xiao & Guo, 2013) learns bilingual word representations where part of the word vector is shared among languages. Bi-PV (Pham et al., 2015) extracts bilingual paragraph vectors by sharing the representation between parallel documents. UMM (Xu & Wan, 2017) is a multilingual framework that can utilize parallel corpora between multiple language pairs and pivot as needed when direct bitexts are not available for a specific source-target pair. Finally, CLDFA (Xu & Yang, 2017) proposes cross-lingual distillation on parallel corpora for CLTL. Unlike the other works listed, however, they adopt a task-specific parallel corpus (translated Amazon reviews) that is difficult to obtain in practice, making their numbers not directly comparable to the others. Among these methods, UMM is the only one that does not require a direct parallel corpus between all source-target pairs; it can instead use pivot languages (e.g. English) to connect multiple languages. MAN-MoE, however, takes another giant leap forward and completely removes the need for parallel corpora, while achieving results similar to UMM on German and French. On Japanese, the performance of MAN-MoE is again limited by the quality of the BWEs (the BWE baselines are barely better than random). Nevertheless, MAN-MoE remains highly effective, and its performance is only a few points below most methods with cross-lingual supervision.

4.4 VISUALIZATION OF EXPERT GATE WEIGHTS In Figure 4, we visualize the average expert gate weights for each of the three target languages in the Amazon dataset. For each sample, we first compute a sentence-level aggregation by averaging over the expert gate weights of all its tokens. These sentence-level expert gate weights are then further averaged across all samples in the validation sets of all three domains (books, dvd, music), which yields a language-level average expert gate weight for each target language. The visualization further corroborates our hypothesis that our model makes informed decisions when selecting what features to share with the target language. It can be seen that when transferring to German or French (from the remaining three), the Japanese expert is used less than the European-language experts. On the other hand, it is interesting that when transferring to Japanese, the French and English experts are used more than the German one; the exact reason remains to be investigated. This phenomenon may be of lesser significance, however, since the private features may not play a very important role when transferring to Japanese: according to the ablation study in §4.1.2, the model is probably focusing more on the shared features.
5 CONCLUSION In this paper, we propose a zero-resource multilingual model transfer approach that requires neither target language training data nor general-purpose cross-lingual resources. Our MAN-MoE method exploits both language-invariant (shared) features and language-specific (private) features for CLTL, which departs from previous models that could make use of only the shared features. Following earlier work, the shared features are learned via language-adversarial training (Chen et al., 2016). The key difference, however, is that we employ a Mixture-of-Experts (MoE) module to extract the private features, which is able to dynamically capture the relation between the target language and each source language at the token level. This is extremely helpful when the target language is similar to a subset of source languages, in which case traditional models that rely solely on shared features perform poorly. Our claim is supported by a wide range of experiments over multiple text classification and sequence tagging tasks, including a large-scale real-world industry dataset. MAN-MoE significantly outperforms all cross-lingually unsupervised baselines regardless of task or language. Furthermore, even against methods with strong cross-lingual supervision (e.g. commercial machine translation systems or millions of parallel sentences), MAN-MoE is able to match or outperform these models on closer language pairs. On distant language pairs such as English-Chinese or English-Japanese, where the quality of cross-lingual word embeddings is unsatisfactory, MAN-MoE remains highly effective and substantially mitigates the performance gap introduced by cross-lingual supervision.

APPENDIX A MODEL TRAINING Denote the set of all N source languages as S, where |S| = N. Denote the target language as T, and let Δ = S ∪ {T} be the set of all languages. Denote the annotated corpus for a source language l ∈ S as X_l, where (x, y) ~ X_l is a sample drawn from X_l. In addition, unlabeled data is required for all languages to facilitate MAN training; we hence denote by U_l' the unlabeled texts from a language l' ∈ Δ.

Algorithm 1: MAN-MoE Training
Require: labeled corpora X; unlabeled corpora U; hyperparameters λ1, λ2 > 0, k ∈ N
1:  repeat
2:    // D iterations
3:    for diter = 1 to k do
4:      l_D = 0
5:      for all l ∈ Δ do                      // for all languages
6:        Sample a mini-batch x ~ U_l
7:        f_s = F_s(x)                        // shared features
8:        l_D += L_D(D(f_s); l)               // D loss
9:      Update D parameters using ∇l_D
10:   // Main iteration
11:   loss = 0
12:   for all l ∈ S do                        // for all source languages
13:     Sample a mini-batch (x, y) ~ X_l
14:     f_s = F_s(x)                          // shared features
15:     f_p, g1 = F_p(x)                      // private features and gate outputs
16:     ŷ, g2 = C(f_s, f_p)
17:     loss += L_C(ŷ; y) + λ2 (L_g(g1; l) + L_g(g2; l))   // C loss and gate loss
18:   for all l ∈ Δ do                        // for all languages
19:     Sample a mini-batch x ~ U_l
20:     f_s = F_s(x)                          // shared features
21:     loss += −λ1 · L_D(D(f_s); l)          // language loss to confuse D
22:   Update F_s, F_p, C parameters using ∇loss
23: until convergence

The overall training flow of the various components is illustrated in Figure 1, while the training procedure is given in Algorithm 1. Similar to MAN, there are two separate optimizers for training MAN-MoE: one updates the parameters of D (red arrows), while the other updates the parameters of all other modules (green arrows). In Algorithm 1, L_C, L_D and L_g are the loss functions for the predictor C, the language discriminator D, and the expert gates in F_p and C, respectively.
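As a rough illustration of one iteration of Algorithm 1, consider the following sketch; `nll`, `gate_loss`, `lang_id`, the data iterators, and the optimisers are assumed helpers rather than anything from the paper:

```python
# One MAN-MoE iteration (sketch). F_s, F_p, C, D are the modules above;
# lam1, lam2, k mirror the hyperparameters of Algorithm 1. Only F_s, F_p
# and C are registered with opt_main, so the D update stays separate.
for _ in range(k):                          # k discriminator steps
    l_d = sum(nll(D(F_s(next(unlabeled[l]))), lang_id[l])
              for l in all_langs)
    opt_d.zero_grad(); l_d.backward(); opt_d.step()

loss = 0.0
for l in src_langs:                         # supervised + gate losses
    x, y = next(labeled[l])
    f_s = F_s(x)
    f_p, g1 = F_p(x)
    y_hat, g2 = C(f_s, f_p)
    loss = loss + nll(y_hat, y) + lam2 * (gate_loss(g1, lang_id[l])
                                          + gate_loss(g2, lang_id[l]))
for l in all_langs:                         # language loss to confuse D
    loss = loss - lam1 * nll(D(F_s(next(unlabeled[l]))), lang_id[l])
opt_main.zero_grad(); loss.backward(); opt_main.step()
```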
In practice, we adopt the NLL loss for L_C for text classification, and a token-level NLL loss for sequence tagging:

L_NLL(ŷ; y) = −log P(ŷ = y)    (1)
L_T-NLL(ŷ; y) = −log P(ŷ = y) = −Σ_i log P(ŷ_i = y_i)    (2)

where y is a scalar class label and y is a vector of token labels. L_C is hence interpreted as the negative log-likelihood of predicting the correct task label. Similarly, D adopts the NLL loss in (1) for predicting the correct language of a sample. Finally, the expert gates G use the token-level NLL loss in (2), which translates to the negative log-likelihood of using the correct language expert for each token in a sample. Therefore, the objectives that C, D and G minimize are, respectively:

J_C = Σ_{l∈S} E_{(x,y)~X_l} [ L_C(C(F_s(x), F_p(x)); y) ]    (3)
J_D = Σ_{l∈Δ} E_{x~U_l} [ L_D(D(F_s(x)); l) ]    (4)
J_G = Σ_{l∈S} E_{x~X_l} [ Σ_{w∈x} L_G(G(h_w); l) ]    (5)

where h_w in (5) is the BiLSTM hidden representation in F_p, as shown in Figure 2. In addition, note that D is trained using unlabeled corpora over all languages (Δ), while the training of F_p and C (and hence G) takes place only on the source languages (S). Finally, the overall objective function is:

J = J_C − λ1 J_D + λ2 (J_G^(1) + J_G^(2))    (6)

where J_G^(1) and J_G^(2) correspond to the two expert gates, in F_p and C respectively.

APPENDIX B IMPLEMENTATION DETAILS In all experiments, Adam (Kingma & Ba, 2015) is used for both optimizers (the main optimizer and the D optimizer), with learning rate 0.001 and weight decay 10⁻⁸. The batch size is 64 for the slot filling experiment and 16 for the NER and Amazon Reviews experiments, chosen mainly due to memory constraints: the CharCNN increases GPU memory usage, so NER could only use a batch size of 16 to fit in 12GB of GPU memory, while the Amazon experiment does not employ character embeddings but has much longer documents and thus also uses a smaller batch size. All embeddings are fixed during training. Dropout (Srivastava et al., 2014) with p = 0.5 is applied in all components. Unless otherwise mentioned, ReLU is used as the non-linear activation. Bidirectional LSTMs are used in the feature extractors for all experiments. In particular, F_s is a two-layer BiLSTM of hidden size 128 (64 for each direction), and F_p is a two-layer BiLSTM of hidden size 128 stacked with an MoE module (see Figure 2). Each expert network in the MoE module of F_p is a two-layer MLP, again with hidden size 128. The final layer in the MLP has a tanh activation instead of ReLU, to match the LSTM-extracted shared features (which have tanh activations). The expert gate is a linear transformation (a matrix) of size 128 × N, where N is the number of source languages. The architecture of the task-specific predictor C depends on the task. For the sequence tagging experiments, the structure of C is shown in Figure 3, where each expert in the MoE module is a token-level two-layer MLP with a softmax layer on top for making token label predictions. For the text classification tasks, a dot-product attention mechanism (Luong et al., 2015) is added after the shared and private features are concatenated. It has a length-256 weight vector that attends to the feature vectors of each token and computes a softmax mixture that pools the token-level feature vectors into a single sentence-level feature vector. The rest of C remains the same for text classification. For the language discriminator D, a CNN text classifier (Kim, 2014) is adopted in all experiments.
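For concreteness, the token-level gate loss of eqs. (2) and (5) could be implemented as below; this is an illustrative sketch, and the clamping constant is our own choice:

```python
import torch

def gate_loss(alpha, lang_idx):
    # alpha: (batch, seq_len, n_experts) softmax weights from an expert gate.
    # Negative log-likelihood of routing every token to the expert of the
    # sample's source language, summed over tokens, averaged over the batch.
    logp = torch.log(alpha.clamp_min(1e-8))
    return -logp[..., lang_idx].sum(dim=1).mean()
```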
It takes as input the shared feature vectors of the tokens and employs a CNN with max-pooling to pool them into a single fixed-length feature vector, which is then fed into an MLP for classifying the language of the input sequence. The CNN has 200 kernels, with kernel sizes 3, 4, and 5; the MLP has one hidden layer of size 128. The MUSE and VecMap embeddings are trained from the monolingual 300d fastText Wikipedia embeddings (Bojanowski et al., 2017). When character-level word embeddings are used, a CharCNN is added that takes randomly initialized character embeddings of each character in a word and passes them through a CNN with 200 kernels of sizes 3, 4, and 5. Finally, the character embeddings are max-pooled and fed into a single fully-connected layer to form a 128-dimensional character-level word embedding, which is concatenated with the pre-trained cross-lingual word embedding to form the final word representation of that word. The remaining hyperparameters, such as λ1, λ2 and k (see Algorithm 1), are tuned for each individual experiment, as shown in Table 6.
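A hypothetical sketch of D matching the sizes quoted above (our reconstruction, not the authors' code):

```python
import torch
import torch.nn as nn

class LanguageDiscriminator(nn.Module):
    # Sketch of D: a Kim (2014)-style CNN over the shared token features,
    # 200 kernels of widths 3/4/5, max-pooled, then a one-hidden-layer MLP.
    def __init__(self, feat=128, n_langs=4):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(feat, 200, kernel_size=w) for w in (3, 4, 5)])
        self.mlp = nn.Sequential(nn.Linear(600, 128), nn.ReLU(),
                                 nn.Linear(128, n_langs))

    def forward(self, f_s):                # f_s: (batch, seq_len, feat)
        h = f_s.transpose(1, 2)            # Conv1d expects (batch, feat, len)
        pooled = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return self.mlp(torch.cat(pooled, dim=1))   # language logits
```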
1. What is the focus of the paper regarding cross-lingual transfer? 2. What are the contributions of the paper, particularly in addressing the limitations of adversarial networks? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. How does the reviewer assess the originality and significance of the proposed approach? 5. What is the main concern of the reviewer regarding the paper's novelty?
Review
Review
This paper describes a model for cross-lingual transfer with no target language information. This is a well-written paper that makes a number of contributions: 1. It provides an interesting discussion of transfer from multiple source languages into a target language. This is a timely problem, and the paper points out that adversarial networks may be too limiting in this setup. 2. It provides a modeling approach that deals with the limitations of adversarial networks mentioned in (1). 3. It demonstrates the value of the proposed approach through an extensive experimental setup. At the same time, I see two major limitations to the paper: 1. While the proposed approach is valid, it is not very original, at least in my subjective eyes. The authors integrate a classifier that combines the private, language-specific features so that not only features shared between all the involved languages can be used in the classification process. While this is a reasonable idea that works well in practice, IMO it is quite straightforward and builds on ideas that have recently been proposed in many other works. 2. The authors claim that: "To our best knowledge, this work is the first to propose an unsupervised CLTL framework without depending on any cross-lingual resource". This is, unfortunately, not true. I refer the authors to the paper: Deep Pivot-Based Modeling for Cross-language Cross-domain Transfer with Minimal Guidance. Yftah Ziser and Roi Reichart. EMNLP 2018. In their lazy setup, the EMNLP authors do exactly that. They address the more complicated cross-language, cross-domain setup, but their model can easily be employed within a single domain. Their experiments even use the multilingual sentiment dataset used in the current paper. The model in the EMNLP paper is shown to outperform adversarial networks, so it could be competitive here as well.
ICLR
Title Group-Disentangling Conditional Shift

Abstract We propose a novel group disentanglement method called the Context-Aware Variational Autoencoder (CxVAE). Our model can learn disentangled representations on datasets with conditional shift. This phenomenon occurs when the distribution of the instance-level latent variable z conditional on the input observation x, p(z|x), changes from one group to another (i.e. p_i(z|x) ≠ p_j(z|x), where i, j are two different groups). We show that existing methods fail to learn disentangled representations under this scenario because they infer the group u and instance z representations separately. CxVAE overcomes this limitation by conditioning the instance inference on the group variable, q(z|x, u). Our model has the novel ability to disentangle ambiguous observations (those with incomplete information about the generative factors), which we evaluate on an image dataset. Additionally, we use a fair-comparisons task to demonstrate empirically that conditional shift is the cause of our model's improved performance.

1 INTRODUCTION Group disentanglement is the goal of learning representations that separate group-level variation from instance-level variation. Consider a dataset of observations organised into N groups of the form x_{n,1:K_n} = {x_{n,1}, ..., x_{n,K_n}}, n ∈ 1:N. These could be pictures grouped by author, clinical outcomes grouped by patient, or film ratings grouped by user. We train a representation network r(x_n) that encodes a group of observations {x_{n,1}, ..., x_{n,K_n}} into one group code u_n and a set of instance codes {z_{n,1}, ..., z_{n,K_n}}, one for each observation. We want u to capture only the variation across groups and z only the variation within groups. The current state-of-the-art approaches for group disentanglement train the representation network r by using it as the variational latent posterior distribution in a Variational Autoencoder (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020). They assume a hierarchical generative model whereby the observation x_{n,k} is generated by combining a group latent variable u_n and an independent instance latent variable z_{n,k} (Figure 2-left). The standard setup involves training the variational latent posterior q(u_n, z_{n,1:K_n}|x_n) by maximising a lower bound on the data likelihood (Kingma & Welling, 2014; Rezende et al., 2014). In our work, we show that the variational latent posterior, as defined in existing models, is unsuited to datasets with conditional shift. This is a property of the data-generating process whereby the true conditional distribution of the instance latent variable z changes from one group to another, p_i(z|x) ≠ p_j(z|x), where i, j are two groups (Zhang et al., 2013; Gong et al., 2016). In our case, the conditional instance distribution for group i, which is p_i(z|x), corresponds to p(z_{i,k}|x_{i,k}, u_i), where k is a given instance in the group. Conditional shift occurs in many real-world datasets that we would like to group-disentangle. For example, in the 3DIdent dataset (Zimmermann et al., 2021), if we want to infer the colour of the teapot z_{n,k} from an image of that teapot x_{n,k}, we should take into account the colour of the spotlight that illuminates the scene u_n; differently coloured spotlights will make the same object appear different colours, as can be seen in Figure 1. Existing group-disentanglement methods, which infer the instance variable (teapot colour) independently of the group (spotlight colour), fail to disentangle the two colours.
Existing VAE-based methods fail to disentangle in the conditional shift setting because, when defining the variational latent posterior, existing works based on the Group VAE (GVAE) (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020; Chen & Batmanghelich, 2020) assume that the group and instance variables are conditionally independent given the observations (Figure 2-middle):

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) ∏_{k=1}^{K_n} q(z_{n,k} | x_{n,k}).    (1)

The limitations of this assumption have not been identified so far in the literature because the datasets used to test disentanglement, such as Shapes3D (Kim & Mnih, 2018), SmallNORB (LeCun et al., 2004), dSprites (Higgins et al., 2017), Cars3D (Reed et al., 2015), and MPI3D (Gondal et al., 2019), have the property that one image is always sufficient to accurately infer its latent variables. For example, we require only a single image from the MPI3D dataset to uniquely identify the colour, position, and rotation of the depicted object. In this work, we show that conditioning the instance encoder on the group latent vector enables the model to learn disentangled representations on datasets with conditional shift.
1. First, we show that only our method is able to correctly disentangle between object colour and spotlight colour in the 3DIdent dataset (Zimmermann et al., 2021), illustrated in Figure 1.
2. Second, we use the task of fair comparisons between student test scores (Figure 3) to show that the amount of conditional shift in the dataset determines the performance gap between our model and the other approaches.

2 RELATED WORK Group Disentanglement. This class of problems goes under different names: style-content disentanglement (Tenenbaum & Freeman, 2000), content-transformation disentanglement (Hosoya, 2019), and disentanglement with group supervision (Shu et al., 2020), to name a few. Recent work (Shu et al., 2020; Locatello et al., 2020) has contextualised group disentanglement as a subproblem of weakly-supervised disentanglement, where disentangled representations are learned with the help of non-datapoint supervision (e.g. grouping, ranking, restricted labelling). Early work in this area focused on separating between visual concepts (Kulkarni et al., 2015; Reed et al., 2015). The area has received renewed interest after the theoretical impossibility result of Locatello et al. (2019) and the identifiability proofs of Khemakhem et al. (2020) and Mita et al. (2021). A key aspect of recent weakly-supervised models is the interpretation of the grouping as a signal of similarity between datapoints (Chen & Batmanghelich, 2020).

Conditioning on the Group Variable. While conditioning the instance encoder on the group variable is common in the areas of semi-supervised learning and fair representations (Kingma et al., 2014; Louizos et al., 2016), we are the first to apply it to unsupervised group disentanglement, where explicit group labels are not available. In the field of sequence disentanglement, state-of-the-art methods (Hsu et al., 2017; Denton & Birodkar, 2017; Li & Mandt, 2018) infer the instance variable (capturing shorter timescales) conditionally on the group variable (capturing longer timescales).
Recent works in weakly-supervised disentanglement (Shu et al., 2020; Locatello et al., 2019; Roeder et al., 2019) also condition the instance variable on the group, but their group variable is a discrete variable used for selection rather than a representation: it marks which units of the instance representation are common within the group and which are free to vary. We argue that this is not sufficient to account for the variation in p(z|x, u) produced by the conditional shift, so we include AdaGVAE (Locatello et al., 2020) in our evaluation for comparison (see Section 7).

Conditional Shift. Our instance encoder conditioned on the group variable is a new strategy for dealing with conditional shift. This problem has been studied extensively in the context of supervised learning (Zhang et al., 2013; Gong et al., 2016); we are, however, the first to explore the effect of conditional shift on unsupervised learning. Methods for mitigating the effects of conditional shift typically focus on learning domain-invariant representations (Ben-David et al., 2009). However, Zhao et al. (2019) show that learning a domain-invariant representation is not sufficient for learning a correct mapping between instance variables from different groups.

Image Translation. Note that our variational latent posterior is different from the one used in COCO-FUNIT (Saito et al., 2020). The authors are motivated by the same limitation of existing works as we are, namely that unsupervised translation methods struggle to disentangle under conditional shift. However, because they train explicitly for translation rather than disentanglement, they arrive at a different solution than ours. When performing a translation, their approach is to condition the representation of the target group on the source image, thereby bypassing the need for an accurate instance representation. This mechanism produces impressive results on image translation tasks, but it cannot be extended to models based on the GVAE, which do not train explicitly for translation; in our case, there are no source and target groups in the training set. Regardless, we evaluate COCO-FUNIT on the test-score dataset and show that our model outperforms it both in terms of disentanglement and translation.

3 BACKGROUND The Group-Instance Generative Model (Bouchacourt et al., 2018; Hosoya, 2019) is a multi-level model that uses two latent variables to generate grouped data: the instance variable z_{n,k} ~ N(0, 1) controls the variation within groups, and the group variable u_n ~ N(0, 1) controls the variation across groups (Figure 2-left). The likelihood of a group x_{n,1:K_n} is:

p(x_{n,1:K_n}) = E_{p(u_n)} ∏_{k=1}^{K_n} E_{p(z_{n,k})} [ p(x_{n,k} | u_n, z_{n,k}) ]

3.1 VARIATIONAL INFERENCE Because the exact likelihood is intractable, the standard approach to training the group-instance generative model is a Variational Autoencoder (Kingma & Welling, 2014; Rezende et al., 2014), which performs optimisation by introducing a variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) and maximizing the Evidence Lower Bound (Jordan et al., 2004):

log p(x_{n,1:K_n}) ≥ E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ Σ_{k=1}^{K_n} log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) || p(u_n, z_{n,1:K_n}) ]    (2)

Existing methods use a class of variational distributions that assumes conditional independence between the latent variables:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) ∏_{k=1}^{K_n} q(z_{n,k} | x_{n,k})

4 CONTEXT-AWARE VARIATIONAL AUTOENCODER We propose a new model which can perform well on group-confounded problems.
We call our model the Context-Aware Variational Autoencoder (CxVAE). Its defining feature is a variational latent posterior whose instance variable is conditioned on the group variable:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) ∏_{k=1}^{K_n} q(z_{n,k} | x_{n,k}, u_n)

Thus, the instance encoder is implemented as a network that takes as input the concatenation of the observation x_{n,k} and the previously sampled group representation u_n:

q(z_{n,k} | x_{n,k}, u_n) = N(μ, σ), where (μ, σ) = f(x_{n,k}, u_n).

This form of the variational distribution reflects the correct factorisation of the generative model; it has the potential to learn the true generative latent posterior, which is disentangled by definition. The Evidence Lower Bound for our model is

ELBO(x_{n,1:K_n}) = E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ Σ_{k=1}^{K_n} log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n | x_{n,1:K_n}) || p(u_n) ] − E_{q(u_n | x_{n,1:K_n})} [ Σ_{k=1}^{K_n} KL[ q(z_{n,k} | x_{n,k}, u_n) || p(z_{n,k}) ] ].    (3)

5 EVALUATION METHODOLOGY We show that by making the instance encoder conditional on the inferred group variable, we obtain a considerable gain in translation accuracy and a marked improvement in disentanglement. We also demonstrate that the gap in performance between our model and other group disentanglement methods is caused by conditional shift in the data-generating process.

5.1 MODEL SETUP We compare our conditional CxVAE with the state of the art in group disentanglement, namely the Group VAE (Hosoya, 2019; Bouchacourt et al., 2018), COCO-FUNIT (Saito et al., 2020), and AdaGVAE (Locatello et al., 2020). As in Hosoya (2019), the group encoder is applied to each datapoint in the group and the outputs are averaged. In all experiments, our CxVAE is a modified GVAE in which the group variable u_n is concatenated with the observation x_{n,k} and fed into the instance encoder in order to compute the instance variable z_{n,k}. For sampling the variational latent posteriors, we use the standard reparametrisation trick. We use the Adam optimiser with a learning rate of 1e-4, β1 = 0.9, and β2 = 0.5. For the 3DIdent dataset, we implement all networks (encoders and decoders) as convolutional nets with 4 hidden layers of 64 filters each; both latent variables have 16 dimensions. For the test-score dataset, we use MLPs with 3 hidden layers of 32 activations each; the group variable has 4 dimensions and the instance variable has 2 dimensions. We train each model for 64 epochs and use the last 10 epochs for evaluation. Additionally, we run each experiment with 100 different random seed initialisations, both for the data-generating process and for the networks. Confidence intervals are computed by resampling train-test splits, weight initialisations and sampling seeds. We use the same 100 seeds for each model. This gives 1000 measurements to report in Table 1.
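To make the conditioning concrete, here is a minimal PyTorch sketch of the instance encoder q(z|x, u) for the test-score setup (our own illustrative reconstruction; the dimensions follow the description above, and the names are hypothetical):

```python
import torch
import torch.nn as nn

class CxInstanceEncoder(nn.Module):
    # q(z | x, u): the observation is concatenated with the previously
    # sampled group code u, as in the CxVAE posterior above.
    def __init__(self, x_dim=2, u_dim=4, z_dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim))    # outputs mean and log-variance

    def forward(self, x, u):
        mu, logvar = self.net(torch.cat([x, u], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam. trick
        return z, mu, logvar
```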
5.2 EVALUATION METRICS We compare the models with respect to three criteria: 1) how well the model fits the holdout data, 2) how disentangled the representations inferred by the encoder are, and 3) how well the model can answer the counterfactual question "What colour would appear on the screen if this object were lit by a different spotlight?". We assess each criterion with a different evaluation metric.

Fitting the holdout data. As a general metric, we report the reconstruction error (MSE) on the holdout data for every experiment, commonly used as a proxy for the likelihood of the holdout set.

Disentangled representations. We use the Mutual Information Gap (Chen et al., 2018) to measure the quality of the disentanglement. We measure empirically the amount of mutual information between the inferred latent variables u, z and the ground-truth group-level factor u′. The goal is to have maximal mutual information between the group variable u and the ground truth u′, and minimal mutual information between the instance variable z and the ground truth u′. The gap between the two, normalised by the entropy of the ground-truth factor, is the metric of disentanglement:

MIG = (I(u; u′) − I(z; u′)) / H(u′)    (4)

Since the data-generating process is known, the mutual information between the inferred group variable and the ground-truth group variable, I(u; u′), is straightforward to compute by following the approach of Chen et al. (2018). We measure only the mutual information between the ground-truth group factor u′ and the latent variables because, as pointed out by Németh (2020), the common failure case we are trying to guard against in group disentanglement is that the instance variable z might capture information belonging to the ground-truth group factor u′.

Translation task. We measure how well the learned representations can answer the question "What would the score of student k from school n have been if they had attended the typical school (the school with scores distributed according to N(0, I₂))?". This problem, also known as translation, is a commonly used downstream task for disentangled representations (Tenenbaum & Freeman, 2000). We translate the score of student k to the typical school, and then take the mean squared error against the ground-truth translation, which is generated alongside the data using the ground-truth generative factors. For our dataset, the correct translation corresponds to the Earth-Mover distance between the multivariate normal distributions of scores in each school (Knott & Smith, 1984). To obtain a translation, we first infer the instance variable of student k from school n and the group variable of the typical school, and then feed the two variables to the decoder. For evaluation, we translate all the scores from each school n and measure the distance between the predicted translation and the ground-truth translation. We compute the total error as an average over all the translation errors. We can also use translation as an additional qualitative comparison between CxVAE and other group disentanglement methods, as seen in Figure 3.

Table 1 (student test-score results):
Model | MSE (lower is better) | MIG (higher is better) | Translation error (lower is better)
GVAE (Hosoya, 2019) | 0.41 ± 0.01 | 0.04 ± 0.02 | 0.49 ± 0.01
AdaGVAE (Locatello et al., 2020) | 0.41 ± 0.01 | 0.05 ± 0.02 | 0.49 ± 0.01
COCO-FUNIT (Saito et al., 2020) | 0.40 ± 0.02 | 0.04 ± 0.01 | 0.49 ± 0.02
CxVAE (ours) | 0.35 ± 0.02 | 0.44 ± 0.08 | 0.37 ± 0.02
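A simple histogram-based estimate of the MIG in eq. (4) could look as follows; this is a sketch assuming one-dimensional codes (the full estimator is described by Chen et al. (2018)):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(u_true, u_code, z_code, bins=20):
    # Discretise each variable into histogram bins, then compare the
    # mutual information of the group code and the instance code with
    # the ground-truth group factor, normalised by its entropy H(u').
    def disc(a):
        edges = np.histogram(a, bins=bins)[1]
        return np.digitize(a, edges[:-1])
    h = mutual_info_score(disc(u_true), disc(u_true))  # = entropy H(u')
    return (mutual_info_score(disc(u_code), disc(u_true))
            - mutual_info_score(disc(z_code), disc(u_true))) / h
```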
6 DISENTANGLING AMBIGUOUS OBSERVATIONS We show that our CxVAE is able to group-disentangle in a setting where conditional shift produces ambiguous observations. The dataset comprises images from the 3DIdent dataset (Zimmermann et al., 2021) depicting teapots of different colours lit by spotlights of different colours. Within a group, the images have the same spotlight colour and only the colour of the teapot varies. The goal is for the group representation to encode the spotlight colour and for the instance representation to encode the object colour. Disentangling between object colour and spotlight colour is useful for many real-world applications, such as object recognition. Once we have separated the group-level variation from the instance-level variation, we can use the instance representation as a low-level feature on which to train a classifier that predicts the colour of the object. The better the disentanglement, the better this predictor would be at identifying the true colour of the object. This is a difficult problem because the data exhibits conditional shift: the exact same image could have been generated by different combinations of spotlight colour and object colour, as can be seen in Figure 1. This makes it difficult to identify the true colour of the object from a single image. Indeed, all previous methods, which infer the instance representation solely from the current image, fail to learn disentangled representations of this dataset. The results in Table 1 show that our model, the CxVAE, produces representations that disentangle between spotlight colour and object colour much better than a representative selection of existing models: GVAE, AdaGVAE, and COCO-FUNIT. Our model's MIG score is much higher than those of the competing models, and the performance gap is greater than the 95% confidence interval of any one of these models.

7 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE Our CxVAE produces considerable improvements over the competing methods in terms of fitting the holdout set, disentangled representations, translation accuracy and predicting the generative factors (Table 1). While the scores of the existing methods cluster together, the gap between them and the CxVAE is larger than the 95% confidence interval of any method.

7.1 FAIR COMPARISONS BETWEEN STUDENTS Consider the task of fairly comparing students attending different schools based on their standardised test scores in maths and reading (Braun et al., 2006). The typical assumption in the literature is that each school has a similar distribution of aptitude among its students, so our goal is to learn disentangled student-level (instance) and school-level (group) representations of the scores. The instance representation z_{n,k} should reflect the aptitude of student k from school n independently of which school the student attended. This analysis is crucial for university admission boards aiming to judge students based on their aptitude regardless of the socio-economic circumstances associated with attending one school or another, such as affluence, location, or curriculum (Raudenbush & Willms, 1995; Braun et al., 2006). Group disentanglement has the potential to reduce the computational cost of this analysis compared to current variational inference methods for mixed-effects models (Gelman & Hill, 2006). The variational latent posterior of a GVAE can perform scalable inference of the student and school representations, requiring just one forward pass through the network for every additional student; contrast this with running a separate optimisation routine for the latent variables of each student (Gelman et al., 1995). GVAEs can also optimise highly non-linear generative models which would otherwise have to be designed explicitly for the problem at hand (Pinheiro & Bates, 2001). Current methods fail to learn disentangled representations on this data. Figure 3b shows the GVAE model (Hosoya, 2019) incorrectly translating the scores from one school to another. Translation is a well-established downstream task for evaluating disentanglement (Tenenbaum & Freeman, 2000).
In the case of test scores, translation corresponds to the counterfactual question "What score would student k from school n have obtained if they had attended the typical school (i.e. a school whose scores are distributed according to N(0, I₂))?". We answer it by generating a new score that combines the instance representation of student k from school n with the group representation of the typical school. Raudenbush & Willms (1995) use translation to directly compare students from different schools. Notice that the unconditional GVAE scrambles the order of the scores and fails to capture the N(0, I₂) distribution of the typical school. We propose a modification to the variational latent posterior that enables the GVAE to learn disentangled representations of test-score data. Looking at the data (Figure 3a), it is clear that we are dealing with the conditional shift scenario: the same reading score could be obtained by either a high-achieving student from school C or a low-achieving student from school B. Inferring the aptitude of a student requires knowing the distribution of scores within each school. In order to account for the variation across groups, we introduce the Context-Aware Variational Autoencoder (CxVAE), whose instance encoder is conditioned on the group representation (Figure 2-right). This reflects the correct factorisation of the generative latent posterior p(u_n, z_{n,1:K_n} | x_{n,1:K_n}), thus making no assumptions about the relationship between the group and the instance. We observe our model successfully translating test scores in Figure 3c.

7.2 DATASET We generate our dataset of test scores using the classic "varying intercept, varying slope" mixed-effects model (Laird & Ware, 1982; Pinheiro & Bates, 2001; Gelman & Hill, 2006). This is a well-established approach for modelling student scores x_{n,k} as a function of individual aptitude a_{n,k} and school-level characteristics (b_n, c_n) (e.g. affluence, curriculum, location, etc.) (Raudenbush & Willms, 1995; Braun et al., 2006). We choose this model for its simplicity and for the wide variety of phenomena to which it can be applied. All the scores and factors are 2-dimensional vectors, with one component for the maths score and another for the reading score:

x_{n,k} = b_n + c_n ⊙ a_{n,k} + ε_{n,k} is the score of student k in school n,
a_{n,k} ~ N(0, I₂) is the aptitude of student k in school n,
b_n ~ N(0, I₂) is the mean score in school n,
c_n ~ Exp(1) is the standard deviation of scores in school n,
ε_{n,k} ~ N(0, 0.1 · I₂) is a per-student error term.    (5)

For the evaluation procedure, we use the above model to generate N = 32,768 values for b_n, c_n. For each school n, we generate M = 128 values for a_{n,k}. We then randomly select half of the schools to assemble a training dataset with 2,097,152 scores split across 16,384 schools. We take the other half of the schools to create the holdout dataset, so that every test school and student is unseen during training. We have chosen to evaluate on a synthetic dataset because it allows for fine control over the parameters of the data-generating process (especially relevant in Section 7.3) and because it enables us to measure the quality of disentanglement using the Mutual Information Gap (Chen et al., 2018).
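The generative process (5) is easy to reproduce; below is a NumPy sketch (our own reconstruction, reading the noise term as having covariance 0.1·I₂):

```python
import numpy as np

def make_scores(n_schools=32_768, n_students=128, seed=0):
    # Varying-intercept, varying-slope model of eq. (5).
    rng = np.random.default_rng(seed)
    b = rng.normal(size=(n_schools, 1, 2))           # school mean scores
    c = rng.exponential(size=(n_schools, 1, 2))      # school score spread
    a = rng.normal(size=(n_schools, n_students, 2))  # student aptitudes
    eps = np.sqrt(0.1) * rng.normal(size=a.shape)    # per-student noise
    x = b + c * a + eps                              # (schools, students, 2)
    return x, (a, b, c)
```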
7.3 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE We show, by modifying the data-generating process (5), that conditional shift explains the increased performance of the CxVAE. We insert a hyper-parameter \lambda to control the strength of the conditional shift; \lambda = 1 means the conditional shift stays the same as in the previous experiment, while \lambda = 0 means there is no conditional shift. Consider the case where the maths score depends only on the school, and the reading score depends only on the student. In this situation, the two generative factors can be easily disentangled, since the student aptitude can be inferred from the reading score and the school profile from the maths score. We use this as an extreme case of the absence of conditional shift, and insert a hyper-parameter \lambda into our data-generating process that moves continuously between this case and the original one. Our modified data-generating process is

x_{n,k} = [\lambda, 1]^\top \odot b_n + [1, \lambda]^\top \odot a_{n,k} \odot \left( c_n \mathbin{\hat{}} [\lambda, 1]^\top \right) + \epsilon_{n,k}, (6)

where \hat{} denotes the elementwise power operation, \odot denotes elementwise multiplication, and the generative factors (a_{n,k}, b_n, c_n, \epsilon_{n,k}) are sampled as in (5). This model has no conditional shift when \lambda = 0 because each ground-truth factor controls a separate component of the data. Inferring the student aptitude requires only the reading score and can ignore the school characteristics. When \lambda = 1, the problem exhibits conditional shift in exactly the same way as in (5). If our hypothesis is correct, namely that conditional shift causes the performance gap between the CxVAE and other group disentanglement methods, then the gap should decrease as \lambda approaches 0. The measurements displayed in Figure 4 confirm our expectations. For low values of \lambda, the performance of our CxVAE is evenly matched by the GVAE. As \lambda increases, the CxVAE metrics remain stable while the GVAE performance decreases substantially. It is clear that the degree of confounding in the dataset explains the performance gain that we see in the CxVAE. 8 CONCLUSIONS In this work, we show empirically that conditioning the instance encoder on the group variable produces group-disentangled representations on datasets with conditional shift. We also show that the strength of the conditional shift in the data-generating process determines the performance gap between our model and other group disentanglement methods. Our evaluation is run on the downstream task of extracting student aptitudes from a dataset of test scores grouped by school, a problem to which group-instance models have not been applied before. The main limitation of our work is that we perform the evaluation on a synthetic dataset of student scores rather than real data. Although this is a justifiable choice with respect to evaluation (it gives us access to the ground-truth values of the latent variables), future work should focus on evaluating on real-world datasets with conditional shift, such as user-item ratings (Koren et al., 2009).
1. What is the focus and contribution of the paper on context-aware variational autoencoders? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its modification of the previous C-VAE and its experimental setup? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are the questions raised by the reviewer regarding the proposed method, such as the addition of u_n to the distribution q and the implementation of the ELBO loss in real-world scenarios? 5. What are the limitations of the paper, including the lack of intuition, unclear proofs, and limited experimental scope?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes a context-aware variational autoencoder which modifies the structure of the previous C-VAE. The evaluation is on synthetic data only. Strengths And Weaknesses Strength: the work touches on a fundamental problem. Weakness: Only synthetic experiments are conducted. The VAE is only tested with MLPs. The generated data is low-dimensional and not very persuasive. Clarity, Quality, Novelty And Reproducibility The paper is clear, but lacking intuition. For example, why do we need to add u_n into the distribution q? Is there any intuition for doing that? What are the proof details for eqns. 8-10? Some equations are weird. Eqns. 3-4 are also the same equation. It is unclear how to implement the proposed ELBO loss in the real world. Which reparameterisation trick are you using? Why was the method only tested on synthetic data? How about high-dimensional real-world images? For example, the GVAE was tested on image data. It is conventional to show results on some real-world high-dimensional data.
ICLR
Title Group-Disentangling Conditional Shift Abstract We propose a novel group disentanglement method called the Context-Aware Variational Autoencoder (CxVAE). Our model can learn disentangled representations on datasets with conditional shift. This phenomenon occurs when the distribution of the instance-level latent variable z conditional on the input observation x, p(z|x), changes from one group to another (i.e. p_i(z|x) ≠ p_j(z|x), where i, j are two different groups). We show that existing methods fail to learn disentangled representations under this scenario because they infer the group u and instance z representations separately. CxVAE overcomes this limitation by conditioning the instance inference on the group variable, q(z|x, u). Our model has the novel ability to disentangle ambiguous observations (those with incomplete information about the generative factors), which we evaluate on an image dataset. Additionally, we use a fair-comparisons task to demonstrate empirically that conditional shift is the cause of our model's improved performance. 1 INTRODUCTION Group disentanglement is the goal of learning representations that separate group-level variation from instance-level variation. Consider a dataset of observations organised into N groups of the form x_{n,1:K_n} = {x_{n,1}, ..., x_{n,K_n}}, n ∈ 1:N. These could be pictures grouped by author, clinical outcomes grouped by patient, or film ratings grouped by user. We train a representation network r(x_n) that encodes a group of observations {x_{n,1}, ..., x_{n,K_n}} into one group code u_n and a set of instance codes {z_{n,1}, ..., z_{n,K_n}}, one for each observation. We want u to capture only the variation across groups and z only the variation within groups. The current state-of-the-art approaches for group disentanglement train the representation network r by using it as the variational latent posterior distribution in a Variational Autoencoder (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020). They assume a hierarchical generative model whereby the observation x_{n,k} is generated by combining a group latent variable u_n and an independent instance latent variable z_{n,k} (Figure 2-left). The standard setup involves training the variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) by maximising a lower bound on the data likelihood (Kingma & Welling, 2014; Rezende et al., 2014). In our work, we show that the variational latent posterior, as defined in existing models, is unsuited to datasets with conditional shift. This is a property of the data-generating process whereby the true conditional distribution of the instance latent variable z changes from one group to another, p_i(z|x) ≠ p_j(z|x), where i, j are two groups (Zhang et al., 2013; Gong et al., 2016). In our case, the conditional instance distribution for group i, which is p_i(z|x), corresponds to p(z_{i,k} | x_{i,k}, u_i), where k is a given instance in the group. Conditional shift occurs in many real-world datasets that we would like to group-disentangle. For example, in the 3DIdent dataset (Zimmermann et al., 2021), if we want to infer the colour of the teapot z_{n,k} based on an image of that teapot x_{n,k}, we should take into account the colour of the spotlight that illuminates the scene, u_n; differently coloured spotlights will make the same object appear different colours, as can be seen in Figure 1. Existing group-disentanglement methods, which infer the instance variable (teapot colour) independently of the group (spotlight colour), fail to disentangle the two colours.
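As a toy numerical illustration of this definition (hypothetical numbers, using a one-dimensional analogue of the test-score model of Section 7, not an example from the paper), the same observation implies a different instance value in each group:

```python
# Two hypothetical groups (schools) with different score distributions:
b = {"i": 0.0, "j": 1.0}  # group means
c = {"i": 0.5, "j": 2.0}  # group spreads

x = 1.0  # the same observed score under both groups
for g in ("i", "j"):
    z = (x - b[g]) / c[g]  # instance value implied by x = b + c * z
    print(f"group {g}: p(z | x={x}) is centred at {z:+.2f}")
# group i: p(z | x=1.0) is centred at +2.00
# group j: p(z | x=1.0) is centred at +0.00
```

An encoder that sees only x cannot distinguish these two cases; one that also sees the group code can.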
Existing VAE-based methods fail to disentangle in the conditional shift setting because they assume that the group and instance variables can be inferred independently of each other from the input observation. When defining the variational latent posterior, existing works based on the Group VAE (GVAE) (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020; Chen & Batmanghelich, 2020) assume that the group and instance variables are conditionally independent given the observations (Figure 2-middle):

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) \prod_{k=1}^{K_n} q(z_{n,k} | x_{n,k}). (1)

The limitations of this assumption have not been identified so far in the literature because the datasets used to test disentanglement, such as Shapes3D (Kim & Mnih, 2018), SmallNORB (LeCun et al., 2004), dSprites (Higgins et al., 2017), Cars3D (Reed et al., 2015), and MPI3D (Gondal et al., 2019), have the property that one image is always sufficient to accurately infer its latent variables. For example, we only require a single image from the MPI3D dataset to uniquely identify the colour, position, and rotation of the depicted object. In this work, we show that conditioning the instance encoder on the group latent vector enables the model to learn disentangled representations on datasets with conditional shift. 1. First, we show that only our method is able to correctly disentangle between object colour and spotlight colour in the 3DIdent dataset (Zimmermann et al., 2021), illustrated in Figure 1. 2. Then, we use the task of fair comparisons between student test scores (Figure 3) to show that the amount of conditional shift in the dataset determines the performance gap between our model and the other approaches. 2 RELATED WORK Group Disentanglement. This class of problems comes under different names: style-content disentanglement (Tenenbaum & Freeman, 2000), content-transformation disentanglement (Hosoya, 2019), and disentanglement with group supervision (Shu et al., 2020), to name a few. Recent work (Shu et al., 2020; Locatello et al., 2020) has contextualised group disentanglement as a subproblem of weakly-supervised disentanglement, where disentangled representations are learned with the help of non-datapoint supervision (e.g. grouping, ranking, restricted labelling). Early work in this area focused on separating between visual concepts (Kulkarni et al., 2015; Reed et al., 2015). This area has received renewed interest after the theoretical impossibility result of Locatello et al. (2019) and the identifiability proofs of Khemakhem et al. (2020) and Mita et al. (2021). A key aspect of recent weakly-supervised models is the interpretation of the grouping as a signal of similarity between datapoints (Chen & Batmanghelich, 2020). Conditioning on the Group Variable. While conditioning the instance encoder on the group variable is common in the areas of semi-supervised learning and fair representations (Kingma et al., 2014; Louizos et al., 2016), we are the first to apply it to unsupervised group disentanglement, where explicit group labels are not available. In the field of sequence disentanglement, state-of-the-art methods (Hsu et al., 2017; Denton & Birodkar, 2017; Li & Mandt, 2018) infer the instance variable (capturing shorter timescales) conditionally on the group variable (capturing longer timescales).
Recent works in weakly-supervised disentanglement (Shu et al., 2020; Locatello et al., 2019; Roeder et al., 2019) also condition the instance variable on the group, but their group variable is a discrete variable used for selection rather than a representation. It marks which units of the instance representation are common within the group and which are free to vary. We argue that this is not sufficient to account for the variation in p(z|x, u) produced by the conditional shift, so we include AdaGVAE (Locatello et al., 2020) in our evaluation for comparison (see Section 7). Conditional Shift. Our instance encoder conditioned on the group variable is a new strategy for dealing with conditional shift. This problem has been studied extensively in the context of supervised learning (Zhang et al., 2013; Gong et al., 2016). However, we are the first to explore the effect of conditional shift on unsupervised learning. Methods for mitigating the effects of conditional shift typically focus on learning domain-invariant representations (Ben-David et al., 2009). However, Zhao et al. (2019) show that learning a domain-invariant representation is not sufficient for learning a correct mapping between instance variables from different groups. Image Translation. Note that our variational latent posterior is different from the one used in COCO-FUNIT (Saito et al., 2020). The authors are motivated by the same limitations of existing works as we are, namely that unsupervised translation methods struggle to disentangle under conditional shift. However, because they train explicitly for translation rather than disentanglement, they arrive at a different solution than ours. When performing a translation, their approach is to condition the representation of the target group on the source image, thereby bypassing the need for an accurate instance representation. This mechanism produces impressive results on image translation tasks, but it cannot be extended to models based on the GVAE, which do not train explicitly for translation; in our case, there are no source and target groups in the training set. Regardless, we evaluate COCO-FUNIT on the test-score dataset and show that our model outperforms it both in terms of disentanglement and translation. 3 BACKGROUND The Group-Instance Generative Model (Bouchacourt et al., 2018; Hosoya, 2019) is a multi-level model that uses two latent variables to generate grouped data: the instance variable z_{n,k} \sim \mathcal{N}(0, I) controls the variation within groups, and the group variable u_n \sim \mathcal{N}(0, I) controls the variation across groups (Figure 2-left). The likelihood of a group x_{n,1:K_n} is:

p(x_{n,1:K_n}) = E_{p(u_n)} \prod_{k=1}^{K_n} E_{p(z_{n,k})} [ p(x_{n,k} | u_n, z_{n,k}) ].

3.1 VARIATIONAL INFERENCE Because the exact likelihood is intractable, the standard approach to training the group-instance generative model is a Variational Autoencoder (Kingma & Welling, 2014; Rezende et al., 2014), which performs optimisation by introducing a variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) and maximising the Evidence Lower Bound (Jordan et al., 2004):

\log p(x_{n,1:K_n}) \geq E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ \sum_{k=1}^{K_n} \log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) || p(u_n, z_{n,1:K_n}) ]. (2)

Existing methods use a class of variational distributions that assume conditional independence between the latent variables:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) \prod_{k=1}^{K_n} q(z_{n,k} | x_{n,k}).

4 CONTEXT-AWARE VARIATIONAL AUTOENCODER We propose a new model which can perform well on group-confounded problems.
We call our model the Context-Aware Variational Autoencoder (CxVAE). Its defining feature is a variational latent posterior whose instance variable is conditioned on the group variable:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) \prod_{k=1}^{K_n} q(z_{n,k} | x_{n,k}, u_n).

Thus, the instance encoder is implemented as a network f which takes as input the concatenation of the observation x_{n,k} and the previously sampled group representation u_n:

q(z_{n,k} | x_{n,k}, u_n) = \mathcal{N}(\mu, \sigma), where (\mu, \sigma) = f(x_{n,k}, u_n).

This form of the variational distribution reflects the correct factorisation of the generative model. It has the potential to learn the true generative latent posterior, which is disentangled by definition. The Evidence Lower Bound for our model is

ELBO(x_{n,1:K_n}) = E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ \sum_{k=1}^{K_n} \log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n | x_{n,1:K_n}) || p(u_n) ] − E_{q(u_n | x_{n,1:K_n})} [ \sum_{k=1}^{K_n} KL[ q(z_{n,k} | x_{n,k}, u_n) || p(z_{n,k}) ] ]. (3)

5 EVALUATION METHODOLOGY We show that by making the instance encoder conditional on the inferred group variable, we obtain a considerable gain in translation accuracy and a marked improvement in disentanglement. We also demonstrate that the gap in performance between our model and other group disentanglement methods is caused by conditional shift in the data-generating process. 5.1 MODEL SETUP We compare our conditional CxVAE with the state of the art in group disentanglement, namely the Group VAE (Hosoya, 2019; Bouchacourt et al., 2018), COCO-FUNIT (Saito et al., 2020), and AdaGVAE (Locatello et al., 2020). As in Hosoya (2019), the group encoder is applied to each datapoint in the group and all the outputs are then averaged. For all experiments, our CxVAE is a modified GVAE in which the group variable u_n is concatenated with the observation x_{n,k} and fed into the instance encoder in order to compute the instance variable z_{n,k}. For sampling the variational latent posteriors, we use the standard reparametrisation trick. We use an Adam optimiser with a learning rate of 1e-4 and β1 = 0.9, β2 = 0.5. For the 3DIdent dataset, we implement all networks (encoders and decoders) as convolutional nets with 4 hidden layers and 64 filters each. Both latent variables have 16 latent dimensions. For the test-score dataset, we use MLPs with 3 hidden layers of 32 units each. The group variable has 4 dimensions and the instance variable has 2 dimensions. We train each model for 64 epochs, and use the last 10 epochs for evaluation. Additionally, we run the experiment for 100 different random seed initialisations, both for the data-generating process and the networks. Confidence intervals are computed by resampling train-test splits, weight initialisations and sampling seeds. We use the same 100 seeds for each model. This gives 1000 measurements for each entry in Table 1. 5.2 EVALUATION METRICS We compare the models with respect to 3 different criteria: 1) How well does the model fit the holdout data? 2) How disentangled are the representations inferred by the encoder? 3) How well can the model answer the counterfactual question "What colour would appear on the screen if this object were lit by a different spotlight?" We assess each criterion with a different evaluation metric. Fitting the holdout data. As a general metric, we report the reconstruction error (MSE) on the holdout data for every experiment, commonly used as a proxy for the likelihood of the holdout set. Disentangled representations. We use the Mutual Information Gap (Chen et al., 2018) to measure the quality of the disentanglement.
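As a concrete companion to the setup of Section 5.1, the following PyTorch-style sketch shows the CxVAE inference path of Section 4 with the test-score sizes. It is a hypothetical reading on our part, not the authors' implementation; all names are illustrative.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=32, depth=3):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, d_out))

class CxVAE(nn.Module):
    def __init__(self, x_dim=2, u_dim=4, z_dim=2):
        super().__init__()
        self.group_enc = mlp(x_dim, 2 * u_dim)         # stats of q(u_n | x_{n,1:K})
        self.inst_enc = mlp(x_dim + u_dim, 2 * z_dim)  # stats of q(z_{n,k} | x_{n,k}, u_n)
        self.dec = mlp(u_dim + z_dim, x_dim)           # mean of p(x_{n,k} | u_n, z_{n,k})

    @staticmethod
    def sample(stats):
        mu, log_var = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparametrisation

    def forward(self, x):                                # x: (K, x_dim), one group at a time
        u = self.sample(self.group_enc(x).mean(dim=0))   # average over the group, as in Hosoya (2019)
        ctx = u.expand(x.shape[0], -1)
        z = self.sample(self.inst_enc(torch.cat([x, ctx], dim=-1)))  # the CxVAE conditioning
        return self.dec(torch.cat([ctx, z], dim=-1)), u, z
```

Relative to a GVAE, the only change is the concatenation of u_n to the instance encoder's input; the ELBO of eq. (3) is then optimised as usual.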
We measure empirically the amount of mutual information between the inferred latent variables u, z and the ground-truth group-level factor u′. The goal is to have maximal mutual information between the group variable u and the ground-truth u′, and minimal mutual information between the instance variables z and the ground-truth u′. The gap between the two (normalised by the entropy of the ground-truth factor) is the metric of disentanglement:

MIG = \frac{1}{H(u')} \left( I(u; u') − I(z; u') \right). (4)

Since the data-generating process is known, the mutual information between the inferred group variable and the ground-truth group variable, I(u; u'), is straightforward to estimate by following the approach of Chen et al. (2018). We measure only the mutual information between the ground-truth group factor u′ and the latent variables because, as pointed out by Németh (2020), the common failure case we are trying to guard against in group disentanglement is that the instance variables z might learn information belonging to the ground-truth group factor u′. Translation task. We measure how well the learned representations can answer the question "What would the score of student k from school n have been if they had attended the typical school (the school with scores distributed according to N(0, I_2))?". This problem, also known as translation, is a commonly used downstream task for disentangled representations (Tenenbaum & Freeman, 2000). We translate the score of student k to the typical school, and then take the mean squared error against the ground-truth translation, which is generated alongside the data using the ground-truth generative factors. For our dataset, the correct translation is given by the optimal-transport (Earth-Mover) map between the multivariate normal score distributions of the two schools (Knott & Smith, 1984). In order to obtain the translation, we first infer the instance variable of student k from school n and the group variable of the typical school. We then feed the two variables to the decoder. For evaluation, we translate all the scores from each school n and then measure the distance between the predicted translation and the ground-truth translation. We compute the total error as an average over all the translation errors. We can also use translation as an additional qualitative comparison between the CxVAE and other group disentanglement methods, which can be seen in Figure 3.

Table 1: Results on the student test-score dataset (mean ± 95% confidence interval).

Model                              | Holdout MSE (lower is better) | MIG (higher is better) | Translation error (lower is better)
GVAE (Hosoya, 2019)                | 0.41 ± 0.01                   | 0.04 ± 0.02            | 0.49 ± 0.01
AdaGVAE (Locatello et al., 2020)   | 0.41 ± 0.01                   | 0.05 ± 0.02            | 0.49 ± 0.01
COCO-FUNIT (Saito et al., 2020)    | 0.40 ± 0.02                   | 0.04 ± 0.01            | 0.49 ± 0.02
CxVAE (ours)                       | 0.35 ± 0.02                   | 0.44 ± 0.08            | 0.37 ± 0.02

6 DISENTANGLING AMBIGUOUS OBSERVATIONS We show that our CxVAE is able to group-disentangle in a setting where conditional shift produces ambiguous observations. The dataset comprises images from the 3DIdent dataset (Zimmermann et al., 2021) depicting teapots of different colours being lit by spotlights of different colours. Within a group, the images have the same spotlight colour and only the colour of the teapot varies. The goal is for the group representation to encode the spotlight colour and for the instance representation to encode the object colour. Disentangling between object colour and spotlight colour is useful for many real-world applications, such as object recognition.
Once we have separated the group-level variation from the instance-level variation, we can use the instance representation as a low-level feature on which to train a classifier that predicts the colour of the object. The better the disentanglement, the better this predictor would perform at identifying the true colour of the object. This is a difficult problem because the data exhibits conditional shift. The same exact image could have been generated by a different combination of spotlight colour and object colour, as can be seen in Figure 1. This makes it difficult to identify the true colour of the object by looking at one single image. Indeed, all previous methods, which infer the instance representation solely from the current image, fail to learn disentangled representations of this dataset. The results in Table 1 show that our model, the CxVAE, produces representations that disentangle between spotlight colour and object colour far better than a representative selection of existing models: GVAE, AdaGVAE, and COCO-FUNIT. Our model's MIG score is much higher than the one produced by competing models, and the performance gap is greater than the 95% confidence interval for any one of these models. 7 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE Our CxVAE produces considerable improvements over the competing methods in terms of fitting the holdout set, disentangled representations, translation accuracy and predicting the generative factors (Table 1). While the scores of the existing methods cluster together, the gap between them and the CxVAE is larger than the 95% confidence interval of any method. 7.1 FAIR COMPARISONS BETWEEN STUDENTS Consider the task of fairly comparing students attending different schools based on their standardised test scores in maths and reading (Braun et al., 2006). The typical assumption in the literature is that each school has a similar distribution of aptitude among its students, so our goal is to learn disentangled student-level (instance) and school-level (group) representations of the scores. The instance representation z_{n,k} should reflect the aptitude of student k from school n independently of which school the student attended. This analysis is crucial for university admission boards aiming to judge students based on their aptitude regardless of the socio-economic circumstances associated with attending one school or another, such as affluence, location, or curriculum (Raudenbush & Willms, 1995; Braun et al., 2006). Group disentanglement has the potential to reduce the computational costs of this analysis compared to the current variational inference methods for mixed-effects models (Gelman & Hill, 2006). The variational latent posterior of a GVAE can perform scalable inference of the student and school representations, requiring just one forward pass through the network for every additional student. Contrast this with running a separate optimisation routine for the latent variables of each student (Gelman et al., 1995). GVAEs can also optimise highly non-linear generative models which would otherwise have to be designed explicitly for the problem at hand (Pinheiro & Bates, 2001). Current methods fail to learn disentangled representations on this data. Figure 3b shows the GVAE model (Hosoya, 2019) incorrectly translating the scores from one school to another. Translation is a well-established downstream task for evaluating disentanglement (Tenenbaum & Freeman, 2000).
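Translating an observation only requires swapping group codes at the decoder. Below is a minimal sketch of the procedure from Section 5.2, with untrained linear stand-ins for the trained CxVAE networks, and with the assumption on our part that the typical school's group code is the prior mean u = 0:

```python
import torch
import torch.nn as nn

x_dim, u_dim, z_dim = 2, 4, 2
group_enc = nn.Linear(x_dim, 2 * u_dim)         # stand-in for the trained group encoder
inst_enc = nn.Linear(x_dim + u_dim, 2 * z_dim)  # stand-in for the trained instance encoder
dec = nn.Linear(u_dim + z_dim, x_dim)           # stand-in for the trained decoder

@torch.no_grad()
def translate_to_typical(x_src):
    """Map one school's scores to the typical school N(0, I_2)."""
    u_src = group_enc(x_src).mean(0)[:u_dim]              # posterior mean of the source group
    ctx = u_src.expand(len(x_src), -1)                    # (first half of stats taken as the mean)
    z = inst_enc(torch.cat([x_src, ctx], -1))[:, :z_dim]  # posterior mean of each student
    u_typ = torch.zeros(len(x_src), u_dim)                # assumed code of the typical school
    return dec(torch.cat([u_typ, z], -1))

scores = torch.randn(128, x_dim)            # placeholder data for one school
translated = translate_to_typical(scores)   # compared to ground truth via MSE in Sec. 5.2
```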
In the case of test scores, translation corresponds to the counterfactual question "What score would student k from school A have obtained if they had attended the typical school (i.e. a school whose scores are distributed according to N(0, I_2))?". We answer it by generating a new score that combines the instance representation of student k from school n with the group representation of the typical school. Raudenbush & Willms (1995) use translation to directly compare students from different schools. Notice that the unconditional GVAE scrambles the order of the scores and fails to capture the N(0, I_2) distribution of the typical school. We propose a modification to the variational latent posterior in order to enable the GVAE to learn disentangled representations of test-score data. Looking at the data (Figure 3a), it is clear that we are dealing with the conditional shift scenario. The same reading score could be obtained by either a high-achieving student from school C or a low-achieving student from school B. Inferring the aptitude of the student requires knowing the distribution of scores within each school. In order to account for the variation across groups, we introduce the Context-Aware Variational Autoencoder (CxVAE), whose instance encoder is conditioned on the group representation (Figure 2-right). This reflects the correct factorisation of the generative latent posterior p(u_n, z_{n,1:K_n} | x_{n,1:K_n}), thus making no assumptions about the relationship between the group and the instance. We observe our variational model successfully translating test scores in Figure 3c. 7.2 DATASET We generate our dataset of test scores using the classic "varying intercept, varying slope" mixed-effects model (Laird & Ware, 1982; Pinheiro & Bates, 2001; Gelman & Hill, 2006). This is a well-established approach for modelling student scores x_{n,k} as a function of individual aptitude a_{n,k} and school-level characteristics (b_n, c_n) (e.g. affluence, curriculum, location, etc.) (Raudenbush & Willms, 1995; Braun et al., 2006). We choose this model for its simplicity and for the wide variety of phenomena to which it can be applied. All the scores and factors are 2-dimensional vectors, with one component for the maths score and another for the reading score:

x_{n,k} = b_n + c_n \odot a_{n,k} + \epsilon_{n,k} is the score of student k in school n,
a_{n,k} \sim \mathcal{N}(0, I_2) is the aptitude of student k in school n,
b_n \sim \mathcal{N}(0, I_2) is the mean score in school n,
c_n \sim \mathrm{Exp}(1) is the (elementwise) standard deviation of scores in school n,
\epsilon_{n,k} \sim \mathcal{N}(0, 0.1 \cdot I_2) is a per-student error term, (5)

where \odot denotes elementwise multiplication. For the evaluation procedure, we use the above model to generate N = 32,768 values for b_n, c_n. For each school n, we generate M = 128 values for a_{n,k}. We then randomly select half of the schools to assemble a training dataset with 2,097,152 scores split across 16,384 schools. We take the other half of the schools to create the holdout dataset, so that every testing school and student are unseen during training. We have chosen to evaluate on a synthetic dataset because it allows for fine control over the parameters of the data-generating process (especially relevant in Section 7.3) and it also enables us to measure the quality of disentanglement using the Mutual Information Gap (Chen et al., 2018).
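A NumPy sketch of this generator, based on our reconstruction of eq. (5), is given below. It also includes the conditional-shift strength λ introduced in the next subsection (eq. 6); lam = 1 recovers the process above. The seed and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def sample_scores(n_schools, n_students, lam=1.0):
    b = rng.normal(size=(n_schools, 1, 2))                 # b_n ~ N(0, I_2), school means
    c = rng.exponential(size=(n_schools, 1, 2))            # c_n ~ Exp(1), school spreads
    a = rng.normal(size=(n_schools, n_students, 2))        # a_{n,k} ~ N(0, I_2), aptitudes
    eps = np.sqrt(0.1) * rng.normal(size=a.shape)          # eps_{n,k} ~ N(0, 0.1 * I_2)
    m_b, m_a = np.array([lam, 1.0]), np.array([1.0, lam])  # masks of eq. (6); all ones at lam=1
    return m_b * b + m_a * a * c ** m_b + eps              # eq. (6); reduces to eq. (5) at lam=1

x = sample_scores(32_768, 128)               # as in Sec. 7.2
x_train, x_holdout = x[:16_384], x[16_384:]  # disjoint schools for train and holdout
```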
7.3 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE We show, by modifying the data-generating process (5), that conditional shift explains the increased performance of the CxVAE. We insert a hyper-parameter \lambda to control the strength of the conditional shift; \lambda = 1 means the conditional shift stays the same as in the previous experiment, while \lambda = 0 means there is no conditional shift. Consider the case where the maths score depends only on the school, and the reading score depends only on the student. In this situation, the two generative factors can be easily disentangled, since the student aptitude can be inferred from the reading score and the school profile from the maths score. We use this as an extreme case of the absence of conditional shift, and insert a hyper-parameter \lambda into our data-generating process that moves continuously between this case and the original one. Our modified data-generating process is

x_{n,k} = [\lambda, 1]^\top \odot b_n + [1, \lambda]^\top \odot a_{n,k} \odot \left( c_n \mathbin{\hat{}} [\lambda, 1]^\top \right) + \epsilon_{n,k}, (6)

where \hat{} denotes the elementwise power operation, \odot denotes elementwise multiplication, and the generative factors (a_{n,k}, b_n, c_n, \epsilon_{n,k}) are sampled as in (5). This model has no conditional shift when \lambda = 0 because each ground-truth factor controls a separate component of the data. Inferring the student aptitude requires only the reading score and can ignore the school characteristics. When \lambda = 1, the problem exhibits conditional shift in exactly the same way as in (5). If our hypothesis is correct, namely that conditional shift causes the performance gap between the CxVAE and other group disentanglement methods, then the gap should decrease as \lambda approaches 0. The measurements displayed in Figure 4 confirm our expectations. For low values of \lambda, the performance of our CxVAE is evenly matched by the GVAE. As \lambda increases, the CxVAE metrics remain stable while the GVAE performance decreases substantially. It is clear that the degree of confounding in the dataset explains the performance gain that we see in the CxVAE. 8 CONCLUSIONS In this work, we show empirically that conditioning the instance encoder on the group variable produces group-disentangled representations on datasets with conditional shift. We also show that the strength of the conditional shift in the data-generating process determines the performance gap between our model and other group disentanglement methods. Our evaluation is run on the downstream task of extracting student aptitudes from a dataset of test scores grouped by school, a problem to which group-instance models have not been applied before. The main limitation of our work is that we perform the evaluation on a synthetic dataset of student scores rather than real data. Although this is a justifiable choice with respect to evaluation (it gives us access to the ground-truth values of the latent variables), future work should focus on evaluating on real-world datasets with conditional shift, such as user-item ratings (Koren et al., 2009).
1. What is the focus and contribution of the paper regarding the parameterized posterior distribution? 2. What are the strengths and weaknesses of the proposed approach, particularly its performance on synthetic data? 3. Do you have any concerns about the evaluation metrics used in the paper, such as reconstruction error and translation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions raised by the reviewer regarding the definition of conditional shift and its relationship to the assumed data-generating process?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper makes a simple modification to the parameterized posterior distribution to allow for learning group representations and within-group instance representations when they are dependent conditional on the observed features: condition the instance representations on both the features and the group representation. As the paper puts it, this can handle conditional shift, where changing the group changes the instance representation for the same features. Strengths And Weaknesses Strengths: Simple modification to the posterior affords good advantages. Promising performance on synthetic data. Weaknesses: My main concern with the paper is acknowledged by the authors but nonetheless remains important: "The main limitation of our work is that we perform evaluation on a synthetic dataset of student scores rather than real data." I do not think such an evaluation can be avoided. Reconstruction error is the one metric that I can trust, and to me that only sounds like a part of the story in the paper. Translation additionally seems to be important, but I do not see it evaluated on real data. Important questions include: What is the point of the translation metric if it cannot be evaluated on real data? The authors say "Our model preserves the relative positions of the scores" in Figure 2. Is this something we desire naturally or something that comes out of an assumption? How can we guarantee relative positions of the features when translating without restrictions on q? What are the desiderata for disentanglement here, without stating the method? How should one evaluate them? The definition of conditional shift seems to say "changing the group changes the instance representation for the same features." This is a natural consequence of conditioning on the collider as in the first panel of Figure 1 (assuming a causal graph). Why call it conditional shift when it is a consequence of the assumed data-generating process? Clarity, Quality, Novelty And Reproducibility The paper is written well. The idea is simple and exists in prior work, but the novelty seems to be in using the group-representation conditioning to better learn instance representations.
ICLR
Title Group-Disentangling Conditional Shift Abstract We propose a novel group disentanglement method called the Context-Aware Variational Autoencoder (CxVAE). Our model can learn disentangled representations on datasets with conditional shift. This phenomenon occurs when the distribution of the instance-level latent variable z conditional on the input observation x, p(z|x), changes from one group to another (i.e. p_i(z|x) ≠ p_j(z|x), where i, j are two different groups). We show that existing methods fail to learn disentangled representations under this scenario because they infer the group u and instance z representations separately. CxVAE overcomes this limitation by conditioning the instance inference on the group variable, q(z|x, u). Our model has the novel ability to disentangle ambiguous observations (those with incomplete information about the generative factors), which we evaluate on an image dataset. Additionally, we use a fair-comparisons task to demonstrate empirically that conditional shift is the cause of our model's improved performance. 1 INTRODUCTION Group disentanglement is the goal of learning representations that separate group-level variation from instance-level variation. Consider a dataset of observations organised into N groups of the form x_{n,1:K_n} = {x_{n,1}, ..., x_{n,K_n}}, n ∈ 1:N. These could be pictures grouped by author, clinical outcomes grouped by patient, or film ratings grouped by user. We train a representation network r(x_n) that encodes a group of observations {x_{n,1}, ..., x_{n,K_n}} into one group code u_n and a set of instance codes {z_{n,1}, ..., z_{n,K_n}}, one for each observation. We want u to capture only the variation across groups and z only the variation within groups. The current state-of-the-art approaches for group disentanglement train the representation network r by using it as the variational latent posterior distribution in a Variational Autoencoder (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020). They assume a hierarchical generative model whereby the observation x_{n,k} is generated by combining a group latent variable u_n and an independent instance latent variable z_{n,k} (Figure 2-left). The standard setup involves training the variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) by maximising a lower bound on the data likelihood (Kingma & Welling, 2014; Rezende et al., 2014). In our work, we show that the variational latent posterior, as defined in existing models, is unsuited to datasets with conditional shift. This is a property of the data-generating process whereby the true conditional distribution of the instance latent variable z changes from one group to another, p_i(z|x) ≠ p_j(z|x), where i, j are two groups (Zhang et al., 2013; Gong et al., 2016). In our case, the conditional instance distribution for group i, which is p_i(z|x), corresponds to p(z_{i,k} | x_{i,k}, u_i), where k is a given instance in the group. Conditional shift occurs in many real-world datasets that we would like to group-disentangle. For example, in the 3DIdent dataset (Zimmermann et al., 2021), if we want to infer the colour of the teapot z_{n,k} based on an image of that teapot x_{n,k}, we should take into account the colour of the spotlight that illuminates the scene, u_n; differently coloured spotlights will make the same object appear different colours, as can be seen in Figure 1. Existing group-disentanglement methods, which infer the instance variable (teapot colour) independently of the group (spotlight colour), fail to disentangle the two colours.
Existing VAE-based methods fail to disentangle in the conditional shift setting because they assume that the group and instance variables can be inferred independently of each other from the input observation. When defining the variational latent posterior, existing works based on the Group VAE (GVAE) (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020; Chen & Batmanghelich, 2020) assume that the group and instance variables are conditionally independent given the observations (Figure 2-middle):

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) \prod_{k=1}^{K_n} q(z_{n,k} | x_{n,k}). (1)

The limitations of this assumption have not been identified so far in the literature because the datasets used to test disentanglement, such as Shapes3D (Kim & Mnih, 2018), SmallNORB (LeCun et al., 2004), dSprites (Higgins et al., 2017), Cars3D (Reed et al., 2015), and MPI3D (Gondal et al., 2019), have the property that one image is always sufficient to accurately infer its latent variables. For example, we only require a single image from the MPI3D dataset to uniquely identify the colour, position, and rotation of the depicted object. In this work, we show that conditioning the instance encoder on the group latent vector enables the model to learn disentangled representations on datasets with conditional shift. 1. First, we show that only our method is able to correctly disentangle between object colour and spotlight colour in the 3DIdent dataset (Zimmermann et al., 2021), illustrated in Figure 1. 2. Then, we use the task of fair comparisons between student test scores (Figure 3) to show that the amount of conditional shift in the dataset determines the performance gap between our model and the other approaches. 2 RELATED WORK Group Disentanglement. This class of problems comes under different names: style-content disentanglement (Tenenbaum & Freeman, 2000), content-transformation disentanglement (Hosoya, 2019), and disentanglement with group supervision (Shu et al., 2020), to name a few. Recent work (Shu et al., 2020; Locatello et al., 2020) has contextualised group disentanglement as a subproblem of weakly-supervised disentanglement, where disentangled representations are learned with the help of non-datapoint supervision (e.g. grouping, ranking, restricted labelling). Early work in this area focused on separating between visual concepts (Kulkarni et al., 2015; Reed et al., 2015). This area has received renewed interest after the theoretical impossibility result of Locatello et al. (2019) and the identifiability proofs of Khemakhem et al. (2020) and Mita et al. (2021). A key aspect of recent weakly-supervised models is the interpretation of the grouping as a signal of similarity between datapoints (Chen & Batmanghelich, 2020). Conditioning on the Group Variable. While conditioning the instance encoder on the group variable is common in the areas of semi-supervised learning and fair representations (Kingma et al., 2014; Louizos et al., 2016), we are the first to apply it to unsupervised group disentanglement, where explicit group labels are not available. In the field of sequence disentanglement, state-of-the-art methods (Hsu et al., 2017; Denton & Birodkar, 2017; Li & Mandt, 2018) infer the instance variable (capturing shorter timescales) conditionally on the group variable (capturing longer timescales).
Recent works in weakly-supervised disentanglement (Shu et al., 2020; Locatello et al., 2019; Roeder et al., 2019) also condition the instance variable on the group, but their group variable is a discrete variable used for selection rather than a representation. It marks which units of the instance representation are common within the group and which are free to vary. We argue that this is not sufficient to account for the variation in p(z|x, u) produced by the conditional shift, so we include AdaGVAE (Locatello et al., 2020) in our evaluation for comparison (see Section 7). Conditional Shift. Our instance encoder conditioned on the group variable is a new strategy for dealing with conditional shift. This problem has been studied extensively in the context of supervised learning (Zhang et al., 2013; Gong et al., 2016). However, we are the first to explore the effect of conditional shift on unsupervised learning. Methods for mitigating the effects of conditional shift typically focus on learning domain-invariant representations (Ben-David et al., 2009). However, Zhao et al. (2019) show that learning a domain-invariant representation is not sufficient for learning a correct mapping between instance variables from different groups. Image Translation. Note that our variational latent posterior is different from the one used in COCO-FUNIT (Saito et al., 2020). The authors are motivated by the same limitations of existing works as we are, namely that unsupervised translation methods struggle to disentangle under conditional shift. However, because they train explicitly for translation rather than disentanglement, they arrive at a different solution than ours. When performing a translation, their approach is to condition the representation of the target group on the source image, thereby bypassing the need for an accurate instance representation. This mechanism produces impressive results on image translation tasks, but it cannot be extended to models based on the GVAE, which do not train explicitly for translation; in our case, there are no source and target groups in the training set. Regardless, we evaluate COCO-FUNIT on the test-score dataset and show that our model outperforms it both in terms of disentanglement and translation. 3 BACKGROUND The Group-Instance Generative Model (Bouchacourt et al., 2018; Hosoya, 2019) is a multi-level model that uses two latent variables to generate grouped data: the instance variable z_{n,k} \sim \mathcal{N}(0, I) controls the variation within groups, and the group variable u_n \sim \mathcal{N}(0, I) controls the variation across groups (Figure 2-left). The likelihood of a group x_{n,1:K_n} is:

p(x_{n,1:K_n}) = E_{p(u_n)} \prod_{k=1}^{K_n} E_{p(z_{n,k})} [ p(x_{n,k} | u_n, z_{n,k}) ].

3.1 VARIATIONAL INFERENCE Because the exact likelihood is intractable, the standard approach to training the group-instance generative model is a Variational Autoencoder (Kingma & Welling, 2014; Rezende et al., 2014), which performs optimisation by introducing a variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) and maximising the Evidence Lower Bound (Jordan et al., 2004):

\log p(x_{n,1:K_n}) \geq E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ \sum_{k=1}^{K_n} \log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) || p(u_n, z_{n,1:K_n}) ]. (2)

Existing methods use a class of variational distributions that assume conditional independence between the latent variables:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) \prod_{k=1}^{K_n} q(z_{n,k} | x_{n,k}).

4 CONTEXT-AWARE VARIATIONAL AUTOENCODER We propose a new model which can perform well on group-confounded problems.
We call our model the Context-Aware Variational Autoencoder (CxVAE). Its defining feature is a variational latent posterior whose instance variable is conditioned on the group variable:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) \prod_{k=1}^{K_n} q(z_{n,k} | x_{n,k}, u_n).

Thus, the instance encoder is implemented as a network f which takes as input the concatenation of the observation x_{n,k} and the previously sampled group representation u_n:

q(z_{n,k} | x_{n,k}, u_n) = \mathcal{N}(\mu, \sigma), where (\mu, \sigma) = f(x_{n,k}, u_n).

This form of the variational distribution reflects the correct factorisation of the generative model. It has the potential to learn the true generative latent posterior, which is disentangled by definition. The Evidence Lower Bound for our model is

ELBO(x_{n,1:K_n}) = E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ \sum_{k=1}^{K_n} \log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n | x_{n,1:K_n}) || p(u_n) ] − E_{q(u_n | x_{n,1:K_n})} [ \sum_{k=1}^{K_n} KL[ q(z_{n,k} | x_{n,k}, u_n) || p(z_{n,k}) ] ]. (3)

5 EVALUATION METHODOLOGY We show that by making the instance encoder conditional on the inferred group variable, we obtain a considerable gain in translation accuracy and a marked improvement in disentanglement. We also demonstrate that the gap in performance between our model and other group disentanglement methods is caused by conditional shift in the data-generating process. 5.1 MODEL SETUP We compare our conditional CxVAE with the state of the art in group disentanglement, namely the Group VAE (Hosoya, 2019; Bouchacourt et al., 2018), COCO-FUNIT (Saito et al., 2020), and AdaGVAE (Locatello et al., 2020). As in Hosoya (2019), the group encoder is applied to each datapoint in the group and all the outputs are then averaged. For all experiments, our CxVAE is a modified GVAE in which the group variable u_n is concatenated with the observation x_{n,k} and fed into the instance encoder in order to compute the instance variable z_{n,k}. For sampling the variational latent posteriors, we use the standard reparametrisation trick. We use an Adam optimiser with a learning rate of 1e-4 and β1 = 0.9, β2 = 0.5. For the 3DIdent dataset, we implement all networks (encoders and decoders) as convolutional nets with 4 hidden layers and 64 filters each. Both latent variables have 16 latent dimensions. For the test-score dataset, we use MLPs with 3 hidden layers of 32 units each. The group variable has 4 dimensions and the instance variable has 2 dimensions. We train each model for 64 epochs, and use the last 10 epochs for evaluation. Additionally, we run the experiment for 100 different random seed initialisations, both for the data-generating process and the networks. Confidence intervals are computed by resampling train-test splits, weight initialisations and sampling seeds. We use the same 100 seeds for each model. This gives 1000 measurements for each entry in Table 1. 5.2 EVALUATION METRICS We compare the models with respect to 3 different criteria: 1) How well does the model fit the holdout data? 2) How disentangled are the representations inferred by the encoder? 3) How well can the model answer the counterfactual question "What colour would appear on the screen if this object were lit by a different spotlight?" We assess each criterion with a different evaluation metric. Fitting the holdout data. As a general metric, we report the reconstruction error (MSE) on the holdout data for every experiment, commonly used as a proxy for the likelihood of the holdout set. Disentangled representations. We use the Mutual Information Gap (Chen et al., 2018) to measure the quality of the disentanglement.
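One training step under this setup amounts to minimising the negative ELBO of eq. (3). Below is a self-contained, hypothetical sketch, with single-layer stand-ins for the MLPs of Section 5.1; it is not the authors' code.

```python
import torch
import torch.nn as nn

x_dim, u_dim, z_dim = 2, 4, 2
group_enc = nn.Linear(x_dim, 2 * u_dim)         # stats of q(u | x_{1:K}), averaged over the group
inst_enc = nn.Linear(x_dim + u_dim, 2 * z_dim)  # stats of q(z | x, u), the CxVAE conditioning
dec = nn.Linear(u_dim + z_dim, x_dim)           # mean of p(x | u, z)
params = [*group_enc.parameters(), *inst_enc.parameters(), *dec.parameters()]
opt = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.5))  # as reported in Sec. 5.1

def gauss_kl(mu, log_var):
    # KL[ N(mu, diag(exp(log_var))) || N(0, I) ]
    return 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum()

def step(x):  # x: (K, x_dim), one school per step
    u_mu, u_lv = group_enc(x).mean(0).chunk(2)
    u = u_mu + torch.randn_like(u_mu) * (0.5 * u_lv).exp()  # reparametrisation trick
    ctx = u.expand(len(x), -1)
    z_mu, z_lv = inst_enc(torch.cat([x, ctx], -1)).chunk(2, dim=-1)
    z = z_mu + torch.randn_like(z_mu) * (0.5 * z_lv).exp()
    x_hat = dec(torch.cat([ctx, z], -1))
    loss = (x - x_hat).pow(2).sum() + gauss_kl(u_mu, u_lv) + gauss_kl(z_mu, z_lv)
    opt.zero_grad(); loss.backward(); opt.step()            # minimise the negative ELBO, eq. (3)
    return loss.item()

print(step(torch.randn(128, x_dim)))  # one group of 128 "students"
```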
We measure empirically the amount of mutual information between the inferred latent variables u, z and the ground-truth group-level factor u′. The goal is to have maximal mutual information between the group variable u and the ground-truth u′, and minimal mutual information between the instance variables z and the ground-truth u′. The gap between the two (normalised by the entropy of the ground-truth factor) is the metric of disentanglement:

MIG = \frac{1}{H(u')} \left( I(u; u') − I(z; u') \right). (4)

Since the data-generating process is known, the mutual information between the inferred group variable and the ground-truth group variable, I(u; u'), is straightforward to estimate by following the approach of Chen et al. (2018). We measure only the mutual information between the ground-truth group factor u′ and the latent variables because, as pointed out by Németh (2020), the common failure case we are trying to guard against in group disentanglement is that the instance variables z might learn information belonging to the ground-truth group factor u′. Translation task. We measure how well the learned representations can answer the question "What would the score of student k from school n have been if they had attended the typical school (the school with scores distributed according to N(0, I_2))?". This problem, also known as translation, is a commonly used downstream task for disentangled representations (Tenenbaum & Freeman, 2000). We translate the score of student k to the typical school, and then take the mean squared error against the ground-truth translation, which is generated alongside the data using the ground-truth generative factors. For our dataset, the correct translation is given by the optimal-transport (Earth-Mover) map between the multivariate normal score distributions of the two schools (Knott & Smith, 1984). In order to obtain the translation, we first infer the instance variable of student k from school n and the group variable of the typical school. We then feed the two variables to the decoder. For evaluation, we translate all the scores from each school n and then measure the distance between the predicted translation and the ground-truth translation. We compute the total error as an average over all the translation errors. We can also use translation as an additional qualitative comparison between the CxVAE and other group disentanglement methods, which can be seen in Figure 3.

Table 1: Results on the student test-score dataset (mean ± 95% confidence interval).

Model                              | Holdout MSE (lower is better) | MIG (higher is better) | Translation error (lower is better)
GVAE (Hosoya, 2019)                | 0.41 ± 0.01                   | 0.04 ± 0.02            | 0.49 ± 0.01
AdaGVAE (Locatello et al., 2020)   | 0.41 ± 0.01                   | 0.05 ± 0.02            | 0.49 ± 0.01
COCO-FUNIT (Saito et al., 2020)    | 0.40 ± 0.02                   | 0.04 ± 0.01            | 0.49 ± 0.02
CxVAE (ours)                       | 0.35 ± 0.02                   | 0.44 ± 0.08            | 0.37 ± 0.02

6 DISENTANGLING AMBIGUOUS OBSERVATIONS We show that our CxVAE is able to group-disentangle in a setting where conditional shift produces ambiguous observations. The dataset comprises images from the 3DIdent dataset (Zimmermann et al., 2021) depicting teapots of different colours being lit by spotlights of different colours. Within a group, the images have the same spotlight colour and only the colour of the teapot varies. The goal is for the group representation to encode the spotlight colour and for the instance representation to encode the object colour. Disentangling between object colour and spotlight colour is useful for many real-world applications, such as object recognition.
Once we have separated the group-level variation from the instance-level variation, we can use the instance representation as a low-level feature on which to train a classifier that predicts the colour of the object. The better the disentanglement, the better this predictor would perform at identifying the true colour of the object. This is a difficult problem because the data exhibits conditional shift. The same exact image could have been generated by a different combination of spotlight colour and object colour, as can be seen in Figure 1. This makes it difficult to identify the true colour of the object by looking at one single image. Indeed, all previous methods, which infer the instance representation solely from the current image, fail to learn disentangled representations of this dataset. The results in Table 1 show that our model, the CxVAE, produces representations that disentangle between spotlight colour and object colour far better than a representative selection of existing models: GVAE, AdaGVAE, and COCO-FUNIT. Our model's MIG score is much higher than the one produced by competing models, and the performance gap is greater than the 95% confidence interval for any one of these models. 7 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE Our CxVAE produces considerable improvements over the competing methods in terms of fitting the holdout set, disentangled representations, translation accuracy and predicting the generative factors (Table 1). While the scores of the existing methods cluster together, the gap between them and the CxVAE is larger than the 95% confidence interval of any method. 7.1 FAIR COMPARISONS BETWEEN STUDENTS Consider the task of fairly comparing students attending different schools based on their standardised test scores in maths and reading (Braun et al., 2006). The typical assumption in the literature is that each school has a similar distribution of aptitude among its students, so our goal is to learn disentangled student-level (instance) and school-level (group) representations of the scores. The instance representation z_{n,k} should reflect the aptitude of student k from school n independently of which school the student attended. This analysis is crucial for university admission boards aiming to judge students based on their aptitude regardless of the socio-economic circumstances associated with attending one school or another, such as affluence, location, or curriculum (Raudenbush & Willms, 1995; Braun et al., 2006). Group disentanglement has the potential to reduce the computational costs of this analysis compared to the current variational inference methods for mixed-effects models (Gelman & Hill, 2006). The variational latent posterior of a GVAE can perform scalable inference of the student and school representations, requiring just one forward pass through the network for every additional student. Contrast this with running a separate optimisation routine for the latent variables of each student (Gelman et al., 1995). GVAEs can also optimise highly non-linear generative models which would otherwise have to be designed explicitly for the problem at hand (Pinheiro & Bates, 2001). Current methods fail to learn disentangled representations on this data. Figure 3b shows the GVAE model (Hosoya, 2019) incorrectly translating the scores from one school to another. Translation is a well-established downstream task for evaluating disentanglement (Tenenbaum & Freeman, 2000).
In the case of test scores, translation corresponds to the counterfactual question "What score would student k from school A have obtained if they had attended the typical school (i.e. a school whose scores are distributed according to N(0, I_2))?". We answer it by generating a new score that combines the instance representation of student k from school n with the group representation of the typical school. Raudenbush & Willms (1995) use translation to directly compare students from different schools. Notice that the unconditional GVAE scrambles the order of the scores and fails to capture the N(0, I_2) distribution of the typical school. We propose a modification to the variational latent posterior in order to enable the GVAE to learn disentangled representations of test-score data. Looking at the data (Figure 3a), it is clear that we are dealing with the conditional shift scenario. The same reading score could be obtained by either a high-achieving student from school C or a low-achieving student from school B. Inferring the aptitude of the student requires knowing the distribution of scores within each school. In order to account for the variation across groups, we introduce the Context-Aware Variational Autoencoder (CxVAE), whose instance encoder is conditioned on the group representation (Figure 2-right). This reflects the correct factorisation of the generative latent posterior p(u_n, z_{n,1:K_n} | x_{n,1:K_n}), thus making no assumptions about the relationship between the group and the instance. We observe our variational model successfully translating test scores in Figure 3c. 7.2 DATASET We generate our dataset of test scores using the classic "varying intercept, varying slope" mixed-effects model (Laird & Ware, 1982; Pinheiro & Bates, 2001; Gelman & Hill, 2006). This is a well-established approach for modelling student scores x_{n,k} as a function of individual aptitude a_{n,k} and school-level characteristics (b_n, c_n) (e.g. affluence, curriculum, location, etc.) (Raudenbush & Willms, 1995; Braun et al., 2006). We choose this model for its simplicity and for the wide variety of phenomena to which it can be applied. All the scores and factors are 2-dimensional vectors, with one component for the maths score and another for the reading score:

x_{n,k} = b_n + c_n \odot a_{n,k} + \epsilon_{n,k} is the score of student k in school n,
a_{n,k} \sim \mathcal{N}(0, I_2) is the aptitude of student k in school n,
b_n \sim \mathcal{N}(0, I_2) is the mean score in school n,
c_n \sim \mathrm{Exp}(1) is the (elementwise) standard deviation of scores in school n,
\epsilon_{n,k} \sim \mathcal{N}(0, 0.1 \cdot I_2) is a per-student error term, (5)

where \odot denotes elementwise multiplication. For the evaluation procedure, we use the above model to generate N = 32,768 values for b_n, c_n. For each school n, we generate M = 128 values for a_{n,k}. We then randomly select half of the schools to assemble a training dataset with 2,097,152 scores split across 16,384 schools. We take the other half of the schools to create the holdout dataset, so that every testing school and student are unseen during training. We have chosen to evaluate on a synthetic dataset because it allows for fine control over the parameters of the data-generating process (especially relevant in Section 7.3) and it also enables us to measure the quality of disentanglement using the Mutual Information Gap (Chen et al., 2018).
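The synthetic generator also yields exact translation targets for the metric of Section 5.2. For axis-aligned Gaussians, the optimal-transport (Earth-Mover) map from N(b, diag(c²)) to the typical school N(0, I_2) is per-component standardisation (Knott & Smith, 1984); a small sketch with hypothetical helper names:

```python
import numpy as np

def ground_truth_translation(x, b, c):
    """OT (Earth-Mover) map from N(b, diag(c**2)) to N(0, I_2):
    per-component standardisation."""
    return (x - b) / c

rng = np.random.default_rng(0)
b, c = rng.normal(size=2), rng.exponential(size=2)
x = b + c * rng.normal(size=(128, 2))          # one school's noiseless scores, per eq. (5)
targets = ground_truth_translation(x, b, c)    # distributed as N(0, I_2) by construction
```

The translation error reported in Table 1 is then the MSE between a model's translations and these targets.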
We insert a hyper-parameter λ to control the strength of the conditional shift; λ = 1 means the conditional shift stays the same as in the previous experiment, while λ = 0 means there is no conditional shift. Consider the case where the maths score only depends on the school, and the reading score only depends on the student. In this situation, the two generative factors can be easily disentangled since you can infer the student aptitude from the reading score and the school profile from the maths score. We use this as an extreme case of lack of conditional shift, and insert a hyper-parameter λ in our data-generating process that will continuously move between this case and the original case. Our modified data-generating process is xn,k = [ λ 1 ] bn + [ 1 λ ] an,k ( cn ·̂ [ λ 1 ]) + n,k (6) where ·̂ denotes the elementwise power operation and the generative factors (an,k,bn, cn, n,k) are sampled as in (5). This model has no conditional shift when λ = 0 because each ground-truth factor controls a separate component of the data. Inferring the student aptitude requires only the reading score and can ignore the school characteristics. When λ = 1, the problem exhibits conditional shift in exactly the same way as in (5). If our hypothesis is correct, that the conditional shift causes the performance gap between CxVAE and other group disentanglement methods, then the gap should decrease as λ approaches 0. The measurements displayed in Figure 4 confirm our expectations. For low values of λ the performance of our CxVAE is evenly matched to the GVAE. As λ increases, CxVAE metrics remain stable while GVAE performance decreases substantially. It is clear that the degree of confounding in the dataset explains the performance gain that we see in the CxVAE. 8 CONCLUSIONS In this work, we show empirically that conditioning the instance encoder on the group variable produces group-disentangled representations on datasets with conditional shift. We also show that the strength of the conditional shift in the data-generating process determines the performance gap between our model and other group disentanglement methods. Our evaluation is run on the downstream task of extracting student aptitudes from a dataset of test scorse grouped by school, a problem on which group-instance models have not been applied before. The main limitation of our work is that we perform evaluation on a synthetic dataset of student scores rather than real data. Although this is a justifiable choice with respect to evaluation (it gives us access to the ground-truth values of the latent variables), future work should focus on evaluating on real-world datasets with conditional shift, such as user-item ratings (Koren et al., 2009).
1. What is the focus of the paper regarding group disentanglement? 2. What are the strengths and weaknesses of the proposed Context-Aware Variational Autoencoder? 3. Do you have any concerns about the experiments conducted only on toy datasets? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the method description, particularly in Section 4, and the evaluation criteria used in Figures 3 and 4?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper tackles the problem of group disentanglement in the presence of conditional shift. This work proposes a new group disentanglement method called the Context-Aware Variational Autoencoder. Experiments on toy datasets show that the proposed method can significantly improve over existing methods. Strengths And Weaknesses Strength Paper tackles an important and relevant problem Weaknesses Results are presented only on a toy dataset. This is the biggest weakness of the paper. Moreover, since the data-generating process is also proposed in the paper, it is unclear whether the dataset is specially designed to show the failure modes of other methods, and whether those failure modes are present in real-world datasets. Since all the experiments are on toy datasets, the claims made in the abstract and introduction are overstated. For example, "Our model has the novel ability to disentangle ambiguous observations". There is no concrete evidence in the paper of when this will hold or how general the statement is. Tackling the problem of conditional shift is very general and ill-posed. It is unclear from the writing how the paper deals with the inherent underspecification. The method description in Section 4 is a bit thin. Equations 8-10 appear to be a bit out of place, and it is unclear how the text above these equations follows. Only the toy dataset is considered in the paper. A description of the evaluation criteria is missing. It is hard to understand the tasks considered in Figures 3 and 4. A reproducibility statement is not present, and no code is provided. Authors can use the 9th page in the main paper and additional appendices to provide those details. Clarity, Quality, Novelty And Reproducibility Clarity and Writing issues Overall, the writing in the introduction is not easy to follow. There is no clear flow between the problems tackled in the paper, the issues with existing works, and the contributions of the paper. The writing in the contribution bullets is a bit hard to follow. The first bullet, "We approach the task of learning fair representations of students from different schools/socio-economic backgrounds.", appears to be disconnected from the previous sentence. "conditional shift directly causes our model's improvement in performance over existing methods" is a bit misleading. This statement cannot be true in general without additional assumptions on what is not shifting. Moreover, the sentence structure is unnecessarily complex. Reproducibility concern Code or a detailed experimental setting is not provided in the paper. Moreover, no hyperparameter details are shared.
ICLR
Title Group-Disentangling Conditional Shift Abstract We propose a novel group disentanglement method called the Context-Aware Variational Autoencoder (CxVAE). Our model can learn disentangled representations on datasets with conditional shift. This phenomenon occurs when the distribution of the instance-level latent variable z conditional on the input observation x, p(z|x), changes from one group to another (i.e. p_i(z|x) ≠ p_j(z|x), where i, j are two different groups). We show that existing methods fail to learn disentangled representations under this scenario because they infer the group u and instance z representations separately. CxVAE overcomes this limitation by conditioning the instance inference on the group variable, q(z|x,u). Our model has the novel ability to disentangle ambiguous observations (those with incomplete information about the generative factors), which we evaluate on an image dataset. Additionally, we use a fair comparisons task to demonstrate empirically that conditional shift is the cause of our model's improved performance. 1 INTRODUCTION Group disentanglement is the goal of learning representations that separate group-level variation from instance-level variation. Consider a dataset of observations organised into N groups of the form x_{n,1:K_n} = {x_{n,1}, ..., x_{n,K_n}}, n ∈ 1:N. These could be pictures grouped by author, clinical outcomes grouped by patient, or film ratings grouped by user. We train a representation network r(x_n) that encodes a group of observations {x_{n,1}, ..., x_{n,K_n}} into one group code u_n and a set of instance codes {z_{n,1}, ..., z_{n,K_n}}, one for each observation. We want u to capture only the variation across groups and z only the variation within groups. The current state-of-the-art approaches for group disentanglement train the representation network r by using it as the variational latent posterior distribution in a Variational Autoencoder (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020). They assume a hierarchical generative model whereby the observation x_{n,k} is generated by combining a group latent variable u_n and an independent instance latent variable z_{n,k} (Figure 2-left). The standard setup involves training the variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) by maximising a lower bound to the data likelihood (Kingma & Welling, 2014; Rezende et al., 2014). In our work, we show that the variational latent posterior, as defined in existing models, is unsuited to datasets with conditional shift. This is a property of the data-generating process whereby the true conditional distribution of the instance latent variable z changes from one group to another: p_i(z|x) ≠ p_j(z|x), where i, j are two groups (Zhang et al., 2013; Gong et al., 2016). In our case, the conditional instance distribution for group i, which is p_i(z|x), corresponds to p(z_{i,k} | x_{i,k}, u_i), where k is a given instance in the group. Conditional shift occurs in many real-world datasets that we would like to group-disentangle. For example, in the 3DIdent dataset (Zimmermann et al., 2021), if we want to infer the colour of the teapot z_{n,k} based on an image of that teapot x_{n,k}, we should take into account the colour of the spotlight u_n that illuminates the scene; different coloured spotlights will make the same object appear different colours, as can be seen in Figure 1. Existing group-disentanglement methods, which infer the instance variable (teapot colour) independently of the group (spotlight colour), fail to disentangle the two colours.
Existing VAE-based methods fail to disentangle in the conditional shift setting because they assume that the group and instance variables can be inferred independently of each other from the input observation (Figure 2-middle). When defining the variational latent posterior, existing works based on the Group VAE (GVAE) (Bouchacourt et al., 2018; Hosoya, 2019; Németh, 2020; Chen & Batmanghelich, 2020) assume that the group and instance variables are conditionally independent given the observations:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) ∏_{k=1}^{K_n} q(z_{n,k} | x_{n,k}).   (1)

The limitations of this assumption have not been identified so far in the literature because the datasets used to test disentanglement, such as Shapes3D (Kim & Mnih, 2018), SmallNORB (LeCun et al., 2004), dSprites (Higgins et al., 2017), Cars3D (Reed et al., 2015), and MPI3D (Gondal et al., 2019), have the property that one image is always sufficient to accurately infer its latent variables. For example, we only require a single image from the MPI3D dataset to uniquely identify the colour, position, and rotation of the depicted object. In this work, we show that conditioning the instance encoder on the group latent vector enables the model to learn disentangled representations on datasets with conditional shift.
1. In the first instance, we show that only our method is able to correctly disentangle between object-colour and spotlight-colour in the 3DIdent dataset (Zimmermann et al., 2021), illustrated in Figure 1.
2. Then, we use the task of fair comparisons between student test scores (Figure 3) to show that the amount of conditional shift in the dataset determines the performance gap between our model and the other approaches.
2 RELATED WORK Group Disentanglement. This class of problems comes under different names: style-content disentanglement (Tenenbaum & Freeman, 2000), content-transformation disentanglement (Hosoya, 2019), and disentanglement with group supervision (Shu et al., 2020), to name a few. Recent work (Shu et al., 2020; Locatello et al., 2020) has contextualised group disentanglement as a subproblem of weakly-supervised disentanglement, where disentangled representations are learned with the help of non-datapoint supervision (e.g. grouping, ranking, restricted labelling). Early work in this area focused on separating between visual concepts (Kulkarni et al., 2015; Reed et al., 2015). This area has received renewed interest after the theoretical impossibility result of Locatello et al. (2019) and the identifiability proofs of Khemakhem et al. (2020) and Mita et al. (2021). A key aspect of recent weakly-supervised models is the interpretation of the grouping as a signal of similarity between datapoints (Chen & Batmanghelich, 2020). Conditioning on the Group Variable. While conditioning the instance encoder on the group variable is common in the areas of semi-supervised learning and fair representations (Kingma et al., 2014; Louizos et al., 2016), we are the first to apply it to unsupervised group disentanglement, where explicit group labels are not available. In the field of sequence disentanglement, state-of-the-art methods (Hsu et al., 2017; Denton & Birodkar, 2017; Li & Mandt, 2018) infer the instance variable (capturing shorter timescales) conditionally on the group variable (capturing longer timescales).
Recent works in weakly-supervised disentanglement (Shu et al., 2020; Locatello et al., 2019; Roeder et al., 2019) also condition the instance variable on the group, but their group variable is a discrete variable used for selection rather than a representation. It marks which units of the instance representation are common within the group and which are free to vary. We argue that this is not sufficient to account for the variation in p(z|x, u) produced by the conditional shift, so we include AdaGVAE (Locatello et al., 2020) in our evaluation for comparison (see Section 7). Conditional Shift. Our instance encoder conditioned on the group variable is a new strategy to deal with conditional shift. This problem has been studied extensively in the context of supervised learning (Zhang et al., 2013; Gong et al., 2016). However, we are the first to explore the effect of conditional shift on unsupervised learning. Methods for mitigating the effects of conditional shift typically focus on learning domain-invariant representations (Ben-David et al., 2009). However, Zhao et al. (2019) show that learning a domain-invariant representation is not sufficient for learning a correct mapping between instance variables from different groups. Image Translation. Note that our variational latent posterior is different from the one used in COCO-FUNIT (Saito et al., 2020). The authors are motivated by the same limitations with existing works as we are, namely that unsupervised translation methods struggle to disentangle under conditional shift. However, because they train explicitly for translation rather than disentanglement, they arrive at a different solution from ours. When performing a translation, their approach is to condition the representation of the target group on the source image, thereby bypassing the need for an accurate instance representation. This mechanism produces impressive results on image translation tasks, but it cannot be extended to models based on the GVAE, which do not train explicitly for translation; in our case, there are no source and target groups in the training set. Regardless, we evaluate COCO-FUNIT on the test score dataset and show that our model outperforms it both in terms of disentanglement and translation. 3 BACKGROUND The Group-Instance Generative Model (Bouchacourt et al., 2018; Hosoya, 2019) is a multi-level model that uses two latent variables to generate grouped data: the instance variable z_{n,k} ∼ N(0, 1) controls the variation within groups, and the group variable u_n ∼ N(0, 1) controls the variation across groups (Figure 2-left). The likelihood of a group x_{n,1:K_n} is:

p(x_{n,1:K_n}) = E_{p(u_n)} [ ∏_{k=1}^{K_n} E_{p(z_{n,k})} [ p(x_{n,k} | u_n, z_{n,k}) ] ]

3.1 VARIATIONAL INFERENCE Because the exact likelihood is intractable, the standard approach to train the group-instance generative model is with a Variational Autoencoder (Kingma & Welling, 2014; Rezende et al., 2014), which performs optimisation by introducing a variational latent posterior q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) and maximising the Evidence Lower Bound (Jordan et al., 2004):

log p(x_{n,1:K_n}) ≥ E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ Σ_{k=1}^{K_n} log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) || p(u_n, z_{n,1:K_n}) ]   (2)

Existing methods use a class of variational distributions that assume conditional independence between the latent variables:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) ∏_{k=1}^{K_n} q(z_{n,k} | x_{n,k})

4 CONTEXT-AWARE VARIATIONAL AUTOENCODER We propose a new model which can perform well on group-confounded problems.
We call our model the Context-Aware Variational Autoencoder (CxVAE). Its defining feature is a variational latent posterior whose instance variable is conditioned on the group variable:

q(u_n, z_{n,1:K_n} | x_{n,1:K_n}) = q(u_n | x_{n,1:K_n}) ∏_{k=1}^{K_n} q(z_{n,k} | x_{n,k}, u_n)

Thus, the instance encoder is implemented as a network which takes as input the concatenation of the observation x_{n,k} and the previously sampled group representation u_n:

q(z_{n,k} | x_{n,k}, u_n) = N(μ, σ), with (μ, σ) = f(x_{n,k}, u_n).

This form of the variational distribution reflects the correct factorisation of the generative model. It has the potential to learn the true generative latent posterior, which is disentangled by definition. The Evidence Lower Bound for our model is

ELBO(x_{n,1:K_n}) = E_{q(u_n, z_{n,1:K_n} | x_{n,1:K_n})} [ Σ_{k=1}^{K_n} log p(x_{n,k} | u_n, z_{n,k}) ] − KL[ q(u_n | x_{n,1:K_n}) || p(u_n) ] − E_{q(u_n | x_{n,1:K_n})} [ Σ_{k=1}^{K_n} KL[ q(z_{n,k} | x_{n,k}, u_n) || p(z_{n,k}) ] ].   (3)

5 EVALUATION METHODOLOGY We show that by making the instance encoder conditional on the inferred group variable, we obtain a considerable gain in translation accuracy and a marked improvement in disentanglement. We also demonstrate that the gap in performance between our model and other group disentanglement methods is caused by conditional shift in the data-generating process. 5.1 MODEL SETUP We compare our conditional CxVAE with the state of the art in group disentanglement, namely the Group VAE (Hosoya, 2019; Bouchacourt et al., 2018), COCO-FUNIT (Saito et al., 2020), and AdaGVAE (Locatello et al., 2020). As in Hosoya (2019), the group encoder is applied to each datapoint in the group and then all the outputs are averaged. For all experiments, our CxVAE is a modified GVAE such that the group variable u_n is concatenated with the observation x_{n,k} and fed into the instance encoder in order to compute the instance variable z_{n,k}. For sampling the variational latent posteriors, we use the standard reparametrisation trick. We use an Adam optimiser with a learning rate of 1e-4 and β1 = 0.9, β2 = 0.5. For the 3DIdent dataset, we implement all networks (encoders and decoders) as convolutional nets with 4 hidden layers and 64 filters each. Both latent variables have 16 latent dimensions. For the test-score dataset, we use MLPs with 3 hidden layers of 32 activations each. The group variable has 4 dimensions and the instance variable has 2 dimensions. We train each model for 64 epochs, and use the last 10 epochs for evaluation. Additionally, we run the experiment for 100 different random seed initialisations, both for the data-generating process and the networks. Confidence intervals are computed by resampling train-test splits, weight initialisations and sampling seeds. We use the same 100 seeds for each model. This gives 1000 measurements to plot in Table 1. 5.2 EVALUATION METRICS We compare the models with respect to 3 different criteria: 1) How well does the model fit the holdout data? 2) How disentangled are the representations inferred by the encoder? 3) How well can the model answer the counterfactual question "What colour would appear on the screen if this object were lit by a different spotlight?" We assess each criterion with a different evaluation metric. Fitting the holdout data. As a general metric, we report the reconstruction error (MSE) on the holdout data for every experiment, commonly used as a proxy for the likelihood of the holdout set. Disentangled representations. We use the Mutual Information Gap (Chen et al., 2018) to measure the quality of the disentanglement.
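To make the conditional factorisation above concrete, here is a minimal PyTorch sketch of a CxVAE-style encoder. The module names, layer sizes, and the mean-pooled group encoder are illustrative assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CxVAEEncoder(nn.Module):
    """Sketch of the CxVAE variational posterior: the group code u_n is inferred
    from the whole group, then each instance code z_{n,k} is inferred from the
    observation x_{n,k} concatenated with the sampled u_n."""

    def __init__(self, x_dim, u_dim=4, z_dim=2, hidden=32):
        super().__init__()
        self.group_net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * u_dim))
        self.instance_net = nn.Sequential(
            nn.Linear(x_dim + u_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_dim))

    @staticmethod
    def _sample(stats):
        # Reparametrisation trick: stats holds the concatenated (mean, log-variance).
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu), mu, logvar

    def forward(self, x_group):                          # x_group: (K, x_dim)
        # q(u_n | x_{n,1:K}): encode every datapoint, then average (as in the GVAE).
        u, u_mu, u_logvar = self._sample(self.group_net(x_group).mean(dim=0))
        # q(z_{n,k} | x_{n,k}, u_n): condition each instance on the sampled group code.
        u_rep = u.expand(x_group.shape[0], -1)
        z, z_mu, z_logvar = self._sample(
            self.instance_net(torch.cat([x_group, u_rep], dim=-1)))
        return u, z, (u_mu, u_logvar, z_mu, z_logvar)
```

The returned means and log-variances are what the KL terms of Equation 3 would consume during training.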
We measure empirically the amount of mutual information between the inferred latent variables u, z and the ground-truth group-level factor u'. Consequently, the goal is to have maximal mutual information between the group variable u and the ground-truth u', and minimal mutual information between the instance variables z and the ground-truth u'. The gap between the two (normalised with the entropy of the ground-truth factors) is the metric of disentanglement:

MIG = (1 / H(u')) ( I(u; u') − I(z; u') )   (4)

Since the data-generating process is known, the mutual information between the inferred group variable and the ground-truth group variable, I(u; u'), is straightforward to implement by following the approach from Chen et al. (2018). We measure only the mutual information between the ground-truth group factor u' and the latent variables because, as pointed out by Németh (2020), the common failure case we are trying to guard against in group disentanglement is that the instance variables z might learn information belonging to the ground-truth group factor u'. Translation task. We measure how well the learned representations can answer the question "What would the score of student k from school n have been if they had attended the typical school (the school with scores distributed according to N(0, I_2))?". This problem, also known as translation, is a commonly used downstream task for disentangled representations (Tenenbaum & Freeman, 2000). We translate the score of student k to the typical school, and then take the mean squared error against the ground-truth translation, which is generated when the data is generated using the ground-truth generative factors.

Table 1: Results on the student test-score dataset (columns follow the three metrics of Section 5.2).
Model                              Holdout MSE        MIG                Translation error
                                   (lower is better)  (higher is better) (lower is better)
GVAE (Hosoya, 2019)                0.41 ± 0.01        0.04 ± 0.02        0.49 ± 0.01
AdaGVAE (Locatello et al., 2020)   0.41 ± 0.01        0.05 ± 0.02        0.49 ± 0.01
COCO-FUNIT (Saito et al., 2020)    0.40 ± 0.02        0.04 ± 0.01        0.49 ± 0.02
CxVAE (ours)                       0.35 ± 0.02        0.44 ± 0.08        0.37 ± 0.02

For our dataset, the correct translation corresponds to the Earth-Mover distance between the multivariate normal distributions of scores in each school (Knott & Smith, 1984). In order to obtain the translation, we first infer the instance variable of student k from school n and the group variable of the typical school. We then feed the two variables to the decoder. For evaluation, we translate all the scores from each school n and then measure the distance between the predicted translation and the ground-truth translation. We compute the total error as an average over all the translation errors. We can also use translation as an additional qualitative comparison between CxVAE and other group disentanglement methods, which can be seen in Figure 3. 6 DISENTANGLING AMBIGUOUS OBSERVATIONS We show that our CxVAE is able to group-disentangle in a setting where conditional shift produces ambiguous observations. The dataset comprises images from the 3DIdent dataset (Zimmermann et al., 2021) depicting teapots of different colours being lit by spotlights of different colours. Within a group, the images have the same spotlight colour and only the colour of the teapot varies. The goal is for the group representation to encode the spotlight colour and for the instance representation to encode the object colour. Disentangling between object-colour and spotlight-colour is useful for many real-world applications, such as object recognition.
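As a rough reference, the sketch below estimates the gap in Equation 4 from samples. It is a simplified stand-in for the procedure of Chen et al. (2018): the particular MI and entropy estimators (scikit-learn's mutual_info_regression and a histogram entropy) are our own assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mig_score(u, z, u_true, bins=20):
    """Approximate MIG (Eq. 4): information about the ground-truth group factor
    u' carried by the group code u, minus that carried by the instance code z,
    normalised by an estimate of H(u'). All inputs: (n_samples, dim) arrays."""
    def mi(latent, factor):
        # Sum MI estimates over latent dims, averaged over the factor's dims.
        return np.mean([mutual_info_regression(latent, factor[:, j]).sum()
                        for j in range(factor.shape[1])])
    hist, _ = np.histogramdd(u_true, bins=bins)   # crude entropy estimate for H(u')
    p = hist.ravel() / hist.sum()
    h_true = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (mi(u, u_true) - mi(z, u_true)) / h_true
```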
Once we have separated the group-level variation from the instance-level variation, we can use the instance representation as a low-level feature on which to train a classifier that predicts the colour of the object. The better the disentanglement, the better this predictor would perform at identifying the true colour of the object. This is a difficult problem because the data exhibits conditional shift. The same exact image could have been generated by different combinations of spotlight colour and object colour, as can be seen in Figure 1. This makes it difficult to identify the true colour of the object by looking at just one single image. Indeed, all previous methods, which infer the instance representation solely from the current image, fail to learn disentangled representations of this dataset. The results in Table 1 show that our model, the CxVAE, produces representations that disentangle between spotlight colour and object colour much more than a representative selection of existing models: GVAE, AdaGVAE, and COCO-FUNIT. Our model's MIG score is much higher than the one produced by competing models, and the performance gap is greater than the 95% confidence interval for any one of these models. 7 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE Our CxVAE produces considerable improvements over the competing methods in terms of fitting the holdout set, disentangled representations, translation accuracy and predicting the generative factors (Table 1). While the scores of the existing methods cluster together, the gap between them and the CxVAE is larger than the 95% confidence interval of any method. 7.1 FAIR COMPARISONS BETWEEN STUDENTS Consider the task of fairly comparing students attending different schools based on their standardised test scores in maths and reading (Braun et al., 2006). The typical assumption in the literature is that each school has a similar distribution of aptitude among its students, so our goal is to learn disentangled student-level (instance) and school-level (group) representations of the scores. The instance representation z_{n,k} should reflect the aptitude of student k from school n independently of what school the student attended. This analysis is crucial for university admission boards aiming to judge students based on their aptitude regardless of the socio-economic circumstances associated with attending one school or another, such as affluence, location, or curriculum (Raudenbush & Willms, 1995; Braun et al., 2006). Group disentanglement has the potential to reduce the computational costs of this analysis compared to the current variational inference methods for mixed-effects models (Gelman & Hill, 2006). The variational latent posterior of a GVAE can perform scalable inference of the student and school representations, requiring just one forward pass through the network for every additional student. Contrast this with running a separate optimisation routine for the latent variables of each student (Gelman et al., 1995). GVAEs can also optimise highly non-linear generative models which would otherwise have to be designed explicitly for the problem at hand (Pinheiro & Bates, 2001). Current methods fail to learn disentangled representations on this data. Figure 3b shows the GVAE model (Hosoya, 2019) incorrectly translating the scores from one school to another. Translation is a well-established downstream task for evaluating disentanglement (Tenenbaum & Freeman, 2000).
In the case of test scores, translation corresponds to the counterfactual question "What score would student k from school A have obtained if they had attended the typical school (i.e. a school whose scores are distributed according to N(0, I_2))?". We answer it by generating a new score that combines the instance representation of student k from school n with the group representation of the typical school. Raudenbush & Willms (1995) use translation to directly compare students from different schools. Notice that the unconditional GVAE scrambles the order of the scores and fails to capture the N(0, I_2) distribution of the typical school. We propose a modification to the variational latent posterior in order to enable the GVAE to learn disentangled representations of test score data. Looking at the data (Figure 3a), it is clear that we are dealing with the conditional shift scenario. The same reading score could be obtained by either a high-achieving student from school C or a low-achieving student from school B. Inferring the aptitude of the student requires knowing the distribution of scores within each school. In order to account for the variation across groups, we introduce the Context-Aware Variational Autoencoder (CxVAE), whose instance encoder is conditioned on the group representation (Figure 2-right). This reflects the correct factorisation of the generative latent posterior p(u_n, z_{n,1:K_n} | x_{n,1:K_n}), thus making no assumptions about the relationship between the group and the instance. We observe our variational model successfully translating test scores in Figure 3c. 7.2 DATASET We generate our dataset of test scores using the classic "varying intercept, varying slope" mixed-effects model (Laird & Ware, 1982; Pinheiro & Bates, 2001; Gelman & Hill, 2006). This is a well-established approach for modelling student scores x_{n,k} as a function of individual aptitude a_{n,k} and school-level characteristics (b_n, c_n) (e.g. affluence, curriculum, location, etc.) (Raudenbush & Willms, 1995; Braun et al., 2006). We choose this model for its simplicity and for the wide variety of phenomena to which it can be applied. All the scores and factors are 2-dimensional vectors, with one component for the maths score and another for the reading score:

x_{n,k} = b_n − c_n ⊙ a_{n,k} + ε_{n,k}   is the score of student k in school n,
a_{n,k} ∼ N(0, I_2)   is the aptitude of student k in school n,
b_n ∼ N(0, I_2)   is the mean score in school n,
c_n ∼ Exp(1)   is the standard deviation of scores in school n,
ε_{n,k} ∼ N(0, 0.1 · I_2)   is a per-student error term.   (5)

For the evaluation procedure, we use the above model to generate N = 32,768 values for b_n, c_n. For each school n, we generate M = 128 values for a_{n,k}. We then randomly select half of the schools to assemble a training dataset with 2,097,152 scores split across 16,384 schools. We take the other half of the schools to create the holdout dataset, so that every testing school and student is unseen during training. We have chosen to evaluate on a synthetic dataset because it allows for fine control over the parameters of the data-generating process (especially relevant in Section 7.3) and it also enables us to measure the quality of disentanglement using the Mutual Information Gap (Chen et al., 2018). 7.3 CONDITIONAL SHIFT CAUSES THE GAP IN PERFORMANCE We show, by modifying the data-generating process (5), that conditional shift explains the increased performance of the CxVAE.
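As a concrete reference, here is a minimal NumPy sketch of sampling from the mixed-effects model in Equation 5. The function name and array shapes are our own choices; the sign of the c ⊙ a term follows Eq. 5 as printed (since a_{n,k} is symmetric about zero, it is distributionally equivalent to a plus).

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_scores(n_schools=32_768, students_per_school=128):
    """Sample (maths, reading) test scores from the mixed-effects model of Eq. 5."""
    b = rng.normal(0.0, 1.0, size=(n_schools, 1, 2))                     # school mean b_n
    c = rng.exponential(1.0, size=(n_schools, 1, 2))                     # school std dev c_n
    a = rng.normal(0.0, 1.0, size=(n_schools, students_per_school, 2))   # aptitude a_{n,k}
    eps = rng.normal(0.0, np.sqrt(0.1), size=a.shape)                    # per-student noise
    x = b - c * a + eps        # Eq. 5 as printed; sign of c*a is distributionally immaterial
    return x, (a, b, c)

scores, (a, b, c) = generate_scores(n_schools=4, students_per_school=3)
print(scores.shape)   # (4, 3, 2): schools x students x (maths, reading)
```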
We insert a hyper-parameter λ to control the strength of the conditional shift; λ = 1 means the conditional shift stays the same as in the previous experiment, while λ = 0 means there is no conditional shift. Consider the case where the maths score only depends on the school, and the reading score only depends on the student. In this situation, the two generative factors can be easily disentangled, since one can infer the student aptitude from the reading score and the school profile from the maths score. We use this as an extreme case of lack of conditional shift, and insert a hyper-parameter λ in our data-generating process that continuously moves between this case and the original case. Our modified data-generating process is

x_{n,k} = [λ, 1] ⊙ b_n + [1, λ] ⊙ a_{n,k} ⊙ ( c_n ^∘ [λ, 1] ) + ε_{n,k}   (6)

where ⊙ denotes the elementwise product, ^∘ denotes the elementwise power operation, and the generative factors (a_{n,k}, b_n, c_n, ε_{n,k}) are sampled as in (5). This model has no conditional shift when λ = 0 because each ground-truth factor controls a separate component of the data. Inferring the student aptitude requires only the reading score and can ignore the school characteristics. When λ = 1, the problem exhibits conditional shift in exactly the same way as in (5). If our hypothesis is correct, namely that conditional shift causes the performance gap between CxVAE and other group disentanglement methods, then the gap should decrease as λ approaches 0. The measurements displayed in Figure 4 confirm our expectations. For low values of λ, the performance of our CxVAE is evenly matched with that of the GVAE. As λ increases, the CxVAE metrics remain stable while the GVAE performance decreases substantially. It is clear that the degree of confounding in the dataset explains the performance gain that we see in the CxVAE. 8 CONCLUSIONS In this work, we show empirically that conditioning the instance encoder on the group variable produces group-disentangled representations on datasets with conditional shift. We also show that the strength of the conditional shift in the data-generating process determines the performance gap between our model and other group disentanglement methods. Our evaluation is run on the downstream task of extracting student aptitudes from a dataset of test scores grouped by school, a problem to which group-instance models have not been applied before. The main limitation of our work is that we perform evaluation on a synthetic dataset of student scores rather than real data. Although this is a justifiable choice with respect to evaluation (it gives us access to the ground-truth values of the latent variables), future work should focus on evaluating on real-world datasets with conditional shift, such as user-item ratings (Koren et al., 2009).
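Continuing the sketch above (reusing np and rng from it), a possible implementation of the λ-modified process in Equation 6; again, the function name and defaults are assumptions made for illustration.

```python
def generate_scores_lambda(lam, n_schools=1024, students_per_school=128):
    """Sample scores from Eq. 6. lam=0 removes the conditional shift (each
    ground-truth factor controls one score component); lam=1 reproduces the
    conditional shift of Eq. 5."""
    b = rng.normal(0.0, 1.0, size=(n_schools, 1, 2))
    c = rng.exponential(1.0, size=(n_schools, 1, 2))
    a = rng.normal(0.0, 1.0, size=(n_schools, students_per_school, 2))
    eps = rng.normal(0.0, np.sqrt(0.1), size=a.shape)
    w_b = np.array([lam, 1.0])   # [λ, 1] weighting on the school factor
    w_a = np.array([1.0, lam])   # [1, λ] weighting on the aptitude
    w_c = np.array([lam, 1.0])   # elementwise power applied to c_n
    return w_b * b + w_a * a * (c ** w_c) + eps

# Sweep λ from no conditional shift to full conditional shift, as in Figure 4.
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = generate_scores_lambda(lam)
```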
1. What is the focus of the paper regarding group disentanglement and conditional shifts? 2. What are the strengths and weaknesses of the proposed approach, particularly in handling conditional shifts? 3. Do you have any concerns about the method's application in high-dimensional problems? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a novel group disentanglement method that accounts for conditional shift in the dataset. The authors argue that under conditional shift, the group representation and instance representation cannot be inferred independently, since the instance distribution is confounded by the group identity. The proposal is to condition on the group variable while learning the individual representation. This paper claims to be the first in unsupervised group disentanglement to condition on group variables while learning the individual variables. Strengths And Weaknesses The major strengths of this paper can be summarized in the following aspects: This paper correctly points out the weakness of existing methods in group disentanglement when conditional shift exists. The idea that the group variables are confounders when inferring the individual variables is important to know. Learning disentangled representations to handle conditional shift is relatively new since, as the paper states, most methods focus on learning invariant representations under the shift. The synthetic examples are easy to follow, and they demonstrate the impact of conditional shift on existing group disentanglement methods. The major weaknesses of this paper can be summarized in the following aspects: The idea of conditioning/controlling on group variables when inferring the individual variables is not novel. It follows naturally from the definition of conditional shift, i.e. that the group-conditional distributions of instances differ. As the paper points out, it is widely used in semi-supervised learning. The innovation is its use in unsupervised learning and generative models. However, this is not sufficient to meet the bar for ICLR. One of the main concerns for the conditioning method is how it deals with high-dimensional problems. In a high-dimensional space, conditioning would restrict the set of samples that are available to the model in each group. Note that this method essentially learns a set of group-specific models. In the high-dimensional setting, the generative model needs many more samples before a robust inference result is obtained. This is partially why the conditional independence assumption was used, since it saves a lot of time in data generation. That being said, the experiments are too simple. They are low-dimensional examples, while high-dimensional datasets such as image data are mentioned but not tried. It is important to demonstrate the strengths and weaknesses of this method in the high-dimensional setting, especially the time for inference and the robustness of the inference. No code is provided. Although the method is not hard to implement, it would be better to have some code to support reproducibility. Clarity, Quality, Novelty And Reproducibility The paper is well written, with a clear demonstration of the problem and solution. The work is partially original, but there are many existing models using similar ideas. The code is not available, so reproducibility cannot be demonstrated.
ICLR
Title Learning Collision-free Latent Space for Bayesian Optimization Abstract Learning and optimizing a blackbox function is a common task in Bayesian optimization and experimental design. In real-world scenarios (e.g., tuning hyperparameters for deep learning models, synthesizing a protein sequence, etc.), these functions tend to be expensive to evaluate and often rely on high-dimensional inputs. While classical Bayesian optimization algorithms struggle in handling the scale and complexity of modern experimental design tasks, recent works attempt to get around this issue by applying neural networks ahead of the Gaussian process to learn a (low-dimensional) latent representation. We show that such learned representations often lead to collision in the latent space: two points with significantly different observations collide in the learned latent space. Collisions could be regarded as additional noise introduced by the traditional neural network, leading to degraded optimization performance. To address this issue, we propose Collision-Free Latent Space Optimization (CoFLO), which employs a novel regularizer to reduce the collision in the learned latent space and encourage the mapping from the latent space to the objective value to be Lipschitz continuous. CoFLO takes in pairs of data points and penalizes those too close in the latent space compared to their target space distance. We provide a rigorous theoretical justification for the regularizer by inspecting the regret of the proposed algorithm. Our empirical results further demonstrate the effectiveness of CoFLO on several synthetic and real-world Bayesian optimization tasks, including a case study for computational cosmic experimental design. 1 INTRODUCTION Bayesian optimization is a classical sequential optimization method and is widely used in various fields, including recommender systems, scientific experimental design, hyper-parameter optimization, etc. Many of these applications involve evaluating an expensive blackbox function; therefore the number of queries should be minimized. A common way to model the unknown function is via Gaussian processes (GPs) (Rasmussen and Williams, 2006). GPs have been extensively studied under the bandit setting, and have proven to be an effective approach for addressing a broad class of black-box function optimization problems. One of the key computational challenges for learning with GPs concerns optimizing the specific kernels used to model the covariance structures of GPs. As this optimization task depends on the dimension of the feature space, training a Gaussian process model is often prohibitively expensive for high-dimensional inputs. Meanwhile, Gaussian processes are not intrinsically designed to deal with structured input that has strong correlations among different dimensions, e.g., graphs and time sequences. Therefore, dimensionality reduction algorithms are needed to speed up the learning process. Recently, it has become popular to investigate GPs in the context of latent space models. As an example, deep kernel learning (Wilson et al., 2016) simultaneously learns a (low-dimensional) data representation and a scalable kernel via an end-to-end trainable deep neural network. In general, the neural network is trained to learn a simpler latent representation that has reduced dimension and the structural information already embedded for the Gaussian process.
Such a combination of neural network and Gaussian process could improve the scalability and extensibility of classical Bayesian optimization, but it also poses new challenges for the optimization task (Tripp et al., 2020). As we later demonstrate, one critical challenge brought by introducing the neural network is that the latent representation is prone to collisions: two points with significantly different observations can get too close in the latent space. The collision effect is especially evident when information is lost through dimensionality reduction, and/or when the training data is limited in size in Bayesian optimization. As illustrated in Figure 1, when passed through the neural network, data points with drastically different observations are mapped to close positions in the latent space. Such collisions could be regarded as additional noise introduced by the neural network. Although Bayesian optimization is known to be robust to mildly noisy observations, collision in the latent space could be harmful to the optimization performance, as it is non-trivial to explicitly model the collision in the acquisition function. In addition, the additional noise induced by the collision effect will further loosen the regret bound for classical Bayesian optimization algorithms (Srinivas et al., 2010). Overview of main results To mitigate the collision effect, we propose a novel regularization scheme which can be applied as a simple plugin amendment for latent space-based Bayesian optimization models. The proposed algorithm, namely Collision-Free Latent Space Optimization (CoFLO), leverages a regularized regression loss function to periodically optimize the latent space for Bayesian optimization. Concretely, our regularizer is encoded by a novel pairwise collision penalty function defined jointly on the latent space and the output domain. In order to mitigate the risk of collision in the latent space (and consequently boost the optimization performance), one can apply the regularizer uniformly to the latent space to minimize collisions. However, in Bayesian global optimization tasks, we seek to prioritize the regions close to the possible optimum, as collisions in these regions are more likely to mislead the optimization algorithm. Based on this insight, we propose an optimization-aware regularization scheme, where we assign a higher weight to the collision penalty on those pairs of points closer to the optimum region in the latent space. This algorithm, which we refer to as dynamically-weighted CoFLO, is designed to dynamically assess the importance of a collision during optimization. Compared to a uniform collision penalty over the latent space, the dynamic weighting mechanism demonstrates a drastic improvement over state-of-the-art latent space-based Bayesian optimization models. We summarize our key contributions below:
• We propose a novel regularization scheme, as a simple plugin amendment for latent space-based Bayesian optimization models. Our regularizer penalizes collisions in the latent space and effectively reduces the collision effect.
• We propose an optimization-aware dynamic weighting mechanism for adjusting the collision penalty to further improve the effectiveness of regularization for Bayesian optimization.
• We provide theoretical analysis for the performance of Bayesian optimization on the regularized latent space.
• We conducted an extensive empirical study on four synthetic and real-world datasets, including a real-world case study for cosmic experimental design, and demonstrate strong empirical performance for our algorithm.
2 RELATED WORK Bayesian optimization has demonstrated promising performance in various cost-sensitive global optimization tasks (Shahriari et al., 2016). However, due to its intrinsic computational limitation in the high-dimensional regime, its applicability has been restricted to relatively simple tasks. In this section, we provide a short survey of recent work in Bayesian learning designed to overcome the high-dimensionality challenge for both Bayesian optimization and regression tasks. Deep kernel learning Deep kernel learning (DKL) (Wilson et al., 2016) combines the power of the Gaussian process and that of the neural network by introducing a deep neural network g to learn a mapping g : X → Z from the input domain X to a latent space Z, and use the latent representation z ∈ Z as the input of the Gaussian process. The neural network g and a spectral mixture base kernel k form a scalable, expressive closed-form covariance kernel for Gaussian processes, denoted by k_DK(x_i, x_j) = k(g(x_i), g(x_j)). Despite encouraging results in numerous regression tasks, it remains unclear whether DKL is readily applicable to Bayesian optimization. One key difference between Bayesian regression and optimization tasks is the assumption on the accessibility of training data: Bayesian optimization often assumes limited access to labeled data, while DKL for regression relies on abundant access to data in order to train a deep kernel function. Another problem lies in the difference between the objective functions. While DKL focuses on improving the general regression performance, it does not specifically address the problem caused by collisions, which, as we later demonstrate in Section 3.3, could be harmful for sequential decision-making tasks. Representation learning and latent space optimization Aiming at improving the scalability and extensibility of the Gaussian process, various methods have been proposed to reduce the dimensionality of the original input. Djolonga et al. (2013) assume that only a subset of input dimensions varies and that the kernel is smooth (i.e. with bounded RKHS norm); under these assumptions, they recover the underlying subspace via low-rank matrix completion. Huang et al. (2015) use an autoencoder to learn a low-dimensional representation of the inputs to increase the GP's scalability in regression tasks. Snoek et al. (2015) further propose to learn a pre-trained encoder neural network before BO. Lu et al. (2018) learn a variational auto-encoder iteratively during sequential optimization to embed the structure of the input. The challenge in combining latent space learning with Bayesian optimization lies in the fact that a pre-trained neural network may not extract adequate information around the more promising region of the input space. Furthermore, the latent space could become outdated without continuous updates with the latest acquired observations. Tripp et al. (2020) propose to periodically retrain the neural network to learn a better latent space, in order to minimize the number of iterations needed for LSO. They claim that by prioritizing the loss of more promising data points in the original input space (i.e.
by assigning a higher weight to these data points), the model could focus more on learning high-value regions and allow a substantial extrapolation in the latent space to accelerate the optimization. However, such a framework does not explicitly deal with collisions in the latent space, which we found to be a key factor in the poor performance of modern latent space optimization algorithms. 3 PROBLEM STATEMENT In this section, we introduce the necessary notation and formally state the problem. We focus on the problem of sequentially optimizing the function f : X → R, where X ⊆ R^d is the input domain. In each round t, we pick a point x_t ∈ X, and observe the function value perturbed by an additive noise: y_t = f(x_t) + ε_t, with ε_t ∼ N(0, σ²) being i.i.d. Gaussian noise. Our goal is to maximize the sum of rewards Σ_{t=1}^T f(x_t) over T iterations, or equivalently, to minimize the cumulative regret R_T := Σ_{t=1}^T r_t, where r_t := max_{x∈X} f(x) − f(x_t) denotes the instantaneous regret of x_t. 3.1 BAYESIAN OPTIMIZATION Bayesian optimization typically employs Gaussian processes as the statistical tool for modeling the unknown objective function. The major advantage of using a GP is that it presents a computationally tractable solution to depict a sophisticated and consistent view across the space of all possible functions (Rasmussen and Williams, 2005), which allows closed-form posterior estimation in the function space. BO methods start with a prior on the black-box function. Upon observing new labels, BO then iteratively updates the posterior distribution in the function space, and maximizes an acquisition function measuring each point's contribution to finding the optimum, in order to select the next point for evaluation. Formally, in Bayesian optimization we assume that f follows a GP(m(x), k(x, x')), where m(x) is the mean function and k(x, x') is the kernel or covariance function. Throughout this paper, we use the squared exponential kernel, k_SE(x, x') = σ²_SE exp( −(x − x')² / (2l²) ), where the length scale l determines the length of the "wiggles" and the output variance σ²_SE determines the average distance of the function away from its mean. At iteration T, given the historically selected points A_T = {x_1, ..., x_T} and the corresponding noisy evaluations y_T = [y_1, ..., y_T], the posterior over f also takes the form of a GP, with mean μ_T(x), covariance k_T(x, x'), and variance σ²_T(x):

μ_T(x) = k_T(x)^T (K_T + σ² I)^{-1} y_T
k_T(x, x') = k(x, x') − k_T(x)^T (K_T + σ² I)^{-1} k_T(x')
σ²_T(x) = k_T(x, x)

where k_T(x) = [k(x_1, x), ..., k(x_T, x)]^T and K_T is the positive definite kernel matrix [k(x, x')]_{x,x'∈A_T}. After obtaining the posterior, one can compute the acquisition function α : X → R, which is used to select the next point to be evaluated. Various acquisition functions have been proposed in the literature, including popular choices such as the Upper Confidence Bound (UCB) (Srinivas et al., 2010) and Thompson sampling (TS) (Thompson, 1933). UCB uses the upper confidence bound α_UCB(x) = μ_t(x) + β_t^{1/2} σ_t(x), with β_t being the confidence coefficient, and enjoys a rigorous sublinear regret bound. TS usually outperforms UCB in practice and has been shown to enjoy a similar regret bound (Agrawal and Goyal, 2012). It samples a function f̃_t from the GP posterior, f̃_t ∼ GP(μ_t(x), k_t(x, x')), and then uses the sample as an acquisition function: α_TS(x) = f̃_t(x). Remark. Regret is commonly used as a performance metric for BO methods. In this work we focus on the simple regret r*_T = max_{x∈X} f(x) − max_{t<T} f(x_t) and the cumulative regret R_T = Σ_{t=1}^T r_t.
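For reference, here is a minimal NumPy sketch of the closed-form posterior update and the UCB rule above; the kernel hyperparameters, noise level, and toy data are placeholder assumptions, not values used in the paper.

```python
import numpy as np

def sq_exp_kernel(A, B, l=1.0, sigma_se=1.0):
    """Squared exponential kernel k(x, x') = sigma_SE^2 exp(-(x - x')^2 / (2 l^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_se ** 2 * np.exp(-d2 / (2 * l ** 2))

def gp_posterior(X_train, y_train, X_query, noise=0.1):
    """Posterior mean mu_T(x) and variance sigma_T^2(x) from Section 3.1."""
    K = sq_exp_kernel(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    K_s = sq_exp_kernel(X_train, X_query)      # columns are k_T(x) per query point
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = sq_exp_kernel(X_query, X_query).diagonal() \
        - np.einsum('ij,ik,kj->j', K_s, K_inv, K_s)
    return mu, var

def ucb(mu, var, beta=2.0):
    """GP-UCB acquisition alpha(x) = mu_t(x) + beta^{1/2} sigma_t(x)."""
    return mu + np.sqrt(beta) * np.sqrt(np.maximum(var, 0.0))

# Toy usage: pick the next query point from a candidate set.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5, 2))
y_train = np.sin(X_train).sum(axis=1)
X_cand = rng.uniform(-1, 1, size=(100, 2))
mu, var = gp_posterior(X_train, y_train, X_cand)
x_next = X_cand[np.argmax(ucb(mu, var))]
```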
3.2 LATENT SPACE OPTIMIZATION Recently, Latent Space Optimization (LSO) has been proposed to solve Bayesian optimization problems on complex input domains (Tripp et al., 2020). LSO first learns a latent space mapping g : X → Z to convert the input space X into the latent space Z. Then, it constructs an objective mapping h : Z → R such that f(x) ≈ h(g(x)), ∀x ∈ X. The latent space mapping g and base kernel k could be regarded as a deep kernel, denoted by k_nn(x, x') = k(g(x), g(x')). Thus, the actual input space for BO is the latent space Z and the objective function is h. With the acquisition function α_nn(x) := α(g(x)), it is unnecessary to compute an inverse mapping g^{-1} as discussed in Tripp et al. (2020), as BO could directly select x_t = argmax_{x∈X} α_nn(x) for all t ≤ T and evaluate f. In the meantime, BO can leverage the latent space mapping g, usually represented by a neural network, to effectively learn and optimize the target function h on a lower-dimensional input space. 3.3 THE COLLISION EFFECT OF LSO When the mapping g : X → Z is represented by a neural network, it may cause undesirable collisions between different input points in the latent space Z. Under the noise-free setting, we say there exists a collision in Z if ∃ x_i, x_j ∈ X such that g(x_i) = g(x_j) while |f(x_i) − f(x_j)| > 0. Such a collision could be regarded as additional (unknown) noise on the observations introduced by the neural network g. For a noisy observation y = f(x) + ε, we define a collision as follows: for ρ > 0, ∃ x_i, x_j ∈ D, |g(x_i) − g(x_j)| < ρ |y_i − y_j|. When the distance between a pair of points in the latent space is too small compared to their difference in the output space, the different output values for the collided points in the latent space could be interpreted as the effect of additional observation noise. In general, collisions could degrade the performance of LSO. Since the collision effect is a priori unknown, it is often challenging to deal with collisions in LSO, even if we regard them as additional observation noise and increase the (default) noise variance in the Gaussian process. Thus, it is necessary to mitigate the collision effect by directly restraining it in the representation learning phase. 4 COLLISION-FREE LATENT SPACE OPTIMIZATION In this section, we introduce Collision-Free Latent Space Optimization (CoFLO), an algorithmic framework designed to mitigate the collision effect. 4.1 OVERVIEW OF THE COFLO ALGORITHM The major challenge in restraining collisions in the latent space is that, unlike in the traditional regression problem, we cannot quantify them from a single point's observation. We can, however, quantify collisions by grouping pairs of data points and inspecting their corresponding observations. We define the collision penalty based on pairs of inputs, and further introduce a pair loss function to characterize the collision effect. Based on this pair loss, we propose a novel regularized latent space optimization algorithm[1], as summarized in Algorithm 1. The proposed algorithm first takes pair-wise inputs, concurrently feeds them into the same network, and then calculates the pair loss function. We demonstrate this process in Figure 2. Given a set of labeled data points, we can train the neural network to create an initial latent space representation[2], similar to DKL (Wilson et al., 2016). Once provided with the initial representation, we can then refine the latent space by running CoFLO and periodically updating the latent space (i.e.
updating the latent representation after collecting a batch of data points) to mitigate the collision effect as we collect more labels.

Algorithm 1 Collision-Regularized Latent Space Optimization (CoFLO)
1: Input: regularization weight ρ (cf. Equation 3), penalty parameter λ (cf. Equation 1), retrain interval T̃, importance weight parameter γ (cf. Equation 2), neural network M_0, base kernel K_0, prior mean μ_0, total time steps T
2: for t = 1 to T do
3:   x_t ← argmax_{x∈D} α(M_t(x))   ▷ maximize acquisition function
4:   y_t ← evaluation on x_t   ▷ update observation
5:   if t ≡ 0 (mod T̃) then
6:     M_{t+1}, K_{t+1} ← retrain M_t and K_t with the pair loss function L_{ρ,λ,γ}(M_t, K_t, D_t) as defined in Equation 3   ▷ periodical retrain
7:   end if
8: end for
9: Output: max_t y_t

[1] Note that we have introduced several hyper-parameters in the algorithm design; we defer our discussion on the choice of these parameters to Section 5.
[2] To obtain an initial latent space representation, the labels do not have to be exact and could be collected from a related task at cheaper cost.

4.2 COLLISION PENALTY In this subsection, we aim to quantify the collision effect based on the definition proposed in Section 3.3. As illustrated in Figure 2, we feed pairs of data points into the neural network and obtain their latent space representations. Apart from maximizing the GP's likelihood, we concurrently calculate the amount of collision on each pair, and penalize only if the value is positive. For x_i, x_j ∈ X, y_i = f(x_i) + ε_i and y_j = f(x_j) + ε_j are the corresponding observations, and z_i = g(x_i), z_j = g(x_j) are the corresponding latent space representations. We define the collision penalty as

p_ij = max( λ |y_i − y_j| − |z_i − z_j|, 0 )   (1)

where λ is a penalty parameter that controls the smoothness of the target function h : Z → R. 4.3 DYNAMIC WEIGHT Note that it is challenging to universally eliminate the collision effect by minimizing the collision penalty and the GP's regression loss; this is particularly true with a limited amount of training data. Fortunately, in optimization tasks it is often unnecessary to learn equally good representations for suboptimal regions. Therefore, we can dedicate more training resources to improving the learned latent space by focusing on the (potentially) near-optimal regions. Following this insight, we propose to use a weighted collision penalty function, which uses the objective values of each pair as importance weights in each iteration. Formally, for any pair ((x_i, z_i, y_i), (x_j, z_j, y_j)) in a batch of observation pairs D_t = {((x_m, z_m, y_m), (x_n, z_n, y_n))}_{m,n}, we define the importance-weighted penalty function as

p̃_ij = p_ij w_ij, with w_ij = e^{γ(y_i + y_j)} / Σ_{(m,n)∈D_t} e^{γ(y_m + y_n)}.   (2)

Here the importance weight parameter γ is used to control the aggressiveness of the weighting strategy. Combining the collision penalty and the regression loss of the GP, we define the pair loss function L as

L_{ρ,λ,γ}(M_t, K_t, D_t) = (1 / |D_t|²) Σ_{i,j∈D_t} [ (GP_{K_t}(M_t(x_i)) − y_i)² + (GP_{K_t}(M_t(x_j)) − y_j)² + ρ p̃_ij ],   (3)

Here, GP_{K_t}(M_t(x_i)) denotes the Gaussian process's posterior mean on x_i with kernel K_t and neural network M_t at timestep t. ρ denotes the regularization weight; as we demonstrate in Section 5, in practice we often choose ρ to keep the penalty at an order close to the regression loss. 4.4 THEORETICAL ANALYSIS In this subsection, we provide a theoretical justification for the collision-free regularizer by inspecting the effect of regularization on the regret bound of CoFLO.
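Before the analysis, here is one possible PyTorch rendering of the pair loss in Equations 1-3 on a single mini-batch, shown for concreteness. Treating every pair within a batch as D_t, along with the tensor shapes and the function name, are our own assumptions rather than the paper's specification.

```python
import torch

def pair_loss(z, y, y_pred, rho=1.0, lam=1.0, gamma=1.0):
    """Sketch of L_{rho,lambda,gamma} (Eq. 3) over all pairs in one batch.
    z: latent codes (B, d); y: noisy observations (B,); y_pred: GP posterior
    means GP_{K_t}(M_t(x)) evaluated at the batch points (B,)."""
    # Collision penalty (Eq. 1) for every pair (i, j).
    dy = torch.abs(y[:, None] - y[None, :])                  # |y_i - y_j|
    dz = torch.norm(z[:, None, :] - z[None, :, :], dim=-1)   # |z_i - z_j|
    p = torch.clamp(lam * dy - dz, min=0.0)
    # Dynamic importance weights (Eq. 2): a softmax over the pairwise sums y_i + y_j.
    s = y[:, None] + y[None, :]
    w = torch.softmax(gamma * s.flatten(), dim=0).reshape_as(s)
    # Pair loss (Eq. 3): per-pair regression errors plus the weighted penalty.
    sq = (y_pred - y) ** 2
    return (sq[:, None] + sq[None, :] + rho * w * p).mean()
```

During the periodic retrain in Algorithm 1, a loss of this form would be minimised with respect to both the network M_t and the kernel hyperparameters K_t.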
4.4 THEORETICAL ANALYSIS
In this subsection, we provide a theoretical justification for the collision-free regularizer by inspecting the effect of the regularization on the regret bound of CoFLO. We first connect the proposed collision penalty in Equation 1 to Lipschitz continuity, and then integrate it into the regret analysis to provide an improved regret bound.

Lipschitz continuity of the target function h. The collision penalty encourages Lipschitz continuity of h. Formally, the proposed regularization promotes learning a latent space in which, for all x_i, x_j ∈ D with z_i = g(x_i), z_j = g(x_j) ∈ Z,

λ|f(x_i) − f(x_j)| ≤ |g(x_i) − g(x_j)|.

The above inequality is exactly Lipschitz continuity of h, with Lipschitz constant 1/λ. Unlike typical smoothness assumptions in GPs, a function can be non-smooth and still Lipschitz continuous. Recently, Ahmed et al. (2019) leveraged the Lipschitz continuity of the objective function to propose improvements to common acquisition functions, and provided an improved regret bound both theoretically and empirically. In the following, we show that running GP-UCB on the collision-free latent space yields an improved regret bound:

Theorem 1. Let Z ⊂ [0, r]^d be compact and convex, with d ∈ N and r > 0, and let L ≥ 0. Suppose that the objective function h defined on Z is a sample from a GP with mean function zero and covariance function k(z, z'), and is Lipschitz continuous with Lipschitz constant L. Pick δ ∈ (0, 1) and define

β_t = 2 log(π²t²/(6δ)) + 2d log(L r d t²).

Running GP-UCB with this β_t, we obtain a regret bound of O*(√(dTγ_T)) with high probability. Precisely, with C_1 = 8/log(1 + σ^{−2}), we have

P[ R_T ≤ √(C_1 T β_T γ_T) + 2 ] ≥ 1 − δ,

where γ_T is the maximum information gain after T iterations, defined as γ_T := max_{A⊂Z, |A|=T} I(y_A; h_A).

Compared to Theorem 2 of Srinivas et al. (2010), which gives a regret bound under a sub-Gaussianity assumption on the derivative of the objective function, the second term of our β_t does not depend on δ. The coefficients are also smaller, as the deterministic bound on the derivative of h avoids a union bound.

Remark. The collision penalty encourages h to be Lipschitz continuous on the latent space. Ideally, when the collision penalty p_ij converges to zero for all pairs of data points in the latent space, we can claim that h is Lipschitz continuous with Lipschitz constant at most L = 1/λ. Applying Theorem 1 with this constant, we can tighten the regret bound by choosing a larger λ, i.e., a smaller Lipschitz constant for h. In practice, however, the observations are noisy and λ must be calibrated to the noise level: an overly aggressive λ lets observation noise force spurious separations in the latent space, while an overly weak penalty could make it difficult to learn a meaningful, collision-free representation.
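For intuition about the schedule in Theorem 1, here is a small sketch of β_t and the resulting UCB score at a latent point; the posterior mean and standard deviation would come from a fitted GP, which we leave abstract here.

```python
import math

def beta_t(t, d, r, L, delta):
    """Confidence coefficient from Theorem 1 (Lipschitz constant L)."""
    return (2 * math.log(math.pi**2 * t**2 / (6 * delta))
            + 2 * d * math.log(L * r * d * t**2))

def ucb_score(mu, sigma, t, d, r, L, delta):
    """GP-UCB acquisition value for a point with posterior (mu, sigma)."""
    return mu + math.sqrt(beta_t(t, d, r, L, delta)) * sigma

# A smaller Lipschitz constant L shrinks beta_t, and hence the bound.
print(beta_t(t=10, d=1, r=1.0, L=1.0, delta=0.1))
print(beta_t(t=10, d=1, r=1.0, L=0.5, delta=0.1))
```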
5 EXPERIMENTS
In this section, we empirically evaluate our algorithm on two synthetic blackbox function optimization tasks and two real-world optimization problems.

5.1 EXPERIMENTAL SETUP
We consider four baselines in our experiments. The rudimentary random selection algorithm (RS) indicates the task complexity. Three popular optimization algorithms, namely particle swarm optimization (PSO) (Miranda, 2018), the Tree-structured Parzen Estimator approach (TPE) (Bergstra et al., 2011), and standard Bayesian optimization (BO) (Nogueira, 2014), which uses a Gaussian process as the statistical model and the upper confidence bound (UCB) as its acquisition function, are tuned on each task. A further baseline is the sample-efficient LSO (SE LSO) algorithm, implemented following Tripp et al. (2020). We also compare the non-regularized latent space optimization (LSO), Collision-Free Latent Space Optimization (CoFLO), and the dynamically-weighted CoFLO (DW CoFLO) proposed in this paper.

The performance on each task is measured over a pool of 10,000 pre-collected data points. One crucial problem in practice is tuning the hyper-parameters. The hyper-parameters of the GP are tuned during the periodic retraining in the optimization process, by minimizing the loss function on a validation set. For all tasks we choose a simplistic neural network architecture M, due to the limited and expensive access to labeled data under the BO setting. The coefficient ρ is, in general, selected to keep the collision penalty at an order of magnitude similar to the GP regression loss. The parameter λ should be estimated from the first several sampled data points, so as to tolerate the additive noise in the evaluations. The parameter γ controls the aggressiveness of the importance weighting: while γ should not be too close to zero (which is equivalent to uniform weighting), an extremely high value could make the regularization overly biased; such a severe bias could allow a heavily collided representation in most of the latent space and degrade the effectiveness of the regularization. The choice is analogous to the inverse of the temperature parameter of the softmax in deep learning (Hinton et al., 2015). Here we use the first batch of observed samples to estimate the order of magnitude of the observations and choose an appropriate γ.

5.2 DATASETS AND RESULTS
We now evaluate CoFLO on two synthetic datasets and two real-world datasets. In all experiments, the input data points are mapped to a one-dimensional latent space via the neural network. We demonstrate the improvement brought by CoFLO's explicit collision mitigation in the lower-dimensional latent space in terms of average simple regret; median results and statistical tests are included in the appendix.

2D Rastrigin. The Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It was first proposed by Rastrigin (1974) and has served as a popular benchmark for evaluating Gaussian process regression algorithms (Cully et al., 2018). Formally, the 2D Rastrigin function is

f(x) = 10d + Σ_{i=1}^{d} [x_i² − 10 cos(2πx_i)],  d = 2.

For ease of comparison, we take −f(x) as the objective value, turning the task into a maximization problem. The neural network is pretrained on 100 data points. As illustrated in Figure 3a, CoFLO and DW CoFLO quickly reach the (near-)optimal region, while the baselines generally suffer a larger simple regret even after an excessive number of iterations.

Feynman III.9.52 Equation. Growing datasets have motivated purely data-driven analysis in physics. The dataset of 100 equations from the Feynman Lectures on Physics for symbolic regression tasks (Udrescu and Tegmark, 2020) can serve as a test set for data-driven analysis algorithms in physics. The equation III.9.52, which we use to test the optimization algorithms, is

ρ_γ = (p_d E_f t / (h/2π)) · sin²((ω − ω₀)t/2) / ((ω − ω₀)t/2)².

The equation takes 6 input variables and is reported to require at least 10³ data points for the regression task. The neural network is randomly initialized at the beginning. As illustrated in Figure 3b, in the first 100 iterations CoFLO and DW CoFLO behave similarly to random selection. After the first retraining at iteration 100, CoFLO and DW CoFLO approach the optimum at a much faster pace than the baselines; among them, DW CoFLO shows the faster reduction in simple regret.
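For reference, here is a sketch of the two synthetic objectives in the maximization form used above; treating the Feynman quantities as free inputs is our assumption.

```python
import numpy as np

def neg_rastrigin(x):
    """Negated 2D Rastrigin objective (maximization form; optimum at x = 0)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return -(10 * d + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def feynman_iii_9_52(p_d, E_f, t, h, w, w0):
    """Feynman III.9.52 with all six quantities treated as inputs."""
    u = (w - w0) * t / 2.0
    sinc_sq = np.sinc(u / np.pi) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
    return p_d * E_f * t / (h / (2 * np.pi)) * sinc_sq

print(neg_rastrigin([0.0, 0.0]))   # optimum value 0 (prints -0.0)
print(feynman_iii_9_52(1.0, 1.0, 1.0, 2 * np.pi, 2.0, 1.0))
```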
Supernova. Our first real-world task is maximum-likelihood inference over 3 cosmological parameters: the Hubble constant H₀ ∈ (60, 80), the dark matter fraction Ω_M ∈ (0, 1), and the dark energy fraction Ω_Λ ∈ (0, 1). The likelihood is given by the Robertson-Walker metric, which requires a one-dimensional numerical integration for each point in the dataset from Davis et al. (2007). The neural network is pretrained on one hundred data points. As illustrated in Figure 3c, the simple regret of SE LSO drops faster at the beginning, but later remains relatively stable and eventually ends at a level similar to LSO. These results demonstrate the efficiency of SE LSO in finding sub-optimal points; however, without collision reduction, SE LSO cannot outperform LSO in the long run, where both reach their limits. CoFLO and DW CoFLO demonstrate their robustness close to the optimum, as both steadily approach it; among them, DW CoFLO slightly outperforms CoFLO.

Redshift Distribution. The challenges in designing and optimizing cosmological experiments grow commensurately with their scale and complexity. Careful accounting of all the requirements and features of these experiments becomes increasingly necessary to achieve the goals of a given cosmic survey. SPOKES (SPectrOscopic KEn Simulation) is an end-to-end framework that can simulate all the operations and key decisions of a cosmic survey (Nord et al., 2016). It can be used for the design, optimization, and forecasting of any cosmic experiment. For example, some cosmic survey campaigns endeavor to observe populations of galaxies that exist at a specific range of redshifts (distances) from us. In this work, we use SPOKES to generate galaxies within a specified window of distances from Earth. We then minimize the Hausdorff distance between the desired redshift distribution and the distribution produced by the SPOKES simulation of a specific survey configuration. In our experiments, the neural network is pretrained on 200 data points. As illustrated in Figure 3d, the simple regret of SE LSO drops faster in the initial phase. However, once it approaches the (near-)optimal region, where the simple regret is approximately 0.15, it is caught up by both CoFLO and DW CoFLO and is eventually slightly outperformed. This result indicates that the collision problem has more impact as the algorithm gets close to the optimal region. Notice also that rudimentary BO eventually outperforms the non-regularized LSO, indicating that, without collision mitigation, the learned representation can worsen performance in the later stage, when the algorithm gets close to the optimum. In conclusion, collision mitigation as in CoFLO is necessary to further improve the late-stage performance of LSO, since collisions matter more in near-optimal regions.
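The Hausdorff-distance objective for the SPOKES task can be sketched with SciPy's directed Hausdorff distance; representing the two redshift distributions by finite samples is our simplification, and the sample values below are toy placeholders.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n, k), b (m, k)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

target = np.linspace(0.6, 0.8, 50).reshape(-1, 1)    # desired redshift window
simulated = np.random.uniform(0.5, 0.9, (200, 1))    # stand-in for SPOKES output
print(hausdorff(target, simulated))                  # objective to be minimized
```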
5.3 DISCUSSION
In general, our experimental results consistently demonstrate the robustness of our methods against collisions in the learned latent space. Our method outperforms all baselines; compared to the sample-efficient LSO, the dynamically-weighted CoFLO performs better in most cases and shows a steady capability to reach the optimum by explicitly mitigating collisions in the latent space, whereas the sample-efficient LSO may fail due to the collision problem.

6 CONCLUSION
We have proposed a novel regularization scheme for latent space-based Bayesian optimization. Our algorithm, CoFLO, addresses the collision problem induced by dimensionality reduction and improves the performance of latent space-based optimization algorithms. The regularization proves effective in mitigating the collision problem in the learned latent space, and can therefore boost the performance of Bayesian optimization in the latent space. We demonstrate strong empirical results for CoFLO on several synthetic and real-world datasets, and show that CoFLO is capable of dealing with high-dimensional input, which could be highly valuable for real-world experimental design tasks such as cosmological survey scheduling.

A REGRET BOUND FOR A LIPSCHITZ-CONTINUOUS OBJECTIVE FUNCTION
In this section, we provide the detailed proof of Theorem 1. We first modify Lemma 5.7 and Lemma 5.8 of Srinivas et al. (2010), since we assume deterministic Lipschitz continuity for h. As in their analysis, let Z_t ⊂ Z denote a discretization of Z used at time t. We choose a discretization Z_t of size (τ_t)^d such that

∀z ∈ Z, ‖z − [z]_t‖₁ ≤ rd/τ_t,   (4)

where [z]_t denotes the closest point in Z_t to z.

Lemma 1. Pick δ ∈ (0, 1) and set β_t = 2 log(π_t/δ) + 2d log(L r d t²), where Σ_{t≥1} π_t^{−1} = 1 and π_t > 0, and let τ_t = L r d t². Then

|h(z*) − µ_{t−1}([z*]_t)| ≤ β_t^{1/2} σ_{t−1}([z*]_t) + 1/t²  ∀t ≥ 1

holds with probability ≥ 1 − δ. Here z* := g(x*).

Proof. Using the Lipschitz continuity of h and Equation 4, we have, for all z ∈ Z, |h(z) − h([z]_t)| ≤ L r d / τ_t. By choosing τ_t = L r d t², we have |Z_t| = (L r d t²)^d and, for all z ∈ Z, |h(z) − h([z]_t)| ≤ 1/t². Applying Lemma 5.6 of Srinivas et al. (2010) then yields the claimed result.

Based on Lemma 5.5 of Srinivas et al. (2010) and Lemma 1, the following result is immediate.

Lemma 2. Pick δ ∈ (0, 1) and set β_t = 2 log(2π_t/δ) + 2d log(L r d t²), where Σ_{t≥1} π_t^{−1} = 1 and π_t > 0. Then, with probability ≥ 1 − δ, for all t ∈ N the instantaneous regret is bounded as

r_t ≤ 2β_t^{1/2} σ_{t−1}(z_t) + 1/t².

Proof. Applying a union bound with δ/2 to both Lemma 5.5 of Srinivas et al. (2010) and Lemma 1, we have, with probability ≥ 1 − δ,

r_t = h(z*) − h(z_t) ≤ β_t^{1/2} σ_{t−1}(z_t) + 1/t² + µ_{t−1}(z_t) − h(z_t) ≤ 2β_t^{1/2} σ_{t−1}(z_t) + 1/t²,

which completes the proof.

We are now ready to use Lemma 5.4 of Srinivas et al. (2010) and Lemma 2 to complete the proof of Theorem 1.

Proof. By Lemma 5.4 of Srinivas et al. (2010), with probability ≥ 1 − δ,

Σ_{t=1}^T 4β_t σ²_{t−1}(z_t) ≤ C_1 β_T γ_T  ∀T ≥ 1.

By the Cauchy-Schwarz inequality,

Σ_{t=1}^T 2β_t^{1/2} σ_{t−1}(z_t) ≤ √(C_1 T β_T γ_T)  ∀T ≥ 1.

Combining this with Lemma 2 and Σ_{t≥1} 1/t² = π²/6 ≤ 2 gives the stated bound. Finally, substituting π_t = π²t²/6 (so that Σ_{t≥1} π_t^{−1} = (6/π²) Σ_{t≥1} 1/t² = 1), Theorem 1 follows.

B VISUALIZATION OF THE COLLISION EFFECT IN LATENT SPACE
We demonstrate the collision effect in the latent space. We trained the same neural network on the Feynman dataset with 101 data points, which shows the latent space after two retrainings with the retrain interval set to 50 data points. The regularized variant employs DW CoFLO, with regularization parameter ρ = 1e5, penalty parameter λ = 1e−2, retrain interval T̃ = 50 as above, weighting parameter γ = 1e−2, and the squared exponential base kernel. The non-regularized variant employs LSO.

C SUPPLEMENTAL MATERIALS ON ALGORITHMIC DETAILS
C.1 ALGORITHMIC DETAILS ON NEURAL NETWORK ARCHITECTURE
As the main goal of our paper is to showcase the performance of a novel collision-free regularizer, we pick basic multi-layer dense neural networks as our architectures. For SPOKES, we use a 5-layer dense neural network. Its hidden layers consist of 16 neurons with Leaky ReLU nonlinearities, 8 neurons with Sigmoid nonlinearities, 4 neurons with Sigmoid nonlinearities, and 2 neurons with Sigmoid nonlinearities, respectively; each hidden layer also applies a 0.2 dropout rate, and the output layer applies a Leaky ReLU nonlinearity. For Supernova, Feynman, and Rastrigin 2D, we use a 4-layer dense neural network. Its hidden layers consist of 8 neurons with Sigmoid nonlinearities, 4 neurons with Leaky ReLU nonlinearities, and 2 neurons with Leaky ReLU nonlinearities, respectively; each hidden layer applies a 0.2 dropout rate, and the output layer also applies a Leaky ReLU nonlinearity. The neural networks are trained using Adam with a learning rate of 1e−2.
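A minimal PyTorch rendering of the dense architectures described above; the one-dimensional output follows from the one-dimensional latent space stated in Section 5.2, while anything beyond the stated widths, activations, dropout rate, and optimizer is left unspecified here.

```python
import torch.nn as nn

def spokes_encoder(input_dim):
    """5-layer dense encoder for SPOKES (hidden widths 16-8-4-2, 1D latent)."""
    return nn.Sequential(
        nn.Linear(input_dim, 16), nn.LeakyReLU(), nn.Dropout(0.2),
        nn.Linear(16, 8), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(8, 4), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(4, 2), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(2, 1), nn.LeakyReLU(),   # output layer with Leaky ReLU
    )

def small_encoder(input_dim):
    """4-layer dense encoder for Supernova, Feynman, and Rastrigin 2D."""
    return nn.Sequential(
        nn.Linear(input_dim, 8), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(8, 4), nn.LeakyReLU(), nn.Dropout(0.2),
        nn.Linear(4, 2), nn.LeakyReLU(), nn.Dropout(0.2),
        nn.Linear(2, 1), nn.LeakyReLU(),
    )

# Trained with Adam at learning rate 1e-2, as stated above, e.g.:
# torch.optim.Adam(spokes_encoder(6).parameters(), lr=1e-2)
```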
C.2 PARAMETER CHOICES
We further investigate the robustness of the choices of both the regularization parameter ρ and the penalty parameter λ on the SPOKES dataset. The results are shown in the figures below.

D ADDITIONAL EXPERIMENTAL RESULTS
We provide both the detailed median curves and the p-values of Welch's t-tests for the experiments discussed in Section 5.

D.1 MEDIAN CURVES
The median curves demonstrate trends similar to the mean curves. In all four experiments, DW CoFLO consistently demonstrates superior performance over the baselines.
Figure panels: (a) Rastrigin 2D, (b) Feynman III.9.52, (c) Supernova, (d) SPOKES.

D.2 P-VALUES
The table below reports the p-values of Welch's t-tests comparing DW CoFLO against each baseline; small values indicate a significant improvement of DW CoFLO over the corresponding method.

Data         | BO      | RS      | TPE     | LSO     | SE-LSO  | CoFLO
Rastrigin-2D | 1.07e-4 | 3.88e-8 | 1.01e-2 | 6.38e-3 | 1.10e-5 | 4.23e-1
Supernova    | 3.24e-3 | 3.61e-3 | 3.18e-2 | 3.43e-1 | 1.41e-8 | 2.62e-1
Feynman      | 1.73e-1 | 1.52e-7 | 8.20e-1 | 2.88e-1 | 6.37e-1 | 2.25e-1
SPOKES       | 4.62e-1 | 9.90e-3 | 2.64e-1 | 4.17e-2 | 2.87e-3 | 4.11e-1
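For completeness, the Welch's t-test reported in the table corresponds to SciPy's unequal-variance two-sample t-test; the regret samples below are toy placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
final_regret_dw = rng.normal(0.10, 0.02, size=10)   # toy DW CoFLO runs
final_regret_bo = rng.normal(0.18, 0.05, size=10)   # toy baseline runs

# Welch's t-test: two-sample t-test without the equal-variance assumption.
stat, p_value = ttest_ind(final_regret_dw, final_regret_bo, equal_var=False)
print(p_value)
```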
1. What is the main contribution of the paper regarding latent space collision penalties? 2. What are the strengths and weaknesses of the proposed method compared to prior works like siamese networks and triplet loss? 3. Do you have any concerns about the correctness of Equation (1) and the batched update of GP? 4. How does the reviewer assess the experimental settings and results of the paper? 5. What are the limitations of the paper regarding its lack of detail and theoretical analysis?
Review
Review summary
This paper proposes a method that penalizes collisions in the latent space. To be more concrete, for a model that combines a neural network and a Gaussian process, e.g. deep kernel learning, the learned latent features for two different inputs can be very close. In the subsequent GP modeling, these two similar latent features will cause difficulties, since the covariance between them will be large although the two inputs are quite different.

pros
The general idea presented in this work is very interesting. This is also a very realistic treatment, i.e. collisions in the latent space. Although not a new technique (e.g., siamese networks, triplet loss), I like the idea of penalizing close points in latent space combined with a GP. It implicitly incorporates prior knowledge into modeling the GP, which usually boosts the performance of a GP.

cons
I am doubtful about the correctness of eq(1). Without a treatment of stochastic variational inference, the marginal likelihood of a GP cannot be factorized into a product over data points. This means the batched update of the GP in eq(1) will not produce a correct GP model, if I understand eq(1) correctly. Can the authors explain this batched update?
I think the experimental results should be extended to include a comparison with SMAC and TPE, which are two strong baselines. Although this work focuses on GP-based BO, empirical results of SMAC and TPE without considering collisions would make this work more convincing.
The experimental settings used in this work are not detailed, e.g., how many units there are in each layer of the neural network, etc.
Empirical results are also not sufficient. In Figure 3, the line is the mean curve instead of the median of at least 10 experiments. However, without a statistical test, it is hard to tell whether the proposed approach is better than other competing methods.

questions
It is not clear to me why the retrain interval T̃ is set to be 100 for 3c, 3a and 3d in Figure 3. In Algorithm 1, the latent model is updated every T̃ iterations. In Figure 3, the total iteration numbers for 3c, 3a and 3d are 100, 200 and 100 respectively. This means that for 3c and 3d, the latent model is updated only once and this update happens at the end of BO. After the update, the model will never be used. Can the authors comment on this?

Overall speaking, I am afraid this paper doesn't contain the necessary details and the theoretical results are not strong enough.
ICLR
Title
Learning Collision-free Latent Space for Bayesian Optimization

Abstract
Learning and optimizing a blackbox function is a common task in Bayesian optimization and experimental design. In real-world scenarios (e.g., tuning hyper-parameters for deep learning models, synthesizing a protein sequence, etc.), these functions tend to be expensive to evaluate and often rely on high-dimensional inputs. While classical Bayesian optimization algorithms struggle to handle the scale and complexity of modern experimental design tasks, recent works attempt to get around this issue by applying neural networks ahead of the Gaussian process to learn a (low-dimensional) latent representation. We show that such a learned representation often leads to collisions in the latent space: two points with significantly different observations collide in the learned latent space. Collisions can be regarded as additional noise introduced by the neural network, leading to degraded optimization performance. To address this issue, we propose Collision-Free Latent Space Optimization (CoFLO), which employs a novel regularizer to reduce collisions in the learned latent space and to encourage the mapping from the latent space to the objective value to be Lipschitz continuous. CoFLO takes in pairs of data points and penalizes those that are too close in the latent space compared to their distance in the target space. We provide a rigorous theoretical justification for the regularizer by inspecting the regret of the proposed algorithm. Our empirical results further demonstrate the effectiveness of CoFLO on several synthetic and real-world Bayesian optimization tasks, including a case study in computational cosmic experimental design.

1 INTRODUCTION
Bayesian optimization is a classical sequential optimization method and is widely used in various fields, including recommender systems, scientific experimental design, and hyper-parameter optimization. Many of these applications involve evaluating an expensive blackbox function; therefore, the number of queries should be minimized. A common way to model the unknown function is via Gaussian processes (GPs) (Rasmussen and Williams, 2006). GPs have been extensively studied under the bandit setting and have proven to be an effective approach for addressing a broad class of blackbox function optimization problems.
One of the key computational challenges in learning with GPs concerns optimizing the kernels used to model the covariance structure of the GP. As this optimization task depends on the dimension of the feature space, training a Gaussian process model is often prohibitively expensive for high-dimensional inputs. Meanwhile, Gaussian processes are not intrinsically designed to deal with structured inputs that have strong correlations among dimensions, e.g., graphs and time sequences. Therefore, dimensionality reduction algorithms are needed to speed up the learning process.
Recently, it has become popular to investigate GPs in the context of latent space models. As an example, deep kernel learning (Wilson et al., 2016) simultaneously learns a (low-dimensional) data representation and a scalable kernel via an end-to-end trainable deep neural network. In general, the neural network is trained to learn a simpler latent representation that has reduced dimension and already embeds the structural information for the Gaussian process.
Such a combination of neural network and Gaussian process can improve the scalability and extensibility of classical Bayesian optimization, but it also poses new challenges for the optimization task (Tripp et al., 2020). As we later demonstrate, one critical challenge brought by introducing the neural network is that the latent representation is prone to collisions: two points with significantly different observations can get too close in the latent space. The collision effect is especially evident when information is lost through dimension reduction, and/or when the training data is limited in size, as in Bayesian optimization. As illustrated in Figure 1, when passed through the neural network, data points with drastically different observations are mapped to close positions in the latent space. Such collisions can be regarded as additional noise introduced by the neural network. Although Bayesian optimization is known to be robust to mildly noisy observations, collisions in the latent space can be harmful to the optimization performance, as it is non-trivial to explicitly model the collision in the acquisition function. In addition, the extra noise induced by the collision effect further loosens the regret bound of classical Bayesian optimization algorithms (Srinivas et al., 2010).

Overview of main results. To mitigate the collision effect, we propose a novel regularization scheme which can be applied as a simple plugin amendment to latent space-based Bayesian optimization models. The proposed algorithm, namely Collision-Free Latent Space Optimization (CoFLO), leverages a regularized regression loss function to periodically optimize the latent space for Bayesian optimization. Concretely, our regularizer is encoded by a novel pairwise collision penalty function defined jointly on the latent space and the output domain. To mitigate the risk of collision in the latent space (and consequently boost the optimization performance), one could apply the regularizer uniformly over the latent space. However, in Bayesian global optimization tasks we seek to prioritize the regions close to the possible optimum, as collisions in these regions are more likely to mislead the optimization algorithm. Based on this insight, we propose an optimization-aware regularization scheme, in which we assign a higher weight to the collision penalty on pairs of points closer to the optimal region in the latent space. This algorithm, which we refer to as dynamically-weighted CoFLO, is designed to dynamically assess the importance of a collision during optimization. Compared to a uniform collision penalty over the latent space, the dynamic weighting mechanism demonstrates drastic improvement over state-of-the-art latent space-based Bayesian optimization models. We summarize our key contributions below:
• We propose a novel regularization scheme, as a simple plugin amendment for latent space-based Bayesian optimization models. Our regularizer penalizes collisions in the latent space and effectively reduces the collision effect.
• We propose an optimization-aware dynamic weighting mechanism for adjusting the collision penalty to further improve the effectiveness of regularization for Bayesian optimization.
• We provide a theoretical analysis of the performance of Bayesian optimization on the regularized latent space.
• We conduct an extensive empirical study on four synthetic and real-world datasets, including a real-world case study in cosmic experimental design, and demonstrate strong empirical performance for our algorithm.

2 RELATED WORK
Bayesian optimization has demonstrated promising performance in various cost-sensitive global optimization tasks (Shahriari et al., 2016). However, due to its intrinsic computational limitations in the high-dimensional regime, its applicability has been restricted to relatively simple tasks. In this section, we provide a short survey of recent work in Bayesian learning designed to overcome the high-dimensionality challenge in both Bayesian optimization and regression tasks.

Deep kernel learning. Deep kernel learning (DKL) (Wilson et al., 2016) combines the power of the Gaussian process with that of the neural network by introducing a deep neural network g to learn a mapping g : X → Z from the input domain X to a latent space Z, and uses the latent representation z ∈ Z as the input of the Gaussian process. The neural network g and a spectral mixture base kernel k form a scalable, expressive, closed-form covariance kernel, denoted by k_DK(x_i, x_j) = k(g(x_i), g(x_j)), for Gaussian processes. Despite encouraging results in numerous regression tasks, it remains unclear whether DKL is readily applicable to Bayesian optimization. One key difference between Bayesian regression and optimization tasks is the assumption on the accessibility of training data: Bayesian optimization often assumes limited access to labeled data, while DKL for regression relies on abundant access to data in order to train a deep kernel function. Another problem lies in the difference between the objective functions: while DKL focuses on improving the general regression performance, it does not specifically address the problem caused by collisions, which, as we later demonstrate in Section 3.3, can be harmful for sequential decision-making tasks.

Representation learning and latent space optimization. Aiming at improving the scalability and extensibility of the Gaussian process, various methods have been proposed to reduce the dimensionality of the original input. Djolonga et al. (2013) assume that only a subset of input dimensions varies and that the kernel is smooth (i.e., with bounded RKHS norm); under these assumptions, they recover the underlying subspace via low-rank matrix completion. Huang et al. (2015) use an autoencoder to learn a low-dimensional representation of the inputs to increase the GP's scalability in regression tasks. Snoek et al. (2015) further propose to learn a pre-trained encoder neural network before BO. Lu et al. (2018) learn a variational autoencoder iteratively during sequential optimization to embed the structure of the input. The challenge in combining latent space learning with Bayesian optimization is that a pre-trained neural network may not extract adequate information around the more promising regions of the input space; furthermore, the latent space can become outdated without continuous updates using the latest acquired observations. Tripp et al. (2020) propose to periodically retrain the neural network to learn a better latent space, in order to minimize the number of iterations needed for LSO.
They claim that by prioritizing the loss of more promising data points in the original input space (i.e., by assigning a higher weight to these data points), the model can focus more on learning high-value regions and allow substantial extrapolation in the latent space to accelerate the optimization. However, such a framework does not explicitly deal with collisions in the latent space, which we found to be a key factor in the poor performance of modern latent space optimization algorithms.

3 PROBLEM STATEMENT
In this section, we introduce the necessary notation and formally state the problem. We focus on sequentially optimizing a function f : X → R, where X ⊆ R^d is the input domain. In each round t, we pick a point x_t ∈ X and observe the function value perturbed by additive noise: y_t = f(x_t) + ε_t, with ε_t ∼ N(0, σ²) being i.i.d. Gaussian noise. Our goal is to maximize the sum of rewards Σ_{t=1}^T f(x_t) over T iterations, or equivalently, to minimize the cumulative regret R_T := Σ_{t=1}^T r_t, where r_t := max_{x∈X} f(x) − f(x_t) denotes the instantaneous regret of x_t.

3.1 BAYESIAN OPTIMIZATION
Bayesian optimization typically employs Gaussian processes as the statistical tool for modeling the unknown objective function. The major advantage of using a GP is that it presents a computationally tractable way to maintain a sophisticated and consistent view across the space of all possible functions (Rasmussen and Williams, 2005), which allows closed-form posterior estimation in the function space. BO methods start with a prior on the blackbox function. Upon observing new labels, BO iteratively updates the posterior distribution over the function space and maximizes an acquisition function measuring each point's contribution to finding the optimum, in order to select the next point for evaluation.
Formally, in Bayesian optimization we assume that f follows a GP(m(x), k(x, x')), where m(x) is the mean function and k(x, x') is the kernel or covariance function. Throughout this paper, we use the squared exponential kernel

k_SE(x, x') = σ²_SE exp(−(x − x')² / (2l²)),

where the length scale l determines the length of the "wiggles" and the output variance σ²_SE determines the average distance of the function from its mean.
At iteration T, given the historically selected points A_T = {x_1, ..., x_T} and the corresponding noisy evaluations y_T = [y_1, ..., y_T], the posterior over f also takes the form of a GP, with mean µ_T(x), covariance k_T(x, x'), and variance σ²_T(x):

µ_T(x) = k_T(x)^T (K_T + σ²I)^{−1} y_T,
k_T(x, x') = k(x, x') − k_T(x)^T (K_T + σ²I)^{−1} k_T(x'),
σ²_T(x) = k_T(x, x),

where k_T(x) = [k(x_1, x), ..., k(x_T, x)]^T and K_T is the positive definite kernel matrix [k(x, x')]_{x,x'∈A_T}.
After obtaining the posterior, one can compute an acquisition function α : X → R, which is used to select the next point to be evaluated. Various acquisition functions have been proposed in the literature, including popular choices such as the Upper Confidence Bound (UCB) (Srinivas et al., 2010) and Thompson sampling (TS) (Thompson, 1933). UCB uses the upper confidence bound α_UCB(x) = µ_t(x) + β_t^{1/2} σ_t(x), with β_t being the confidence coefficient, and enjoys a rigorous sublinear regret bound. TS usually outperforms UCB in practice and has been shown to enjoy a similar regret bound (Agrawal and Goyal, 2012). It samples a function f̃_t from the GP posterior, f̃_t ∼ GP(µ_t(x), k_t(x, x')), and then uses the sample as an acquisition function: α_TS(x) = f̃_t(x).
Remark. Regret is commonly used as the performance metric for BO methods. In this work we focus on the simple regret r*_T = max_{x∈X} f(x) − max_{t<T} f(x_t) and the cumulative regret R_T = Σ_{t=1}^T r_t.
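As a concrete reference for the posterior formulas above, here is a minimal NumPy sketch of the GP posterior update and the UCB acquisition score on toy data of our own (a plain matrix inverse is used for brevity; a Cholesky solve would be preferred in practice).

```python
import numpy as np

def k_se(a, b, sigma_se=1.0, ell=1.0):
    """Squared exponential kernel between column vectors a (n, 1), b (m, 1)."""
    return sigma_se**2 * np.exp(-((a - b.T) ** 2) / (2 * ell**2))

def gp_posterior(X, y, Xs, noise=0.1):
    """Posterior mean and variance at test points Xs given data (X, y)."""
    K = k_se(X, X) + noise**2 * np.eye(len(X))   # K_T + sigma^2 I
    Ks = k_se(X, Xs)                             # k_T(x) for each test point
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y                                   # mu_T(x)
    var = np.diag(k_se(Xs, Xs) - Ks.T @ Kinv @ Ks)         # sigma_T^2(x)
    return mu, var

X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.1, 0.9, 0.2])
Xs = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mu, var = gp_posterior(X, y, Xs)
beta = 2.0
print(mu + np.sqrt(beta * var))   # alpha_UCB = mu_t + beta^{1/2} sigma_t
```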
1. What is the main contribution of the paper regarding regularization techniques in latent variable models? 2. What are the strengths and weaknesses of the proposed method, particularly in its theoretical result and empirical performance? 3. Do you have any concerns or questions about the presentation and content of the paper, such as the choice of parameters, experimental design, and novelty of the work? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? 5. Are there any minor issues or typos in the paper that should be addressed?
Review
Review This paper proposes a regularization technique in training a latent variable model so that points with different functions are pushed apart. It’s demonstrated that the proposed technique can boost regret bound and empirical performance. Overall, I think it’s a nice paper, but I don’t think the current presentation is good enough for publication at ICLR. comments & questions: It’s a natural idea to add a Lipchitz-like regularization loss to mitigate “collision”. The theoretical result seems a straightforward derivative of the Srinivas et al. (2010), but I don’t really see the novelty of the theoretical result, since Lipchitz continuity is implicitly determined by the kernel function? the proposed method needs to pretrain the neural network with 100 or 200 points. It’s not clear to me what it means by “pre-train”. Is it supervised or unsupervised? Which 100 points are chosen for pretraining? if it’s supervised, did you count them in the optimization budget? that means if you pre-train on 100 labeled points, then perform 100 BO iterations, a fair comparison to standard BO would grant it a budget of 200 function evaluations. there are several parameters, such as λ , γ , how are they chosen exactly? how sensitive are these parameters? What exactly is the “standard BO” algorithm from Nogueira (2014)? Is it UCB? EI? Seems all the benchmark functions have continuous domain with already low dimensions, e.g., the Rastrigin 2D only has 2 dimensions. Do you further reduce the dimension to one with the neural network? It would be great if you could plot the function on latent space. Same for other benchmarks, since they are not very high dimensional. From the experiments, I don’t really see if it’s true that the baselines lose because they have collision problems. Is it possible to design some experiment to demonstrate that? To me seems the work could be more motivated by input domains such as graphs or other discrete structures, at least for the benchmarks in the experiments I don’t see why they need this method despite the claimed superior performance. For your reference, some notable work on Bayesian optimization in latent space for discrete objects: Kusner et al. (ICML 2017), grammar VAE Jin et al. (ICML 2018), JT-VAE Zhang et al. (NeurIPS 2019), D-VAE ... Minor: the formula for posterior GP mean and covariance assumes zero prior mean, which was not explicitly pointed out. in 3.1, most popular acquisition should definitely include expected improvement there are many typos: in Abstract: significant different -> significantly different in Related Work: taskss -> tasks in Related Work third paragraph: smooth(of… -> add space (and many other places) in 3.1: ”wiggles” first quote wrong direction In 3.1: “the acquisition function α … use it -> an acquisition function … uses it in 3.1: “then use the sample as the acquisition function …, need to add period in 4.1: base on -> based on in 5.1: “promotse” -> promotes? ...
ICLR
Title Learning Collision-free Latent Space for Bayesian Optimization Abstract Learning and optimizing a blackbox function is a common task in Bayesian optimization and experimental design. In real-world scenarios (e.g., tuning hyperparameters for deep learning models, synthesizing a protein sequence, etc.), these functions tend to be expensive to evaluate and often rely on high-dimensional inputs. While classical Bayesian optimization algorithms struggle in handling the scale and complexity of modern experimental design tasks, recent works attempt to get around this issue by applying neural networks ahead of the Gaussian process to learn a (low-dimensional) latent representation. We show that such learned representations often lead to collisions in the latent space: two points with significantly different observations collide in the learned latent space. Collisions could be regarded as additional noise introduced by the neural network, leading to degraded optimization performance. To address this issue, we propose Collision-Free Latent Space Optimization (CoFLO), which employs a novel regularizer to reduce collisions in the learned latent space and encourage the mapping from the latent space to the objective value to be Lipschitz continuous. CoFLO takes in pairs of data points and penalizes those too close in the latent space compared to their target space distance. We provide a rigorous theoretical justification for the regularizer by inspecting the regret of the proposed algorithm. Our empirical results further demonstrate the effectiveness of CoFLO on several synthetic and real-world Bayesian optimization tasks, including a case study for computational cosmic experimental design. 1 INTRODUCTION Bayesian optimization is a classical sequential optimization method and is widely used in various fields, including recommender systems, scientific experimental design, hyper-parameter optimization, etc. Many of these applications involve evaluating an expensive blackbox function; therefore the number of queries should be minimized. A common way to model the unknown function is via Gaussian processes (GPs) (Rasmussen and Williams, 2006). GPs have been extensively studied under the bandit setting, and have proven to be an effective approach for addressing a broad class of black-box function optimization problems. One of the key computational challenges for learning with GPs concerns optimizing the kernels used to model the covariance structures of GPs. As this optimization task depends on the dimension of the feature space, it is often prohibitively expensive to train a Gaussian process model on high-dimensional input. Meanwhile, Gaussian processes are not intrinsically designed to deal with structured input that has strong correlations among different dimensions, e.g., graphs and time sequences. Therefore, dimensionality reduction algorithms are needed to speed up the learning process. Recently, it has become popular to investigate GPs in the context of latent space models. As an example, deep kernel learning (Wilson et al., 2016) simultaneously learns a (low-dimensional) data representation and a scalable kernel via an end-to-end trainable deep neural network. In general, the neural network is trained to learn a simpler latent representation with reduced dimension that already embeds the structural information for the Gaussian process.
Such a combination of neural network and Gaussian process could improve the scalability and extensibility of classical Bayesian optimization, but it also poses new challenges for the optimization task (Tripp et al., 2020). As we later demonstrate, one critical challenge brought by introducing the neural network is that the latent representation is prone to collisions: two points with significantly different observations can get too close in the latent space. The collision effect is especially evident when information is lost by dimension reduction, and/or when the training data is limited in size, as in Bayesian optimization. As illustrated in Figure 1, when passed through the neural network, data points with drastically different observations are mapped to close positions in the latent space. Such collisions could be regarded as additional noise introduced by the neural network. Although Bayesian optimization is known to be robust to mildly noisy observations, collisions in the latent space can be harmful to optimization performance, as it is non-trivial to explicitly model the collision in the acquisition function. In addition, the additional noise induced by the collision effect will further loosen the regret bound for classical Bayesian optimization algorithms (Srinivas et al., 2010). Overview of main results To mitigate the collision effect, we propose a novel regularization scheme which can be applied as a simple plugin amendment for latent space-based Bayesian optimization models. The proposed algorithm, namely Collision-Free Latent Space Optimization (CoFLO), leverages a regularized regression loss function to periodically optimize the latent space for Bayesian optimization. Concretely, our regularizer is encoded by a novel pairwise collision penalty function defined jointly on the latent space and the output domain. To mitigate the risk of collisions in the latent space (and consequently boost optimization performance), one can apply the regularizer uniformly over the latent space. However, in Bayesian global optimization tasks, we seek to prioritize the regions close to the possible optimum, as collisions in these regions are more likely to mislead the optimization algorithm. Based on this insight, we propose an optimization-aware regularization scheme, where we assign a higher weight to the collision penalty on those pairs of points closer to the optimum region in the latent space. This algorithm, which we refer to as dynamically-weighted CoFLO, is designed to dynamically assess the importance of a collision during optimization. Compared to the uniform collision penalty over the latent space, the dynamic weighting mechanism demonstrates drastic improvement over state-of-the-art latent space-based Bayesian optimization models. We summarize the key contributions below: • We propose a novel regularization scheme, as a simple plugin amendment for latent space-based Bayesian optimization models. Our regularizer penalizes collisions in the latent space and effectively reduces the collision effect. • We propose an optimization-aware dynamic weighting mechanism for adjusting the collision penalty to further improve the effectiveness of regularization for Bayesian optimization. • We provide theoretical analysis for the performance of Bayesian optimization on regularized latent space.
• We conducted an extensive empirical study on four synthetic and real-world datasets, including a real-world case study for cosmic experimental design, and demonstrate strong empirical performance for our algorithm. 2 RELATED WORK Bayesian optimization has demonstrated promising performance in various cost-sensitive global optimization tasks (Shahriari et al., 2016). However, due to its intrinsic computational limitations in the high-dimensional regime, its applicability has been restricted to relatively simple tasks. In this section, we provide a short survey of recent work in Bayesian learning designed to overcome the high-dimensionality challenge for both Bayesian optimization and regression tasks. Deep kernel learning Deep kernel learning (DKL) (Wilson et al., 2016) combines the power of the Gaussian process and that of the neural network by introducing a deep neural network g to learn a mapping g : X → Z from the input domain X to a latent space Z, and using the latent representation z ∈ Z as the input of the Gaussian process. The neural network g and a spectral mixture base kernel k form a scalable, expressive, closed-form covariance kernel for Gaussian processes, defined as $k_{DK}(x_i, x_j) = k(g(x_i), g(x_j))$. Despite encouraging results in numerous regression tasks, it remains unclear whether DKL is readily applicable to Bayesian optimization. One key difference between Bayesian regression and optimization tasks is the assumption on the accessibility of training data: Bayesian optimization often assumes limited access to labeled data, while DKL for regression relies on abundant access to data in order to train a deep kernel function. Another problem lies in the difference between the objective functions. While DKL focuses on improving general regression performance, it does not specifically address the problem caused by collisions, which, as we later demonstrate in Section 3.3, could be harmful for sequential decision making tasks. Representation learning and latent space optimization Aiming at improving the scalability and extensibility of the Gaussian process, various methods have been proposed to reduce the dimensionality of the original input. Djolonga et al. (2013) assume that only a subset of input dimensions varies and that the kernel is smooth (i.e., with bounded RKHS norm); under these assumptions, they recover the underlying subspace via low-rank matrix completion. Huang et al. (2015) use an autoencoder to learn a low-dimensional representation of the inputs to increase the GP's scalability in regression tasks. Snoek et al. (2015) further propose to learn a pre-trained encoder neural network before BO. Lu et al. (2018) learn a variational auto-encoder iteratively during sequential optimization to embed the structure of the input. The challenge in combining latent space learning with Bayesian optimization lies in that a pre-trained neural network may not extract adequate information around the more promising regions of the input space. Furthermore, the latent space could become outdated without continuous updates with the latest acquired observations. Tripp et al. (2020) propose to periodically retrain the neural network to learn a better latent space, in order to minimize the number of iterations needed for LSO. They claim that by prioritizing the loss of more promising data points in the original input space (i.e.
by assigning a higher weight to these data points), the model could focus more on learning high-value regions and allow substantial extrapolation in the latent space to accelerate the optimization. However, such a framework does not explicitly deal with collisions in the latent space, which we found to be a key factor in the poor performance of modern latent space optimization algorithms. 3 PROBLEM STATEMENT In this section, we introduce the necessary notation and formally state the problem. We focus on the problem of sequentially optimizing the function $f : \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^d$ is the input domain. In each round $t$, we pick a point $x_t \in \mathcal{X}$, and observe the function value perturbed by additive noise: $y_t = f(x_t) + \epsilon_t$, with $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ being i.i.d. Gaussian noise. Our goal is to maximize the sum of rewards $\sum_{t=1}^T f(x_t)$ over $T$ iterations, or equivalently, to minimize the cumulative regret $R_T := \sum_{t=1}^T r_t$, where $r_t := \max_{x \in \mathcal{X}} f(x) - f(x_t)$ denotes the instantaneous regret of $x_t$. 3.1 BAYESIAN OPTIMIZATION Bayesian optimization typically employs Gaussian processes as the statistical tool for modeling the unknown objective function. The major advantage of using a GP is that it presents a computationally tractable way to depict a sophisticated and consistent view across the space of all possible functions (Rasmussen and Williams, 2005), which allows closed-form posterior estimation in the function space. BO methods start with a prior on the black-box function. Upon observing new labels, BO iteratively updates the posterior distribution in the function space, and maximizes an acquisition function measuring each point's contribution to finding the optimum, in order to select the next point for evaluation. Formally, in Bayesian optimization we assume that $f$ follows a $GP(m(x), k(x, x'))$, where $m(x)$ is the mean function and $k(x, x')$ is the kernel or covariance function. Throughout this paper, we use the squared exponential kernel, $k_{SE}(x, x') = \sigma_{SE}^2 \exp\left(-\frac{(x - x')^2}{2l^2}\right)$, where the length scale $l$ determines the length of the "wiggles" and the output variance $\sigma_{SE}^2$ determines the average distance of the function away from its mean. At iteration $T$, given the historically selected points $A_T = \{x_1, \dots, x_T\}$ and the corresponding noisy evaluations $y_T = [y_1, \dots, y_T]$, the posterior over $f$ also takes the form of a GP, with mean $\mu_T(x)$, covariance $k_T(x, x')$, and variance $\sigma_T^2(x)$:
$$\mu_T(x) = k_T(x)^T (K_T + \sigma^2 I)^{-1} y_T,$$
$$k_T(x, x') = k(x, x') - k_T(x)^T (K_T + \sigma^2 I)^{-1} k_T(x'),$$
$$\sigma_T^2(x) = k_T(x, x),$$
where $k_T(x) = [k(x_1, x), \dots, k(x_T, x)]^T$ and $K_T$ is the positive definite kernel matrix $[k(x, x')]_{x, x' \in A_T}$. Note that these posterior formulas assume a zero prior mean. After obtaining the posterior, one can compute the acquisition function $\alpha : \mathcal{X} \to \mathbb{R}$, which is used to select the next point to be evaluated. Various acquisition functions have been proposed in the literature, including popular choices such as the Upper Confidence Bound (UCB) (Srinivas et al., 2010) and Thompson sampling (TS) (Thompson, 1933). UCB uses the upper confidence bound $\alpha_{UCB}(x) = \mu_t(x) + \beta^{1/2} \sigma_t(x)$, with $\beta$ the confidence coefficient, and enjoys a rigorous sublinear regret bound. TS usually outperforms UCB in practice and has been shown to enjoy a similar regret bound (Agrawal and Goyal, 2012). It samples a function $\tilde{f}_t$ from the GP posterior, $\tilde{f}_t \sim GP(\mu_t(x), k_t(x, x'))$, and then uses the sample as an acquisition function: $\alpha_{TS}(x) = \tilde{f}_t(x)$. Remark. Regret is commonly used as the performance metric for BO methods. In this work we focus on the simple regret $r_T^* = \max_{x \in \mathcal{X}} f(x) - \max_{t < T} f(x_t)$ and the cumulative regret $R_T = \sum_{t=1}^T r_t$.
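To make the posterior update and the acquisition step concrete, the following is a minimal NumPy sketch of the closed-form posterior above (zero prior mean, squared exponential kernel) together with the UCB rule; the function names and default values are illustrative assumptions of this sketch, not code from the paper.

```python
import numpy as np

def k_se(X1, X2, sigma_se=1.0, length=1.0):
    """Squared exponential kernel k(x, x') = sigma_SE^2 exp(-||x - x'||^2 / (2 l^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_se ** 2 * np.exp(-d2 / (2.0 * length ** 2))

def gp_posterior(X_train, y_train, X_query, noise_var=1e-2):
    """Closed-form GP posterior mean and variance, assuming a zero prior mean."""
    K = k_se(X_train, X_train) + noise_var * np.eye(len(X_train))  # K_T + sigma^2 I
    K_inv = np.linalg.inv(K)
    k_star = k_se(X_train, X_query)                                # k_T(x) per query point
    mu = k_star.T @ K_inv @ y_train                                # posterior mean mu_T(x)
    var = np.diag(k_se(X_query, X_query)) - np.einsum(
        "ij,ik,kj->j", k_star, K_inv, k_star)                      # posterior variance sigma_T^2(x)
    return mu, var

def ucb(mu, var, beta=4.0):
    """GP-UCB acquisition: mu_t(x) + beta^{1/2} sigma_t(x)."""
    return mu + np.sqrt(beta) * np.sqrt(np.maximum(var, 0.0))
```

A full BO loop would alternate `gp_posterior` over a candidate set with an argmax of `ucb`, then append the new evaluation to the training set.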
3.2 LATENT SPACE OPTIMIZATION Recently, Latent Space Optimization (LSO) has been proposed to solve Bayesian optimization problems on complex input domains (Tripp et al., 2020). LSO first learns a latent space mapping $g : \mathcal{X} \to \mathcal{Z}$ to convert the input space $\mathcal{X}$ to the latent space $\mathcal{Z}$. Then, it constructs an objective mapping $h : \mathcal{Z} \to \mathbb{R}$ such that $f(x) \approx h(g(x))$ for all $x \in \mathcal{X}$. The latent space mapping $g$ and base kernel $k$ can be regarded as a deep kernel, denoted by $k_{nn}(x, x') = k(g(x), g(x'))$. Thus, the actual input space for BO is the latent space $\mathcal{Z}$ and the objective function is $h$. With the acquisition function $\alpha_{nn}(x) := \alpha(g(x))$, it is unnecessary to compute an inverse mapping $g^{-1}$ as discussed in Tripp et al. (2020), as BO can directly select $x_t = \arg\max_{x \in \mathcal{X}} \alpha_{nn}(x)$ for all $t \leq T$ and evaluate $f$. In the meantime, BO can leverage the latent space mapping $g$, usually represented by a neural network, to effectively learn and optimize the target function $h$ on a lower-dimensional input space. 3.3 THE COLLISION EFFECT OF LSO When the mapping $g : \mathcal{X} \to \mathcal{Z}$ is represented by a neural network, it may cause undesirable collisions between different input points in the latent space $\mathcal{Z}$. Under the noise-free setting, we say there exists a collision in $\mathcal{Z}$ if there exist $x_i, x_j \in \mathcal{X}$ such that $g(x_i) = g(x_j)$ while $|f(x_i) - f(x_j)| > 0$. Such collisions could be regarded as additional (unknown) noise on the observations introduced by the neural network $g$. For a noisy observation $y = f(x) + \epsilon$, we define a collision as follows: for $\rho > 0$, there exist $x_i, x_j \in D$ with $|g(x_i) - g(x_j)| < \rho |y_i - y_j|$. When the distance between a pair of points in the latent space is too small compared to their difference in the output space, the differing output values of the collided points can be interpreted as the effect of additional observation noise. In general, collisions can degrade the performance of LSO. Since the collision effect is a priori unknown, it is often challenging to deal with collisions in LSO, even if we regard them as additional observation noise and increase the (default) noise variance in the Gaussian process. Thus, it is necessary to mitigate the collision effect by directly restraining it in the representation learning phase. 4 COLLISION-FREE LATENT SPACE OPTIMIZATION In this section, we introduce Collision-Free Latent Space Optimization (CoFLO), an algorithmic framework designed to mitigate the collision effect. 4.1 OVERVIEW OF THE COFLO ALGORITHM The major challenge in restraining collisions in the latent space is that, unlike in the traditional regression problem, we cannot quantify them from a single point's observation. We can, however, quantify collisions by grouping pairs of data points and inspecting their corresponding observations. We define the collision penalty based on pairs of inputs, and further introduce a pair loss function to characterize the collision effect. Based on this pair loss, we propose a novel regularized latent space optimization algorithm [1], as summarized in Algorithm 1. The proposed algorithm takes pairwise inputs, concurrently feeds them into the same network, and then calculates the pair loss function. We illustrate this process in Figure 2. Given a set of labeled data points, we can train the neural network to create an initial latent space representation [2], similar to DKL (Wilson et al., 2016). Once provided with the initial representation, we can then refine the latent space by running CoFLO and periodically updating the latent space (i.e.
updating the latent representation after collecting a batch of data points) to mitigate the collision effect as we collect more labels.

Algorithm 1 Collision-Regularized Latent Space Optimization (CoFLO)
1: Input: regularization weight ρ (cf. Equation 3), penalty parameter λ (cf. Equation 1), retrain interval T̃, importance weight parameter γ (cf. Equation 2), neural network M0, base kernel K0, prior mean µ0, total time steps T
2: for t = 1 to T do
3:   $x_t \leftarrow \arg\max_{x \in D} \alpha(M_t(x))$ ▷ maximize acquisition function
4:   $y_t \leftarrow$ evaluation on $x_t$ ▷ update observation
5:   if $t \equiv 0 \pmod{\tilde{T}}$ then
6:     $M_{t+1}, K_{t+1} \leftarrow$ retrain $M_t$ and $K_t$ with the pair loss function $L_{\rho,\lambda,\gamma}(M_t, K_t, D_t)$ as defined in Equation 3 ▷ periodic retrain
7:   end if
8: end for
9: Output: $\max_t y_t$

4.2 COLLISION PENALTY In this subsection, we quantify the collision effect based on the definition proposed in Section 3.3. As illustrated in Figure 2, we feed pairs of data points into the neural network and obtain their latent space representations. Apart from maximizing the GP's likelihood, we concurrently calculate the amount of collision for each pair, and penalize it only if the value is positive. For $x_i, x_j \in \mathcal{X}$, let $y_i = f(x_i) + \epsilon_i$ and $y_j = f(x_j) + \epsilon_j$ be the corresponding observations, and $z_i = g(x_i)$, $z_j = g(x_j)$ the corresponding latent space representations.

[1] Note that we have introduced several hyper-parameters in the algorithm design; we defer our discussion of the choice of these parameters to Section 5.
[2] To obtain an initial latent space representation, the labels do not have to be exact and could be collected from a related task of cheaper cost.

We define the collision penalty as $$p_{ij} = \max(\lambda |y_i - y_j| - |z_i - z_j|, 0), \quad (1)$$ where λ is a penalty parameter that controls the smoothness of the target function $h : \mathcal{Z} \to \mathbb{R}$. 4.3 DYNAMIC WEIGHT Note that it is challenging to universally eliminate the collision effect by minimizing the collision penalty and the GP's regression loss; this is particularly true with a limited amount of training data. Fortunately, in optimization tasks it is often unnecessary to learn equally good representations for suboptimal regions. Therefore, we can dedicate more training resources to improving the learned latent space in the (potentially) near-optimal regions. Following this insight, we propose a weighted collision penalty function, which uses the objective values of each pair as an importance weight in each iteration. Formally, for any pair $((x_i, z_i, y_i), (x_j, z_j, y_j))$ in a batch of observation pairs $D_t = \{((x_m, z_m, y_m), (x_n, z_n, y_n))\}_{m,n}$, we define the importance-weighted penalty function as $$\tilde{p}_{ij} = p_{ij} w_{ij} \quad \text{with} \quad w_{ij} = \frac{e^{\gamma(y_i + y_j)}}{\sum_{(m,n) \in D_t} e^{\gamma(y_m + y_n)}}. \quad (2)$$ Here the importance weight parameter γ is used to control the aggressiveness of the weighting strategy. Combining the collision penalty and the regression loss of the GP, we define the pair loss function L as $$L_{\rho,\lambda,\gamma}(M_t, K_t, D_t) = \frac{1}{|D_t|^2} \sum_{i \in D_t, j \in D_t} \left[ (GP_{K_t}(M_t(x_i)) - y_i)^2 + (GP_{K_t}(M_t(x_j)) - y_j)^2 + \rho \tilde{p}_{ij} \right], \quad (3)$$ where $GP_{K_t}(M_t(x_i))$ denotes the Gaussian process's posterior mean at $x_i$ with kernel $K_t$ and neural network $M_t$ at timestep $t$, and ρ denotes the regularization weight; as we demonstrate in Section 5, in practice we often choose ρ to keep the penalty at an order close to the regression loss.
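Before turning to the analysis, here is a minimal PyTorch sketch of Equations 1-3; `gp_mean` stands in for the GP posterior mean $GP_{K_t}(M_t(x))$ evaluated on the batch, and all names are assumptions of this sketch rather than the authors' implementation. It assumes a one-dimensional latent space, matching the experiments.

```python
import torch

def collision_penalty(y, z, lam):
    """Equation 1: p_ij = max(lam * |y_i - y_j| - |z_i - z_j|, 0) over all pairs."""
    y_diff = (y[:, None] - y[None, :]).abs()   # |y_i - y_j|
    z_diff = (z[:, None] - z[None, :]).abs()   # |z_i - z_j| in the 1-d latent space
    return torch.clamp(lam * y_diff - z_diff, min=0.0)

def dynamic_weights(y, gamma):
    """Equation 2: softmax-style weights w_ij over pairs, favoring high-value pairs."""
    s = gamma * (y[:, None] + y[None, :])
    w = torch.exp(s - s.max())                 # subtract the max for numerical stability
    return w / w.sum()

def pair_loss(gp_mean, y, z, rho, lam, gamma):
    """Equation 3: pairwise GP regression loss plus the weighted collision penalty."""
    sq_err = (gp_mean - y) ** 2
    pair_err = sq_err[:, None] + sq_err[None, :]   # regression error of both pair members
    p_tilde = collision_penalty(y, z, lam) * dynamic_weights(y, gamma)
    return (pair_err + rho * p_tilde).mean()       # mean over the |D_t|^2 pairs
```

In Algorithm 1, this loss would be minimized jointly over the network $M_t$ and the kernel hyper-parameters $K_t$ at every retrain interval T̃.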
4.4 THEORETICAL ANALYSIS In this subsection, we provide a theoretical justification for the collision-free regularizer by inspecting the effect of regularization on the regret bound of CoFLO. We first connect the proposed collision penalty in Equation 1 to Lipschitz continuity, and then integrate it into the regret analysis to provide an improved regret bound. Lipschitz continuity of the target function h The collision penalty encourages Lipschitz continuity of h. Formally, the proposed regularization promotes learning a latent space where, for all $x_i, x_j \in D$ with $z_i = g(x_i), z_j = g(x_j) \in \mathcal{Z}$, $$|g(x_i) - g(x_j)| \geq \lambda |f(x_i) - f(x_j)|;$$ this is exactly the condition under which the penalty in Equation 1 vanishes on noise-free observations. The above inequality reduces to Lipschitz continuity of h. Unlike typical smoothness assumptions in GPs, a function can be non-smooth and still Lipschitz continuous. Recently, Ahmed et al. (2019) leveraged the Lipschitz continuity of the objective function to propose improved versions of common acquisition functions, and provided an improved regret bound both theoretically and empirically. In the following, we show that running GP-UCB on the collision-free latent space amounts to an improvement in terms of its regret bound: Theorem 1. Let $\mathcal{Z} \subset [0, r]^d$ be compact and convex, $d \in \mathbb{N}$, $r > 0$, $\lambda \geq 0$. Suppose that the objective function h defined on $\mathcal{Z}$ is a sample from a GP and is Lipschitz continuous with Lipschitz constant λ. Let $\delta \in (0, 1)$, and define $\beta_t = 2 \log(\pi^2 t^2 / 6\delta) + 2d \log(\lambda r d t^2)$. Running GP-UCB with $\beta_t$ for a sample h of a GP with zero mean function and covariance function $k(x, x')$, we obtain a regret bound of $O^*(\sqrt{dT\gamma_T})$ with high probability. Precisely, with $C_1 = 8 / \log(1 + \sigma^{-2})$, we have $$P\left[ R_T \leq \sqrt{C_1 T \beta_T \gamma_T} + 2 \right] \geq 1 - \delta.$$ Here $\gamma_T$ is the maximum information gain after T iterations, defined as $\gamma_T := \max_{A \subset \mathcal{Z}, |A| = T} I(y_A; h_A)$. Compared to Theorem 2 of Srinivas et al. (2010), which offers a regret bound under a sub-Gaussianity assumption on the objective function's derivative, the second part of our regret bound does not rely on δ. The coefficients are also smaller, as the deterministic bound on the derivative of f avoids a union bound. Remark. The collision penalty encourages h to be Lipschitz continuous on the latent space. Ideally, when the collision penalty term $p_{ij}(\lambda)$ converges to zero for all data points in the latent space, we can claim that h is Lipschitz continuous with Lipschitz constant at most λ. Applying Theorem 1 with $\beta_t = 2 \log(\pi^2 t^2 / 6\delta) + 2d \log(\lambda r d t^2)$, we can reduce the regret bound by choosing a smaller λ. However, in practice, since the observations can be noisy, we need to choose a λ big enough to tolerate the noise; a λ that is too small could make it difficult to learn a meaningful representation. 5 EXPERIMENTS In this section, we empirically evaluate our algorithm on two synthetic blackbox function optimization tasks and two real-world optimization problems. 5.1 EXPERIMENTAL SETUP We consider four baselines in our experiments. The rudimentary random selection algorithm (RS) indicates the task complexity. Three popular optimization algorithms, namely particle swarm optimization (PSO) (Miranda, 2018), the Tree-structured Parzen Estimator approach (TPE) (Bergstra et al., 2011), and standard Bayesian optimization (BO) (Nogueira, 2014), which uses a Gaussian process as the statistical model and the upper confidence bound (UCB) as its acquisition function, are tuned for each task. Another baseline we consider is the sample-efficient LSO (SE LSO) algorithm, implemented based on the algorithm proposed by Tripp et al. (2020). We also compare non-regularized latent space optimization (LSO), Collision-Free Latent Space Optimization (CoFLO), and the dynamically-weighted CoFLO (DW CoFLO) proposed in this paper.
The performance for each task is measured on 10,000 pre-collected data points. One crucial problem in practice is tuning the hyper-parameters. The GP hyper-parameters are tuned at each periodic retraining during the optimization process, by minimizing the loss function on a validation set. For all our tasks, we choose a simplistic neural network architecture M, due to the limited and expensive access to labeled data under the BO setting. The coefficient ρ is, in general, selected so that the collision penalty has a similar order of magnitude to the GP loss. The parameter λ should be estimated from the first several sampled data points so as to tolerate the additive noise in the evaluations. γ controls the aggressiveness of the importance weighting. While γ should not be too close to zero (which is equivalent to uniform weighting), an extremely high value could make the regularization overly biased; such a severe bias could allow a heavily collided representation in most of the latent space and degrade the effectiveness of the regularization. The choice is similar to the inverse of the temperature parameter of the softmax in deep learning (Hinton et al., 2015). Here we use the first batch of observed samples to estimate the order of magnitude of the observations and choose an appropriate γ. 5.2 DATASETS AND RESULTS We now evaluate CoFLO on two synthetic datasets and two real-world datasets. In the experiments, all input data points are mapped to a one-dimensional latent space via the neural network. We demonstrate the improvement brought by CoFLO's explicit collision mitigation in the lower-dimensional latent space in terms of average simple regret. We also include the median results and statistical tests in the appendix. 2D-Rastrigin The Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It was first proposed by Rastrigin (1974) and is used as a popular benchmark for evaluating Gaussian process regression algorithms (Cully et al., 2018). Formally, the 2D Rastrigin function is $$f(x) = 10d + \sum_{i=1}^{d} \left[ x_i^2 - 10 \cos(2\pi x_i) \right], \quad d = 2.$$ For convenience of comparison, we take −f(x) as the objective value, making the optimization a maximization task. The neural network is pretrained on 100 data points. As illustrated by Figure 8a, CoFLO and DW CoFLO quickly reach the (near-)optimal region, while the baselines generally suffer a larger simple regret even after an excessive number of iterations. Feynman III.9.52 Equation Growing datasets have motivated purely data-backed analysis in physics. The dataset of 100 equations from the Feynman Lectures on Physics for symbolic regression tasks in physics (Udrescu and Tegmark, 2020) can serve as a test set for data-backed analysis algorithms in physics. The equation III.9.52 we choose to test the optimization algorithms is $$p_\gamma = \frac{p_d E_f t}{h / 2\pi} \cdot \frac{\sin^2((\omega - \omega_0) t / 2)}{((\omega - \omega_0) t / 2)^2}.$$ The equation has 6 variables as inputs and is reported to require at least $10^3$ data points for the regression task. The neural network is randomly initialized at the beginning. As illustrated by Figure 8b, in the first 100 iterations, CoFLO and DW CoFLO behave similarly to random selection. After the first training at iteration 100, CoFLO and DW CoFLO approach the optimum at a much faster pace compared to the baselines; among them, DW CoFLO shows a faster reduction in simple regret.
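For reference, both synthetic objectives can be written down directly; in the sketch below, the packing of the six Feynman variables into arguments is an assumption of this sketch rather than the paper's exact setup.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function; the paper maximizes -rastrigin(x) with d = 2."""
    x = np.atleast_2d(x)
    d = x.shape[-1]
    return 10 * d + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def feynman_iii_9_52(p_d, E_f, t, omega, omega_0, h):
    """Feynman III.9.52 with hbar = h / (2 pi); u = 0 is handled via the sinc limit."""
    u = (omega - omega_0) * t / 2.0
    ratio = np.sinc(u / np.pi) ** 2  # numpy sinc is sin(pi x)/(pi x), so this is sin(u)^2 / u^2
    return (p_d * E_f * t) / (h / (2 * np.pi)) * ratio
```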
Supernova Our first real-world task is to perform maximum likelihood inference on 3 cosmological parameters: the Hubble constant $H_0 \in (60, 80)$, the dark matter fraction $\Omega_M \in (0, 1)$, and the dark energy fraction $\Omega_\Lambda \in (0, 1)$. The likelihood is given by the Robertson-Walker metric, which requires a one-dimensional numerical integration for each point in the dataset from Davis et al. (2007). The neural network is pretrained on one hundred data points. As illustrated by Figure 8c, the simple regret of SE LSO drops faster at the beginning, but later remains relatively stable and eventually ends at a level similar to LSO's. These results demonstrate the efficiency of SE LSO in finding sub-optimal solutions. However, without collision reduction, SE LSO cannot outperform LSO in the long run, where both reach their limitations. CoFLO and DW CoFLO, in contrast, demonstrate robustness close to the optimum, as both consistently approach the optimal value; among them, DW CoFLO slightly outperforms CoFLO. Redshift Distribution The challenges in designing and optimizing cosmological experiments grow commensurately with their scale and complexity. Careful accounting of all the requirements and features of these experiments becomes increasingly necessary to achieve the goals of a given cosmic survey. SPOKES (SPectrOscopic KEn Simulation) is an end-to-end framework that can simulate all the operations and key decisions of a cosmic survey (Nord et al., 2016). It can be used for the design, optimization, and forecasting of any cosmic experiment. For example, some cosmic survey campaigns endeavor to observe populations of galaxies that exist at a specific range of redshifts (distances) from us. In this work, we use SPOKES to generate galaxies within a specified window of distances from Earth. We then minimize the Hausdorff distance between the desired redshift distribution and the simulation of specific cosmological surveys generated by SPOKES. In our experiments, the neural network is pretrained on 200 data points. As illustrated by Figure 8d, the simple regret of SE LSO drops faster in the initial phase. However, when it gets close to the (near-)optimal region, where the simple regret is approximately 0.15, it is caught up by both CoFLO and DW CoFLO, and eventually slightly outperformed. This result indicates that the collision problem has more impact when the algorithm gets close to the optimal region. Notice that the rudimentary BO eventually outperformed the non-regularized LSO, indicating that without collision mitigation, the learned representation can worsen performance in the later stage, when the algorithm gets close to the optimum. In conclusion, collision mitigation as in CoFLO is necessary to further improve the later-stage performance of LSO, where collisions matter more in the near-optimal areas. 5.3 DISCUSSION In general, our experimental results consistently demonstrate the robustness of our methods against collisions in the learned latent space. Our method outperforms all baselines; compared to the sample-efficient LSO, the dynamically-weighted CoFLO performs better in most cases and shows a steady capability to reach the optimum by explicitly mitigating collisions in the latent space. In contrast, the sample-efficient LSO may fail due to the collision problem. 6 CONCLUSION We have proposed a novel regularization scheme for latent space-based Bayesian optimization.
Our algorithm, CoFLO, addresses the collision problem induced by dimensionality reduction and improves the performance of latent space-based optimization algorithms. The regularization is proven effective in mitigating the collision problem in the learned latent space, and can therefore boost the performance of Bayesian optimization in the latent space. We demonstrate strong empirical results for CoFLO on several synthetic and real-world datasets, and show that CoFLO is capable of dealing with high-dimensional input, which could be highly valuable for real-world experimental design tasks such as cosmological survey scheduling. A REGRET BOUND FOR A LIPSCHITZ-CONTINUOUS OBJECTIVE FUNCTION In this section, we provide the detailed proof of Theorem 1. We first modify Lemma 5.7 and Lemma 5.8 of Srinivas et al. (2010), since we assume deterministic Lipschitz continuity for h. We use the same analysis tool $Z_t$, a discretization $Z_t \subset \mathcal{Z}$ used at time t in the analysis. We choose a discretization $Z_t$ of size $(\tau_t)^d$ so that $$\forall z \in \mathcal{Z}, \quad \|z - [z]_t\|_1 \leq rd / \tau_t, \quad (4)$$ where $[z]_t$ denotes the closest point in $Z_t$ to z. Lemma 1. Pick $\delta \in (0, 1)$ and set $\beta_t = 2 \log(\pi_t / \delta) + 2d \log(L r d t^2)$, where $\sum_{t \geq 1} \pi_t^{-1} = 1$, $\pi_t > 0$. Let $\tau_t = L r d t^2$. Then $$|h(z^*) - \mu_{t-1}([z^*]_t)| \leq \beta_t^{1/2} \sigma_{t-1}([z^*]_t) + 1/t^2 \quad \forall t \geq 1$$ holds with probability $\geq 1 - \delta$. Here $z^* := g(x^*)$. Proof. Using the Lipschitz continuity and Equation 4, we have that $\forall z \in \mathcal{Z}$, $|h(z) - h([z]_t)| \leq L r d / \tau_t$. By choosing $\tau_t = L r d t^2$, we have $|Z_t| = (L r d t^2)^d$ and $\forall z \in \mathcal{Z}$, $|h(z) - h([z]_t)| \leq 1/t^2$. Then, using Lemma 5.6 of Srinivas et al. (2010), we reach the expected result. Based on Lemma 5.5 of Srinivas et al. (2010) and Lemma 1, we directly obtain the following result. Lemma 2. Pick $\delta \in (0, 1)$ and set $\beta_t = 2 \log(2\pi_t / \delta) + 2d \log(L r d t^2)$, where $\sum_{t \geq 1} \pi_t^{-1} = 1$, $\pi_t > 0$. Then with probability $\geq 1 - \delta$, for all $t \in \mathbb{N}$, the regret is bounded as follows: $$r_t \leq 2 \beta_t^{1/2} \sigma_{t-1}(z_t) + 1/t^2.$$ Proof. Using a union bound with $\delta/2$ in both Lemma 5.5 of Srinivas et al. (2010) and Lemma 1, we have that with probability $1 - \delta$: $$r_t = h(z^*) - h(z_t) \leq \beta_t^{1/2} \sigma_{t-1}(z_t) + 1/t^2 + \mu_{t-1}(z_t) - h(z_t) \leq 2 \beta_t^{1/2} \sigma_{t-1}(z_t) + 1/t^2,$$ which completes the proof. Now we are ready to use Lemma 5.4 of Srinivas et al. (2010) and Lemma 2 to complete the proof of Theorem 1. Proof. Using Lemma 5.4 of Srinivas et al. (2010), we have that with probability $\geq 1 - \delta$: $$\sum_{t=1}^{T} 4 \beta_t \sigma_{t-1}^2(x_t) \leq C_1 \beta_T \gamma_T \quad \forall T \geq 1.$$ By Cauchy-Schwarz: $$\sum_{t=1}^{T} 2 \beta_t^{1/2} \sigma_{t-1}(x_t) \leq \sqrt{C_1 T \beta_T \gamma_T} \quad \forall T \geq 1.$$ Finally, substitute $\pi_t$ with $\pi^2 t^2 / 6$ (since $\sum_{t \geq 1} 1/t^2 = \pi^2 / 6$). Theorem 1 follows. B VISUALIZATION OF THE COLLISION EFFECT IN LATENT SPACE We demonstrate the collision effect in the latent space. We trained the same neural network on the Feynman dataset with 101 data points, which shows the latent space after two retrains with the retrain interval set to 50 data points. The regularized version employed DW CoFLO, with regularization parameter ρ = 1e5, penalty parameter λ = 1e-2, retrain interval T̃, weighting parameter γ = 1e-2, and the squared exponential kernel as the base kernel. The non-regularized version employed LSO. C SUPPLEMENTAL MATERIALS ON ALGORITHMIC DETAILS C.1 ALGORITHMIC DETAILS ON NEURAL NETWORK ARCHITECTURE As the main goal of our paper is to showcase the performance of a novel collision-free regularizer, we picked basic multi-layer dense neural networks as our architectures. For SPOKES, we used a 5-layer dense neural network.
Its hidden layers consist of 16 neurons with Leaky ReLU nonlinearities, 8 neurons with Sigmoid nonlinearities, 4 neurons with Sigmoid nonlinearities, and 2 neurons with Sigmoid nonlinearities, respectively. Each hidden layer also applies a 0.2 dropout rate. The output layer applies a Leaky ReLU nonlinearity. For SuperNova, Feynman, and Rastrigin 2D, we used a 4-layer dense neural network. Its hidden layers consist of 8 neurons with Sigmoid nonlinearities, 4 neurons with Leaky ReLU nonlinearities, and 2 neurons with Leaky ReLU nonlinearities, respectively. Each hidden layer applies a 0.2 dropout rate. The output layer also applies a Leaky ReLU nonlinearity. The neural networks are trained using Adam with a learning rate of 1e-2. C.2 PARAMETER CHOICES We further investigate the robustness of the parameter choices for both the regularization parameter ρ and the penalty parameter λ on the SPOKES dataset. We show the results in the figures below. D ADDITIONAL EXPERIMENTAL RESULTS We add both the detailed median curves and the p-values of the Welch's t-tests for the experiments discussed in Section 5. D.1 MEDIAN CURVES The median curves demonstrate trends similar to the mean curves. In all four experiments, DW CoFLO consistently demonstrates superior performance over the baselines. (Figure: median simple regret curves for (a) Rastrigin 2D, (b) Feynman III.9.52, (c) Supernova, (d) SPOKES.) D.2 P-VALUES The table shows the p-values of the Welch's t-tests of the experiments. It demonstrates the significance of the improvement brought by DW CoFLO over the baselines.

Data          BO        RS        TPE       LSO       SE-LSO    CoFLO
Rastrigin-2D  1.07e-4   3.88e-8   1.01e-2   6.38e-3   1.10e-5   4.23e-1
Supernova     3.24e-3   3.61e-3   3.18e-2   3.43e-1   1.41e-8   2.62e-1
Feynman       1.73e-1   1.52e-7   8.20e-1   2.88e-1   6.37e-1   2.25e-1
SPOKES        4.62e-1   9.90e-3   2.64e-1   4.17e-2   2.87e-3   4.11e-1
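For concreteness, a PyTorch sketch of the two dense encoders described in C.1 follows; the exact placement of dropout, the output nonlinearity, and the one-dimensional output (the latent dimension used in the experiments) are our reading of the text rather than the released code.

```python
import torch.nn as nn

def spokes_encoder(in_dim):
    """5-layer dense net for SPOKES: 16 Leaky ReLU, then 8/4/2 Sigmoid hidden layers."""
    return nn.Sequential(
        nn.Linear(in_dim, 16), nn.LeakyReLU(), nn.Dropout(0.2),
        nn.Linear(16, 8), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(8, 4), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(4, 2), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(2, 1), nn.LeakyReLU(),  # Leaky ReLU output layer
    )

def small_encoder(in_dim):
    """4-layer dense net for SuperNova/Feynman/Rastrigin: 8 Sigmoid, then 4/2 Leaky ReLU."""
    return nn.Sequential(
        nn.Linear(in_dim, 8), nn.Sigmoid(), nn.Dropout(0.2),
        nn.Linear(8, 4), nn.LeakyReLU(), nn.Dropout(0.2),
        nn.Linear(4, 2), nn.LeakyReLU(), nn.Dropout(0.2),
        nn.Linear(2, 1), nn.LeakyReLU(),
    )

# Trained with Adam at the stated learning rate, e.g.:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
```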
1. What is the main idea of the proposed method in the paper?
2. What are some concerns regarding the writing and notation in the paper?
3. What is unclear about Section 4.2 and how can it be improved?
4. What is unclear about Section 4.3 and how can it be improved?
5. How does the theoretical analysis in the paper compare to previous works, specifically Srinivas et al. (2010)?
6. What are some issues with the experimental design in the paper?
7. How does the proposed method handle high-dimensional input problems?
Review
The paper proposes a method to avoid collisions for the latent space-based Bayesian optimization method. The main idea is to add a regularization term to the training process. Theoretical analysis is also conducted to understand the performance of the proposed method. Although the idea of the proposed method is somewhat interesting, I have many concerns about the paper.

- The writing is not good, which makes the work hard to understand. In particular, the English of the paper is frequently bad (wrong grammar, typos, unfinished sentences). The math notation is occasionally inconsistent. For example, sometimes the penalty is defined as p_{i,j}, sometimes it is denoted as p_{ij}.
- Section 4.2 is too ambiguous. What are z_i, z_j in the equation in Section 4.2? Based on the notation of the latent space Z, I can guess z_i, z_j are values in the latent space, but this should be clearly mentioned in the paper. Also, what does λ represent? And how is it set in practice? I went through the 2nd paragraph in Section 6.2 and still feel unclear about how to set this hyperparameter in practice.
- Section 4.3 is also not clear. What is the intuition behind the weight ω_{ij}? What do γ and ρ represent? How are they set? And what does GP_{Kt}(M_t(x_i)) (in Eq. (1)) denote?
- Regarding the theoretical analysis, unless I am missing something, it is just the standard theorem from Srinivas et al. (2010), but with the assumption that the objective function f is a sample path from the GP replaced by the assumption that the latent space function h is a sample path from the GP. In which cases is this assumption satisfied? And what does it mean that "comparing to Theorem 2 in Srinivas et al. (2010), the second part of the regret bound doesn't rely on δ"? As far as I understand, the regret bound in Theorem 1 is the same as the one in Theorem 2 of Srinivas et al. (2010).
- Regarding the experiments, they are only conducted on low-dimensional problems (2D, 6D, 3D, ...), which contradicts the motivation of the work (BO for high-dimensional inputs). Besides, what does it mean when the neural networks are pretrained on a number of data points? Do we know the corresponding function values of these data points in advance? If yes, are these data points also employed in the optimization procedures of the baseline methods the paper compares with?
ICLR
Title Learning Collision-free Latent Space for Bayesian Optimization Abstract Learning and optimizing a blackbox function is a common task in Bayesian optimization and experimental design. In real-world scenarios (e.g., tuning hyperparameters for deep learning models, synthesizing a protein sequence, etc.), these functions tend to be expensive to evaluate and often rely on high-dimensional inputs. While classical Bayesian optimization algorithms struggle in handling the scale and complexity of modern experimental design tasks, recent works attempt to get around this issue by applying neural networks ahead of the Gaussian process to learn a (low-dimensional) latent representation. We show that such learned representation often leads to collision in the latent space: two points with significantly different observations collide in the learned latent space. Collisions could be regarded as additional noise introduced by the traditional neural network, leading to degraded optimization performance. To address this issue, we propose Collision-Free Latent Space Optimization (CoFLO), which employs a novel regularizer to reduce the collision in the learned latent space and encourage the mapping from the latent space to objective value to be Lipschitz continuous. CoFLO takes in pairs of data points and penalizes those too close in the latent space compared to their target space distance. We provide a rigorous theoretical justification for the regularizer by inspecting the regret of the proposed algorithm. Our empirical results further demonstrate the effectiveness of CoFLO on several synthetic and real-world Bayesian optimization tasks, including a case study for computational cosmic experimental design. 1 INTRODUCTION Bayesian optimization is a classical sequential optimization method and is widely used in various fields, including recommender systems, scientific experimental design, hyper-parameter optimization, etc. Many of theses applications involve evaluating an expensive blackbox function; therefore the number of queries should be minimized. A common way to model the unknown function is via Gaussian processes (GPs) Rasmussen and Williams (2006). GPs have been extensively studied under the bandit setting, and has proven to be an effective approach for addressing a broad class of black-box function optimization problems. One of the key computational challenges for learning with GPs concerns with optimizing specific kernels used to model the covariance structures of GPs. As such optimization task depends on the dimension of feature space, for high dimensional input, it is often prohibitively expensive to train a Gaussian process model. Meanwhile, Gaussian processes are not intrinsically designed to deal with structured input that has a strong correlations among different dimensions, e.g., the graphs and time sequences. Therefore, dimensionality reduction algorithms are needed to speed up the learning process. Recently, it has become popular to investigate GPs in the context of latent space models. As an example, deep kernel learning (Wilson et al., 2016) simultaneously learns a (low-dimensional) data representation and a scalable kernel via an end-to-end trainable deep neural network. In general, the neural network is trained to learn a simpler latent representation with reduced dimension and has the structure information already embedded for the Gaussian process. 
Such a combination of neural network and Gaussian process could improve the scalability and extensibility of classical Bayesian optimization, but it also poses new challenges for the optimization task (Tripp et al., 2020). As we later demonstrate, one critical challenge brought by introducing the neural network is that the latent representation is prone to collisions: two points with significant different observations can get too close in the latent space. The collision effect is especially evident when information is lost by dimension reduction, and/or when the training data is limited in size in Bayesian optimization. As illustrated in Figure 1, when passed through the neural network, data points with drastically different observations are mapped to close positions in the latent space. Such collisions could be regarded as additional noise introduced by the neural network. Although Bayesian optimization is known to be robust to mild noisy observations, the collision in latent space could be harmful to the optimization performance, as it is non-trivial to explicitly model the collision into the acquisition function. In addition, the additional noise induced by the collision effect will further loosen the regret bound for classical Bayesian optimization algorithms (Srinivas et al., 2010). Overview of main results To mitigate the collision effect, we propose a novel regularization scheme which can be applied as a simple plugin amendment for the latent space-based Bayesian optimization models. The proposed algorithm, namely Collision-Free Latent Space Optimization (CoFLO), leverages a regularized regression loss function, to periodically optimize the latent space for Bayesian optimization. Concretely, our regularizer is encoded by a novel pairwise collision penalty function defined jointly on the latent space and the output domain. In order to mitigate the risk of collision in the latent space (and consequently boost the optimization performance), one can apply the regularizer uniformly to the latent space to minimize the collisions. However, in Bayesian global optimization tasks, we seek to prioritize the regions close to the possible optimum, as collisions in these regions are more likely to mislead the optimization algorithm. Based on this insight, we propose a optimizationaware regularization scheme, where we assign a higher weight for the collision penalty on those pairs of points closer to the optimum region in the latent space. This algorithm—which we refer to as dynamically-weighted CoFLO—is designed to dynamically assess the importance of a collision during optimization. Comparing to the uniform collision penalty over the latent space, the dynamic weighting mechanism has demonstrated drastic improvement over the state-of-the-art latent spacebased Bayesian optimization models. We summarize our the key contributions below: • We propose a novel regularization scheme, as a simple plugin amendment for latent spacebased Bayesian optimization models. Our regularizer penalizes collisions in the latent space and effectively reduces the collision effect. • We propose an optimization-aware dynamic weighting mechanism for adjusting the collision penalty to further improve the effectiveness of regularization for Bayesian optimization. • We provide theoretical analysis for the performance of Bayesian optimization on regularized latent space. 
• We conducted an extensive empirical study on four synthetic and real-world datasets, including a real-world case study for cosmic experimental design, and demonstrate strong empirical performance for our algorithm. 2 RELATED WORK Bayesian optimization has demonstrated promisming performance in various cost-sensitive global optimization tasks (Shahriari et al., 2016). However, due to its intrinsic computational limitation in the high-dimensional regime, its applicability has been restricted to relatively simple tasks. In this section, we provide a short survey on recent work in Bayesian learning, which were designed to overcome the high-dimensionality challenge for both Bayesian optimization and regression tasks. Deep kernel learning Deep kernel learning (DKL) (Wilson et al., 2016) combines the power of the Gaussian process and that of neural network by introducing a deep neural network g to learn a mapping g : X → Z from the input domain X to a latent space Z , and use the latent representation z ∈ Z as the input of the Gaussian process. The neural network g and a spectral mixture base kernel k forms a scalable expressive closed-form covariance kernel, denoted by kDK(xi, xj) → k(g(xi), g(xj)), for Gaussian processes. Despite of encouraging results in numerous regression tasks, it remains unclear whether DKL is readily applicable to Bayesian optimization. One key difference in Bayesian regression and optimization tasks is the assumption on the accessibility of training data: Bayesian optimization often assumes limited access to labeled data, while DKL for regression relies on abundant access to data in order to train a deep kernel function. Another problem lies in the difference between the objective functionn. While DKL focuses on improving the general regression performance, it does not specifically address the problem caused by collisions, which—as we later demonstrate in section 3.3—could be harmful for sequential decision making tasks. Representation learning and latent space optimization Aiming at improving the scalability and extensibility of the Gaussian process, various methods are proposed to reduce the dimensionality of the original input. Djolonga et al. (2013) assume that only a subset of input dimensions varies, and the kernel is smooth (i.e. with bounded RKHS norm). Under these assumptions, they underlying subspace via low-rank matrix completion. Huang et al. (2015) use Autoencoder to learn a lowdimensional representation of the inputs to increase GP’s scalability in regression tasks. Snoek et al. (2015) further propose to learn a pre-trained encoder neural network before BO. Lu et al. (2018) learn a variational auto-encoder iteratively during sequential optimization to embed the structure of the input. The challenge for combining latent space learning with Bayesian optimization lies in that a pre-trained neural network may not extract adequate information around the more promising region of the input space. Furthermore, the latent space could be outdated without continuous updates with the latest acquired observation. Tripp et al. (2020) propose to periodically retrain the neural network to learn a better latent space, in order to minimize the number of iterations needed for LSO. They claim that by prioritizing the loss of more promising data points in the original input space (i.e. 
by assigning a higher weight to these data points), the model could focus more on learning highvalue regions and allow a substantial extrapolation in the latent space to accelerate the optimization. However, such a framework does not explicitly deal with collisions in the latent space, which we found to be a key factor in the poor performance of modern latent space optimization algorithms. 3 PROBLEM STATEMENT In this section, we introduce necessary notations and formally state the problem. We focus on the problem of sequentially optimizing the function f : X → R, where X ⊆ Rd is the input domain. In each round t, we pick a point xt ∈ X , and observe the function value perturbed by an additive noise: yt = f(xt) + t with t ∼ N (0, σ2) being i.i.d. Gaussian noise. Our goal is to maximize the sum of rewards ∑T t=1 f(xt) over T iterations, or equivalently, to minimize the cumulative regret RT := ∑T t rt, where rt := maxx∈X f(x)− f(xt) denote the instantaneous regret of xt. 3.1 BAYESIAN OPTIMIZATION Bayesian optimization typically employs Gaussian processes as the statistic tool for modeling the unknown objective function. The major advantage of using GP is that it presents a computationally tractable solution to depict a sophisticated and consistent view across the space of all possible function (Rasmussen and Williams, 2005), which allows closed-form posterior estimation in the function space. BO methods starts with a prior on the black-box function. Upon observing new labels, BO then iteratively updates the posterior distribution in the function space, and maximizes an acquisition function measuring each point’s contribution to finding the optimum, in order to select the next point for evaluation. Formally, in Bayesian optimization we assume that f follows a GP(m(x), k(x, x′)), where m(x) is the mean function, k(x, x′) is the kernel or covariance function. Throughout this paper, we use squared exponential kernel, kSE(x, x′) = σ2SE exp ( − (x−x ′) 2l ) , where the length scale l determines the length of the ”wiggles” and the output variance σ2SE determines the average distance of the function away from its mean. At iteration T , given the historically selected pointsAT = {x1, ..., xt} and the corresponding noisy evaluations yT = [y1, ...yT ], the posterior over f also takes the form of a GP, with mean µT (x), covariance kT (x, x′), and variance σ2T (x): µT (x) = kT (x) T (KT + σ 2I)−1yT kT (x, x ′) = k(x, x′)− kT (x)T (KT + σ2I)−1kT (x′) σ2T (x) = kT (x, x) where kT (x) = [k(x1, x), ..., k(xT , x)]T and KT is the positive definite kernel matrix [k(x, x′)]x,x′∈AT . After obtaining the posterior, one can compute the acquisition function α : X → R, which is used to select the next point to be evaluated. Various acquisition functions have been proposed in the literature, including popular choices such as Upper Confidence Bound (UCB) (Srinivas et al., 2010) and Thompson sampling (TS) (THOMPSON, 1933). UCB uses the upper confidence bound αUCB(x) = µt(x)+β1/2σt(x) with β(x) being the confidence coeffecient, and enjoys rigorous sublinear regret bound. TS usually outperform UCB in practice and has been shown to enjoy a similar regret bound Agrawal and Goyal (2012). It samples a function f̃t from the GP posterior f̃t ∼ GP(µt(x), kt(x, x′)) and then uses the sample as an acquisition function: αTS(x) = f̃t(x). Remark. Regret is commonly used as performance metric for BO methods. In this work we focus on the simple regret r∗T = max x∈X f(x)−max t<T f(xt) and cumulative regret R(T ) = ∑T t rt. 
3.2 LATENT SPACE OPTIMIZATION Recently, Latent Space Optimization (LSO) has been proposed to solve Bayesian optimization problems on complex input domains (Tripp et al., 2020). LSO first learns a latent space mapping g : X → Z to convert the input space X to the latent space Z . Then, it constructs an objective mapping h : Z → R such that f(x) ≈ h(g(x)), ∀z ∈ Z . The latent space mapping g and base kernel k could be regarded as a deep kernel, denote by knn(x, x′) = k(g(x), g(x′)). Thus, the actual input space for BO is the latent space Z and the objective function is h. With acquisition function αnn(x) := α(g(x)), it is unnecessary to compute an inverse mapping g−1 as discussed in Tripp et al. (2020), as BO could directly select xt = arg max xt∈X αnn(x) ∀t ≤ T and evaluate f . In the meantime, BO can leverage the latent space mapping g, usually represented by a neural network, to effectively learn and optimize the target function h on a lower-dimension input space. 3.3 THE COLLISION EFFECT OF LSO When the mapping g : X → Z is represented by a neural network, it may cause undesirable collisions between different input points in the latent space Z . Under the noise-free setting, we say there exists a collision in Z , if ∃xi, xj ∈ X , such that when g(xi) = g(xj), |f(xi) − f(xj)| > 0. Such collision could be regarded as additional (unknown) noise on the observations introduced by the neural network g. For a noisy observation y = f(x) + , we define a collision as follows: for ρ > 0, ∃xi, xj ∈ D, |g(xi)− g(xj)| < ρ|yi − yj |. When the distance between a pair of points in the latent space is too close comparing to their difference in the output space, the different output values for the collided points in the latent space could be interpreted as the effect of additional observation noise. In general, collisions could degrade the performance of LSO. Since the collision effect is a priori unknown, it is often challenging to deal with collisions in LSO, even if we regard it as additional observation noise and increase the (default) noise variance in the Gaussian process. Thus, it is necessary to mitigate the collision effect, by directly restraining it in the representation learning phase. 4 COLLISION-FREE LATENT SPACE OPTIMIZATION In this section, we introduce Collision-Free Latent Space Optimization (CoFLO), an algorithmic framework designed to mitigate the collision effect. 4.1 OVERVIEW OF THE COFLO ALGORITHM The major challenge in restraining collisions in the latent space is that, unlike the traditional regression problem, we cannot quantify it on a single point’s observation. We can, however, quantify collisions by grouping pairs of data points and inspecting their corresponding observations. We define the collision penalty based on pairs of inputs, and further introduce a pair loss function to characterize the collision effect. Based on this pair loss, we propose a novel regularized latent space optimization algorithm1, as summarized in Algorithm 1. The proposed algorithm first uses the pair-wise input and concurrently feeds them into the same network and then calculates the pair loss function. We demonstrate this process in Figure 2. Given a set of labeled data points, we can train the neural network to create an initial latent space representation2, similar to DKL (Wilson et al., 2016). Once provided with the initial representation, we can then refine the latent space by running CoFLO and periodically update the latent space (i.e. 
updating the latent representation after collection a batch of data points) to mitigate the collision effect as we collect more labels Algorithm 1 Collision-Regularized Latent Space Optimization (CoFLO) 1: Input: Regularization weight ρ (cf. Equation 3), penalty parameter λ (cf. Equation 1), retrain interval T̃ , importance weight parameter γ (cf. Equation 2), neural network M0, base kernel K0, prior mean µ0, total time steps T ; 2: for t = 1 to T do 3: xt ← arg max x∈D α(Mt(x)) . maximize acquisition function 4: yt ← evaluation on xt . update observation 5: if t ≡ 0 (mod T̃ ) then 6: Mt+1,Kt+1 ← retrain Mt and Kt with the pair loss function Lρ,λ,γ(Mt,Kt, Dt) as defined in equation 3 . periodical retrain 7: end if 8: end for 9: Output: max t yt 4.2 COLLISION PENALTY In this subsection, we aim to quantify the collision effect based on the definition proposed in Section 3.3. As illustrated in Figure 2, we feed pairs of data points into the neural network and obtain their latent space representations. Apart from maximizing the GP’s likelihood, we concurrently calculate the amount of collision on each pair, and penalize only if the value is positive. For xi, xj ∈ X , yi = f(xi) + , yi = f(xi) + are the corresponding observations, and zi = g(xi), zj = g(xj) are the corresponding latent space representations. 1Note that we have introduced several hyper-parameters in the algorithm design; we will defer our discussion on the choice of these parameters to Section 5. 2To obtain an initial latent space representation, the labels do not have to be exact and could be collected from a related task of cheaper cost We define the collision penalty as pij = max(λ|yi − yj | − |zi − zj |, 0) (1) where λ is a penalty parameter that controls the smoothness of the target function h : Z → R. 4.3 DYNAMIC WEIGHT Note that it is challenging to universally eliminate the collision effect by minimizing the collision penalty and the GP’s regression loss—this is particularly true with a limited amount of training data. Fortunately, in the optimization tasks it is often unnecessary to learn equally good representation for suboptimal regions. Therefore, we can dedicate more training resources to improve the learned latent space by focusing on the (potentially) near-optimal regions. Following this insight, we propose to use a weighted collision penalty function, which uses the objective values for each pair as importance weight in each iteration. Formally, for any pair ((xj , zj , yj), (xi, zi, yi)) in a batch of observation pairsDt = {((xm, zm, ym), (xn, zn, yn))}m,n, we define the importance-weighted penalty function as p̃ij = pijwij with wij = eγ(yi+yj)∑ (m,n)∈Dt eγ(ym+yn) . (2) Here the importance weight γ is used to control the aggressiveness of the weighting strategy. Combining the collision penalty and regression loss of GP, we define the pair loss function L as Lρ,λ,γ(Mt,Kt, Dt) = 1 ||Dt||2 ∑ i∈Dt,j∈Dt (GPKt(Mt(xi))− yi)2 + (GPKt(Mt(xj))− yj)2 + ρp̃ij , (3) Here, GPKt(Mt(xi)) denotes the Gaussian process’s posterior mean on xi with kernel Kt and neural networkMt at timestep t. ρ denotes the regularization weight; as we demonstrate in Section 5, in practice we often choose ρ to keep the penalty at a order close to the regression loss. 4.4 THEORETICAL ANALYSIS In this subsection, we provide a theoretical justification for the collision-free regularizer, by inspecting the effect of regularization on the regret bound of CoFLO. 
4.4 THEORETICAL ANALYSIS
In this subsection, we provide a theoretical justification for the collision-free regularizer by inspecting the effect of the regularization on the regret bound of CoFLO. We first connect the proposed collision penalty in Equation 1 to Lipschitz continuity, and then integrate it into the regret analysis to provide an improved regret bound.
Lipschitz continuity of the target function h. The collision penalty encourages Lipschitz continuity of h. Formally, the proposed regularization encourages learning a latent space in which, for all x_i, x_j ∈ D with z_i = g(x_i), z_j = g(x_j) ∈ Z,
λ|f(x_i) − f(x_j)| ≤ |g(x_i) − g(x_j)|.
This inequality amounts to Lipschitz continuity of h with Lipschitz constant 1/λ. Unlike typical smoothness assumptions in GPs, a function can be non-smooth and still Lipschitz continuous. Recently, Ahmed et al. (2019) leveraged the Lipschitz continuity of the objective function to propose improved versions of common acquisition functions, with an improved regret bound both theoretically and empirically. In the following, we show that running GP-UCB on the collision-free latent space yields an improvement in terms of its regret bound:
Theorem 1. Let Z ⊂ [0, r]^d be compact and convex, d ∈ N, r > 0, L ≥ 0. Suppose that the objective function h defined on Z is a sample from a GP and is Lipschitz continuous with Lipschitz constant L. Let δ ∈ (0, 1), and define β_t = 2 log(π²t²/6δ) + 2d log(Lrdt²). Running GP-UCB with β_t for a sample h of a GP with mean function zero and covariance function k(z, z′), we obtain a regret bound of O*(√(dTγ_T)) with high probability. Precisely, with C₁ = 8/log(1 + σ⁻²), we have
P[ R_T ≤ √(C₁ T β_T γ_T) + 2 ] ≥ 1 − δ.
Here γ_T is the maximum information gain after T iterations, defined as γ_T := max_{A⊂Z, |A|=T} I(y_A; h_A).
Compared with Theorem 2 of Srinivas et al. (2010), which offers a regret bound under a sub-Gaussianity assumption on the derivative of the objective function, the second part of our regret bound does not rely on δ. The coefficients are also smaller, as the deterministic bound on the derivative of h avoids a union bound.
Remark. The collision penalty encourages h to be Lipschitz continuous on the latent space. Ideally, when the collision penalty p_ij(λ) converges to zero for all pairs of data points, we can claim that h is Lipschitz continuous with Lipschitz constant at most 1/λ. Applying Theorem 1 with L = 1/λ, i.e. β_t = 2 log(π²t²/6δ) + 2d log(rdt²/λ), we can reduce the regret bound by choosing a larger λ. However, in practice, since the observations can be noisy, λ must remain small enough to tolerate the noise: an overly large λ could make it difficult to learn a meaningful representation.
5 EXPERIMENTS
In this section, we empirically evaluate our algorithm on two synthetic black-box function optimization tasks and two real-world optimization problems.
5.1 EXPERIMENTAL SETUP
We consider four baselines in our experiments. A rudimentary random selection algorithm (RS) illustrates the difficulty of each task. Three popular optimization algorithms, namely particle swarm optimization (PSO) (Miranda, 2018), the Tree-structured Parzen Estimator approach (TPE) (Bergstra et al., 2011), and standard Bayesian optimization (BO) (Nogueira, 2014), which uses a Gaussian process as its statistical model and the upper confidence bound (UCB) as its acquisition function, are tuned for each task. A further baseline is the sample-efficient LSO (SE LSO) algorithm, implemented following Tripp et al. (2020). We also compare the non-regularized latent space optimization (LSO), Collision-Free Latent Space Optimization (CoFLO) and the dynamically-weighted CoFLO (DW CoFLO) proposed in this paper.
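As an illustration, the UCB rule shared by the BO baseline and CoFLO, with the β_t schedule of Theorem 1, can be sketched as follows; the Lipschitz constant (denoted lip) equals 1/λ once the collision penalty has vanished, and the constants are our reading of the theorem statement:

```python
import numpy as np

def beta_t(t, d, r, lip, delta):
    """beta_t = 2*log(pi^2 t^2 / (6 delta)) + 2d*log(L r d t^2), from Theorem 1."""
    return 2 * np.log(np.pi**2 * t**2 / (6 * delta)) + 2 * d * np.log(lip * r * d * t**2)

def ucb_scores(mu, sigma, t, d=1, r=1.0, lip=1.0, delta=0.1):
    """GP-UCB acquisition on candidates: alpha(z) = mu(z) + sqrt(beta_t) * sigma(z)."""
    return mu + np.sqrt(beta_t(t, d, r, lip, delta)) * sigma

# Select the next latent point among candidates with known posterior mean/std.
mu = np.array([0.2, 0.5, 0.4])
sigma = np.array([0.3, 0.1, 0.4])
next_idx = int(np.argmax(ucb_scores(mu, sigma, t=3)))
```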
The performance on each task is measured over 10,000 pre-collected data points. One crucial practical problem is tuning the hyper-parameters. The hyper-parameters of the GP are tuned during the periodic retraining in the optimization process, by minimizing the loss function on a validation set. For all our tasks, we choose a simple neural network architecture M, owing to the limited and expensive access to labeled data under the BO setting. The coefficient ρ is, in general, selected to keep the collision penalty at an order similar to the GP loss. The parameter λ should be estimated from the first few sampled data points so as to tolerate the additive noise in the evaluations. γ controls the aggressiveness of the importance weighting. While γ should not be too close to zero (which would be equivalent to uniform weighting), an extremely high value could make the regularization overly biased. Such a severe bias could allow a heavily collided representation over most of the latent space and degrade the effectiveness of the regularization. The choice of value is similar to that of the inverse temperature of the softmax in deep learning (Hinton et al., 2015). Here we use the first batch of observed samples to estimate the order of magnitude of the observations and choose an appropriate γ.
5.2 DATASETS AND RESULTS
We now evaluate CoFLO on two synthetic datasets and two real-world datasets. In all experiments, the input data points are mapped to a one-dimensional latent space via the neural network. We demonstrate the improvement brought by CoFLO's explicit collision mitigation in the lower-dimensional latent space in terms of average simple regret. We also include the median results and statistical tests in the appendix.
2D Rastrigin. The Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It was first proposed by Rastrigin (1974) and has become a popular benchmark for evaluating Gaussian process regression algorithms (Cully et al., 2018). Formally, the 2D Rastrigin function is
f(x) = 10d + Σ_{i=1}^{d} [x_i² − 10 cos(2πx_i)],  d = 2.
For ease of comparison, we take −f(x) as the objective value, so that the optimization task becomes a maximization task. The neural network is pretrained on 100 data points. As illustrated by Figure 8a, CoFLO and DW CoFLO quickly reach the (near-)optimal region, while the baselines generally suffer a larger simple regret even after an excessive number of iterations.
Feynman III.9.52 Equation. Growing datasets have motivated purely data-driven analysis in physics. The dataset of 100 equations from the Feynman Lectures on Physics, built for symbolic regression tasks in physics (Udrescu and Tegmark, 2020), can serve as a test set for data-driven analysis algorithms in physics. Equation III.9.52, which we choose to test the optimization algorithms, is
ρ_γ = (p_d E_f t / (h/2π)) · sin²((ω − ω₀)t/2) / ((ω − ω₀)t/2)².
The equation has 6 input variables and is reported to require at least 10³ data points for the regression task. The neural network is randomly initialized at the beginning. As illustrated by Figure 8b, during the first 100 iterations CoFLO and DW CoFLO behave similarly to random selection. After the first retraining at iteration 100, CoFLO and DW CoFLO approach the optimum at a much faster pace than the baselines; among them, DW CoFLO shows a faster reduction in simple regret.
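The two synthetic objectives can be written down directly; a minimal sketch (np.sinc is used to handle the removable singularity at ω = ω₀):

```python
import numpy as np

def neg_rastrigin_2d(x):
    """Negated 2-D Rastrigin, so that the benchmark becomes a maximization task;
    the global maximum is 0 at x = (0, 0)."""
    x = np.asarray(x, dtype=float)
    return -(10 * 2 + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def feynman_iii_9_52(p_d, E_f, t, h, omega, omega_0):
    """Feynman III.9.52; np.sinc(u/pi) equals sin(u)/u, including at u = 0."""
    u = (omega - omega_0) * t / 2.0
    return (p_d * E_f * t) / (h / (2 * np.pi)) * np.sinc(u / np.pi) ** 2

print(neg_rastrigin_2d([0.0, 0.0]))  # -0.0
```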
Supernova. Our first real-world task is to perform maximum likelihood inference on 3 cosmological parameters: the Hubble constant H₀ ∈ (60, 80), the dark matter fraction Ω_M ∈ (0, 1) and the dark energy fraction Ω_Λ ∈ (0, 1). The likelihood is given by the Robertson–Walker metric, which requires a one-dimensional numerical integration for each point in the dataset from Davis et al. (2007). The neural network is pretrained on one hundred data points. As illustrated by Figure 8c, the simple regret of SE LSO drops faster at the beginning, but later remains relatively stable and eventually ends at a level similar to that of LSO. These results demonstrate the efficiency of SE LSO at finding sub-optimal solutions. Without collision reduction, however, SE LSO cannot outperform LSO in the long run, and both eventually reach their limits. In contrast, CoFLO and DW CoFLO demonstrate their robustness close to the optimum, as both steadily approach it; DW CoFLO slightly outperforms CoFLO.
Redshift Distribution. The challenges in designing and optimizing cosmological experiments grow commensurately with their scale and complexity. Careful accounting of all the requirements and features of these experiments becomes increasingly necessary to achieve the goals of a given cosmic survey. SPOKES (SPectrOscopic KEn Simulation) is an end-to-end framework that can simulate all the operations and key decisions of a cosmic survey (Nord et al., 2016). It can be used for the design, optimization, and forecasting of any cosmic experiment. For example, some cosmic survey campaigns endeavor to observe populations of galaxies that exist at a specific range of redshifts (distances) from us. In this work, we use SPOKES to generate galaxies within a specified window of distances from Earth. We then minimize the Hausdorff distance between the desired redshift distribution and the one produced by the SPOKES simulation of a specific cosmological survey. In our experiments, the neural network is pretrained on 200 data points. As illustrated by Figure 8d, the simple regret of SE LSO drops faster in the initial phase. However, once it gets close to the (near-)optimal region, where the simple regret is approximately 0.15, it is caught up by both CoFLO and DW CoFLO, and is eventually slightly outperformed. This result indicates that the collision problem has a larger impact as the algorithm gets close to the optimal region. Notice that rudimentary BO eventually outperforms the non-regularized LSO, indicating that, without collision mitigation, the learned representation can worsen performance in the later stage, when the algorithm is close to the optimum. In conclusion, collision mitigation such as CoFLO's is necessary to further improve the later-stage performance of LSO, since collisions matter more in the near-optimal regions.
5.3 DISCUSSION
Overall, our experimental results consistently demonstrate the robustness of our method against collisions in the learned latent space. Our method outperforms all baselines; compared to the sample-efficient LSO, the dynamically-weighted CoFLO performs better in most cases and shows a steady ability to reach the optimum by explicitly mitigating collisions in the latent space. In contrast, the sample-efficient LSO may fail due to the collision problem.
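The simple regret reported in all of the above comparisons can be tracked as in the following sketch; f_opt is the known optimum of the benchmark, and the mean and median curves aggregate such per-run curves:

```python
import numpy as np

def simple_regret_curve(f_opt, observed_values):
    """Simple regret after each evaluation: f(x*) minus the best value found so far."""
    best_so_far = np.maximum.accumulate(np.asarray(observed_values, dtype=float))
    return f_opt - best_so_far

# Two illustrative runs; the mean over axis 0 gives curves as in Section 5,
# and np.median gives median curves as in Appendix D.
runs = np.array([[0.1, 0.4, 0.4, 0.9], [0.2, 0.2, 0.8, 0.8]])
curves = np.stack([simple_regret_curve(1.0, r) for r in runs])
mean_curve = curves.mean(axis=0)
```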
6 CONCLUSION
We have proposed a novel regularization scheme for latent-space-based Bayesian optimization. Our algorithm, CoFLO, addresses the collision problem induced by dimensionality reduction and improves the performance of latent space-based optimization algorithms. The regularization is shown to be effective in mitigating the collision problem in the learned latent space, and therefore boosts the performance of Bayesian optimization in the latent space. We demonstrate strong empirical results for CoFLO on several synthetic and real-world datasets, and show that CoFLO is capable of dealing with high-dimensional inputs, which could be highly valuable for real-world experimental design tasks such as cosmological survey scheduling.
A REGRET BOUND FOR A LIPSCHITZ-CONTINUOUS OBJECTIVE FUNCTION
In this section, we provide a detailed proof of Theorem 1. We first modify Lemmas 5.7 and 5.8 of Srinivas et al. (2010), since we assume deterministic Lipschitz continuity of h. We use the same analysis tool: a discretization Z_t ⊂ Z used at time t in the analysis. We choose a discretization Z_t of size (τ_t)^d such that
∀z ∈ Z, ||z − [z]_t||₁ ≤ rd/τ_t,   (4)
where [z]_t denotes the point in Z_t closest to z.
Lemma 1. Pick δ ∈ (0, 1) and set β_t = 2 log(π_t/δ) + 2d log(Lrdt²), where Σ_{t≥1} π_t⁻¹ = 1, π_t > 0. Let τ_t = Lrdt². Then
|h(z*) − µ_{t−1}([z*]_t)| ≤ β_t^{1/2} σ_{t−1}([z*]_t) + 1/t²  ∀t ≥ 1
holds with probability ≥ 1 − δ. Here z* := g(x*).
Proof. Using the Lipschitz continuity and Equation 4, we have
∀z ∈ Z, |h(z) − h([z]_t)| ≤ Lrd/τ_t.
Choosing τ_t = Lrdt², we have |Z_t| = (Lrdt²)^d and
∀z ∈ Z, |h(z) − h([z]_t)| ≤ 1/t².
Applying Lemma 5.6 of Srinivas et al. (2010) then yields the stated result.
Based on Lemma 5.5 of Srinivas et al. (2010) and Lemma 1, we directly obtain the following result.
Lemma 2. Pick δ ∈ (0, 1) and set β_t = 2 log(2π_t/δ) + 2d log(Lrdt²), where Σ_{t≥1} π_t⁻¹ = 1, π_t > 0. Then with probability ≥ 1 − δ, for all t ∈ N, the regret is bounded as follows:
r_t ≤ 2β_t^{1/2} σ_{t−1}(z_t) + 1/t².
Proof. Using a union bound with δ/2 in both Lemma 5.5 of Srinivas et al. (2010) and Lemma 1, we have that with probability 1 − δ:
r_t = h(z*) − h(z_t) ≤ β_t^{1/2} σ_{t−1}(z_t) + 1/t² + µ_{t−1}(z_t) − h(z_t) ≤ 2β_t^{1/2} σ_{t−1}(z_t) + 1/t²,
which completes the proof.
We are now ready to use Lemma 5.4 of Srinivas et al. (2010) and Lemma 2 to complete the proof of Theorem 1.
Proof. Using Lemma 5.4 of Srinivas et al. (2010), we have that with probability ≥ 1 − δ:
Σ_{t=1}^{T} 4β_t σ²_{t−1}(z_t) ≤ C₁ β_T γ_T  ∀T ≥ 1.
By Cauchy–Schwarz:
Σ_{t=1}^{T} 2β_t^{1/2} σ_{t−1}(z_t) ≤ √(C₁ T β_T γ_T)  ∀T ≥ 1.
Finally, substituting π_t with π²t²/6 (since Σ 1/t² = π²/6), Theorem 1 follows.
B VISUALIZATION OF THE COLLISION EFFECT IN LATENT SPACE
We visualize the collision effect in the latent space. We trained the same neural network on the Feynman dataset with 101 data points, which shows the latent space after two retrainings with the retrain interval set to 50 data points. The regularized version employed DW CoFLO, with regularization weight ρ = 1e5, penalty parameter λ = 1e−2, retrain interval T̃, weighting parameter γ = 1e−2 and a squared exponential base kernel. The non-regularized version employed LSO.
C SUPPLEMENTAL MATERIALS ON ALGORITHMIC DETAILS
C.1 ALGORITHMIC DETAILS ON NEURAL NETWORK ARCHITECTURE
As the main goal of our paper is to showcase the performance of a novel collision-free regularizer, we chose basic multi-layer dense neural networks as our architectures. For SPOKES, we used a 5-layer dense neural network. Its hidden layers consist of 16 neurons with Leaky ReLU nonlinearities, 8 neurons with sigmoid nonlinearities, 4 neurons with sigmoid nonlinearities, and 2 neurons with sigmoid nonlinearities, respectively. Each hidden layer also applies a 0.2 dropout rate. The output layer applies a Leaky ReLU nonlinearity. For SuperNova, Feynman, and Rastrigin 2D, we used a 4-layer dense neural network. Its hidden layers consist of 8 neurons with sigmoid nonlinearities, 4 neurons with Leaky ReLU nonlinearities, and 2 neurons with Leaky ReLU nonlinearities, respectively. Each hidden layer applies a 0.2 dropout rate. The output layer also applies a Leaky ReLU nonlinearity. The neural networks are trained using Adam with a learning rate of 1e−2.
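For reference, the 4-layer encoder described above could be written as follows in Keras; this is our sketch under the assumption of a 1-D latent output (cf. Section 5.2), as the paper does not provide code or name a framework:

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_encoder_4layer():
    """Sketch of the SuperNova/Feynman/Rastrigin encoder of Appendix C.1:
    hidden layers of 8 (sigmoid), 4 and 2 (Leaky ReLU) units with 0.2 dropout
    each, and a Leaky ReLU output mapping to the 1-D latent space."""
    return tf.keras.Sequential([
        layers.Dense(8, activation="sigmoid"), layers.Dropout(0.2),
        layers.Dense(4), layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(2), layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(1), layers.LeakyReLU(),
    ])

model = make_encoder_4layer()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2), loss="mse")
```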
C.2 PARAMETER CHOICES
We further investigate the robustness of the choices of both the regularization weight ρ and the penalty parameter λ on the SPOKES dataset. The results are shown in the figures below.
D ADDITIONAL EXPERIMENTAL RESULTS
We provide both the detailed median curves and the p-values of Welch's t-tests for the experiments discussed in Section 5.
D.1 MEDIAN CURVES
The median curves exhibit trends similar to the mean curves. In all four experiments, DW CoFLO consistently demonstrates superior performance over the baselines.
[Figure: median simple regret curves on (a) Rastrigin 2D, (b) Feynman III.9.52, (c) Supernova, (d) SPOKES.]
D.2 P-VALUES
The table below shows the p-values of Welch's t-tests comparing DW CoFLO against each baseline. It demonstrates the significance of the improvement brought by DW CoFLO over the baselines.

Data          BO       RS       TPE      LSO      SE-LSO   CoFLO
Rastrigin-2D  1.07e−4  3.88e−8  1.01e−2  6.38e−3  1.10e−5  4.23e−1
Supernova     3.24e−3  3.61e−3  3.18e−2  3.43e−1  1.41e−8  2.62e−1
Feynman       1.73e−1  1.52e−7  8.20e−1  2.88e−1  6.37e−1  2.25e−1
SPOKES        4.62e−1  9.90e−3  2.64e−1  4.17e−2  2.87e−3  4.11e−1
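The p-values above come from Welch's (unequal-variance) t-tests; with SciPy this amounts to the following (the numbers here are made-up placeholders, not the paper's runs):

```python
from scipy.stats import ttest_ind

# Final simple regrets of DW CoFLO vs. a baseline over repeated runs (placeholders).
dw_coflo = [0.02, 0.03, 0.01, 0.02, 0.04]
baseline = [0.10, 0.12, 0.08, 0.15, 0.09]
t_stat, p_value = ttest_ind(dw_coflo, baseline, equal_var=False)  # Welch's t-test
```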
1. What are the key contributions and novel aspects introduced by the paper regarding latent space-based BO? 2. What are the strengths of the proposed approach, particularly in addressing the collision problem in the latent space? 3. Do you have any concerns or suggestions regarding the experimental settings and demonstrations? 4. How does the reviewer assess the clarity, quality, and presentation of the paper's content? 5. Are there any typos or errors in the paper that need to be addressed?
Review
Review The paper proposes (1) a new regularization strategy for latent space-based BO, (2) an optimization-aware dynamic weighting for adjusting the collision penalty to improve BO, and (3) a theoretical analysis of BO on the latent space. The idea behind the regularization is to take in pairs of data points and penalize those that are too close in the latent space compared to their target-space distance.
The paper makes an interesting observation that the learned representation (used by BO to deal with complex objects or high dimension) often leads to collisions in the latent space: two points with significantly different observations get too close in the learned latent space. Collisions can be regarded as additional noise introduced by the traditional neural network, leading to degraded optimization performance.
The mapping by a neural network to learn g: D->Z is typically treated as a regression problem in which the neural network should learn the property that similar inputs have similar outputs. A pair loss is integrated into the learning of the neural network. The dynamic weight improves the learned latent space by focusing on the potentially high-value region.
The idea of using constraints in the latent space has also been studied in [1].
Despite the good motivation, the paper's execution has not yet demonstrated the effectiveness of the proposed approach, for three main reasons: (1) The experiments, using 4 settings, are quite simple and have not yet satisfactorily shown why the proposed approach intuitively performs better. They could be improved further by demonstrating the collision effect on a more challenging task, such as automatic chemical design [2]. (2) The theoretical analysis follows and extends Srinivas et al. 2010. (3) Fig 1 demonstrates the collision in 1d using the non-regularized latent space? It would be useful to add another figure in the same setting using the regularized latent space.
The writing and presentation can be improved. Typo: Section 5.1: "promotse". Remark: why is "Choosing" capitalized in the middle of the sentence? "UCB use the upper…" => "UCB uses the upper…".
[1] Kusner, M. J., Paige, B., & Hernández-Lobato, J. M. (2017). Grammar Variational Autoencoder. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, pp. 1945-1954.
[2] Griffiths, Ryan-Rhys, and José Miguel Hernández-Lobato. "Constrained Bayesian optimization for automatic chemical design using variational autoencoders." Chemical Science 11.2 (2020): 577-586.
ICLR
Title Sparse matrix products for neural network compression Abstract Over-parameterization of neural networks is a well known issue that comes along with their great performance. Among the many approaches proposed to tackle this problem, low-rank tensor decompositions are largely investigated to compress deep neural networks. Such techniques rely on a low-rank assumption on the layer weight tensors that does not always hold in practice. Following this observation, this paper studies sparsity-inducing techniques to build new sparse matrix product layers for high-rate neural network compression. Specifically, we explore recent advances in sparse optimization to replace each layer's weight matrix, either convolutional or fully connected, by a product of sparse matrices. Our experiments validate that our approach provides a better compression-accuracy trade-off than the most popular low-rank-based compression techniques.
1 Introduction
The success of neural networks in the processing of structured data is in part due to their over-parametrization, which plays a key role in their ability to learn rich features from the data (Neyshabur et al., 2018). Unfortunately, this also makes most state-of-the-art models so huge that they are expensive to store and impossible to operate on devices with limited resources (memory, computing capacity) or that cannot integrate GPUs (Cheng et al., 2017). This problem has led to a popular line of research on "neural network compression", which aims at building models with few parameters while preserving their accuracy.
State of the art techniques for neural network compression. Popular matrix or tensor decomposition methods, including Singular Value Decomposition (SVD), CANDECOMP/PARAFAC (CP) and Tucker, have been used to address the problem of model compression through a low-rank approximation of the neural network's weights after learning. Sainath et al. (2013) describe a method based on SVD to compress weight matrices in fully connected layers. Denton et al. (2014); Lebedev et al. (2015); Kim et al. (2016) generalize this idea to convolutional layers and reduce the memory footprint of convolution kernels by using higher-order low-rank decompositions such as CP or Tucker decompositions. Besides, the Tensor-Train (TT) decomposition has been explored to compress both dense and convolutional layers after a pre-training step (Novikov et al., 2015). This approach may achieve extreme compression rates, but it also has practical downsides that we describe now. In TT format, all the elements of an M-order tensor are expressed by a product of M matrices whose dimensions are determined by the TT-ranks (R₀, R₁, . . . , R_M). For each of the M dimensions of the initial tensor, the corresponding matrices can be stacked into an order-3 tensor called a "core" of the decomposition. Hence, the layer weight is decomposed into a set of M cores of small dimensions. Novikov et al. (2015) use this tensor representation to factorize fully connected layers. They first reshape the matrix of weights into an M-order tensor, then apply the TT decomposition. By choosing sufficiently small R_m values, this technique makes it possible to obtain a high compression ratio on extremely wide ad hoc neural architectures. Garipov et al. (2016) have adapted this idea to convolutional layers. However, the current formulation of such a TT convolutional layer involves the multiplication of all input values by a matrix of dimension 1 × R₁, thus inflating the memory footprint of the input by a factor of R₁.
This makes the available implementation (Garipov, 2020) unusable for recent wide convolutional networks at inference time. Other compression methods include unstructured pruning techniques, which we review in more detail in Section 2.3, and structured pruning techniques, which reduce the inner hidden dimensions of the network by completely removing neurons (Anwar et al., 2017). According to the recent paper of Liu et al. (2018), however, these techniques are more akin to Neural Architecture Search than to actual network compression. Finally, quantization-based compression maps the columns of the weight matrices in the network to a subset of reference columns with a lower memory footprint (Guo, 2018).
Sparse matrix products for full-rank decompositions. We are specifically interested in the high-rate compression of neural networks via the efficient factorization of the layer weight matrices. Most known approaches to layer decomposition make a low-rank assumption on the layer weight tensors, which does not always hold in practice. As we will show in the experiments, this makes the Tucker and SVD based techniques unable to effectively reach high compression rates for standard architectures including both convolutional and fully connected layers, such as VGG19 or ResNet50, whose weight matrices usually exhibit full rank. In this paper, we propose instead to express the weight matrices of fully-connected or convolutional layers as a product of sparse factors, which contains very few parameters but can still represent high-rank matrices. Moreover, products of matrices with a total sparsity budget are strictly more expressive than single matrices with that sparsity (Dao et al., 2019), which motivates our interest in products of multiple matrices. Usually, a linear operator (a matrix) from R^D to R^D has time and space complexities of O(D²). But some well known operators like the Hadamard or the Fourier transforms can be expressed as a product of log D sparse matrices, each having O(D) non-zero values (Dao et al., 2019; Magoarou & Gribonval, 2016). These linear operators, called fast operators, thus have time and space complexities lowered to O(D log D). This interesting feature of fast operators has inspired the design of new algorithms that learn sparse matrix product representations of existing fast transforms (Dao et al., 2019), or even that compute sparse product approximations of any matrix in order to accelerate learning and inference (Magoarou & Gribonval, 2016; Giffon et al., 2019). Even though these methods were initially designed to recover the log D factors corresponding to a fast transform, they are more general than that and can actually be used to find a factorization with Q < log D sparse matrices.
Contributions. We introduce a general framework for neural network compression using the factorization of layers into sparse matrix products. We explore the use of the recently proposed palm4MSA algorithm (Magoarou & Gribonval, 2016) on every layer of a pre-trained neural network to express it as a product of sparse matrices. The obtained sparse matrices are then refined by gradient descent to best fit the final prediction task. When there is only one sparse matrix in the decomposition, our approach recovers the simple procedure of hard-thresholding the weights of a matrix after pre-training.
We evaluate the effect of different hyper-parameters on our method and show that layers can be factorized into two or three sparse matrices to obtain high compression rates while preserving good performance, compared to several state-of-the-art methods for neural network compression.
2 Learning sparse matrix products for network compression
We describe how to compress NN weight matrices by sparse matrix factorization. We call our procedure PSM, for Product of Sparse Matrices. It is easy to see that a product of sparse matrices with a given sparsity budget can recover a full-rank matrix, or a matrix with more non-zero values than the initial sparsity budget. This observation motivates the use of a sparse matrix factorization in place of the usual low-rank decompositions and sparsity-inducing techniques for neural network compression. We first recall the linear transform operations in fully-connected and convolutional layers. Then, inspired by recent work on learning linear operators with fast-transform structures, we propose to use a product of sparse matrices to replace the linear transforms in neural networks. We also introduce a procedure to learn such a factorization for every layer in a deep architecture. Finally, we review some known neural network compression techniques that appear as particular cases of our framework.
2.1 Weight matrices as products of sparse matrices
Fully-connected and convolutional layers are based on the computation of linear operations. In a fully-connected layer, the output z ∈ R^{D′} is simply given by z = a(Wx), where a is some non-linear activation function, W ∈ R^{D′×D} is the weight matrix of the layer and x ∈ R^D is the output of the preceding layer. The linear operation in a convolutional layer can be represented by a doubly-block Toeplitz matrix (Wang et al., 2020). Another way to perform the operation is to employ reshaping operators, so that the linear operator becomes a dense matrix applied to all the patches extracted from the input (Garipov et al., 2016). In this work, we focus on this latter representation of the convolution operation. More formally, let r_S : R^{H×W×C} → R^{HW×CS²} be the reshape operation that creates the matrix of all vectorized patches of size S × S (height and width) from an input image with C channels. The matrix of K filters W ∈ R^{CS²×K} can then be applied to these patches (multiplied with the output of r_S) to produce the output of the convolutional layer in matrix form. Finally, a second reshape operator t : R^{HW×K} → R^{H×W×K} is applied to the feature map matrix to reconstruct the output tensor of the layer Z ∈ R^{H×W×K}. Altogether, the convolution operation can be written as Z = a(t(r_S(X)W)), where a is some non-linear activation function and X is the output 3-D tensor of the preceding layer. To keep the notation simple, we assume without loss of generality that the stride used by r_S is equal to 1 and that the input tensor is padded with ⌊S/2⌋ zeros vertically and horizontally. The whole process is depicted in Supplementary Material A.2.
Our general idea is to replace the weight matrix of each neural network layer with a product of Q sparse matrices, hence reducing the storage and computational complexities of the layer. Indeed, for an initial matrix of dimension (D × D′), if all sparse matrices store O(D) non-zero values, then the total complexity of the product becomes O(QD) instead of O(DD′). To define a fast-transform operator, one would use Q = log D, but in practice we show that we can choose a much smaller Q and achieve huge compression rates without lowering the performance much. Supplementary Material A.1 illustrates the effect of our compression scheme on a simple architecture including one convolutional layer and a single dense layer. Given an input vector x ∈ R^D, expressing the weight matrix W ∈ R^{D′×D} of a fully connected layer as a product of sparse matrices gives the output z such that:
z = a(∏_{i=1}^{Q} S_i x),   (1)
where ||S_i||₀ = O(D), so that the time and space complexities of this layer are reduced to O(QD) instead of O(DD′). Similarly, in convolutional layers, the output Z ∈ R^{H×W×K} is obtained from an input tensor X ∈ R^{H×W×C} as:
Z = a(t(r_S(X) ∏_{i=1}^{Q} S_i)),   (2)
where ||S_i||₀ = O(max(S²C, K)), so that the time complexity of the layer is reduced from O(HWCS²K) to O(HWQ · max(CS², K)) and the space complexity is reduced from O(CS²K) to O(Q · max(CS², K)). Since there is no constraint on the rank of the factors, the sparse matrix product of each layer can reach full rank, unlike low-rank decomposition methods. Moreover, the reconstruction of a sparse matrix product with a total of O(QD) non-zero values can produce a matrix with more than O(QD) non-zero values. This is consistent with the intuition that a product of sparse matrices can be more expressive than a single sparse matrix.
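As a concrete sketch of Equation (1), the forward pass of a PSM fully connected layer can be written with SciPy sparse factors; this is our illustration, not the paper's code (the paper's actual implementation uses masked dense matrices, cf. Supplementary Material A.3):

```python
import numpy as np
from scipy import sparse

def psm_dense_forward(x, factors, activation=lambda v: np.maximum(v, 0.0)):
    """Forward pass of a PSM fully connected layer: z = a(S_1 S_2 ... S_Q x).

    x: (D,) input vector; factors: list of Q scipy.sparse matrices whose
    product has shape (D', D).
    """
    z = x
    for S in reversed(factors):   # apply the right-most factor first
        z = S @ z
    return activation(z)

# Q = 2 square sparse factors, each with O(D) non-zeros, replacing a 64 x 64 dense W.
D = 64
S1 = sparse.random(D, D, density=4 / D, format="csr")
S2 = sparse.random(D, D, density=4 / D, format="csr")
z = psm_dense_forward(np.random.randn(D), [S1, S2])
```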
2.2 Full Neural Network Compression
The full compression pipeline we propose consists of, first, the training of a standard NN; second, the compression of each layer independently as a product of sparse matrices; and finally a fine-tuning of the compressed NN, whose layers are all expressed as PSM layers. The second step requires approximating each weight matrix W (of a dense or a convolutional layer) as a product of sparse factors, which is cast as the following optimization problem:
min_{{S_i}_{i=1}^{Q}} ‖W − ∏_{i=1}^{Q} S_i‖²_F + Σ_{i=1}^{Q} δ_{E_i}(S_i),   (3)
where for each i ∈ ⟦Q⟧, δ_{E_i}(S_i) = 0 if S_i ∈ E_i and δ_{E_i}(S_i) = +∞ otherwise. E_i is the set of solutions that respect a sparsity constraint (e.g., a number of non-zero values). Although this problem is non-convex and non-differentiable, and the computation of a global optimum cannot be ascertained, the palm4MSA algorithm proposed by Magoarou & Gribonval (2016) is able to learn such a factorization by finding a local minimum with convergence guarantees. For more details about palm4MSA, see Supplementary Material A.4. Once every layer's weight matrix is approximated by a product of sparse matrices, these PSM layers are assembled into a compressed NN, which is refined to optimize the initial task objective while the sparsity support of all factors is kept fixed.
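Here is a sketch of the quantities involved in Equation (3): the reconstruction error of a candidate factorization, and the Q = 1 special case, where the constrained minimizer reduces to hard thresholding (helper names are ours):

```python
import numpy as np

def psm_reconstruction_error(W, factors):
    """||W - S_1 S_2 ... S_Q||_F^2, the data-fit term of Eq. (3); the indicator
    terms are zero as long as every factor lies in its constraint set E_i."""
    approx = factors[0]
    for S in factors[1:]:
        approx = approx @ S
    return np.linalg.norm(W - approx, ord="fro") ** 2

def hard_threshold(W, k):
    """Q = 1 special case: keep the k largest-magnitude entries of W
    (ties at the cutoff may keep a few extra entries)."""
    flat = np.abs(W).ravel()
    cutoff = np.partition(flat, flat.size - k)[flat.size - k]
    return np.where(np.abs(W) >= cutoff, W, 0.0)
```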
2.3 Related work
Some techniques based on inducing sparsity in neural connections, e.g. zeroing single weights in layer tensors, can be seen as particular cases of our method. The most straightforward approach is to simply remove the weights with the lowest magnitudes until a given sparsity ratio is reached. This can be done in a very trivial fashion by removing weights from a pre-trained network and then fine-tuning the remaining connections. This method can be seen as the particular case of ours where only one factor is used to approximate the weight matrices (i.e., Q = 1). As we show in the experiments, this method does not allow high compression rates without degrading the accuracy. Zhu & Gupta (2017) proposed instead to interleave the removal of connections with fine-tuning of the remaining weights, achieving better classification performance. Other approaches for inducing sparsity in the network have been proposed (Molchanov et al., 2017; Louizos et al., 2018), but they do not seem to offer performance improvements in general settings (Gale et al., 2019).
The idea of replacing layers by a sparse factorization has been explored previously, but restricted to particular structures. In Deep Fried Convnets, Yang et al. (2015) propose to replace the dense layers of convolutional neural networks by the Fastfood approximation (Le et al., 2013). This approximation is a product of diagonal matrices, a permutation matrix, and a Hadamard matrix, which can itself be expressed as a product of log D sparse matrices (Dao et al., 2019). The Fastfood approximation thus provides a product of sparse factors, and from this perspective the Fastfood layer proposed by Yang et al. (2015) is a particular, constrained case of our more general framework. Moreover, the Deep Fried Convnets architecture is based on the Hadamard matrix, which imposes a strong structural constraint on the factorization that might not be suitable for all layers of a deep architecture. The term sparse decomposition used in Liu et al. (2015) for network compression refers to separate products between dense and sparse matrices to represent the weights of the convolution kernels in a network. Finally, Wu et al. (2019) have recently proposed a framework very similar to ours, along with a regularization strategy to learn the sparsity of the sparse factors, but their method does not allow for more than two sparse factors, and the compression of convolutional layers is not considered, although the best performing architectures tend to store most of their parameters in these layers.
3 Experiments
Section 3.1 details the experimental settings and parameters to ensure reproducibility. We provide an in-depth analysis of our method in Section 3.2. Finally, we report in Section 3.3 a comparative study of our method against state-of-the-art methods.
3.1 Experimental setting
Our analysis is focused on image classification tasks: we investigate the compression of standard architectures (pretrained models) with our approach and with a few state-of-the-art methods. We evaluate all methods by measuring both the compression ratio and the accuracy of the compressed models. We first present implementation details and datasets, then we detail the baselines and the hyperparameters we used.
Implementation details. The code was written in Python, including the palm4MSA algorithm (it will be made available on github). NNs were implemented with Keras (Chollet, 2015) and Tensorflow (Abadi et al., 2015). Due to the lack of an efficient implementation of sparse matrices in Tensorflow, we had to hijack the dense matrix product and convolution implementations offered by Keras in order to make the training of the networks as efficient as possible (see Supplementary Material A.3 for details).
Datasets and Neural Architectures. Our experiments are conducted on four standard image classification datasets of varying difficulty: MNIST (Deng, 2012), SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky, 2009). The pretrained NNs to be compressed are those classically used with these datasets, i.e. Lenet (LeCun et al., 1998), VGG19 (Simonyan & Zisserman, 2015), Resnet50 and Resnet20 (He et al., 2016).
Details on the datasets and neural architectures may be found in Supplementary Material A.5.
Competing baselines. We present below the baselines and the variants of our approach that we considered. In all cases the methods were applied to the same pre-trained models, and all compressed models were fine-tuned after compression:
• Low-rank factorization methods, including the Tensor-Train decomposition (Novikov et al., 2015; Garipov et al., 2016) (named TT hereafter) and a hybrid of the Tucker decomposition (Kim et al., 2016) and SVD (Sainath et al., 2013) (named Tucker-SVD), where the Tucker and SVD decompositions are used respectively for the compression of convolutional and dense layers.
• Two sparsity-inducing pruning techniques. The first is a simple magnitude-based projection of the weight matrices onto a sparse support. This can be seen as a particular case of our model where only one sparse factor is used. We name this strategy "Hard pruning" (HP in the following). The second method, named Iterative pruning (IP), is the iterative strategy proposed by Zhu & Gupta (2017), which interleaves magnitude-based projection with fine-tuning steps.
• Finally, we evaluate the interest of using the palm4MSA algorithm to discover a sparse matrix decomposition of the layers, compared to some random decomposition. More precisely, we evaluate (I) the performance of a model whose layers are decomposed by palm4MSA but whose weights are re-initialized while preserving the sparsity support; and (II) the performance of a model whose weights and sparsity support are randomly sampled at initialization.
Note that we initially considered Deep Fried Convnets as an additional baseline, but we finally did not include it in our experimental study since this method is originally dedicated to compressing fully connected layers, and our attempts to make it compress convolutional layers as well failed, preventing the method from being applied to many state-of-the-art architectures that contain mostly convolutional layers.
Hyper-parameters. The stopping criterion used for the palm4MSA algorithm is 300 iterations or a relative change between two iterations below 10⁻⁶. The projection method is the one used in Magoarou & Gribonval (2016). With K the desired level of sparsity, this method ensures that each sparse factor contains at least K non-zero values per row and per column, and at most 2K on average. For experiments with a random sparsity support, the initialization of the weights must be adapted to the reduced number of connections in the layer. To do this, we adapt the Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) initializations to sparse matrices (see Supplementary Material A.3). We choose M = 4 cores for the Tensor-Train decomposition of any tensor, and the maximum possible rank of the decompositions is specified by the value R in the experiments. In the hybrid of the Tucker and SVD decompositions, the rank of the Tucker decomposition is automatically detected by the Variational Bayes Matrix Factorization method, as explained by Kim et al. (2016); the rank of the SVD in the dense layers is chosen such that only a certain percentage (specified in the experiments) of the singular values are kept. Fine-tuning of the Lenet network was done with the RMSProp optimizer and 100 training epochs. The VGG19 and Resnet networks are fine-tuned with the Adam optimizer (Kingma & Ba, 2014) and, respectively, 300 and 200 epochs.
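The projection just described can be sketched as follows: keeping the K largest-magnitude entries of every row and of every column guarantees at least K non-zeros per row and per column and at most 2K on average (a sketch, not the reference implementation):

```python
import numpy as np

def project_rowcol_topk(S, k):
    """Keep, for each row and each column of S, its k largest-magnitude entries
    (the union of both selections); zero out everything else."""
    A = np.abs(S)
    mask = np.zeros_like(S, dtype=bool)
    rows = np.argsort(A, axis=1)[:, -k:]            # top-k column indices per row
    mask[np.arange(S.shape[0])[:, None], rows] = True
    cols = np.argsort(A, axis=0)[-k:, :]            # top-k row indices per column
    mask[cols, np.arange(S.shape[1])[None, :]] = True
    return np.where(mask, S, 0.0)
```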
For each compression method and configuration, the best learning rate is chosen using the classification error on a validation sample after 10 iterations, with learning rate values in {10⁻³, 10⁻⁴, 10⁻⁵}. Standard data augmentation is applied: translation, rotation and flipping.
3.2 Analysis of the method
We first provide an in-depth analysis of our method to validate the use of palm4MSA for the decomposition of layer weight matrices into products of sparse matrices. We then evaluate the impact of the hyper-parameters Q and K on model accuracy and compression rate.
Approximation error. We evaluate the quality of the palm4MSA approximation of the original weight matrices and its influence on the final performance of the method. As an illustration, we report results obtained when compressing VGG19 trained on CIFAR100. Figure 2 shows the approximation error between the product of Q sparse factors W̃ := ∏_{q=1}^{Q} S_q and the original weights W for each layer. It is computed as the normalized distance between the matrices: error = ‖W − W̃‖²_F / ‖W‖²_F. Figure 2 further shows that a higher K, i.e. a higher minimum number of non-zero values per row and per column, yields a better approximation. We observe that for some layers the error can be high, despite the relatively good accuracy observed in Table 1, which questions the actual usefulness of palm4MSA. In order to examine this usefulness, we built two other types of models that implement the decomposition of layers into products of sparse matrices, either with a random sparsity support or with random weights, rather than using those provided by palm4MSA:
• "PSM random": We construct a completely random sparse factorization. The sparsity support is chosen by projecting a standardized Gaussian matrix. The weights are initialized using the procedure described in Section 3.1;
• "PSM re-init.": We use the sparsity support obtained by palm4MSA but we re-initialize the weights using the procedure described in Section 3.1.
Table 1 shows that the network obtained with the palm4MSA method achieves the best classification performance after fine-tuning (more results in Table 4 of Supplementary Material A.7). Overall, "PSM re-init." and "PSM random" may perform very badly in some cases, which underlines the importance of both the weights and the sparsity support found by palm4MSA.
Sparsity level and number of factors. The sparsity level corresponds to the approximate number of non-zero values in each row and each column of the sparse matrices constituting the decomposition. Figure 1 and Table 4 show the performance of our model for various sparsity levels K and numbers of factors Q. We notice that the number of factors seems to have a rather limited effect on the quality of the performance, while the sparsity level is a more determining factor for the quality of the approximation. Note that we did not consider the hierarchical version of palm4MSA (Magoarou & Gribonval, 2016), since it requires Q = log D factors, which greatly increases the number of non-zero values in the decomposition without significantly improving the final performance of the model.
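The "PSM random" baseline can be sketched as below; the k-non-zeros-per-row support sampling and the fan-in-based rescaling are our assumptions, as the exact initialization is deferred to Supplementary Material A.3:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psm_factor(d_out, d_in, k, relu_before=False):
    """Random sparse factor for the "PSM random" baseline: a random support with
    k non-zeros per row, and Gaussian weights rescaled for the reduced fan-in
    in the spirit of He (relu_before=True) or Xavier initialization."""
    support = np.zeros((d_out, d_in), dtype=bool)
    for i in range(d_out):
        support[i, rng.choice(d_in, size=k, replace=False)] = True
    gain = 2.0 if relu_before else 1.0   # He gain for ReLU inputs, Xavier-like otherwise
    std = np.sqrt(gain / k)              # ~k incoming connections per output unit
    return np.where(support, rng.standard_normal((d_out, d_in)) * std, 0.0)
```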
3.3 Comparative study
Figure 1 reports comparative results obtained on standard benchmark datasets with well known architectures. We discuss below the main conclusions to be drawn from these results.
Reliability. First of all, the behaviour of the baseline methods seems quite dependent on the experiment. For instance, TT performance may vary depending on the chosen rank (e.g. see rank 10 in Figure 1-(b)), and the Hard pruning technique performs badly on MNIST. Moreover, these baselines are not always manageable in practice (e.g. no results of TT on Resnet compression; see below). On the contrary, we observe more stable performance with respect to the choice of hyper-parameters and a systematically very low variance with our method.
Comparison to low-rank decomposition techniques. Our approach significantly outperforms the Tucker-SVD method in all cases. We believe that the low-rank approximation makes it more difficult to reach high compression rates while preserving good performance. This supports our point that standard low-rank decomposition methods cannot operate in high compression regimes, where the low-rank assumption breaks down, without degrading the accuracy of the network. On the contrary, the TT formulation may achieve higher compression rates than Tucker, as has already been observed in past works. It seems better suited than the Tucker decomposition and may perform similarly to or better than our method in some cases. Yet, the method has a few drawbacks: first, it may exhibit a very strong variance, especially at high compression rates (see results in Figures 1-(b) to 1-(d) on SVHN, CIFAR10 and CIFAR100); second, as illustrated in Supplementary Material A.6, the implementation provided by the authors does not always allow getting results when the product of the number of filters and the TT rank is large. In particular, we were unable to run experiments on models such as Resnet20 and Resnet50 because the memory footprint increases considerably (Figures 1-(e) and 1-(f)). We were thus unable to get results for compression rates higher than those shown in the figure for VGG19 (Figures 1-(c) and 1-(d)).
Comparison with pruning techniques. In the "Hard" pruning case, the compressed network performs very badly. This confirms that a sparse factorization with more than one factor is profitable. When applying the procedure of Zhu & Gupta (2017), however, the magnitude-based pruning method preserves good accuracy while removing up to 98% of the connections of the model, except on the MNIST dataset. While our approach significantly outperforms the Hard pruning technique in all cases, its Iterative pruning variant (Zhu & Gupta, 2017) may sometimes lead to significantly better performance in high compression settings than our approach; this is the case in particular with Resnet models on CIFAR100 (Figures 1-(e) and 1-(f)). Otherwise, in the other settings on Resnet models, and when compressing the other models, this technique achieves a performance/compression trade-off similar to our method's. Since the Hard pruning technique may be viewed as a special case of our method, this suggests that an iterative-like extension of our method could reach even better results, which is a perspective of this work.
4 Conclusion
The proposed approach is able to compress the dense and convolutional layers of a neural network by decomposing weight matrices into products of sparse matrices. Unlike common decomposition methods, our method does not make any assumption on the rank of the weight matrices and allows high compression rates while maintaining good accuracy. The experiments show the interest of using a product of multiple sparse matrices instead of a single sparse matrix, which can convey, in theory, more information for a fixed number of non-zero values. A possible extension of this work would be to study a strategy for progressive sparsity induction (Zhu & Gupta, 2017) that could offer further improvements.
A Supplementary Material
A.1 Illustration of the proposed method
The proposed method allows network compression through the factorization of layer weights. Figure 3 highlights the difference between the standard architecture and the compressed one, composed of a convolutional and a fully connected layer, where each layer weight matrix is replaced by a product of sparse matrices.
A.2 Reshape operations in convolutional layers
In order to apply the factorization to the convolution weights, we employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from an input (Garipov et al., 2016); see Figure 4 for more details.
A.3 Implementation details
Implementation of sparse matrices. In our implementation, a sparse matrix is in fact defined as the Hadamard (element-wise) product between a dense matrix that encodes the weights to be learned and a constant binary matrix that encodes the sparsity support. Thus, the implementation of a sparse matrix S ∈ R^{D×D}, ‖S‖₀ = O(D), is
S = W ⊙ M,   (4)
where W ∈ R^{D×D} is a dense weight matrix and M ∈ {0, 1}^{D×D}, ‖M‖₀ = O(D), is a constant binary matrix. With this implementation, the gradient values of W outside the sparsity support defined by M are always equal to zero, and the corresponding weights of W are thus never updated. In other words, the sparsity support of S is fixed by M. Although this implementation allows us to evaluate our method, it requires the dense storage of all the values of W and M. Specifically, 2D² values are stored to simulate the sparsity of a matrix of size D² containing O(D) non-zero values. This non-optimal implementation takes advantage of the parallel algorithms for the matrix product and is actually faster on GPU than an implementation that would use Tensorflow's SparseTensor class¹.
¹One can use the SparseTensor class in conjunction with the Variable class to implement these layers sparsely, with exactly the reduced number of parameters, but this implementation was slower and we preferred not to use it in the experiments.
Implementation of the convolution. To compute the convolutions in the network, we rebuild the convolution kernel from the product of sparse matrices. This convolution kernel is then directly used as a parameter of the Tensorflow function conv2d for fast computation.
Implementation of the Tensor-Train decomposition. The decomposition is performed by applying the decomposition function matrix_product_state provided by the Tensorly library to the tensors obtained from the pre-trained networks.
Implementation of magnitude-based pruning. To implement the method of Zhu & Gupta (2017), we used the function prune_low_magnitude from the tensorflow_model_optimization library provided by Tensorflow. With this method, pruning and weight refinement are combined by progressively removing connections during training until the desired pruning percentage is reached.
(Re-)Initialization of the weights of a sparse matrix decomposition. When the weights of a sparse matrix factorization are not provided by palm4MSA, the initialization of the weights must be adapted to the reduced number of connections in the layer. We adapt the Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) initialization methods to sparse matrices. Specifically, the first sparse factor is initialized using the He method because the ReLU activation function applied before it yields values that are not zero-centered. The following sparse factors are initialized using the Xavier method, since there is no non-linearity between factors.
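A minimal Keras sketch of this masked implementation of Eq. (4); this is our reading of A.3, not the authors' code:

```python
import numpy as np
import tensorflow as tf

class MaskedDense(tf.keras.layers.Layer):
    """Sparse factor as in Eq. (4): S = W ⊙ M, with a trainable dense W and a
    constant binary mask M that freezes the sparsity support."""

    def __init__(self, mask):
        super().__init__()
        self.mask = tf.constant(np.asarray(mask, dtype="float32"))  # M

    def build(self, input_shape):
        self.w = self.add_weight(shape=self.mask.shape,
                                 initializer="glorot_uniform",
                                 trainable=True)                     # W

    def call(self, x):
        # Gradients outside the support are zero, so masked weights never update.
        return tf.matmul(x, self.w * self.mask)

layer = MaskedDense(np.random.default_rng(0).random((64, 32)) < 0.1)  # ~10% support
y = layer(tf.ones((8, 64)))
```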
A.4 palm4MSA algorithm
The palm4MSA algorithm (Magoarou & Gribonval, 2016) is given in Algorithm 1 together with the time complexity of each line, using A = min(D, D′) and B = max(D, D′) for a matrix to factorize W ∈ R^{D×D′}. Even though more general constraints can be used, the constraint sets E_q are typically defined as the intersection of the set of unit Frobenius-norm matrices and of a set of sparse matrices. The unit Frobenius norm is used together with the λ factor to avoid a scaling indeterminacy. Note that, to simplify the presentation of the model, the factor λ is used internally by palm4MSA and is integrated into the factor S₁ at the end of the algorithm (line 14), so that S₁ does not satisfy the unit Frobenius norm constraint of E₁ at the end of the algorithm. The sparsity constraints we used, as in Magoarou & Gribonval (2016), consist of targeting a given number of non-zero coefficients in each row and in each column. This number of non-zero coefficients is called the sparsity level in this paper. In practice, the projection function at line 9 keeps the largest non-zero coefficients in each row and in each column, which only guarantees that the actual number of non-zero coefficients is at least equal to the sparsity level.

Algorithm 1 palm4MSA algorithm
Require: the matrix to factorize U ∈ R^{D×D′}, the desired number of factors Q, the constraint sets E_q, q ∈ ⟦Q⟧, and a stopping criterion (e.g., here, a number of iterations I).
1: λ ← ||S₁||_F   {O(B)}
2: S₁ ← (1/λ) S₁   {O(B)}
3: for i ∈ ⟦I⟧ while the stopping criterion is not met do
4:   for q = Q down to 1 do
5:     L_q ← ∏_{l=1}^{q−1} S_l^{(i)}
6:     R_q ← ∏_{l=q+1}^{Q} S_l^{(i+1)}
7:     choose c > λ² ||R_q||₂² ||L_q||₂²   {O(A log B + B)}
8:     D ← S_q^{(i)} − (λ/c) L_q^T (λ L_q S_q^{(i)} R_q − U) R_q^T   {O(AB log B)}
9:     S_q^{(i+1)} ← P_{E_q}(D)   {O(A² log A) or O(AB log B)}
10:  end for
11:  Û := ∏_{q=1}^{Q} S_q^{(i+1)}   {O(A² log B + AB)}
12:  λ ← Trace(U^T Û) / Trace(Û^T Û)   {O(AB)}
13: end for
14: S₁ ← λ S₁   {O(B)}
Ensure: {S_q : S_q ∈ E_q}_{q∈⟦Q⟧} such that ∏_{q∈⟦Q⟧} S_q ≈ U

A.5 Dataset details
Experiments are conducted on four public image classification datasets: MNIST, SVHN, CIFAR10, and CIFAR100, with four pretrained network architectures: Lenet, VGG19, Resnet50, and Resnet20. Table 2 details the characteristics of the datasets and the corresponding NN models on which we evaluated the compression methods.
A.6 Experiments on Tensor-Train memory overhead
Although Tensor-Train can achieve impressive compression rates, the method may require large amounts of memory. This memory requirement prevents experimenting on architectures with many convolutional layers containing many filters, such as ResNet. Table 3 highlights the increase in memory when experimenting on the investigated architectures for a few hyperparameter settings. The row Others stands for the requirement of all other methods.
A.7 Experiments on the proposed method
Since the error of the matrix approximation can be high despite the relatively good accuracy, we investigate factorization methods other than palm4MSA. Specifically, two other methods are evaluated: "PSM re-init." uses the same sparsity support as palm4MSA with re-initialized weights, and "PSM random" uses a random sparsity support and random weights. Table 4 presents the results and shows the superiority of palm4MSA.
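To complement the pseudocode of Algorithm 1 in A.4, here is a minimal NumPy sketch of palm4MSA, restricted for simplicity to square D × D factors and a simplified row-wise projection; it is our reading of the pseudocode, not the reference implementation:

```python
import numpy as np

def _chain(mats, n):
    """Product of a (possibly empty) list of D x D matrices."""
    P = np.eye(n)
    for M in mats:
        P = P @ M
    return P

def palm4msa_sketch(U, factors, k, n_iter=300):
    """palm4MSA for square D x D factors; line numbers refer to Algorithm 1."""
    D = U.shape[0]

    def project(M):
        # Simplified P_E: keep the k largest-magnitude entries of each row, then
        # renormalize to unit Frobenius norm (the paper's projection also keeps
        # k entries per column).
        out = np.zeros_like(M)
        idx = np.argsort(np.abs(M), axis=1)[:, -k:]
        rows = np.arange(M.shape[0])[:, None]
        out[rows, idx] = M[rows, idx]
        return out / max(np.linalg.norm(out, "fro"), 1e-12)

    lam = np.linalg.norm(factors[0], "fro")                            # line 1
    factors = [factors[0] / lam] + [F.copy() for F in factors[1:]]     # line 2
    Q = len(factors)
    for _ in range(n_iter):                                            # line 3
        for q in range(Q - 1, -1, -1):                                 # lines 4-10
            Lq = _chain(factors[:q], D)
            Rq = _chain(factors[q + 1:], D)
            c = 1.001 * lam**2 * np.linalg.norm(Lq, 2)**2 * np.linalg.norm(Rq, 2)**2 + 1e-12
            grad = lam * Lq.T @ (lam * Lq @ factors[q] @ Rq - U) @ Rq.T  # line 8
            factors[q] = project(factors[q] - grad / c)                # line 9
        U_hat = _chain(factors, D)                                     # line 11
        lam = np.trace(U.T @ U_hat) / np.trace(U_hat.T @ U_hat)        # line 12
    factors[0] = lam * factors[0]                                      # line 14
    return factors

# Example: factorize a random 32 x 32 matrix into Q = 3 sparse factors.
D, Q, k = 32, 3, 4
U = np.random.default_rng(0).standard_normal((D, D))
S = palm4msa_sketch(U, [np.eye(D) for _ in range(Q)], k, n_iter=50)
```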
1. What is the main contribution of the paper regarding low-rank compression methods for neural networks? 2. What are the concerns regarding the proposed approach, particularly in terms of its computational overhead and deployment on mobile devices? 3. How does the reviewer assess the effectiveness and efficiency of the proposed method compared to prior works? 4. What is the primary objective of compressing neural networks for mobile devices, and how well does the proposed approach align with this goal? 5. Are there any questions or suggestions for future work related to improving the accuracy-efficiency tradeoff in compressed neural networks?
Review
Review In this paper, the authors propose to impose sparsity upon low-rank compression methods. Imposing sparsity is generally interesting and meaningful. The experimental results are appreciated. There are some concerns that prevent this paper from being fully appreciated:
There are many structured matrix schemes to compress neural networks. Now, the authors claim sparsity + low-rank can provide a better accuracy-efficiency tradeoff. This is surely not enough. We compress neural networks for mobile computing devices, and the "best" accuracy-efficiency tradeoff is not the final goal, unless you are theoretically drawing the boundaries.
A more important question with the proposed scheme is as follows: 1) the current compression schemes in the literature are good enough at balancing accuracy and efficiency. Sparsity will surely contribute. 2) However, when the compressed model is downloaded onto a mobile device, will sparse neural networks be easy to run on such platforms? This is a natural question that needs an answer. The best accuracy-efficiency tradeoff is not the final goal; a high cost-performance ratio is the objective to optimize. I mean you may add sparsity to achieve a better accuracy-efficiency tradeoff, but we cannot allow it to introduce much computational overhead from sparse computations.
I have to raise the principle that "simple and effective" is best for this kind of task. The previous schemes (circulant, low-rank, sparsity only, etc.) are simple and shown to be effective. Now, your model is more sophisticated and the overhead is higher. Will this solution serve the purpose of compressing neural networks for mobile devices? I would hope the authors do not answer too quickly. The purpose of achieving a better accuracy-efficiency tradeoff is ultimately to make deep learning practically deployable onto mobile devices, where computing power, memory, bandwidth and energy are limited.
ICLR
Title Sparse matrix products for neural network compression Abstract Over-parameterization of neural networks is a well known issue that comes along with their great performance. Among the many approaches proposed to tackle this problem, low-rank tensor decompositions are largely investigated to compress deep neural networks. Such techniques rely on a low-rank assumption of the layer weight tensors that does not always hold in practice. Following this observation, this paper studies sparsity inducing techniques to build new sparse matrix product layer for high-rate neural networks compression. Specifically, we explore recent advances in sparse optimization to replace each layer’s weight matrix, either convolutional or fully connected, by a product of sparse matrices. Our experiments validate that our approach provides a better compression-accuracy trade-off than most popular low-rank-based compression techniques. 1 Introduction The success of neural networks in the processing of structured data is in part due to their over-parametrization which plays a key role in their ability to learn rich features from the data (Neyshabur et al., 2018). Unfortunately, this also makes most state-of-the-art models so huge that they are expensive to store and impossible to operate on devices with limited resources (memory, computing capacity) or that cannot integrate GPUs (Cheng et al., 2017). This problem has led to a popular line of research for “neural networks compression”, which aims at building models with few parameters while preserving their accuracy. State of the art techniques for neural network compression. Popular matrix or tensor decomposition methods including Singular Value Decomposition (SVD), CANDECOMP/PARAFAC (CP) and Tucker have been used to address the problem of model compression by a low-rank approximation of the neural network’s weights after learning. Sainath et al. (2013) describe a method based on SVD to compress weight matrices in fully connected layers. Denton et al. (2014); Lebedev et al. (2015); Kim et al. (2016) generalize this idea to convolutional layers and then reduce the memory footprint of convolution kernels by using higher-order low-rank decompositions such as CP or Tucker decompositions. Besides, the Tensor-Train (TT) decomposition has been explored to compress both dense and convolutional layers after a pre-training step (Novikov et al., 2015). This approach may achieve extreme compression rates but it also have impractical downsides that we demonstrate now. In a TT format, all the elements of a M -order tensor are expressed by a product ofM matrices whose dimensions are determined by the TT-ranks (R0, R1, . . . , RM ). For each of theM dimension of the initial tensor, the corresponding matrices can be stacked into an order 3 tensor called a “core” of the decomposition. Hence, the layer weight is decomposed as a set of M cores of small dimensions. Novikov et al. (2015) use this tensor representation to factorize fully connected layers. They first reshape the matrix of weights into an M -order tensor, then apply the TT decomposition. By choosing sufficiently small Rm values, this technique allows to obtain a high compression ratio on extremely wide ad hoc neural architectures. Garipov et al. (2016) have adapted this idea to convolutional layers. However, the current formulation of such TT convolutional layer involves the multiplication of all input values by a matrix of dimension 1 × R1 thus causing an inflation of R1 times the size of the input in memory. 
This makes the available implementation (Garipov, 2020) unusable for recent wide convolutional networks at inference time. Other compression methods include unstructured pruning techniques, which we review in more detail in Section 2.3, and structured pruning techniques that reduce the inner hidden dimensions of the network by completely removing neurons (Anwar et al., 2017). According to the recent paper of Liu et al. (2018), however, these techniques are more akin to Neural Architecture Search than actual network compression. Finally, quantization-based compression maps the columns of the weight matrices in the network to a subset of reference columns with lower memory footprint (Guo, 2018). Sparse matrix products for full-rank decompositions. We are specifically interested in high-rate compression of neural networks via the efficient factorization of the layer weight matrices. Most known approaches to layer decomposition make a low-rank assumption on the layer weight tensors, which does not always hold in practice. As we will show in the experiments, this makes the Tucker- and SVD-based techniques unable to effectively reach high compression rates for standard architectures including both convolutional and fully connected layers, such as VGG19 or ResNet50, whose weight matrices usually exhibit full rank. In this paper, we propose instead to express the weight matrices of fully connected or convolutional layers as a product of sparse factors, which contains very few parameters but can still represent high-rank matrices. Moreover, products of matrices with a total sparsity budget are strictly more expressive than single matrices with that sparsity (Dao et al., 2019), which motivates our interest in products of multiple matrices. Usually, a linear operator (a matrix) from R^D to R^D has time and space complexities of O(D²). But some well-known operators like the Hadamard or the Fourier transforms can be expressed in the form of a product of log D sparse matrices, each having O(D) non-zero values (Dao et al., 2019; Magoarou & Gribonval, 2016). These linear operators, called fast operators, thus have time and space complexities lowered to O(D log D). This interesting feature of fast operators has inspired the design of new algorithms that learn sparse matrix product representations of existing fast transforms (Dao et al., 2019) or even compute sparse product approximations of any matrix in order to accelerate learning and inference (Magoarou & Gribonval, 2016; Giffon et al., 2019). Even though these methods were initially designed to recover the log D factors corresponding to a fast transform, they are more general than that and can actually be used to find a factorization with Q < log D sparse matrices. Contributions. We introduce a general framework for neural network compression using the factorization of layers into sparse matrix products. We explore the use of the recently proposed palm4MSA algorithm (Magoarou & Gribonval, 2016) on every layer of a pre-trained neural network to express it as a product of sparse matrices. The obtained sparse matrices are then refined by gradient descent to best fit the final prediction task. When there is only one sparse matrix in the decomposition, our approach recovers the simple procedure of hard-thresholding the weights of a matrix after pre-training.
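As a concrete check of the fast-transform factorization mentioned above (our own illustration, not part of the original paper), the Walsh-Hadamard matrix of size D = 2^n factors exactly into log₂ D sparse "butterfly" matrices, each with only two non-zeros per row, i.e. O(D) non-zeros per factor:

```python
import numpy as np
from functools import reduce
from scipy.linalg import hadamard

def butterfly_factors(n):
    """H_{2^n} = prod_k (I_{2^k} (x) H_2 (x) I_{2^{n-1-k}}): log2(D) sparse factors."""
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    return [np.kron(np.kron(np.eye(2 ** k), H2), np.eye(2 ** (n - 1 - k)))
            for k in range(n)]

n = 3  # D = 8
factors = butterfly_factors(n)
assert np.allclose(reduce(np.matmul, factors), hadamard(2 ** n))
print([int((F != 0).sum()) for F in factors])  # [16, 16, 16]: 2*D non-zeros each
```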
We evaluate the effect of different hyper-parameters on our method and show that layers can be factorized into two or three sparse matrices to obtain high compression rates while preserving good performance, compared to several state-of-the-art methods for neural network compression. 2 Learning sparse matrix products for network compression We describe how to compress NN weight matrices by sparse matrix factorization. We call our procedure PSM, for Product of Sparse Matrices. It is easy to see that a product of sparse matrices with a given sparsity budget can recover a full-rank matrix, or a matrix with more non-zero values than the initial sparsity budget. This observation motivates the use of a sparse matrix factorization in place of the usual low-rank decompositions and sparsity-inducing techniques for neural network compression. We first recall the linear transform operations in fully connected and convolutional layers. Then, inspired by recent work on learning linear operators with fast-transform structures, we propose to use a product of sparse matrices to replace the linear transforms in neural networks. We also introduce a procedure to learn such a factorization for every layer in a deep architecture. Finally, we review some known neural network compression techniques that appear as particular cases of our framework. 2.1 Weight matrices as products of sparse matrices Fully connected and convolutional layers are based on the computation of linear operations. In a fully connected layer, the output z ∈ R^{D′} is simply given by z = a(Wx), where a is some non-linear activation function, W ∈ R^{D′×D} is the weight matrix of the layer and x ∈ R^D is the output of the preceding layer. The linear operation in a convolutional layer can be represented by a doubly-block Toeplitz matrix (Wang et al., 2020). Another way to perform the operation is to employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from the input (Garipov et al., 2016). In this work, we focus on this latter representation of the convolution operation. More formally, let r_S : R^{H×W×C} → R^{HW×CS²} be the reshape operation that creates the matrix of all vectorized patches of size S×S (height and width) on an input image with C channels. The matrix of K filters W ∈ R^{CS²×K} can then be applied to these patches (multiplied with r_S(X)) to produce the output of the convolutional layer in matrix form. Finally, a second reshape operator t : R^{HW×K} → R^{H×W×K} is applied to the feature-map matrix to reconstruct the output tensor of the layer Z ∈ R^{H×W×K}. Altogether, the convolution operation can be written as Z = a(t(r_S(X)W)), where a is some non-linear activation function and X is the output 3-D tensor of the preceding layer. We keep the notation simple here, assuming without loss of generality that the stride used by r_S is equal to 1 and that the input tensor is padded with ⌊S/2⌋ zeros vertically and horizontally. The whole process is depicted in Supplementary Material A.2. Our general idea is to replace the weight matrix of each neural network layer with a product of Q sparse matrices, hence reducing the storage and computational complexities of the layer. Indeed, for an initial matrix of dimension D × D′, if all sparse matrices store O(D) non-zero values, then the total complexity of the product becomes O(QD) instead of O(DD′).
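To make this concrete, here is a minimal NumPy sketch (our own; the names are illustrative) of a convolutional layer computed as t(r_S(X)W), with W replaced by a product of sparse factors, assuming stride 1, odd S and ⌊S/2⌋ zero padding as above; the sparse factors are stored densely, as in the masked implementation described later in Supplementary Material A.3:

```python
import numpy as np

def psm_conv(X, factors, S):
    """Convolution as t(r_S(X) @ S_1 ... S_Q): stride 1, odd filter size S,
    zero padding S // 2; sparse factors stored as dense arrays with zeros."""
    H, Wd, C = X.shape
    p = S // 2
    Xp = np.pad(X, ((p, p), (p, p), (0, 0)))
    # r_S: stack every vectorized S x S patch into a (H*W, C*S*S) matrix
    P = np.stack([Xp[i:i + S, j:j + S, :].ravel()
                  for i in range(H) for j in range(Wd)])
    for F in factors:                    # product of sparse factors replaces W
        P = P @ F
    return np.maximum(P, 0.0).reshape(H, Wd, -1)   # a = ReLU, then t

rng = np.random.default_rng(0)
C, S, K, k = 3, 3, 16, 4                 # channels, filter size, filters, sparsity
def sparse_factor(m, n):                 # ~k non-zeros per row
    return rng.standard_normal((m, n)) * (rng.random((m, n)) < k / n)

factors = [sparse_factor(C * S * S, C * S * S), sparse_factor(C * S * S, K)]
Z = psm_conv(rng.standard_normal((8, 8, C)), factors, S)
print(Z.shape)  # (8, 8, 16)
```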
To define a fast-transform operator, one would use Q = log D, but in practice we show that we can choose a much smaller Q and achieve very high compression rates without lowering the performance much. Supplementary Material A.1 illustrates the effect of our compression scheme on a simple architecture including one convolutional layer and a single dense layer. Given an input vector x ∈ R^D, expressing the weight matrix W ∈ R^{D′×D} of a fully connected layer as a product of sparse matrices gives an output z such that:

z = a( ∏_{i=1}^{Q} S_i x ),   (1)

where ‖S_i‖₀ = O(D), so that the time and space complexities of this layer are reduced to O(QD) instead of O(DD′). Similarly, in the convolutional layers, the output Z ∈ R^{H×W×K} is obtained from an input tensor X ∈ R^{H×W×C}:

Z = a( t( r_S(X) ∏_{i=1}^{Q} S_i ) ),   (2)

where ‖S_i‖₀ = O(max(S²C, K)), so that the time complexity of the layer is reduced from O(HWCS²K) to O(HWQ · max(CS², K)) and the space complexity is reduced from O(CS²K) to O(Q · max(CS², K)). Since there is no constraint on the rank of the factors, the sparse matrix product of each layer can reach full rank, unlike low-rank decomposition methods. Moreover, the reconstruction of a sparse matrix product with a total of O(QD) non-zero values can produce a matrix with more than O(QD) non-zero values. This is consistent with the intuition that a product of sparse matrices can be more expressive than a single sparse matrix. 2.2 Full Neural Network Compression The full compression pipeline we propose includes first the learning of a standard NN, second the compression of each layer independently as a product of sparse matrices, and finally the fine-tuning of the compressed NN whose layers are all expressed as PSM layers. The second step requires approximating each weight matrix W (of a dense or a convolutional layer) as a product of sparse factors, which is cast as the following optimization problem:

min_{{S_i}_{i=1}^{Q}} ‖ W − ∏_{i=1}^{Q} S_i ‖²_F + ∑_{i=1}^{Q} δ_{E_i}(S_i),   (3)

where for each i ∈ {1, . . . , Q}, δ_{E_i}(S_i) = 0 if S_i ∈ E_i and δ_{E_i}(S_i) = +∞ otherwise. E_i is the set of solutions that respect a sparsity constraint (e.g. a number of non-zero values). Although this problem is non-convex and non-differentiable, and the computation of a global optimum cannot be ascertained, the palm4MSA algorithm proposed by Magoarou & Gribonval (2016) is able to learn such a factorization by finding a local minimum with convergence guarantees. For more details about palm4MSA, see Supplementary Material A.4. Once every layer's weight matrix is approximated by a product of sparse matrices, these PSM layers are assembled into a compressed NN which is refined to optimize the initial task objective while the sparsity support of all factors is kept fixed. 2.3 Related work Some techniques based on inducing sparsity in neural connections, e.g. zeroing single weights in layer tensors, can be seen as particular cases of our method. The most straightforward approach is to simply remove the weights with the lowest magnitude until a given sparsity ratio is reached. This can be done in a very simple fashion by just removing weights from a pre-trained network and then fine-tuning the remaining connections. This method can be seen as the particular case of ours where there is only one factor to approximate the weight matrices (i.e., Q = 1). As we show in the experiments, this method does not allow high compression rates without degrading the accuracy.
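A minimal NumPy sketch of this Q = 1 baseline (our own illustration; the function name is ours) is:

```python
import numpy as np

def hard_prune(W, sparsity=0.98):
    """Q = 1 special case: zero out the (sparsity * 100)% smallest-magnitude weights."""
    k = int(sparsity * W.size)                       # number of weights to remove
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

W = np.random.randn(100, 100)
print((hard_prune(W) != 0).mean())  # ~0.02 of the weights survive
```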
Zhu & Gupta (2017) proposed instead to intertwine the removal of connections with the fine-tuning of the remaining weights, achieving better classification performance. Other approaches for inducing sparsity in the network have been proposed (Molchanov et al., 2017; Louizos et al., 2018), but they do not seem to offer performance improvements in general settings (Gale et al., 2019). The idea of replacing layers by a sparse factorization has been explored previously, but restricted to particular structures. In Deep Fried Convnets, Yang et al. (2015) propose to replace the dense layers of convolutional neural networks by the Fastfood approximation (Le et al., 2013). This approximation is a product of diagonal matrices, a permutation matrix, and a Hadamard matrix, which can itself be expressed as a product of log D sparse matrices (Dao et al., 2019). The Fastfood approximation thus provides a product of sparse factors and, from this perspective, the Fastfood layer proposed by Yang et al. (2015) is a particular, constrained case of our more general framework. Moreover, the Deep Fried Convnets architecture is based on the Hadamard matrix, which imposes a strong structural constraint on the factorization that might not be suitable for all layers of a deep architecture. The term sparse decomposition used in Liu et al. (2015) for network compression refers to separate products between dense and sparse matrices to represent the weights of the convolution kernels in a network. Finally, Wu et al. (2019) have recently proposed a framework very similar to ours, along with a regularization strategy to learn the sparsity in the sparse factors, but their method does not allow more than two sparse factors, and the compression of the convolutional layers is not considered even though the best-performing architectures tend to store most of their parameters in these layers. 3 Experiments Section 3.1 details the experimental settings and parameters to ensure reproducibility. We provide an in-depth analysis of our method in Section 3.2. Finally, we report in Section 3.3 a comparative study of our method against state-of-the-art methods. 3.1 Experimental setting Our analysis is focused on image classification tasks; we investigate the compression of standard architectures (pretrained models) with our approach and with a few state-of-the-art methods. We evaluate all methods by measuring both the compression ratio and the accuracy of the compressed models. We first present implementation details and datasets, then we detail the baselines and the hyper-parameters we used. Implementation details. The code was written in Python, including the palm4MSA algorithm (it will be made available on github). NNs were implemented with Keras (Chollet, 2015) and Tensorflow (Abadi et al., 2015). Due to the lack of an efficient implementation of sparse matrices in Tensorflow, we had to hijack the implementations of the dense matrix product and convolution offered by Keras in order to make the learning of the networks as efficient as possible (see Supplementary Material A.3 for details). Datasets and Neural Architectures. Our experiments are conducted on four standard image classification data sets of varying difficulty: MNIST (Deng, 2012), SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky, 2009). The pretrained NNs to be compressed are those classically used with these data sets, i.e. Lenet (LeCun et al., 1998), VGG19 (Simonyan & Zisserman, 2015), Resnet50 and Resnet20 (He et al., 2016).
Details on the datasets and neural architectures may be found in Supplementary Material A.5. Competing baselines. We present below the baselines and the variants of our approach that we considered. In all cases the methods were applied to the same pre-trained models and all compressed models were fine-tuned after compression: • Low-rank factorization methods, including the Tensor-Train decomposition (Novikov et al., 2015; Garipov et al., 2016) (named TT hereafter) and a hybrid of the Tucker decomposition (Kim et al., 2016) and SVD (Sainath et al., 2013) (named Tucker-SVD), where the Tucker and SVD decompositions are used respectively for the compression of the convolutional and dense layers. • Two sparsity-inducing pruning techniques. The first one is a simple magnitude-based projection of the weight matrices on a sparse support. This can be seen as a particular case of our model where only one sparse factor is required. We name this strategy "Hard pruning" (HP in the following). The second method, named Iterative pruning (IP), is the iterative strategy proposed by Zhu & Gupta (2017), which refines the magnitude-based projection with fine-tuning steps. • Finally, we evaluate the interest of using the palm4MSA algorithm to discover a sparse matrix decomposition of the layers compared to some random decomposition. More precisely, we evaluate (I) the performance of a model whose layers are decomposed by palm4MSA but whose weights are re-initialized while preserving the sparsity support; and (II) the performance of a model whose weights and sparsity support are randomly sampled at initialization. Note that we initially considered Deep Fried Convnets as an additional baseline, but we finally did not include it in our experimental study since this method is originally dedicated to compressing fully connected layers, and our attempts to make it compress convolutional layers as well failed, preventing the method from being applied to many state-of-the-art architectures that contain mostly convolutional layers. Hyper-parameters. The stopping criterion used for the palm4MSA algorithm is 300 iterations or a relative change between two iterations below 10⁻⁶. The projection method is the one used in Magoarou & Gribonval (2016); with K the desired sparsity level, this method ensures that each sparse factor contains at least K non-zero values per row and per column and at most 2K on average (it is sketched in code below). For the experiments with a random sparsity support, the initialization of the weights must be adapted to the reduced number of connections in the layer. To do this, we adapt the Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) initializations to sparse matrices (see Supplementary Material A.3). We chose M = 4 cores for the Tensor-Train decomposition of any tensor, and the maximum possible rank of the decompositions is specified by the value R in the experiments. In the hybrid of the Tucker and SVD decompositions, the rank of the Tucker decomposition is automatically detected by the Variational Bayes Matrix Factorization method, as explained by Kim et al. (2016); the rank of the SVD in the dense layers is chosen such that only a certain percentage (specified in the experiments) of the singular values is kept. Fine-tuning of the Lenet network was done with the RMSProp optimizer and 100 learning epochs. The VGG19 and Resnet networks are fine-tuned with the Adam optimizer (Kingma & Ba, 2014) and respectively 300 and 200 epochs.
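As announced above, a minimal NumPy sketch of that projection (our own; the function name is ours): keeping the union of the K largest-magnitude entries of each row and of each column guarantees at least K non-zeros per row and per column, and at most 2K per row on average overall:

```python
import numpy as np

def project_rowcol_topk(W, K):
    """Keep the union of the K largest-magnitude entries per row and per column."""
    mask = np.zeros(W.shape, dtype=bool)
    row_idx = np.argsort(-np.abs(W), axis=1)[:, :K]   # top-K entries in each row
    col_idx = np.argsort(-np.abs(W), axis=0)[:K, :]   # top-K entries in each column
    np.put_along_axis(mask, row_idx, True, axis=1)
    np.put_along_axis(mask, col_idx, True, axis=0)
    return W * mask

S = project_rowcol_topk(np.random.randn(8, 8), K=2)
print((S != 0).sum(axis=1))  # every row keeps at least K entries
```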
For each compression method and configuration, the best learning rate is chosen using the classification error on a validation sample after 10 iterations, with learning rate values in {10⁻³, 10⁻⁴, 10⁻⁵}. Standard data augmentation is applied: translation, rotation and flipping. 3.2 Analysis of the method We first provide an in-depth analysis of our method to validate the use of Palm4MSA for the decomposition of layer weight matrices into products of sparse matrices. We then evaluate the impact of the hyper-parameters Q and K on model accuracy and compression rate. Approximation error. We evaluate the quality of the Palm4MSA approximation of the original weight matrices and its influence on the final performance of the method. As an illustration, we report results obtained when compressing VGG19 trained on CIFAR100. Figure 2 shows the approximation error between the product of Q sparse factors W̃ := ∏_{q=1}^{Q} S_q and the original weights W for each layer. It is computed as the normalized distance between the matrices: error = ‖W − W̃‖²_F / ‖W‖²_F. Figure 2 further shows that a higher K, i.e. the minimum number of non-zero values per row and per column, yields a better approximation. We observe that for some layers the error can be high, despite the relatively good accuracy observed in Table 1, suggesting that the usefulness of Palm4MSA may be limited. In order to examine this usefulness, we built two other types of models that implement the decomposition of layers into products of sparse matrices, with either a random sparsity support or random weights, rather than those provided by Palm4MSA: • "PSM random": we construct a completely random sparse factorization. The sparsity support is chosen by projecting a standard Gaussian matrix. The weights are initialized using the procedure described in Section 3.1; • "PSM re-init.": we use the sparsity support obtained by Palm4MSA but we re-initialize the weights using the procedure described in Section 3.1. Table 1 shows that the network obtained with the Palm4MSA method achieves the best classification performance after refining the network (more results in Table 4 of Supplementary Material A.7). Overall, "PSM re-init" and "PSM random" may perform very badly in some cases, which suggests the importance of both the weights and the sparsity support found by Palm4MSA. Sparsity level and number of factors. The sparsity level corresponds to the approximate number of non-zero values in each row and each column of the sparse matrices constituting the decomposition. Figure 1 and Table 4 show the performance of our model obtained with various sparsity levels K and numbers of factors Q. We notice that the number of factors seems to have a rather limited effect on the quality of the performance, while the sparsity level is a more determining factor of the quality of the approximation. Note that we did not consider the hierarchical version of Palm4MSA (Magoarou & Gribonval, 2016) since it requires Q = log D factors, which greatly increases the number of non-zero values in the decomposition without significantly improving the final performance of the model. 3.3 Comparative study Figure 1 reports comparative results obtained on standard benchmark datasets with well-known architectures. We discuss below the main conclusions to be drawn from these results. Reliability. First of all, the behaviour of the baseline methods seems quite dependent on the experiment. For instance, the TT performance may vary depending on the chosen rank (e.g.
see rank 10 in Figure 1-(b)); the Hard pruning technique performs badly on MNIST. Moreover, these baselines are not always manageable in practice (e.g. no results for TT on Resnet compression, see below). On the contrary, we observe more stable performance with regard to the choice of hyper-parameters, and a systematically very low variance, with our method. Comparison to low-rank decomposition techniques. Our approach significantly outperforms the Tucker-SVD method in all cases. We believe that the low-rank approximation may make it more difficult to reach high compression rates while preserving good performance. This emphasizes our point that standard low-rank decomposition methods cannot reach the high-compression regime without degrading the accuracy of the network, because of their low-rank assumption. On the contrary, the TT formulation may achieve higher compression rates than Tucker, as was already observed in past works. It seems better suited than the Tucker decomposition and may perform similarly to, or better than, our method in some cases. Yet, the method has a few drawbacks: first, it may exhibit very strong variance, especially for high compression rates (see the results in Figures 1-(b) to 1-(d) on SVHN, CIFAR10 and CIFAR100); second, as illustrated in Supplementary Material A.6, the implementation provided by the authors does not always allow obtaining results when the product of the number of filters and of the TT rank is large. In particular, we were unable to run experiments on models such as Resnet20 and Resnet50 because the memory footprint increases considerably (Figures 1-(e) and 1-(f)). We are thus unable to report results for compression rates higher than those shown in the figure for VGG19 (Figures 1-(c) and 1-(d)). Comparison with pruning techniques. In the "Hard" pruning case, the compressed network performs very badly. This confirms that a sparse factorization with more than one factor is profitable. When applying the procedure of Zhu & Gupta (2017), however, the magnitude-based pruning method preserves good accuracy while removing up to 98% of the connections from the model, except on the MNIST dataset. While our approach significantly outperforms the Hard pruning technique in all cases, its Iterative pruning variant (Zhu & Gupta, 2017) may sometimes lead to significantly better performance in high-compression settings than our approach; this is the case in particular with Resnet models on CIFAR100 (Figures 1-(e) and 1-(f)). Otherwise, in the other settings on Resnet models, and for compressing the other models, this technique yields a performance-vs-compression tradeoff similar to our method. Since the Hard pruning technique may be viewed as a special case of our method, this suggests that an iterative-like extension of our method could reach even better results, which is a perspective of this work. 4 Conclusion The proposed approach is able to compress the dense and convolutional layers of a neural network by decomposing the weight matrices into products of sparse matrices. Unlike common decomposition methods, our method does not make any assumption on the rank of the weight matrices and allows high compression rates while maintaining good accuracy. The experiments show the interest of using a product of multiple sparse matrices instead of a single sparse matrix to convey, in theory, more information for a fixed number of non-zero values. A possible extension of this work would be to study a progressive sparsity-inducing strategy (Zhu & Gupta, 2017) that could offer further improvements.
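For concreteness, the progressive schedule of Zhu & Gupta (2017) mentioned here, as we understand it, ramps the sparsity cubically during training; a small sketch (our own code and naming) is:

```python
def pruning_sparsity(step, begin_step, end_step, s_init=0.0, s_final=0.98):
    """Cubic sparsity ramp, s_t = s_f + (s_i - s_f) * (1 - t)^3, with t the
    training progress normalized to [0, 1] between begin_step and end_step."""
    t = min(max((step - begin_step) / (end_step - begin_step), 0.0), 1.0)
    return s_final + (s_init - s_final) * (1.0 - t) ** 3

# at regular intervals the mask is recomputed by pruning the smallest-magnitude
# weights up to pruning_sparsity(step, ...), and fine-tuning continues in between
print(f"{pruning_sparsity(500, 0, 1000):.2f}")  # 0.86: most pruning happens early
```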
A Supplementary Material A.1 Illustration of the proposed method The proposed method allows network compression through the factorization of the layer weights. Figure 3 highlights the difference between the standard and compressed architectures composed of a convolutional and a fully connected layer, where each layer weight matrix is replaced by a product of sparse matrices. A.2 Reshape operations in convolutional layers In order to apply the factorization to the convolution weights, we employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from an input (Garipov et al., 2016); see Figure 4 for more details. A.3 Implementation details Implementation of sparse matrices. In our implementation, a sparse matrix is in fact defined as the Hadamard (pairwise) product between a dense matrix that encodes the weights to be learned and a matrix of constant binary elements that encodes the sparsity support. Thus, the implementation of a sparse matrix S ∈ R^{D×D}, ‖S‖₀ = O(D), is:

S = W ⊙ M,   (4)

where W ∈ R^{D×D} is a dense weight matrix and M ∈ {0, 1}^{D×D}, ‖M‖₀ = O(D), is a constant binary matrix. With this implementation, the gradients of the weights of W outside the sparsity support defined by M are always equal to zero, and thus the corresponding weights of W are never updated. In other words, the sparsity support of S is fixed by M. Although this implementation allows us to evaluate our method, the dense storage of all the values of W and M is required: 2D² values are stored to simulate the sparsity of a matrix of size D² containing O(D) non-zero values. This non-optimal implementation takes advantage of the parallel algorithms of the matrix product and is actually faster on GPU than an implementation that would use Tensorflow's SparseTensor class¹. Implementation of the convolution. To compute the convolutions in the network, we rebuild the convolution kernel from the product of sparse matrices. This convolution kernel is then directly used as a parameter of the Tensorflow function conv2d for fast computation. Implementation of the Tensor-Train decomposition. The decomposition is performed by applying the decomposition function matrix_product_state provided by the Tensorly library to the tensors obtained from the pre-trained networks. Implementation of magnitude-based pruning. To implement the method of Zhu & Gupta (2017), we used the function prune_low_magnitude from the tensorflow_model_optimization library provided by Tensorflow. With this method, the pruning and the refinement of the weights are combined by progressively removing connections during the learning process until the desired percentage of pruning is obtained. (Re)-initialization of the weights of a sparse matrix decomposition. When the weights of a sparse matrix factorization are not provided by palm4MSA, the initialization of the weights must be adapted to the reduced number of connections in the layer. We adapt the Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) initialization methods to sparse matrices. Specifically, the first sparse factor is initialized using the He method because the ReLU activation function is applied before it, yielding values that are not zero-centered. The following sparse factors are initialized using the Xavier method since there is no non-linearity between factors.

¹One can use the SparseTensor class in conjunction with the Variable class to implement these layers sparsely and have exactly the reduced number of parameters, but this implementation was slower, so we preferred not to use it for the experiments.
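A minimal Keras sketch of the masked-weight trick of Eq. (4) (our own illustration; the class name is ours, and biases and the convolutional case are omitted):

```python
import numpy as np
import tensorflow as tf

class MaskedDense(tf.keras.layers.Layer):
    """Dense layer computing a(x @ (W * M)) with a fixed binary mask M, Eq. (4)."""

    def __init__(self, mask, activation=None):
        super().__init__()
        self.mask = tf.constant(mask, dtype=tf.float32)  # constant sparsity support
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # dense storage of W, as in the paper's implementation; entries of W
        # outside the support receive zero gradient and are never updated
        self.w = self.add_weight(name="w", shape=self.mask.shape,
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        return self.activation(tf.matmul(x, self.w * self.mask))

mask = (np.random.rand(64, 32) < 0.1).astype("float32")  # ~10% support
layer = MaskedDense(mask, activation="relu")
print(layer(tf.random.normal((4, 64))).shape)  # (4, 32)
```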
A.4 palm4MSA algorithm The palm4MSA algorithm of Magoarou & Gribonval (2016) is given in Algorithm 1, together with the time complexity of each line, using A = min(D, D′) and B = max(D, D′) for a matrix to factorize U ∈ R^{D×D′}. Even more general constraints can be used; the constraint sets E_q are typically defined as the intersection of the set of unit Frobenius-norm matrices and of a set of sparse matrices. The unit Frobenius norm is used together with the λ factor to avoid a scaling indeterminacy. Note that, to simplify the model presentation, the factor λ is used internally in palm4MSA and is integrated into the factor S₁ at the end of the algorithm (Line 14), so that S₁ does not satisfy the unit Frobenius-norm constraint of E₁ at the end of the algorithm. The sparsity constraints we used, as in Magoarou & Gribonval (2016), consist of targeting a given number of non-zero coefficients in each row and in each column. This number of non-zero coefficients is called the sparsity level in this paper. In practice, the projection function at Line 9 keeps the largest non-zero coefficients in each row and in each column, which only guarantees that the actual number of non-zero coefficients is at least equal to the sparsity level.

Algorithm 1 palm4MSA algorithm
Require: the matrix to factorize U ∈ R^{D×D′}, the desired number of factors Q, the constraint sets E_q, q ∈ {1, . . . , Q}, and a stopping criterion (e.g., here, a number of iterations I).
1: λ ← ‖S₁‖_F {O(B)}
2: S₁ ← (1/λ) S₁ {O(B)}
3: for i ∈ {1, . . . , I}, while the stopping criterion is not met, do
4:   for q = Q down to 1 do
5:     L_q ← ∏_{l=1}^{q−1} S_l^{(i)}
6:     R_q ← ∏_{l=q+1}^{Q} S_l^{(i+1)}
7:     choose c > λ² ‖R_q‖₂² ‖L_q‖₂² {O(A log B + B)}
8:     D ← S_q^{(i)} − (1/c) λ L_qᵀ (λ L_q S_q^{(i)} R_q − U) R_qᵀ {O(AB log B)}
9:     S_q^{(i+1)} ← P_{E_q}(D) {O(A² log A) or O(AB log B)}
10:  end for
11:  Û := ∏_{q=1}^{Q} S_q^{(i+1)} {O(A² log B + AB)}
12:  λ ← Trace(Uᵀ Û) / Trace(Ûᵀ Û) {O(AB)}
13: end for
14: S₁ ← λ S₁ {O(B)}
Ensure: {S_q : S_q ∈ E_q}, q ∈ {1, . . . , Q}, such that ∏_{q=1}^{Q} S_q ≈ U

A.5 Dataset details Experiments are conducted on four public image classification datasets: MNIST, SVHN, CIFAR10 and CIFAR100, with four pretrained network architectures: Lenet, VGG-19, Resnet50 and Resnet20. Table 2 details the datasets' characteristics and the corresponding NN models on which we evaluated the compression methods. A.6 Experiments on Tensor-Train memory overhead Although Tensor-Train can reach impressive compression rates, the method may require large amounts of memory. This memory requirement prevents experiments on architectures with many convolutional layers containing many filters, such as ResNet. Table 3 highlights the increase in memory when experimenting on the investigated architectures for a few hyper-parameter settings. The row "Others" stands for the requirement of all the other methods. A.7 Experiments on the proposed method Since the error of the matrix approximation can be high despite the relatively good accuracy, we investigate factorization methods other than Palm4MSA. Specifically, two other methods are evaluated: "PSM re-init." uses the same sparsity support as palm4MSA with re-initialized weights, and "PSM random" has a random sparsity support and random weights. Table 4 presents the results and shows the superiority of Palm4MSA.
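For completeness, here is a minimal NumPy sketch of Algorithm 1 above (our own, restricted to square matrices; the projector keeps the k largest-magnitude entries globally and renormalizes, a simplification of the per-row/per-column projector of Line 9):

```python
import numpy as np
from functools import reduce

def sparse_project(S, nnz):
    """Simplified P_E: keep the nnz largest-magnitude entries, unit Frobenius norm."""
    flat = np.abs(S).ravel()
    thresh = np.partition(flat, flat.size - nnz)[flat.size - nnz]
    Sp = np.where(np.abs(S) >= thresh, S, 0.0)
    norm = np.linalg.norm(Sp)
    return Sp / norm if norm > 0 else Sp

def palm4msa(U, Q, nnz, n_iter=300, seed=0):
    """Minimal palm4MSA sketch: U ~ prod(S_q), each S_q sparse with unit norm."""
    D = U.shape[0]
    rng = np.random.default_rng(seed)
    S = [sparse_project(rng.standard_normal((D, D)), nnz) for _ in range(Q)]
    lam = 1.0
    for _ in range(n_iter):
        for q in range(Q - 1, -1, -1):
            L = reduce(np.matmul, S[:q], np.eye(D))        # left product L_q
            R = reduce(np.matmul, S[q + 1:], np.eye(D))    # right product R_q
            c = 1.01 * lam ** 2 * (np.linalg.norm(L, 2) * np.linalg.norm(R, 2)) ** 2 + 1e-12
            G = S[q] - (lam / c) * L.T @ (lam * L @ S[q] @ R - U) @ R.T  # Line 8
            S[q] = sparse_project(G, nnz)                  # Line 9
        U_hat = reduce(np.matmul, S)
        lam = np.trace(U.T @ U_hat) / np.trace(U_hat.T @ U_hat)  # Line 12
    S[0] = lam * S[0]  # Line 14: fold the scale into the first factor
    return S

U = np.random.default_rng(1).standard_normal((16, 16))
S = palm4msa(U, Q=2, nnz=64, n_iter=100)
err = np.linalg.norm(U - reduce(np.matmul, S)) ** 2 / np.linalg.norm(U) ** 2
print(f"normalized error: {err:.3f}")
```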
1. What is the focus of the paper regarding neural network compression? 2. What are the strengths of the proposed approach, particularly in its simplicity and idea behind it? 3. What are the weaknesses of the paper, especially when compared to other methods like Iterative pruning and TT? 4. Do you have any questions or suggestions regarding the presentation of the method, such as the notation in Equation 1 or the choice of measurements in Figure 1?
Review
The authors introduce a neural network compression method based on factorizing each weight matrix into a product of multiple sparse matrices, with the goal of achieving a high compression rate. The authors use a previously proposed algorithm (Palm4MSA) to implement the method. The experimental results are better than those of other low-rank-based methods, but similar or inferior to the Iterative pruning and TT methods.
Pros: The introduced method is easy to understand and seems to make sense. The idea of using products of multiple sparse matrices to represent the weight matrix is nice.
Cons: The introduced method is almost a direct application of an existing algorithm (Palm4MSA). The experimental results are not state-of-the-art: they are worse than Iterative pruning. The authors state that "an iterative-like extension of our method could reach even better results", so it would be important to include these results in the paper.
Questions and suggestions: In Eq. 1, the \prod S_i x is confusing; (\prod S_i) x could be better. In Fig. 1, you could use the compression rate rather than the actual number of parameters as the measurement. You could also use a curve to better illustrate the compression-accuracy tradeoff.
ICLR
Title Sparse matrix products for neural network compression Abstract Over-parameterization of neural networks is a well known issue that comes along with their great performance. Among the many approaches proposed to tackle this problem, low-rank tensor decompositions are largely investigated to compress deep neural networks. Such techniques rely on a low-rank assumption of the layer weight tensors that does not always hold in practice. Following this observation, this paper studies sparsity inducing techniques to build new sparse matrix product layer for high-rate neural networks compression. Specifically, we explore recent advances in sparse optimization to replace each layer’s weight matrix, either convolutional or fully connected, by a product of sparse matrices. Our experiments validate that our approach provides a better compression-accuracy trade-off than most popular low-rank-based compression techniques. 1 Introduction The success of neural networks in the processing of structured data is in part due to their over-parametrization which plays a key role in their ability to learn rich features from the data (Neyshabur et al., 2018). Unfortunately, this also makes most state-of-the-art models so huge that they are expensive to store and impossible to operate on devices with limited resources (memory, computing capacity) or that cannot integrate GPUs (Cheng et al., 2017). This problem has led to a popular line of research for “neural networks compression”, which aims at building models with few parameters while preserving their accuracy. State of the art techniques for neural network compression. Popular matrix or tensor decomposition methods including Singular Value Decomposition (SVD), CANDECOMP/PARAFAC (CP) and Tucker have been used to address the problem of model compression by a low-rank approximation of the neural network’s weights after learning. Sainath et al. (2013) describe a method based on SVD to compress weight matrices in fully connected layers. Denton et al. (2014); Lebedev et al. (2015); Kim et al. (2016) generalize this idea to convolutional layers and then reduce the memory footprint of convolution kernels by using higher-order low-rank decompositions such as CP or Tucker decompositions. Besides, the Tensor-Train (TT) decomposition has been explored to compress both dense and convolutional layers after a pre-training step (Novikov et al., 2015). This approach may achieve extreme compression rates but it also have impractical downsides that we demonstrate now. In a TT format, all the elements of a M -order tensor are expressed by a product ofM matrices whose dimensions are determined by the TT-ranks (R0, R1, . . . , RM ). For each of theM dimension of the initial tensor, the corresponding matrices can be stacked into an order 3 tensor called a “core” of the decomposition. Hence, the layer weight is decomposed as a set of M cores of small dimensions. Novikov et al. (2015) use this tensor representation to factorize fully connected layers. They first reshape the matrix of weights into an M -order tensor, then apply the TT decomposition. By choosing sufficiently small Rm values, this technique allows to obtain a high compression ratio on extremely wide ad hoc neural architectures. Garipov et al. (2016) have adapted this idea to convolutional layers. However, the current formulation of such TT convolutional layer involves the multiplication of all input values by a matrix of dimension 1 × R1 thus causing an inflation of R1 times the size of the input in memory. 
This makes the available implementation (Garipov, 2020) unusable for recent wide convolutional networks at inference time. Other compression methods include unstructured pruning techniques that we review more in details in Section 2.3 and structured pruning techniques that reduce the inner hidden dimensions of the network by completely removing neurons (Anwar et al., 2017). According to the recent paper of Liu et al. (2018) however, these techniques are more akin to Neural Architecture Search than actual network compression. Finally, quantization-based compression maps the columns of the weight matrices in the network to a subset of reference columns with lower memory footprint (Guo, 2018). Sparse matrices product for full rank decompositions. We are specifically interested in high-rate compression of neural networks via the efficient factorization of the layer weight matrices. Most known approaches to layer decomposition usually makes low-rank assumption on the layer weight tensors which does not always hold in practice. As we will show in the experiments, this makes the Tucker and SVD based techniques unable to effectively reach high compression rates for standard architectures including both convolutional and fully connected layers, such as VGG19 or ResNet50, whose weight matrices usually exhibit full rank. In this paper, we propose instead to express the weight matrices of fully-connected or convolutional layers as a product of sparse factors which contains very little parameters but still can represent high-rank matrices. Moreover, products of matrices with a total sparsity budget are strictly more expressive than single matrices with that sparsity (Dao et al., 2019), which motivates our interest in products of multiple matrices. Usually, a linear operator (a matrix) from RD to RD has a time and space complexities of O(D2). But some well known operators like the Hadamard or the Fourier transforms can be expressed in the form of a product of logD sparse matrices, each having O(D) non-zero values (Dao et al., 2019; Magoarou & Gribonval, 2016). These linear operators, called fast-operators, thus have a time and space complexities lowered to O(D logD). This interesting feature of fast-operators have inspired the design of new algorithms that learn sparse matrix product representations of existing fast-transforms (Dao et al., 2019) or even that computes sparse product approximations of any matrix in order to accelerate learning and inference (Magoarou & Gribonval, 2016; Giffon et al., 2019). Even though these new methods were initially designed to recover the logD factors corresponding to a fasttransform, they are more general than that and can actually be used to find a factorization with Q < logD sparse matrices. Contributions. We introduce a general framework for neural network compression using the factorization of layers into sparse matrix products. We explore the use of the recently proposed palm4MSA algorithm (Magoarou & Gribonval, 2016) on every layer of a pre-trained neural network to express them as a product of sparse matrices. The obtained sparse matrices are then refined by gradient descent to best fit the final prediction task. When there is only one sparse matrix in the decomposition, our approach recovers the simple procedure of hard thresholding the weights of a matrix after pre-training. 
We evaluate the effect of different hyper-parameters on our method and show that layers can be factorized into two or three sparse matrices to obtain high compression rates while preserving good performance, compared to several main state-of-the-art methods for neural network compression. 2 Learning sparse matrix products for network compression We describe how to compress NN weight matrices by sparse matrix factorization. We call our procedure PSM for Product of Sparse Matrices. It is obvious to see that a product of sparse matrices with a given sparsity budget can recover a full rank matrix or a matrix with more non-zero values than the initial sparsity budget. This observation motivates the use of a sparse matrix factorization in place of usual low-rank decomposition and sparsity inducing techniques for neural network compression. We first recall linear transform operations in fully-connected and convolutional layers. Then, inspired by recent work on learning linear operators with fast-transform structures, we propose to use a product of sparse matrices to replace linear transforms in neural networks. We also introduce a procedure to learn such factorization for every layers in deep architecture. Finally, we review some known neural network compression techniques that appear as particular cases of our framework. 2.1 Weight matrices as product of sparse matrices Fully-connected and convolutional layers are based on the computation of linear operations. In a fully-connected layer, the output z ∈ RD′ is simply given by z = a(Wx) where a is some non-linear activation function. W ∈ RD′×D is the weight matrix of the layer and x ∈ RD is the output of the preceding layer. The linear operation in a convolutional layer can be represented by a doubly-block Toeplitz matrix (Wang et al., 2020). An other way to perform the operation is to employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from the input (Garipov et al., 2016). In this work, we focus on this latter representation of the convolution operation. More formally, let rS : RH×W×C 7→ RHW×CS 2 be the reshape operation that creates the matrix of all vectorized patches of size (height and width) S2 on an input image with C channels. The matrix of K filters W ∈ RCS2×K can then be applied to these patches (multiplied with rS) to produce the output of the convolutional layer in a matrix shape. Finally, a second reshape operator t : RHW×K 7→ RH×W×K is applied on the feature map matrix to reconstruct the output tensor of the layer Z ∈ RH×W×K . Altogether, the convolution operation can be written as Z = a(t(rS(X )W)) where a is some non-linear activation function and X is the output 3-D tensor of the preceding layer. We preserve simplicity in notation here, assuming without loss of generality that the stride used by rS is equal to 1 and that the input tensor is padded with bS2 c zeros vertically and horizontally. The whole process is depicted in Supplementary Material A.2. Our general idea is to replace the weight matrix of each neural network layer with a product of Q sparse matrices, hence reducing the storage and computational complexities of the layer. Indeed, for an initial matrix of dimension (D × D′), if all sparse matrices store O(D) non-zero values, then the total complexity of the product becomes O(QD) instead of O(DD′). 
To define a fast-transform operator, one would use Q = logD but in practice we show that we can chose much smaller Q and achieve huge compression rates without lowering much the performance. Supplementary Material A.1 illustrates the effect of our compression scheme on a simple architecture including one convolution layer and a single dense layer. Given an input vector x ∈ RD, expressing the weight matrix W ∈ RD′×D of a fully connected layer as a product of sparse matrices gives output z such that: z = a ( Q∏ i=1 Six ) , (1) where ||Si||0 = O(D) so that the complexity in time and space of this layer is reduced to O(QD) instead of O(DD′). Similarly, in the convolution layers, the output Z ∈ RH×W×K is obtained from an input tensor X ∈ RH×W×C : Z = a ( t ( rS (X ) Q∏ i=1 Si )) , (2) where ||Si||0 = O(max(S2C,K)) so that the time complexity of the layer is reduced from O(HWCS2K) to O(HWQ ·max(CS2,K)) and the complexity in space is reduced from O(CS2K) to O(Q ·max (CS2,K)). Since there is no constraint on the rank of factors, the sparse matrix products of each layer can reach full rank, unlike low-rank decomposition methods. Moreover, the reconstruction of a sparse matrix product with a total of O(QD) non-zero values can produce a matrix with more than O(QD) non-zero values. This is consistent with the intuition that a product of sparse matrices can be more expressive than a single sparse matrix. 2.2 Full Neural Network Compression The full compression pipeline we propose includes first the learning of a standard NN, second the compression of each layer independently as a product of sparse matrices, and finally a fine tuning of the compressed NN whose layers are all expressed as PSM layers. The second step requires approximating each weight matrix W (of a dense or a convolutional layer) as a product of sparse factors, which is cast as the following optimization problem: min {Si}Qi=1 ∥∥∥∥∥W− Q∏ i=1 Si ∥∥∥∥∥ 2 F + Q∑ i=1 δEi(Si), (3) where for each i ∈ JQK, δEi(Si) = 0 if Si ∈ Ei and δEi(Si) = +∞ otherwise. Ei is the set of solutions that respect a sparsity constraint (e.g. number of non zeros values). Although this problem is non-convex, non-differentiable, and the computation of a global optimum cannot be ascertained, the palm4MSA algorithm proposed by Magoarou & Gribonval (2016) is able to learn such factorization by finding a local minimum with convergence guarantees. For more details about palm4MSA, see Supplmentary Materiel A.4. Once every layer’s weight matrix is approximated by a product of sparse matrices, these PSM layers are assembled in a compressed NN which is refined to optimize the initial task objective while the sparsity support of all factors is kept fixed. 2.3 Related work Some techniques based on inducing sparsity in neural connections, e.g. zeroing single weights in layer tensors, can be seen as particular cases of our method. The most straightforward approach to this is to simply remove the weights with lowest magnitude until a given sparsity ratio is reached. This can be done in a very trivial fashion by just removing weights on a pre-trained network and then finetuning the remaining connections. This method can be seen as the particular case of ours when there is only one factor to approximate weight matrices (i.e., Q = 1). As we show in the experiments, this method doesn’t allow high compression rate without degradation of the accuracy. 
Zhu & Gupta (2017) proposed instead to intertwine the removal of the connections by finetuning remaining weights, achieving better classification performance. Others, approaches for inducing sparsity in the network were proposed (Molchanov et al., 2017; Louizos et al., 2018-06-22) but they do not seem to offer performance improvements in general settings (Gale et al., 2019). The idea of replacing layers by sparse factorization has been previously explored but restricted to particular structures. In Deep Fried Convnets, Yang et al. (2015) propose to replace dense layers of convolutional neural networks by the Fastfood approximation (Le et al., 2013). This approximation is a product of diagonal matrices, a permutation matrix, and a Hadamard matrix which can itself be expressed as a product of logD sparse matrices (Dao et al., 2019). The Fastfood approximation thus provides a product of sparse factors and from this perspective, the Fastfood layer proposed by Yang et al. (2015) is a particular, constrained, case of our more general framework. Moreover, the Deep Fried Convnets architecture is based on the Hadamard matrix that imposes a strong structural constraint on the factorization, which might not be suitable for all layers of a deep architecture. The term sparse decomposition used in Liu et al. (2015) for network compression refers to separate products between dense and sparse matrices to represent the weights of the convolution kernels in a network. Finally, Wu et al. (2019) have recently proposed a very similar framework than ours along with a regularization strategy to learn the sparsity in the sparse factors but their method does not allow for more than two sparse factors and the compression of the convolutional layers is not considered although best performing architectures tend to store most of their parameters in these layers. 3 Experiments Section 3.1 details the experimental settings and parameters to ensure reproducibility. We provide an in depth analysis of our method in Section 3.2. Finally we report in Section 3.3 a comparative study of our method with state-of-the-art methods. 3.1 Experimental setting Our analysis is focused on image classification tasks, we investigate the compression of standard architectures (pretrained models) with our approach and with a few state of the art methods. We evaluate all methods by measuring both the compression ratio and the accuracy of compressed models. We first present implementation details and datasets, then we detail the baselines and the hyperparameters we used. Implementation details. The code was written in Python, including the palm4MSA algorithm (it will be made available on github). NNs were implemented with Keras (Chollet, 2015) and Tensorflow (Abadi et al., 2015). Due to the lack of efficient implementation of the sparse matrices in Tensorflow, we had to hijack the implementations of the dense matrix product and convolution offered by Keras in order to make the learning of networks as efficient as possible (See Supplementary material A.3 for details). Datasets and Neural Architectures. Our experiments are conducted on four standard image classification data sets of varying difficulty: MNIST (Deng, 2012), SVHN (Netzer et al., 2011), CIFAR10, CIFAR100 (Krizhevsky, 2009). Pretrained NNs to be compressed are classically used with these data sets i.e. Lenet(LeCun et al., 1998), VGG19(Simonyan & Zisserman, 2015), Resnet50 and Resnet20 (He et al., 2016). 
Details on datasets and neural architectures may be found in Supplementary Material A.5. Competing baselines. We present below the baselines and the variants of our approach that we considered. In all cases the methods were applied on the same pre-trained models and all compressed models were fine-tuned after compression: • Low-rank factorization methods, including Tensor-Train decomposition (Novikov et al., 2015; Garipov et al., 2016) (named TT hereafter) and an hybrid of Tucker decomposition (Kim et al., 2016) and SVD (Sainath et al., 2013) (named Tucker-SVD) where Tucker and SVD decomposition are used respectively for the compression of convolutional and dense layers. • Two sparsity inducing pruning techniques. The first one is a simple magnitude based projection of weight matrices on a sparse support. This can be seen as a particular case of our model where only one sparse factor is required. We name this strategy “Hard pruning” (HP in the following). The second method, named Iterative pruning (IP) is the iterative strategy proposed by Zhu & Gupta (2017), which refine magnitude based projection with finetuning steps. • Finally, we evaluate the interest of using the palm4MSA algorithm to discover a sparse matrix decomposition of the layers compared to some random decomposition. More precisely, we evaluate (I) the performance of a model whose layers would be decomposed by palm4MSA but whose weights would be re-initialized while peserving the sparsituy support; and (II) the performance of a model whose weights and sparsity support would be randomly sampled at initialization. Note that we initially considered Deep Fried convnets as an additional baseline but we finally did not include it on our experimental study since this method is originally dedicated to compress fully connected layers and our attempts to make it able to compress convolutional layers as well actually failed, preventing the method to be applied to compress many state of the art architectures that contain mostly convolutional layers. Hyper-parameters. The stopping criteria used for the palm4MSA algorithm is 300 iterations or a relative change between two iterations below 10−6. The projection method is the one used in Magoarou & Gribonval (2016). With K the desired level of sparsity, this method ensures that each sparse factor contains at least K non-zero values per row and per column and at most 2K on average. For experiments with random sparsity support, the initialization of the weights must be adapted to the reduced number of connections in the layer. To do this, we adapt the initializations Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) to the initialization of sparse matrices (See Supplementary Material A.3). We chose M = 4 cores for the Tensor-Train decomposition of any tensor and the maximum possible rank value of the decompositions is specified by the value R in the experiments. In the hybrid of Tucker and SVD decomposition, the rank of the Tucker decomposition is automatically detected by the Variational Bayes Matrix Factorization method, as explained by Kim et al. (2016); the rank of the SVD in the dense layers is chosen such that only a certain percentage (specified in the experiments) of the singular values are kept. Fine-tuning of the Lenet network was done with the RMSProp optimizer and 100 learning epochs. The VGG19 and Resnet networks are fine-tuned with Adam (Kingma & Ba, 2014) optimizer and respectively 300 and 200 epochs. 
For each compression method and configuration, the best learning rate is chosen using the classification error on a validation sample after 10 iterations, with learning rate values in {10−3, 10−4, 10−5}. Standard data augmentation is applied: translation, rotation and flipping. 3.2 Analysis of the method We first provide an in-depth analysis of our method to validate the use of Palm4MSA for the decomposition of layer weight matrices into a product of sparse matrices. We then evaluate the impact of hyper-parameters Q and K on model accuracy and compression rate. Approximation error. We evaluate the quality of the Palm4MSA approximation of original weight matrices, and its influence on the final performance of the method. We report results gained on the compression VGG19 trained on CIFAR100 as illustration. Figure 2 shows the approximation error between the product of Q sparse factors W̃ := ∏Q q=1 Sq and the original weights W for each layer. It is computed as the normalized distance between the matrices: error = ‖W− W̃‖2F /‖W‖2F . Figure 2 further shows that higher K, i.e. the minimum number of non-zero values per row and per column, yield better approximation. We observe that for some layers, the error can be high, despite relative good accuracy performance observed in Table 1, suggesting the use of the Palm4MSA is limited. In order to examine this usefulness, we have built two other types of models that implement the decomposition of layers into product of sparse matrices, either with random sparsity support, or with random weights, rather than considering those provided by Palm4MSA: • “PSM random”: We construct completely random sparse factorization. The sparsity support is chosen by projecting a reduced centered Gaussian matrix. The weights are initialized using the procedure described in Section 3.1; • “PSM re-init.”: We use the sparsity support obtained by Palm4MSA but we reinitialize the weights using the procedure described in Section 3.1. Table 1 shows the network with the Palm4MSA method obtains the best performance in classification after refining the network (more results in Table 4 of Supplementary Material A.7). Overall «PSM re-init» and «PSM random» may perform very badly on some cases, which suggests the importance of both weights and sparsity support found by Palm4MSA. Sparsity level and the number of factors. Sparsity level corresponds to the approximate number of non-zero values in each row and each column of the sparse matrices constituting the sparse decomposition. Figure 1 and Table 4 show the performance of our model obtained with various sparsity levels K and numbers of factors Q. We notice that the number of factors seems to have a rather limited effect on the quality of the performance, while sparsity level is a more determining factor of the quality of the approximation. Note that we did not considered the Hierarchical version of Palm4MSA (Magoarou & Gribonval, 2016) since it requires Q = logD factors, which greatly increases the number of non-zero values in the decomposition without significantly improving the final performance of the model. 3.3 Comparative study Figure 1 reports comparative results gained on standard benchmark datasets with well known architectures. We discuss below the main comments to be drawn from these results. Reliability. First of all, the behaviour of the baseline methods seems quite dependent on the experiment. For instance TT performance may vary depending on the chosen rank (e.g. 
see rank 10 in figure 1-(b)); Hard Pruning technique performs badly on MNIST. Moreover these baselines are not always manageable in practice (e.g. no results of TT on Resnet compression, see below). On the contrary, we observe some more stable performances with regards to the choice of hyper-parameters and a systematic very low variance obtained with our method. Comparison to low rank decomposition techniques. Our approach significantly outperforms the Tucker-SVD method in any case. We believe that the low rank approximation may make more difficult to reach high compression rates while preserving good performance. This emphasizes our point that standard low rank decomposition methods cannot afford high compression regime, e.g. low-rank assumption, without degradation of the accuracy of the network. On the contrary, the TT formulation may achieve higher compression rates than Tucker, as was already observed in past works. It seems better suited than Tucker decomposition and may performs similarly or better as our method in some cases. Yet, the method has few drawbacks: first it may exhibit very strong variance, especially for high compression rates (see results in figures 1-(b) to 1-(d) on SVHN, CIFAR10-100) ; second, as illustrated in Supplementary Material A.6, the implementation provided by authors do not allow getting results in any case, when the product of the number of filters and of the TT rank is large. In particular we were unable to run experiments on models such as Resnet20 and Resnet50 because the memory footprint is increased considerably (figures 1-(e) and 1-(f)). We thus are unable to get results for higher compression rates that those in the figure with VGG19 (figures 1-(c) and 1-(d)). Comparison with pruning techniques. In the “Hard” pruning case, the compressed network perform very badly. This confirms that a sparse factorization with more than one factor is profitable. When applying the procedure of Zhu & Gupta (2017), however, the magnitude based pruning method preserve good accuracy while removing up to 98% of the connections from the model, except for the MNIST dataset. While our approach significantly outperforms the Hard pruning technique in any case, its Iterative pruning variant Zhu & Gupta (2017) may sometimes leads to significantly higher performance compression in high compression settings than our approach, this is the case in particular with Resnet models on CIFAR100 (figures 1-(e) and 1-(f)). Otherwise, in other settings on Resnet models, and for compressing other models, this technique allows similar performance vs compression tradeoff than our method. Since the Hard pruning technique may be viewed as a special case of our method, this suggests that an iterative-like extension of our method could reach even better results, which is a perspective of this work. 4 Conclusion The proposed approach is able to compress dense and convolutional layers of a neural network by decomposing weight matrices into sparse matrices products. Unlike common decomposition methods, our method does not make any assumptions on the rank of the weight matrices and allows high compression rate while maintaining good accuracy. The experiments show the interest of using a product of multiple sparse matrices instead of a single sparse matrix to convey, in theory, more information for a fixed number of non-zero values. A possible extension of this work would be to study a strategy for progressive sparcity inducing Zhu & Gupta (2017) that could offer further improvements. 
A Supplementary Material

A.1 Illustration of the proposed method The proposed method allows network compression through the factorization of layer weights. Figure 3 highlights the difference between the standard and compressed architectures for a model composed of a convolutional and a fully connected layer, where each layer weight matrix is replaced by a product of sparse matrices.

A.2 Reshape operations in convolutional layers In order to factorize convolution weights, we employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from an input (Garipov et al., 2016); see Figure 4 for more details.

A.3 Implementation details Implementation of sparse matrices. In our implementation, a sparse matrix is in fact defined as the Hadamard product ⊙, or pairwise product, between a dense matrix that encodes the weights to be learned and a matrix of constant binary elements that encodes the sparsity support. Thus, the implementation of a sparse matrix S ∈ R^{D×D}, ‖S‖_0 = O(D), is:

S = W ⊙ M, (4)

where W ∈ R^{D×D} is a dense weight matrix and M ∈ {0, 1}^{D×D}, ‖M‖_0 = O(D), is a constant binary matrix. With this implementation, the gradients of W outside the sparsity support defined by M are always equal to zero, and thus the corresponding weights of W are never updated. In other words, the sparsity support of S is fixed by M. Although this implementation allows us to evaluate our method, the dense storage of all the values of W and M is required. Specifically, 2D² values are stored to simulate the sparsity of a D×D matrix containing O(D) non-zero values. This non-optimal implementation takes advantage of parallel matrix-product algorithms and is actually faster on GPU than an implementation that would use Tensorflow's SparseTensor class (one could use SparseTensor in conjunction with the Variable class to implement these layers sparsely and store exactly the reduced number of parameters, but this implementation was slower and we preferred not to use it for the experiments).

Implementation of the convolution. To compute the convolutions in the network, we rebuild the convolution kernel from the product of sparse matrices. This convolution kernel is then directly used as a parameter of the Tensorflow function conv2d for fast computation.

Implementation of the Tensor-Train decomposition. The decomposition is performed by applying the decomposition function matrix_product_state provided by the Tensorly library to the tensors obtained from the pre-trained networks.

Implementation of magnitude-based pruning. To implement the method of Zhu & Gupta (2017), we used the function prune_low_magnitude from the tensorflow_model_optimization library provided by Tensorflow. With this method, the pruning and the refinement of the weights are combined by progressively removing connections during the learning process until the desired percentage of pruning is reached.

(Re-)Initialization of the weights of a sparse matrix decomposition. When the weights of a sparse matrix factorization are not provided by palm4MSA, the initialization of the weights must be adapted to the reduced number of connections in the layer. We adapt the Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) initialization methods to sparse matrices. Specifically, the first sparse factor is initialized using the He method because the ReLU activation function is applied, yielding values that are not zero-centered. The following sparse factors are initialized using the Xavier method since there is no non-linearity between factors.
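To make the masking scheme of Eq. (4) concrete, here is a minimal TensorFlow 2.x sketch of a sparse factor implemented as a dense weight matrix times a fixed binary mask; the class name is illustrative and this is a simplified reading of the described implementation, not the authors' exact code:

import tensorflow as tf

class MaskedFactor(tf.keras.layers.Layer):
    # One sparse factor S = W * M (Eq. 4): trainable dense weights W and a
    # constant binary mask M that fixes the sparsity support.
    def __init__(self, mask):
        super().__init__()
        self.mask = tf.constant(mask, dtype=tf.float32)  # fixed support, not trainable
        self.w = self.add_weight(shape=self.mask.shape, initializer="glorot_uniform")

    def call(self, x):
        s = self.w * self.mask  # Hadamard product: gradients outside the
        return tf.matmul(x, s)  # support are zero, so those weights never move

As noted above, this stores 2D² values to simulate an O(D)-sparse matrix, trading memory for the speed of dense GPU kernels.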
A.4 palm4MSA algorithm The palm4MSA algorithm (Magoarou & Gribonval, 2016) is given in Algorithm 1 together with the time complexity of each line, using A = min(D, D') and B = max(D, D') for a matrix to factorize W ∈ R^{D×D'}. Even though more general constraints can be used, the constraint sets E_q are typically defined as the intersection of the set of unit Frobenius-norm matrices and of a set of sparse matrices. The unit Frobenius norm is used together with the λ factor to avoid a scaling indeterminacy. Note that to simplify the model presentation, the factor λ is used internally in palm4MSA and is integrated into factor S_1 at the end of the algorithm (Line 14), so that S_1 does not satisfy the unit Frobenius-norm constraint in E_1 at the end of the algorithm. The sparsity constraints we used, as in Magoarou & Gribonval (2016), consist in trying to have a given number of non-zero coefficients in each row and in each column. This number of non-zero coefficients is called the sparsity level in this paper. In practice, the projection function at Line 9 keeps the largest non-zero coefficients in each row and in each column, which only guarantees that the actual number of non-zero coefficients is at least equal to the sparsity level.

Algorithm 1 palm4MSA algorithm
Require: the matrix to factorize U ∈ R^{D×D'}, the desired number of factors Q, the constraint sets E_q, q ∈ ⟦Q⟧, and a stopping criterion (e.g., here, a number of iterations I).
1: λ ← ‖S_1‖_F {O(B)}
2: S_1 ← (1/λ) S_1 {O(B)}
3: for i ∈ ⟦I⟧ while the stopping criterion is not met do
4:   for q = Q down to 1 do
5:     L_q ← ∏_{l=1}^{q−1} S_l^{(i)}
6:     R_q ← ∏_{l=q+1}^{Q} S_l^{(i+1)}
7:     choose c > λ² ‖R_q‖²_2 ‖L_q‖²_2 {O(A log B + B)}
8:     D ← S_q^{(i)} − (λ/c) L_q^T (λ L_q S_q^{(i)} R_q − U) R_q^T {O(AB log B)}
9:     S_q^{(i+1)} ← P_{E_q}(D) {O(A² log A) or O(AB log B)}
10:  end for
11:  Û ← ∏_{q=1}^{Q} S_q^{(i+1)} {O(A² log B + AB)}
12:  λ ← Trace(U^T Û) / Trace(Û^T Û) {O(AB)}
13: end for
14: S_1 ← λ S_1 {O(B)}
Ensure: {S_q : S_q ∈ E_q}_{q∈⟦Q⟧} such that ∏_{q∈⟦Q⟧} S_q ≈ U

A.5 Dataset details Experiments are conducted on four public image classification datasets: MNIST, SVHN, CIFAR10, and CIFAR100, with four pretrained network architectures: Lenet, VGG19, Resnet50, and Resnet20. Table 2 details the datasets' characteristics and the corresponding NN models on which we evaluated the compression methods.

A.6 Experiments on the Tensor-Train memory overhead Although Tensor-Train can obtain impressive compression rates, the method may require large amounts of memory. This memory requirement does not allow experimenting with architectures that have many convolution layers with many filters, such as ResNet. Table 3 highlights the increase in memory when experimenting on the investigated architectures for a few hyperparameter settings. The row Others stands for the memory requirement of all the other methods.

A.7 Experiments on the proposed method Since the error of the matrix approximation can be high despite the relatively good accuracy, we investigate other factorization methods than Palm4MSA. Specifically, two other methods are evaluated: «PSM re-init.» uses the same sparsity support as Palm4MSA but with re-initialized weights, and «PSM random» has a random sparsity support and random weights. Table 4 presents the results and shows the superiority of Palm4MSA.
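For illustration, here is a rough NumPy sketch of one plausible reading of the projection step at Line 9 (keep the K largest-magnitude entries of each row and of each column); the reference implementation may differ in details such as tie-breaking:

import numpy as np

def project_sparsity(D, K):
    # Keep, for each row and each column of D, its K largest-magnitude entries
    # (union of the two supports), then zero out everything else. As noted
    # above, this only guarantees *at least* K non-zeros per row and column.
    # Assumes K does not exceed either matrix dimension.
    mag = np.abs(D)
    keep = np.zeros(D.shape, dtype=bool)
    top_rows = np.argpartition(-mag, K - 1, axis=1)[:, :K]   # top-K per row
    keep[np.arange(D.shape[0])[:, None], top_rows] = True
    top_cols = np.argpartition(-mag, K - 1, axis=0)[:K, :]   # top-K per column
    keep[top_cols, np.arange(D.shape[1])[None, :]] = True
    return np.where(keep, D, 0.0)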
1. What is the focus of the paper regarding neural network compression?
2. What are the strengths and weaknesses of the proposed approach in terms of its effectiveness and efficiency?
3. How does the method compare to other compression methods for neural networks?
4. Are there any limitations or concerns regarding the implementation or application of the proposed approach?
5. Is the experimental evaluation of the method convincing, and what improvements could be made to strengthen the results?
Review
Review
Summary: The paper proposes compressing the layers of neural networks using a product of sparse matrices. This approach is in line with the initial methods for neural network compression: direct (task-independent) compression of the weights, followed by task-dependent fine-tuning. In this case, the direct compression is obtained using the Palm4MSA method of Magoarou and Gribonval (2016), and the models are then fine-tuned in an end-to-end fashion using TensorFlow.

Decision: Given the contributions of the paper and the lackluster experimental evaluation, I am recommending rejection.

Detailed comments: The factorization as a product of sparse matrices is an interesting approach, and in its general form such a scheme has not been studied in the context of neural network compression. However, the evaluation of the proposed approach is not convincing:

The effect of chaining sparse matrices on actual inference is not discussed. As the authors are well aware, support for sparse matrix-vector products is limited in major frameworks and hardware. Therefore, chaining multiple sparse products might significantly delay the actual inference, even though the model has fewer parameters and theoretically fewer FLOPs.

Comparing different compression mechanisms for neural networks is a very complex task: since most methods are applied per layer, the parameters of the compression must be tuned per layer too; e.g., the ranks/sparsity of every layer cannot have the same value, as some layers require more/fewer parameters. However, in the comparison, the authors use uniform settings across layers for their method (e.g., M=4 cores) and for the other methods. Conclusions drawn under such a comparison strategy have limited value: they say that the chosen hyperparameters of compression A beat the chosen (by hand) hyperparameters of compression B in terms of final compression ratio/tradeoff, etc. This leaves the possibility that by tuning the parameters of B we might outperform A. To fully understand the usefulness of a compression scheme, we need to ask a different question: given schemes A and B (with the best selection of hyperparameters for both), compression with which scheme gives the smallest model for a given accuracy? Or even better, which compression results in a better tradeoff when optimizing the shape/form of the schemes to suit the compression target?

Related to the previous point, the authors miss a large body of methods that, instead of expecting the weights to be in a certain form (e.g., sparse or low-rank), actually force the network to attain such a form using penalties and constraints. For example, in terms of low-rank compression, here are some of the relevant works:
Accelerating Very Deep Convolutional Networks for Classification and Detection (IEEE TPAMI 2016)
Coordinating Filters for Faster Deep Neural Networks (ICCV 2017)
Compression-Aware Training of Deep Networks (NIPS 2017)
Constrained Optimization Based Low-rank Approximation of Deep Neural Networks (ECCV 2018)
Automated Multi-Stage Compression of Neural Networks (ICCV Workshops 2019)
Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer (CVPR 2020)
Factorized Higher-Order CNNs with an Application to Spatio-Temporal Emotion Estimation (CVPR 2020)
TRP: Trained Rank Pruning for Efficient Deep Neural Networks (IJCAI 2020)
There is an equally large body of literature on network sparsification. Please add comparisons to the best-in-class results from the literature in order to fully evaluate the proposed scheme.
Minor concerns: I strongly believe citing Li Deng for the MNIST dataset is inadequate. Please correct.

Post-rebuttal comments: I appreciate the authors' efforts on the rebuttal; however, the feedback did not adequately address my questions. I am not changing the score.
ICLR
Title Sparse matrix products for neural network compression

Abstract Over-parameterization of neural networks is a well-known issue that comes along with their great performance. Among the many approaches proposed to tackle this problem, low-rank tensor decompositions are largely investigated to compress deep neural networks. Such techniques rely on a low-rank assumption on the layer weight tensors that does not always hold in practice. Following this observation, this paper studies sparsity-inducing techniques to build new sparse matrix product layers for high-rate neural network compression. Specifically, we explore recent advances in sparse optimization to replace each layer's weight matrix, either convolutional or fully connected, by a product of sparse matrices. Our experiments validate that our approach provides a better compression-accuracy trade-off than most popular low-rank-based compression techniques.

1 Introduction The success of neural networks in the processing of structured data is in part due to their over-parametrization, which plays a key role in their ability to learn rich features from the data (Neyshabur et al., 2018). Unfortunately, this also makes most state-of-the-art models so huge that they are expensive to store and impossible to operate on devices with limited resources (memory, computing capacity) or that cannot integrate GPUs (Cheng et al., 2017). This problem has led to a popular line of research on "neural network compression", which aims at building models with few parameters while preserving their accuracy.

State-of-the-art techniques for neural network compression. Popular matrix or tensor decomposition methods, including Singular Value Decomposition (SVD), CANDECOMP/PARAFAC (CP) and Tucker, have been used to address the problem of model compression through a low-rank approximation of the neural network's weights after learning. Sainath et al. (2013) describe a method based on SVD to compress the weight matrices of fully connected layers. Denton et al. (2014); Lebedev et al. (2015); Kim et al. (2016) generalize this idea to convolutional layers and reduce the memory footprint of convolution kernels by using higher-order low-rank decompositions such as the CP or Tucker decompositions. Besides, the Tensor-Train (TT) decomposition has been explored to compress both dense and convolutional layers after a pre-training step (Novikov et al., 2015). This approach may achieve extreme compression rates, but it also has practical downsides that we demonstrate now. In a TT format, all the elements of an M-order tensor are expressed by a product of M matrices whose dimensions are determined by the TT-ranks (R_0, R_1, ..., R_M). For each of the M dimensions of the initial tensor, the corresponding matrices can be stacked into an order-3 tensor called a "core" of the decomposition. Hence, the layer weight is decomposed as a set of M cores of small dimensions. Novikov et al. (2015) use this tensor representation to factorize fully connected layers. They first reshape the matrix of weights into an M-order tensor, then apply the TT decomposition. By choosing sufficiently small R_m values, this technique allows obtaining a high compression ratio on extremely wide ad hoc neural architectures. Garipov et al. (2016) have adapted this idea to convolutional layers. However, the current formulation of such a TT convolutional layer involves the multiplication of all input values by a matrix of dimension 1 × R_1, thus inflating the input in memory by a factor of R_1.
This makes the available implementation (Garipov, 2020) unusable for recent wide convolutional networks at inference time. Other compression methods include unstructured pruning techniques, which we review in more detail in Section 2.3, and structured pruning techniques, which reduce the inner hidden dimensions of the network by completely removing neurons (Anwar et al., 2017). According to the recent paper of Liu et al. (2018), however, these techniques are more akin to Neural Architecture Search than to actual network compression. Finally, quantization-based compression maps the columns of the weight matrices in the network to a subset of reference columns with a lower memory footprint (Guo, 2018).

Sparse matrix products for full-rank decompositions. We are specifically interested in the high-rate compression of neural networks via the efficient factorization of layer weight matrices. Most known approaches to layer decomposition make a low-rank assumption on the layer weight tensors, which does not always hold in practice. As we show in the experiments, this makes the Tucker and SVD based techniques unable to effectively reach high compression rates for standard architectures including both convolutional and fully connected layers, such as VGG19 or ResNet50, whose weight matrices usually exhibit full rank. In this paper, we propose instead to express the weight matrices of fully connected or convolutional layers as products of sparse factors, which contain very few parameters but can still represent high-rank matrices. Moreover, products of matrices with a total sparsity budget are strictly more expressive than single matrices with that sparsity (Dao et al., 2019), which motivates our interest in products of multiple matrices. Usually, a linear operator (a matrix) from R^D to R^D has time and space complexities of O(D²). But some well-known operators like the Hadamard or Fourier transforms can be expressed in the form of a product of log D sparse matrices, each having O(D) non-zero values (Dao et al., 2019; Magoarou & Gribonval, 2016). These linear operators, called fast operators, thus have time and space complexities lowered to O(D log D). This interesting feature of fast operators has inspired the design of new algorithms that learn sparse matrix product representations of existing fast transforms (Dao et al., 2019), or even compute sparse product approximations of any matrix in order to accelerate learning and inference (Magoarou & Gribonval, 2016; Giffon et al., 2019). Even though these methods were initially designed to recover the log D factors corresponding to a fast transform, they are more general than that and can actually be used to find a factorization with Q < log D sparse matrices.

Contributions. We introduce a general framework for neural network compression based on the factorization of layers into sparse matrix products. We explore the use of the recently proposed palm4MSA algorithm (Magoarou & Gribonval, 2016) on every layer of a pre-trained neural network to express each of them as a product of sparse matrices. The obtained sparse matrices are then refined by gradient descent to best fit the final prediction task. When there is only one sparse matrix in the decomposition, our approach recovers the simple procedure of hard-thresholding the weights of a matrix after pre-training.
We evaluate the effect of different hyper-parameters on our method and show that layers can be factorized into two or three sparse matrices to obtain high compression rates while preserving good performance, compared to several main state-of-the-art methods for neural network compression.

2 Learning sparse matrix products for network compression We describe how to compress NN weight matrices by sparse matrix factorization. We call our procedure PSM, for Product of Sparse Matrices. It is easy to see that a product of sparse matrices with a given sparsity budget can recover a full-rank matrix, or a matrix with more non-zero values than the initial sparsity budget. This observation motivates the use of a sparse matrix factorization in place of the usual low-rank decomposition and sparsity-inducing techniques for neural network compression. We first recall the linear transform operations in fully connected and convolutional layers. Then, inspired by recent work on learning linear operators with fast-transform structures, we propose to use a product of sparse matrices to replace the linear transforms in neural networks. We also introduce a procedure to learn such a factorization for every layer in a deep architecture. Finally, we review some known neural network compression techniques that appear as particular cases of our framework.

2.1 Weight matrices as products of sparse matrices Fully connected and convolutional layers are based on the computation of linear operations. In a fully connected layer, the output z ∈ R^{D'} is simply given by z = a(Wx), where a is some non-linear activation function, W ∈ R^{D'×D} is the weight matrix of the layer, and x ∈ R^D is the output of the preceding layer. The linear operation in a convolutional layer can be represented by a doubly-block Toeplitz matrix (Wang et al., 2020). Another way to perform the operation is to employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from the input (Garipov et al., 2016). In this work, we focus on this latter representation of the convolution operation. More formally, let r_S : R^{H×W×C} → R^{HW×CS²} be the reshape operation that creates the matrix of all vectorized patches of spatial size S × S from an input image with C channels. The matrix of K filters W ∈ R^{CS²×K} can then be applied to these patches (multiplied with r_S(X)) to produce the output of the convolutional layer in matrix shape. Finally, a second reshape operator t : R^{HW×K} → R^{H×W×K} is applied to the feature-map matrix to reconstruct the output tensor of the layer Z ∈ R^{H×W×K}. Altogether, the convolution operation can be written as Z = a(t(r_S(X)W)), where a is some non-linear activation function and X is the output 3-D tensor of the preceding layer. To keep the notation simple, we assume without loss of generality that the stride used by r_S is equal to 1 and that the input tensor is padded with ⌊S/2⌋ zeros vertically and horizontally. The whole process is depicted in Supplementary Material A.2.

Our general idea is to replace the weight matrix of each neural network layer with a product of Q sparse matrices, hence reducing the storage and computational complexities of the layer. Indeed, for an initial matrix of dimension D × D', if all sparse matrices store O(D) non-zero values, then the total complexity of the product becomes O(QD) instead of O(DD'). To define a fast-transform operator, one would use Q = log D, but in practice we show that we can choose a much smaller Q and achieve huge compression rates without lowering the performance much. Supplementary Material A.1 illustrates the effect of our compression scheme on a simple architecture including one convolution layer and a single dense layer. Given an input vector x ∈ R^D, expressing the weight matrix W ∈ R^{D'×D} of a fully connected layer as a product of sparse matrices gives an output z such that:

z = a(∏_{i=1}^{Q} S_i x), (1)

where ‖S_i‖_0 = O(D), so that the complexity in time and space of this layer is reduced to O(QD) instead of O(DD'). Similarly, in the convolution layers, the output Z ∈ R^{H×W×K} is obtained from an input tensor X ∈ R^{H×W×C}:

Z = a(t(r_S(X) ∏_{i=1}^{Q} S_i)), (2)

where ‖S_i‖_0 = O(max(S²C, K)), so that the time complexity of the layer is reduced from O(HWCS²K) to O(HWQ · max(CS², K)) and the complexity in space is reduced from O(CS²K) to O(Q · max(CS², K)). Since there is no constraint on the rank of the factors, the sparse matrix product of each layer can reach full rank, unlike with low-rank decomposition methods. Moreover, the reconstruction of a sparse matrix product with a total of O(QD) non-zero values can produce a matrix with more than O(QD) non-zero values. This is consistent with the intuition that a product of sparse matrices can be more expressive than a single sparse matrix.
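To make Eq. (1) concrete, here is a minimal TensorFlow sketch of the forward pass of a PSM fully connected layer; the function name and the dense representation of the factors are illustrative assumptions of ours:

import tensorflow as tf

def psm_dense(x, factors, activation=tf.nn.relu):
    # Forward pass z = a(S_1 S_2 ... S_Q x) of Eq. (1), for a batch x of
    # shape (batch, D). factors = [S_1, ..., S_Q], with S_Q applied first.
    h = x
    for s in reversed(factors):
        h = tf.matmul(h, s, transpose_b=True)  # h S^T == (S h^T)^T for row-major batches
    return activation(h)

With O(D) non-zeros per factor, the whole chain costs O(QD) per example instead of O(DD'), as stated above.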
2.2 Full Neural Network Compression The full compression pipeline we propose includes, first, the training of a standard NN; second, the compression of each layer independently as a product of sparse matrices; and finally, a fine-tuning of the compressed NN whose layers are all expressed as PSM layers. The second step requires approximating each weight matrix W (of a dense or convolutional layer) as a product of sparse factors, which is cast as the following optimization problem:

min_{{S_i}_{i=1}^{Q}} ‖W − ∏_{i=1}^{Q} S_i‖²_F + ∑_{i=1}^{Q} δ_{E_i}(S_i), (3)

where for each i ∈ ⟦Q⟧, δ_{E_i}(S_i) = 0 if S_i ∈ E_i and δ_{E_i}(S_i) = +∞ otherwise. E_i is the set of solutions that respect a sparsity constraint (e.g. a number of non-zero values). Although this problem is non-convex and non-differentiable, and the computation of a global optimum cannot be ascertained, the palm4MSA algorithm proposed by Magoarou & Gribonval (2016) is able to learn such a factorization by finding a local minimum with convergence guarantees. For more details about palm4MSA, see Supplementary Material A.4. Once every layer's weight matrix is approximated by a product of sparse matrices, these PSM layers are assembled into a compressed NN, which is refined to optimize the initial task objective while the sparsity supports of all factors are kept fixed.

2.3 Related work Some techniques based on inducing sparsity in neural connections, e.g. zeroing single weights in layer tensors, can be seen as particular cases of our method. The most straightforward approach is to simply remove the weights with the lowest magnitudes until a given sparsity ratio is reached. This can be done in a very simple fashion by removing weights from a pre-trained network and then fine-tuning the remaining connections. This method can be seen as the particular case of ours where there is only one factor to approximate the weight matrices (i.e., Q = 1). As we show in the experiments, this method does not allow high compression rates without degrading the accuracy.
Zhu & Gupta (2017) proposed instead to intertwine the removal of the connections with the fine-tuning of the remaining weights, achieving better classification performance. Other approaches for inducing sparsity in the network have been proposed (Molchanov et al., 2017; Louizos et al., 2018) but they do not seem to offer performance improvements in general settings (Gale et al., 2019). The idea of replacing layers by a sparse factorization has been previously explored, but restricted to particular structures. In Deep Fried Convnets, Yang et al. (2015) propose to replace the dense layers of convolutional neural networks by the Fastfood approximation (Le et al., 2013). This approximation is a product of diagonal matrices, a permutation matrix, and a Hadamard matrix which can itself be expressed as a product of log D sparse matrices (Dao et al., 2019). The Fastfood approximation thus provides a product of sparse factors and, from this perspective, the Fastfood layer proposed by Yang et al. (2015) is a particular, constrained case of our more general framework. Moreover, the Deep Fried Convnets architecture is based on the Hadamard matrix, which imposes a strong structural constraint on the factorization that might not be suitable for all layers of a deep architecture. The term sparse decomposition used in Liu et al. (2015) for network compression refers to separate products between dense and sparse matrices representing the weights of the convolution kernels in a network. Finally, Wu et al. (2019) have recently proposed a framework very similar to ours, along with a regularization strategy to learn the sparsity of the sparse factors, but their method does not allow more than two sparse factors, and the compression of the convolutional layers is not considered, although the best-performing architectures tend to store most of their parameters in these layers.

3 Experiments Section 3.1 details the experimental settings and parameters to ensure reproducibility. We provide an in-depth analysis of our method in Section 3.2. Finally, we report in Section 3.3 a comparative study of our method against state-of-the-art methods.

3.1 Experimental setting Our analysis is focused on image classification tasks: we investigate the compression of standard architectures (pretrained models) with our approach and with a few state-of-the-art methods. We evaluate all methods by measuring both the compression ratio and the accuracy of the compressed models. We first present implementation details and datasets, then we detail the baselines and the hyperparameters we used. Implementation details. The code was written in Python, including the palm4MSA algorithm (it will be made available on github). NNs were implemented with Keras (Chollet, 2015) and Tensorflow (Abadi et al., 2015). Due to the lack of an efficient implementation of sparse matrices in Tensorflow, we had to hijack the implementations of the dense matrix product and convolution offered by Keras in order to make the training of the networks as efficient as possible (see Supplementary Material A.3 for details). Datasets and Neural Architectures. Our experiments are conducted on four standard image classification datasets of varying difficulty: MNIST (Deng, 2012), SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky, 2009). The pretrained NNs to be compressed are those classically used with these datasets, i.e. Lenet (LeCun et al., 1998), VGG19 (Simonyan & Zisserman, 2015), Resnet50 and Resnet20 (He et al., 2016).
Details on the datasets and neural architectures may be found in Supplementary Material A.5. Competing baselines. We present below the baselines and the variants of our approach that we considered. In all cases the methods were applied to the same pre-trained models, and all compressed models were fine-tuned after compression: • Low-rank factorization methods, including the Tensor-Train decomposition (Novikov et al., 2015; Garipov et al., 2016) (named TT hereafter) and a hybrid of the Tucker decomposition (Kim et al., 2016) and SVD (Sainath et al., 2013) (named Tucker-SVD), where the Tucker and SVD decompositions are used respectively for the compression of convolutional and dense layers. • Two sparsity-inducing pruning techniques. The first one is a simple magnitude-based projection of the weight matrices on a sparse support; this can be seen as a particular case of our model where only one sparse factor is required. We name this strategy "Hard pruning" (HP in the following; a minimal sketch is given at the end of this section). The second method, named Iterative pruning (IP), is the iterative strategy proposed by Zhu & Gupta (2017), which refines the magnitude-based projection with fine-tuning steps. • Finally, we evaluate the interest of using the palm4MSA algorithm to discover a sparse matrix decomposition of the layers, compared to some random decomposition. More precisely, we evaluate (I) the performance of a model whose layers are decomposed by palm4MSA but whose weights are re-initialized while preserving the sparsity support; and (II) the performance of a model whose weights and sparsity support are randomly sampled at initialization. Note that we initially considered Deep Fried Convnets as an additional baseline, but we did not include it in our experimental study since this method is originally dedicated to compressing fully connected layers, and our attempts to make it compress convolutional layers as well failed, preventing the method from being applied to many state-of-the-art architectures that contain mostly convolutional layers. Hyper-parameters. The stopping criterion used for the palm4MSA algorithm is 300 iterations or a relative change between two iterations below 10^-6. The projection method is the one used in Magoarou & Gribonval (2016). With K the desired sparsity level, this method ensures that each sparse factor contains at least K non-zero values per row and per column, and at most 2K on average. For experiments with a random sparsity support, the initialization of the weights must be adapted to the reduced number of connections in the layer. To do this, we adapt the Xavier (Glorot & Bengio, 2010) and He (He et al., 2015) initializations to sparse matrices (see Supplementary Material A.3). We chose M = 4 cores for the Tensor-Train decomposition of any tensor, and the maximum possible rank value of the decompositions is specified by the value R in the experiments. In the hybrid of the Tucker and SVD decompositions, the rank of the Tucker decomposition is automatically detected by the Variational Bayes Matrix Factorization method, as explained by Kim et al. (2016); the rank of the SVD in the dense layers is chosen such that only a certain percentage (specified in the experiments) of the singular values is kept. Fine-tuning of the Lenet network was done with the RMSProp optimizer and 100 learning epochs. The VGG19 and Resnet networks are fine-tuned with the Adam optimizer (Kingma & Ba, 2014) and respectively 300 and 200 epochs. For each compression method and configuration, the best learning rate is chosen using the classification error on a validation sample after 10 iterations, with learning rate values in {10^-3, 10^-4, 10^-5}. Standard data augmentation is applied: translation, rotation and flipping.
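For reference, here is a minimal NumPy sketch of the Hard pruning (HP) baseline described in the list above, i.e. the Q = 1 special case; the function name and the keep-ratio parametrization are ours:

import numpy as np

def hard_prune(w, keep_ratio):
    # HP baseline: keep only the keep_ratio fraction of largest-magnitude
    # entries of a pre-trained weight matrix and zero out the rest (ties at
    # the threshold may keep a few extra entries). Fine-tuning of the
    # surviving weights then follows.
    k = max(1, int(round(keep_ratio * w.size)))
    thresh = np.partition(np.abs(w), -k, axis=None)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)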
1. What is the focus of the paper on neural network weight matrix compression?
2. What is the contribution of the paper regarding the experimental study of compression in neural networks?
3. What are the strengths and weaknesses of the paper in terms of its findings and comparisons with other works?
4. How does the reviewer assess the significance of the results, considering the existence of architectures with fewer parameters?
5. What is the reviewer's view on the distinction between neural architecture search and neural network compression?
6. How could the paper be improved by considering the compression objective during training?
7. What is the purpose of comparing PSM random and PSM re-init methods?
Review
Review
The paper considers compression of neural network weight matrices by decomposition into a product of sparse matrices. Its main contribution is an experimental study of the effectiveness of this type of compression in neural networks for image classification (MNIST and CIFAR) with standard architectures such as ResNet and VGG19. The compression algorithm, Palm4MSA, is taken from previous work. I am not familiar with other experimental studies of Palm4MSA, but the paper has a good overview of the other compression methods that I am familiar with, so I trust that the overview is complete.

The main reason that I am not too excited about the paper is that the results are kind of expected. We know that architectures with a much smaller number of parameters exist (a comparison with MobileNetV2, which has about 40x fewer parameters, would be suitable). As a side remark, I don't think it makes a lot of sense to distinguish between neural architecture search and neural network compression. You can think of neural architecture search as yet another compression method that happens during training (not post-training).

One way of making the paper more interesting would be to consider the compression objective already during training, rather than merely fine-tuning post-compression. The "PSM random" and "PSM re-init" methods seem like rather arbitrary choices for comparison.
ICLR
Title A 3D Convolutional Neural Network for Predicting Wildfire Profiles

Abstract Wildfire has become an unavoidable natural disaster that continues to threaten fire-prone communities, and its frequency is expected to increase due to climate change. Therefore, predicting a wildfire's spread profile is an essential tool for firefighters when planning an evacuation strategy. Current traditional, physics-based and empirically based fire spread models require extensive inputs, which are often difficult to obtain. Thus, we propose a 3D Convolutional Neural Network (CNN), named WildfireNet, that can predict the profile of a wildfire on the next day given historical wildfire profiles and accessible remote-sensing data. WildfireNet utilizes 3-dimensional inputs to extract features from both the temporal and spatial dimensions and better understand the relationship between historical fires and upcoming fires. The motivation behind WildfireNet is to locate fires in a precise manner and accurately predict fire profiles. Pixels that were labeled as fire on the next day but not on the previous days were extracted to calculate Intersection over Union (IoU) and recall. WildfireNet outperformed a 2D CNN and a logistic regression model in both IoU and recall.

1 INTRODUCTION In recent years, the magnitude and intensity of wildfires have become challenging for fire-prone communities to withstand. Even worse, global warming and the consequent fuel drying will increase the frequency of devastating wildfires (Halofsky et al., 2020). If this trend continues, upcoming wildfires will be more destructive than any fires in the past. The consequences of massive wildfires are brutal. For instance, in 2003, wildfires that occurred in San Diego County, California, burned over 376,000 acres and 3,241 households, with total economic costs estimated at $2.45 billion (Diaz, 2012). Traditional, physics-based and empirically based wildfire spread models have been continuously studied to mitigate losses resulting from wildfire. However, these models often require extensive inputs, which are difficult or even impossible to obtain. Convolutional Neural Networks (CNNs) have been widely used with remote sensing data for various applications and have shown promising results. Therefore, we implemented one of the most commonly used CNN architectures, U-Net, to create WildfireNet. U-Net was developed for biomedical image segmentation; it extracts global and local features to learn patterns and output a segmented image. The method has a relatively modest need for data collection and is capable of predicting a wildfire profile within a second. Thus, we present a novel deep learning method to determine dynamic wildfire profiles from basic input data: wildfire perimeters, land cover, topography, and weather data.

2 RELATED WORKS Modeling wildfire has been an active research topic. Work has been done toward predicting the occurrence of wildfires with spatial susceptibility (Ghorbanzadeh et al., 2019; Sayad et al., 2019). FARSITE is a two-dimensional model that utilizes a vector propagation approach to depict fire perimeter growth. The model is sophisticated enough to adjust to different fire types and behaviors, such as surface fire, crown fire, spotting, and point-source fire acceleration (Finney, 1998). The model shows promising results in basic conditions, as its predictions match the actual fire boundary very well. However, it is computationally expensive and requires an extensive number of inputs.
Also, the model accuracy varies widely across wildfires in different regions. In recent years, artificial intelligence has been used to tackle the problem. Subramanian and Crowley presented a novel approach that utilizes Reinforcement Learning to learn forest wildfire spread dynamics directly from readily available satellite images (Subramanian & Crowley, 2018). FireCast combines artificial intelligence and data collected from GIS to predict which areas are at high risk in the future. It utilizes a 2D CNN and is trained to predict the areas that are expected to burn during the next 24 hours when given an initial fire perimeter (Radke et al., 2019). These studies show that utilizing AI to handle the complex nature of wildfire is a promising way to predict wildfire spread while avoiding the complicated physical computations that many traditional methods require. Thus, we propose WildfireNet, which can predict the profile of a wildfire on the next day. To our knowledge, WildfireNet is the first 3D CNN to be used as a fire spreading model.

3 WILDFIRENET ALGORITHM

3.1 TRAINING METHOD There are two categories, fire or no fire, to classify for each pixel. Binary cross-entropy (BCE) loss is therefore an appropriate loss function to train the model:

BCE = −(1/N) ∑_{i=1}^{N} [ y_i · log(ŷ_i) + (1 − y_i) · log(1 − ŷ_i) ] (1)

As shown in the equation above, the BCE loss is informative for our model because it minimizes the distance between the predicted and the ground-truth probability distributions, which is the ultimate goal of our problem. For instance, if the model predicts a pixel to contain fire where it doesn't, BCE will output a high loss, penalizing the model with a large cost. Through this trial-and-error approach, the model finds a local minimum of the loss function, as its goal is to achieve a smaller loss at every training step. Thus, with BCE, the model is optimized to maximize the likelihood of outputting the correct shape of the wildfire profile. Furthermore, Adam optimization was used and a learning rate of 1e-4 was applied to train the model.
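For concreteness, a minimal NumPy sketch of the pixel-wise BCE of Eq. (1); the clipping constant is an implementation detail we add for numerical stability, not something stated in the paper:

import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    # Mean binary cross-entropy of Eq. (1) over the N pixels of a fire map;
    # y_pred holds the sigmoid outputs, y_true the binary ground truth.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))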
3.2 MODEL AND IMPLEMENTATION U-Net was first introduced solely for the purpose of image segmentation of biomedical images. The output of U-Net differs from the typical use of CNNs for classification tasks, which output a single class label for an input image. Instead of assigning a single label per input image, the model localizes a label for each pixel of the input image (Ronneberger et al., 2015). The model has two major paths. It begins with contraction, which consists of convolutions and max-pooling to extract features. Next, the model undergoes expansion, where the image is resized back to the original input size to enable precise localization. The sequence of contraction and expansion yields a u-shaped architecture. The unique aspect of this model is that the outputs from the contraction path are concatenated to the expansion outputs. This approach helps the model maintain precise localization of high-resolution features at the output (Ronneberger et al., 2015). We decided to implement the U-Net model because 1) it can take a raw image and output a segmented image, 2) it works well with small datasets, and 3) it is capable of predicting a wildfire profile within a second. The U-Net model is adjusted to make it more applicable to our study. Similar to U-Net, WildfireNet is composed of two major paths: contraction and expansion. The full architecture of the model is shown in Figure 1. In contrast to U-Net, WildfireNet includes fully connected layers at the bottom of the architecture. After the last downsampling, the 3D image is flattened into a 1D array, and the weather data is concatenated. The model is further trained with dense layers to learn the effect of the weather variables on its prediction. Furthermore, past wildfire profiles can play a dominant role in the future shape of the wildfire. Therefore, a 3D CNN was used instead of a 2D one. In a 3D CNN, the model extracts features from both the temporal and spatial dimensions, whereas, in a 2D CNN, the model only focuses on spatial features (Tran et al., 2015). In this study, the 3 previous days of wildfire profiles are stacked to convert the input images from 2D to 3D. This gives the model a better sense of how historical fires are correlated with the fire on the next day.

3.3 INPUT DATA Topography, weather, and available fuels are major contributors to fire growth (Estes et al., 2017). Thus, along with three days of historical fire profiles, we add land cover, topography, and weather data to the model.

3.3.1 DYNAMIC WILDFIRE PERIMETERS A total of 302 daily fire perimeters were retrieved. The size of the dataset is limited compared to other deep learning studies; however, WildfireNet, derived from U-Net, is proven to perform well with small datasets (Ronneberger et al., 2015). Wildfire perimeters were obtained from the National Interagency Fire Center FTP server (https://ftp.nifc.gov/). Perimeters are in the .kmz format, which contains an array of boundary coordinates. In this paper, only the fires that occurred in California from 2013 to 2019 were observed. For each wildfire perimeter, as shown in Figure 2, the array of coordinates was used to fill the inside of the perimeter to create a binary map reflecting the overall shape of the wildfire. In other words, if a given pixel is within the perimeter, the pixel is labeled 1 to indicate fire; otherwise, if the pixel is outside the boundary, the pixel is assigned 0 to reflect no fire. An important assumption was made when creating the binary maps: there were spots within the boundary that were not on fire, but these spots were considered risk zones and were filled in as well. The binary map consists of 256x256 pixels, covering 0.5 degrees in both latitude and longitude. It is important to maintain the same spatial scale to distinguish fires with respect to their true sizes. Overall, a preprocessed binary map is used as an input to represent the wildfire profile.
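To illustrate the binary-map construction of Section 3.3.1, here is a rough Python sketch that rasterizes one perimeter into a 256x256 grid; the coordinate conventions and function name are illustrative assumptions of ours, not the authors' code:

import numpy as np
from skimage.draw import polygon

def perimeter_to_binary_map(lats, lons, lat0, lon0, size=256, extent=0.5):
    # Map perimeter vertices (degrees) into pixel coordinates of a grid
    # covering `extent` degrees of latitude/longitude, then fill the
    # interior: pixels inside the perimeter are treated as at risk (1).
    # Assumes the perimeter lies inside the tile anchored at (lat0, lon0).
    rows = (np.asarray(lats) - lat0) / extent * (size - 1)
    cols = (np.asarray(lons) - lon0) / extent * (size - 1)
    fire_map = np.zeros((size, size), dtype=np.uint8)
    rr, cc = polygon(rows, cols, shape=fire_map.shape)  # interior fill
    fire_map[rr, cc] = 1
    return fire_map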
3.3 INPUT DATA

Topography, weather, and available fuels are major contributors to fire growth (Estes et al., 2017). Thus, along with three days of historical fire profiles, we added land cover, topography, and weather data to the model.

3.3.1 DYNAMIC WILDFIRE PERIMETERS

A total of 302 daily fire perimeters were retrieved. The dataset is small compared to those of other deep learning studies; however, WildfireNet is derived from U-Net, which has been shown to perform well with small datasets (Ronneberger et al., 2015). Wildfire perimeters were obtained from the National Interagency Fire Center FTP server 1. Perimeters are provided in .kmz format, which contains an array of boundary coordinates. In this paper, only fires that occurred in California from 2013 to 2019 were considered. For each wildfire perimeter, as shown in Figure 2, the array of coordinates was used to fill the interior of the perimeter and create a binary map reflecting the overall shape of the wildfire: if a pixel lies within the perimeter, it is labeled 1 to indicate fire; otherwise, it is assigned 0 to indicate no fire. An important assumption was made when creating the binary maps: spots within the boundary that were not on fire were considered risk zones and were filled in as well. Each binary map consists of 256x256 pixels, covering 0.5 degrees in both latitude and longitude. Maintaining the same spatial scale is important for distinguishing fires by their true sizes. Overall, a preprocessed binary map is used as the input representation of the wildfire profile.

3.3.2 LAND COVER AND VEGETATION

The Normalized Difference Vegetation Index (NDVI) is a remote sensing product that combines measurements of the wavelengths and intensity of visible and near-infrared light to estimate the concentration of green leaf vegetation (Zaitunah et al., 2018). NDVI is useful to fire spreading models because it indicates the water content of vegetation, and fire is more likely to spread on dry vegetation than on wet crops. Advanced Very High Resolution Radiometer (AVHRR) NDVI was collected from the National Oceanic and Atmospheric Administration 2 (NOAA). The data are projected on a 0.05-degree x 0.05-degree global grid. The mean NDVI was used as an input.

3.3.3 TOPOGRAPHY

Topography has a direct effect on fire behavior (Rothermel, 1972); for example, the rate of fire spread increases rapidly on steeper slopes. Studies have shown a strong correlation between topography and fire severity (Estes et al., 2017). A 1/3 arc-second digital elevation model (DEM) was retrieved from the USGS national map 3 to capture the topography at fire locations. The DEM contains an elevation for each pixel, which allows the model to learn fire behavior that responds directly to topography, such as slope change. For each wildfire, the four corner coordinates of the binary map were used to crop the DEM to the matching spatial location.

3.3.4 WEATHER

Weather is an important factor that can significantly contribute to the spread of wildfires. Simplified daily weather data, including mean wind speed, mean temperature, and mean relative humidity, were used to represent atmospheric conditions. Weather data were retrieved from the Climatology Lab 4. The resolution of the data is 1/24 degree x 1/24 degree.

3.4 OUTPUT

After three days of historical fire profiles and the remote-sensing data are processed by WildfireNet, it outputs a probabilistic distribution of fire for the next day. A sigmoid activation at the last layer ensures that the value of each pixel lies in [0, 1], representing the probability of fire occurrence. The output is further processed into a predicted binary map, which is compared with the ground truth in the loss function to train the model. To create the binary map, an optimal threshold is picked and the state of each pixel is classified as fire (1) or not (0) by the following rule:

\text{State of Pixel} = \begin{cases} 1, & \text{if } p \geq \text{threshold} \\ 0, & \text{if } p < \text{threshold} \end{cases}    (2)

Various thresholds were evaluated, and the selected threshold yielded the highest metric scores.

1 Wildfire perimeter source: https://ftp.nifc.gov/
2 NDVI source: https://www.ncdc.noaa.gov/cdr/terrestrial/normalized-difference-vegetation-index
3 Elevation data source: https://www.usgs.gov/core-science-systems/national-geospatial-program/national-map
4 Weather data source: http://www.climatologylab.org/gridmet.html

4 BASELINE MODEL

We use U-Net and a logistic regression model as baselines to compare against and evaluate WildfireNet. Like WildfireNet, U-Net extracts features through downsampling and upsampling pathways; however, it does not consider historical fires, since its input is limited to two dimensions. U-Net therefore serves as a baseline for assessing the temporal component of WildfireNet's predictions of wildfire shape. The logistic regression model is a commonly used baseline due to its simplicity, but it lacks convolutions for feature extraction; it thus provides a suitable starting point for understanding the task and validating the use of CNNs for predicting wildfire spread profiles. The input to the logistic regression model includes the historical binary maps, elevation data, wind speed, wind direction, and the states of the surrounding pixels. The state of the surrounding pixels is essential, since a pixel has a higher probability of catching fire if one of its neighbors is on fire; a total of 8 neighboring pixels contribute to the state of each pixel, as shown in Figure 2. The baseline models were trained and tested on the same data as WildfireNet.
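A minimal sketch of how such a per-pixel logistic baseline could be assembled, assuming NumPy and scikit-learn; the feature layout (a 3x3 neighborhood plus elevation and wind) and all names are hypothetical, and a real setup would also stack features from all three historical days.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pixel_features(fire_map, elevation, wind_speed, wind_dir):
    """Build per-pixel features: the pixel's 3x3 neighborhood (its own state
    plus 8 neighbors), its elevation, and scalar wind variables."""
    H, W = fire_map.shape
    padded = np.pad(fire_map, 1)                      # zero-pad so edge pixels have 8 neighbors
    feats = []
    for i in range(H):
        for j in range(W):
            window = padded[i:i + 3, j:j + 3].ravel()  # 9 binary values
            feats.append(np.concatenate([window, [elevation[i, j], wind_speed, wind_dir]]))
    return np.array(feats)

# Dummy data standing in for one day of a fire and the following day.
rng = np.random.default_rng(0)
today = (rng.random((64, 64)) > 0.95).astype(float)
tomorrow = np.clip(today + (rng.random((64, 64)) > 0.99), 0, 1)
elev = rng.random((64, 64))

X = pixel_features(today, elev, wind_speed=4.2, wind_dir=270.0)
y = tomorrow.ravel()
clf = LogisticRegression(max_iter=1000).fit(X, y)     # predicts each pixel's next-day state
```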
5 RESULTS

As shown in Figure 4(a,c), the output of WildfireNet is a probabilistic distribution of fire occurring at each pixel; the score assigned to each pixel reflects the model's confidence that the pixel contains fire. The model also projects how the fire will spread. For example, in the Mad River Complex fire there are two major fire bodies. WildfireNet predicts that the fire will expand at the bottom of the left body and that no change will occur on the right body. Indeed, comparing the current day with the next day shows that the actual fire did not expand anywhere except at the bottom of the left body. Similarly, in the Rey fire, the model predicts that the fire will enlarge on the right side of the boundary but not much on the left side; the comparison between the current day and the next day shows that the actual fire did expand to its right. These examples illustrate the model's capacity to predict the growth pattern of wildfires.

5.1 EVALUATION

Intersection over Union (IoU), a metric commonly used to evaluate object segmentation, was calculated to assess the model's performance in predicting the wildfire profile:

\text{IoU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}    (3)

The metric is straightforward once a predicted image and a ground truth image are defined. In this study, both images are binary maps in which each pixel is either 0 or 1. Then, the area of overlap is simply the number of pixels that have the same value in both images, and the union is the area encompassed by both images. A predicted image that matches the ground truth perfectly therefore scores 1. As shown in Table 1, WildfireNet achieved an IoU score of 0.997 on the test set, while U-Net and the logistic regression model scored 0.995 and 0.913, respectively. This indicates that WildfireNet labels the presence or absence of fire at each pixel precisely and performs better than the baseline models. However, only 5 percent of the pixels in the test-set binary maps are labeled as fire, a sign of class imbalance in the dataset. IoU is therefore not the best metric for evaluating a model's performance in predicting new fires, because the model can obtain a high IoU score by simply predicting every pixel to be 0. Consequently, the models are also evaluated only on the pixels that were labeled fire on the next day but not on the current day; we call these changed pixels. Considering only the changed pixels truly measures the model's ability to predict changes in the wildfire profile. We thus define expanded IoU as the intersection between the predicted and true changed pixels over their union:

\text{Expanded IoU} = \frac{\text{Area of Overlap of Changed Pixels}}{\text{Area of Union of Changed Pixels}}    (4)

Expanded recall is formulated as the true positives among the changed pixels over the total positives of the changed pixels:

\text{Expanded Recall} = \frac{\text{True Positives of Changed Pixels}}{\text{Total Positives of Changed Pixels}}    (5)

On the test set, WildfireNet outperformed the baselines in both expanded IoU and expanded recall. All models scored lowest on expanded IoU, because this metric further penalizes predicted fire that is not present in the actual fire. WildfireNet achieved 0.541 on expanded recall, while U-Net and the logistic regression model scored 0.458 and 0.154, respectively. This implies that WildfireNet correctly predicts how the fire grows more than half of the time, while the logistic regression model is correct only about 15% of the time. These results show that WildfireNet is superior to the baseline models.
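The thresholding rule of Eq. (2) and the metrics of Eqs. (3)-(5) as defined above can be written down directly; here is a small NumPy sketch with function names of our choosing, including the paper's own (non-standard) IoU definition.

```python
import numpy as np

def binarize(prob, threshold):
    """Eq. (2): a pixel is fire (1) if p >= threshold, else no fire (0)."""
    return (prob >= threshold).astype(int)

def paper_iou(pred, truth):
    """Eq. (3) as defined in this paper: pixels with the same value in both
    binary maps, divided by the total area covered by the maps."""
    return (pred == truth).sum() / pred.size

def expanded_metrics(pred, truth, prev):
    """Eqs. (4)-(5): evaluate only on 'changed' pixels, i.e. pixels that are
    fire on the next day (or predicted as fire) but not on the current day."""
    true_changed = (truth == 1) & (prev == 0)
    pred_changed = (pred == 1) & (prev == 0)
    inter = (true_changed & pred_changed).sum()
    union = (true_changed | pred_changed).sum()
    expanded_iou = inter / union if union else 1.0
    expanded_recall = inter / true_changed.sum() if true_changed.sum() else 1.0
    return expanded_iou, expanded_recall

# Tiny worked example: the fire grows by one pixel and the model catches it.
prev = np.zeros((4, 4), int); prev[1, 1] = 1
truth = prev.copy(); truth[1, 2] = 1
pred = binarize(np.full((4, 4), 0.2), 0.5); pred[1, 1] = pred[1, 2] = 1
print(paper_iou(pred, truth), expanded_metrics(pred, truth, prev))  # 1.0 (1.0, 1.0)
```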
Wildfires are further categorized into three classes based on the percentage of fire growth from the previous day: a fire is rapid if the fire body expands by more than 7% relative to the previous day, moderate if the growth is between 3% and 7%, and subtle if the growth is under 3%. Table 2 shows that all models score highest on subtle fires, followed by moderate and then rapid fires. It is understandable that the models perform poorly when a fire changes rapidly from one day to the next. Wildfire is affected by many dynamic factors, including but not limited to the inputs used to train the model. For instance, unstable weather conditions are often a major contributor to rapid fire growth, but daily weather data are not detailed enough to reflect the volatile nature of the weather: abrupt changes such as gusts often last less than 20 seconds, and daily weather data, with a time interval of 24 hours, are not fine-grained enough to inform the model about such sudden changes. In addition, human activity can contribute substantially to fire growth, and ignoring its effect can hinder the model's predictions. It is therefore reasonable for the models to fail to keep up with rapid changes. In Figure 5, WildfireNet outputs a reasonable prediction for a moderate fire: the prediction nearly matches the overall shape of the ground truth, and the model predicts the very bottom of the fire body to enlarge, which is what happened in the real fire. In Figure 6, however, WildfireNet cannot keep up with the large change of a rapid fire: the model predicts the fire to grow only at the bottom, whereas the actual fire grew in every direction.

6 LIMITATIONS

Compared to many other deep learning applications, WildfireNet is trained on a small dataset. The lack of variability in the training data could limit the model's accuracy when new weather conditions, land cover, or fire profiles are introduced. Wildfire is also highly influenced by interdependent climatic and human factors, and several such factors were not considered; in particular, neglecting human-induced fire spread could limit the performance of the model. Moreover, a key assumption was made when the fire perimeters were preprocessed into binary maps: spots within the boundary that were not on fire were labeled as fire. Because of this modification, the binary maps may not reflect the actual shape of the fire. In addition, although fire perimeters were recorded daily, it is uncertain whether they were always obtained at the same time of day; if the time interval varies between days, the recorded fire growth does not reflect changes over a 24-hour window. Finally, the remote sensing data had different resolutions (wind speed: 1/24 degree, elevation: 1/512 degree, NDVI: 1/200 degree) and contained missing values. Interpolation was used to complete the data, but this can introduce errors when a large amount of data is missing.
7 CONCLUSION AND FUTURE WORK

WildfireNet was built and evaluated to establish a foundation for using 3D CNNs to predict wildfire spread profiles. Unlike current traditional, physics-based, and empirical fire spreading models, WildfireNet does not require extensive inputs or complex computations: given historical fire profiles and accessible remote sensing data, it can output an upcoming fire profile within a second. Statistical analysis shows that WildfireNet is capable of learning patterns from historical fire spread together with land cover, topography, and weather. The model shows promising results, obtaining higher scores in IoU, expanded IoU, and expanded recall than the baseline models. Future work includes obtaining more fire incidents and remote sensing data. WildfireNet is currently trained solely on fires that occurred in California, and we believe the model can flexibly predict any type of fire once it has been trained on wildfires of various types and conditions.
1. What is the main contribution of the paper regarding wildfire forecasting using convolutional neural networks?
2. What are the strengths of the proposed method, particularly in its architecture and incorporation of multiple input types?
3. What are the weaknesses of the paper, especially regarding experimentation and reporting?
4. How does the reviewer assess the use of terms such as IoU and 3D convolution in the context of this paper?
5. What recommendations does the reviewer make for improving the paper, including additional experiments and clarification of certain aspects?
Review
Review

Summary of Contributions
The paper contributes to the application of spatial wildfire forecasting using convolutional neural networks. In particular, the paper utilizes a modified UNet architecture that includes fully-connected layers that are used to incorporate scalar weather data. The authors also demonstrate that incorporating multiple, previous timesteps of observed fire perimeters improves the model’s predictions.

Strengths
Wildfire forecasting is an important application for study due to its current and projected future effects on the environment and human interests. Furthermore, it appears to be an application that is promising for improvement with machine learning due to the growth in widely available remote sensing data. The use of a UNet-based architecture seems appropriate for the application because of its ability to capture global information and preserve local, high-frequency spatial patterns. The problem setup of forecasting the spread of wildfire perimeters is similar to existing fire spread models, making comparison more straightforward, and is of key interest to scientists who study fire spread. The incorporation of multiple, relevant input types (fire perimeters, topography, vegetation, and weather) is appropriate, as all are key to wildfire spread.

Weaknesses
The primary weakness of the paper is the lack of detailed experiments. For a paper primarily focused on the application of existing methodology to a valuable problem, it is important that clear and detailed experiments provide information and guidance about the chosen methodology. Furthermore, a broader comparison of different models would be helpful. More extensive experiments and reporting would also help subsequent authors make comparisons with this work. A secondary weakness of the paper is a lack of details about the models and experimental setup used for the reported results. The details currently reported are insufficient for proper reproduction of the results. Another secondary weakness is the non-standard use of terms, including the definition of IoU and the use of the term 3D convolution. I elaborate on the use of these terms in the context of this paper below.

Recommendation
Based primarily on the lack of extensive experimentation, my recommendation is to reject the paper. While I believe the problem is of keen interest, the paper lacks substantive experimentation and evaluation to make a significant contribution to the community.

Clarification
The authors use the term “3D convolution” throughout the paper, but from the description of the model it appears that their approach is a 2D convolution with multiple channels (representing time). Specifically, the line “In this study, 3 previous days of wildfire profiles are combined to convert input images from 2D to 3D” from page 2 sounds consistent with a 2D convolution (over space) with temporal channels, not a 3D convolution, which convolves over the spatial and temporal dimensions. If the model does convolve over space and time, this should be more clearly described and the choice to do so elaborated on. The authors describe IoU as “the area of overlap is simply the number of pixels that have the same value in both images.” The standard definition of IoU only defines the intersection term as pixels where both are labeled as the positive class, not as all pixels with the same value.
Does this language reflect an intentional departure from the traditional definition, a misstatement, or a mistake in the way the evaluation metric was computed? Additional information regarding the baseline models and experimental parameters would be helpful when analyzing the results.

Suggestions
The issue identified on page 6, that “the model can obtain a high IoU score by simply predicting every pixel to be 0,” appears to only be the case due to the alternative definition of IoU.
The captions on several figures could be expanded to provide more detail (especially figures 5 and 6).
I think a more extensive set of experiments would help identify a) the effect of the amount of data on model performance, b) the performance of other CNN architectures (e.g. a simple fully-convolutional architecture), c) the effect of various hyperparameters including model depth/size, and d) the effect of the inclusion/exclusion of specific types of input data (topography, vegetation, and weather).
More extensive reporting on the experiments presented in the paper would also help. For example, the authors report the performance on three different groupings of data (rapid, moderate, and subtle), but the size of each group is not reported as a fraction of the total test set.
Can the authors provide insight into the choice of fire perimeter dataset? There are a number of active fire datasets for both individual fire pixels/detections (VIIRS 375m, MODIS active fire product) and fire perimeters. Why was this set chosen, given the small size? Were others investigated?
Given the repeated reference to the benefits of a CNN-based model over more traditional, physics-based models, it would be helpful to see quantitative or qualitative comparisons.
How was the threshold selected that was used in the final evaluation? “Various thresholds were evaluated and selected threshold yielded the highest metric scores.” Was there a validation set (separate from the training and testing datasets) that was used for threshold selection?
Your figures of predicted perimeter versus observed perimeter could be improved by ensuring that subfigures a) and b) have the same spatial scale. It appears subfigures (a) and (c) in figure 4 and subfigure (a) in figure 5 are slightly smaller due to the scale bar. It may also be of interest to more specifically identify the part of the fire perimeter that the model forecasted the spread of (beyond the previous day’s perimeter) by shading it in a different color.
The authors may be interested in this (https://arxiv.org/abs/2010.07445) paper which takes a similar approach to fire spread forecasting. This is not a suggestion, but just a relevant paper that the authors should be aware of.
ICLR
1. What is the focus of the paper regarding wildfire range segmentation?
2. What are the strengths of the proposed 3D CNN model?
3. What are the weaknesses of the paper, particularly concerning its novelty and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Does the reviewer have any suggestions for improving the paper or its relevance to machine learning research?
Review
Review

This paper proposes a 3D CNN model to segment wildfire range from remote sensing data. The proposed model takes both the spatial dimensions and the temporal dimension as 3D inputs. The proposed idea is similar to the U-Net architecture. Evaluations on a real-world dataset show that the proposed model outperforms U-Net and logistic regression. The paper addresses an interesting application. However, the proposed method lacks novelty compared with the popular U-Net model. The proposed extension has limited novelty from the machine learning research perspective. The paper may fit better at an applied research venue (e.g., interdisciplinary deep learning applications).

Strengths:
The paper addresses a significant real-world application of wildfire mapping. The paper is well-written and easily readable. The results show that the proposed model outperforms several baseline methods.

Weaknesses:
The main concern is the novelty of the proposed model. The proposed model seems to be a minor tuning of the U-Net model with a slightly different input shape. For a research paper, it would be helpful to identify some unique challenges in wildfire mapping that motivate a novel design of the model and algorithms. The baselines in the evaluation comparison could be broader, including other state-of-the-art image segmentation algorithms, e.g., DeepLab.
ICLR
Title A 3D Convolutional Neural Network for Predicting Wildfire Profiles Abstract Wildfire has become an unavoidable natural disaster that continues to threaten fire-prone communities and the frequency is expected to increase due to climate change. Therefore, predicting a wildfire spread profile is an essential tool for firefighters when planning an evacuation strategy. The current traditional, physics, and empirically based fire spread models require extensive inputs, which are often difficult to obtain. Thus, we propose a 3D Convolutional Neural Network (CNN), named WildfireNet, that can predict the profile of wildfire of the next day when given historical wildfire profiles and accessible remote-sensing data. WildfireNet utilizes 3-dimensional spaces to extract features from both the temporal and spatial dimensions to better understand the relationship between historical fires and upcoming fires. The motivation behind WildfireNet is to locate fires in a precise manner and be able to accurately predict fire profiles. Pixels that were labeled as fire but not on the previous days were extracted to calculate Intersection over Union (IoU) and recall. WildfireNet outperformed 2D CNN and logistic regression model in both IoU and recall. 1 INTRODUCTION In recent years, the magnitude and intensity of wildfire have become challenging for the fire-prone community to withstand. Even worse, global warming and consequent fuel drying will increase the frequency of devastating wildfires (Halofsky et al., 2020). If such trend continues, upcoming wildfires will be more destructive than any fires in the past. The consequences of massive wildfires are brutal. For instance, in 2003, wildfires that occurred in San Diego County, California, burned over 376,000 acres and 3,241 households, which was estimated to be $2.45 billion in terms of total economic costs (Diaz, 2012). Traditional, physics, and empirically based wildfire spread models have been continuously studied to mitigate losses resulting from wildfire. However, these models often require extensive inputs, which are often difficult to obtain or even impossible to get. Convolutional Neural Networks (CNN) have been widely used with remote sensing data for various applications and showed promising results. Therefore, we implemented one of the commonly used CNN architecture, U-Net, to create WildfireNet. U-Net was developed for biomedical image segmentation that extracts global and local features to learn patterns to output a segmented image. The method has a relatively modest need for data collection and is capable of predicting a wildfire profile within a second. Thus, we present a novel deep learning method to determine dynamic wildfire profiles with the basic input data: wildfire perimeters, land cover, topography, and weather data. 2 RELATED WORKS Modeling wildfire has been an active research topic. Works have been done towards predicting the occurrence of wildfires with spatial susceptibility (Ghorbanzadeh et al., 2019; Sayad et al., 2019). FARSITE is a two-dimensional model that utilizes a vector propagation approach to depict fire perimeter growth. The model is sophisticated to adjust different fire types and behaviors, such as surface fire, crown fire, spotting, and point source fire acceleration (Finney, 1998). The model shows a promising result in basic conditions as the prediction matches very well to the actual fire boundary. However, it is computationally expensive and requires an extensive number of inputs. 
Also, the model accuracy varies widely across wildfires in different regions. In recent years, artificial intelligence has been used to solve the problem. Subramanian and Crowley presented a novel approach for utilizing Reinforcement Learning for learning forest wildfire spread dynamics directly from readily available satellite images (Subramanian & Crowley, 2018). FireCast combined Artificial Intelligence and data collection from GIS to predict which areas are at high risk in the future. It utilizes 2D CNN and is trained to predict areas that are expected to burn during the next 24 hours when given an initial fire perimeter (Radke et al., 2019) . From the studies, utilizing AI to solve the complex nature of wildfire is a promising way to predict wildfire spread and avoid complicated physical computation that many traditional methods hold. Thus, we propose a WildfireNet that can predict the profile of wildfire of the next day. To our knowledge, WildfireNet is the first 3D CNN to be used in the application of the fire spreading model. 3 WILDFIRENET ALGORITHM 3.1 TRAINING METHOD There are two categories, fire or no fire, to classify for each pixel. Binary cross-entropy loss (BCE) is an appropriate loss function to train the model . BCE = −1 N N∑ i=0 yi · log(ŷi) + (1− yi) · log(1− ŷi) (1) As shown in the equation above, binary cross-entropy (BCE) loss is informative to our model because it tries to minimize the distance between the predicted and the ground truth probability distributions, which is the ultimate goal of our problem. For instance, if the model predicts a pixel to contain fire where it doesn’t, BCE will output a high loss, penalizing the model with a large cost. From this train and error approach, the model will find the local minimum point of the loss function as its goal is to achieve a smaller loss at every training step. Thus, with BCE, the model will be optimized to maximize the likelihood of outputting the correct shape of the wildfire profile. Furthermore, Adam optimization was used and a learning rate of 1e-4 was applied to train the model. 3.2 MODEL AND IMPLEMENTATION U-Net was first introduced solely for the purpose of image segmentation on biomedical images. The output of U-Net is different than the typical use of CNN, which is on classification tasks that outputs a single class label to an input image. The output of U-Net is different. Instead of assigning a single label per input image, the model localizes the label for each pixel of the input image (Ronneberger et al., 2015). The model has two major paths. It first begins with contraction, which consists of convolutions and maxpool to extract features. Next, the model undergoes expansion where the size of the image resizes back to the original input to enable precise localization. The sequence of contraction and expansion yields a u-shaped architecture. The unique aspect of this model is the outputs from contraction are concatenated to the expansion output. This approach helps the model to maintain precise localization of high-resolution features at the output (Ronneberger et al., 2015). We decided to implement the U-Net model because 1) the model has a capacity to intake a raw image and output a segmented image, 2) it works well with a small dataset, 3) and it is capable of predicting a wildfire profile within a second. The U-Net model is adjusted so that it becomes more applicable to our study. Similar to the U-Net, WildfireNet is composed of the two major paths: contraction and expansion. 
The full architecture of the model is shown in Figure 1. In contrast to the U-Net, WildfireNet consists of fully connected layers at the bottom of the architecture. After the last downsampling, the 3D image is flattened into 1D array, and weather data is concatenated. The model is further trained with dense layers to learn the effect of weather variables in its prediction. Furthermore, past wildfire profiles can play a dominant role in the future shape of the wildfire. Therefore, 3D CNN was used instead of 2D. In 3D CNN, the model further extracts features from both the temporal and spatial dimensions, whereas, in 2D CNN, the model only focuses on spatial features (Tran et al., 2015). In this study, 3 previous days of wildfire profiles are combined to convert input images from 2D to 3D. This allows the model to have a better sense on how historical fires are correlated to the fire on the next day. 3.3 INPUT DATA Topography, weather, and available fuels are major contributor to fire growth (Estes et al., 2017). Thus, along with three days of historical fire profiles, we decided to add land cover, topography, and weather data to the model. 3.3.1 DYNAMIC WILDFIRE PERIMETERS A total of 302 daily fire perimeters were retrieved. The size of the data is limited compared to other deep learning studies. However, WildfireNet, derived from U-Net, is proven to perform well with small dataset (Ronneberger et al., 2015). Wildfire perimeters were obtained from National Inter agency Fire Center FTP Server 1. Perimeters are in the format of .kmz, which contains an array of coordinates of boundaries. In this paper, only the fires that occurred in California from 2013 to 2019 were observed. For each wildfire perimeters, as shown in Figure 2, an array of coordinates was used to fill inside the perimeter to create a binary map to reflect the overall shape of the wildfire. In other words, if a given pixel is within the perimeter, the pixel is labeled as 1 to indicate fire, otherwise, if the pixel is outside the boundary, the pixel is assigned to 0 to reflect no fire. An Important assumption was made when creating a binary map. For instance, there were spots of the region within the boundary that was not on fire, but these spots were considered as a risk zone and were filled in as well. binary map consists of 256x256 pixels, covering 0.5 degrees in both latitude and longitude. It is important to maintain the same spatial scale to distinguish fires with respect to their true sizes. Overall, a preprocessed binary map is used as an input to represent the wildfire profile. 3.3.2 LAND COVER AND VEGETATION Normalized Difference Vegetation Index (NDVI) is a remote sensing data that combines measurement of wavelengths and intensity of visible and near-infrared light to calculate the concentrations of green leaf vegetation (Zaitunah1 et al., 2018). NDVI is useful to fire spreading models because it indicates the water content of the crop and it is more likely for fire to spread on dry vegetation than on wet crops. 1Wildfire perimeter source: https://ftp.nifc.gov/ Advanced Very High Resolution Radiometer (AVHRR) NDVI was collected from National Oceanic and Atmospheric Administration 2 (NOAA). The data are projected on a 0.05-degree x 0.05-degree global grid. The mean NDVI was used as an input. 3.3.3 TOPOGRAPHY Topography has a direct effect on fire behavior (Rothermel, 1972). For example, the rate of fire spread rapidly increases on steeper slopes. 
Studies have shown that there is a strong correlation between topography and fire severity (Estes et al., 2017). 1/3 arc-second digital elevation model (DEM) was retrieved from the USGS national map 3 to reflect topography on locations of fires. DEM contains elevation for each pixel. This allows the model to learn fire behavior that is directly responded to topography, such as slope change. For each wildfire, four corner coordinates of the binary map were used to crop DEM to match spatial location. 3.3.4 WEATHER Weather is an important factor that can significantly contribute to the spread of wildfires. Simplified daily weather data, including mean wind speed, mean temperature, mean relative humidity were used to represent atmospheric conditions. Weather data were retrieved from the Climatology Lab 4. The resolution of the data is 1/24 degree x 1/24 degree. 3.4 OUTPUT After three days of historical fire profiles and remote-sensing data are processed into WildfireNet, it outputs a probabilistic distribution of fire of the next day. A sigmoid activation is used at the last layer to ensure the values of each pixel is within the range of [0, 1], representing a probability of fire occurrence. The output is further processed to create a predicted binary map, which is evaluated with the ground truth in the loss function to train the model. To create a binary map, an optimal threshold is picked and the state of the pixel is classified as fire (1) or not (0) with the following rule: State of Pixel = { 1, if p ≥ threshold 0, if p < threshold (2) Various thresholds were evaluated and selected threshold yielded the highest metric scores. 2NDVI source: https://www.ncdc.noaa.gov/cdr/terrestrial/normalized-difference-vegetation-index 3Elevation data source: https://www.usgs.gov/core-science-systems/national- geospatial-program/national- map 4Weather data source: http://www.climatologylab.org/gridmet.html 4 BASELINE MODEL We decided to use the U-Net and logistic regression model as baseline models to compare and evaluate the performance of WildfireNet. Similar to WildfireNet, U-Net extracts features through downsampling and upsampling pathways, however, it does not consider historical fires since the dimension is limited to 2. U-Net will work as a baseline model to assess the temporal aspects of the WildfireNet in predicting the wildfire shape. The logistic regression model is a commonly used baseline model due to its simplicity. However, it lacks convolution to extract features. Thus, it will provide a suitable starting point to understand the task and validate the usage of CNN in predicting wildfire spread profiles. The input for the logistic regression model includes historical binary maps, elevation data, wind speed, wind direction, and the state of the surrounding pixels. The state of the surrounding pixels is an essential aspect since the pixel has a higher probability of setting on fire if one of the neighboring pixels is on fire. Moreover, there are a total of 8 neighboring pixels that contribute to the state of the pixel, as shown in Figure 2. Baseline models were trained and tested with the same data as WildfireNet. 5 RESULTS As shown in Figure 4(a,c), the output of WildfireNet shows a probabilistic distribution of fires occurring at each pixel. If the model is confident that there is a fire in a certain pixel, it will assign a low score on the pixel. The model also projects how fire will spread in the future. For example, in the Mad River Complex fire, there are two major fire bodies. 
WildfireNet predicts the fire will expand at the bottom of the left body and no change will occur on the right body. In fact, in the comparison between the current day to the next day, it shows that the actual fire of the next day did not expand elsewhere except at the bottom of the left body. Moreover, in the Rey fire, the model predicts the fire will enlarge on the right side of the boundary, whereas, not so much on the left side. In the comparison between the current day to the next day, it shows that the actual fire did expand to its right. These examples validate the model’s capacity of predicting the growth pattern of wildfires. 5.1 EVALUATION Intersection over Union (IoU) was calculated to evaluate the model’s performance in predicting the profile of the wildfire. IoU is commonly used metric to evaluate the performance of object segmentation. IoU is defined as IoU = Area of Overlap Area of Union (3) The metric is straightforward once a predicted image and a ground truth image are defined. In this study, both images are binary maps where each pixel is either 0 or 1. Then, the area of overlap is simply the number of pixels that have the same value in both images and union is the area encompassed by both images. Therefore, a predicted image that matches perfectly to the ground truth will score 1. As shown in Table 1, WildfireNet achieved an IoU score of 0.997 in the test set, while U-Net and logistic regression model scored 0.995 and 0.913, respectively. The result indicates that WildfireNet is excellent in precisely labeling each pixel with the presence of fire or not and performs better than the baseline models. However, in the test set, only 5 percent of labels in the binary map is labeled as fire, which shows a sign of class imbalance in the data set. Therefore, IoU is not the best metric to evaluate a model’s performance in predicting new fires because the model can obtain a high IoU score by simply predicting every pixel to be 0. Therefore, models are evaluated on only the pixels that were labeled fire on the next day but not on the current day. We defined such pixels as changed pixels. Considering only the changed pixels will truly measure the model’s performance in predicting the changes in wildfire profile. Thus, we defined expanded IoU as the area of intersection between the predicted and the truth of the changed pixels over the area of union between the predicted and the truth of the changed pixels. Expanded IoU = Area of Overlap of Changed Pixels Area of Union of Changed Pixels (4) Moreover, expanded recall was formulated as true positives of the changed pixels over the total positive of changed pixels. Expanded Recall = True Positive of Changed Pixels Total Positive of Changed Pixels (5) On the test set, WildfireNet performed better than the baseline models in both expanded IoU and recall. All models scored the lowest in expanded IoU because the metric further penalizes when predicted fires are not present in the actual fire. WildfireNet achieved 0.541 on recall while the UNet and baseline model scored 0.458 and 0.154, respectively. This implies that WildfireNet predicts correctly more than half of the time on how fires are growing in the actual fire expansion, while the logistic regression model is only correct about 15% of the time. From this result, WildfireNet is superior to baseline models. Wildfires are further categorized into three classes based on the percentage of fire growth from the previous day. 
Wildfires are further categorized into three classes based on the percentage of fire growth relative to the previous day. A fire is defined as rapid if the fire body expands by more than 7% relative to the previous day, moderate if the growth is between 3% and 7%, and subtle if the growth is under 3%. Table 2 shows that all models score highest on subtle fires, followed by moderate and then rapid fires. It is expected that models perform worse when a fire changes rapidly from one day to the next. Wildfire is affected by various dynamic factors, including but not limited to the inputs used to train the model. For instance, unstable weather conditions are often the major contributor to rapid fire growth, but daily weather data are not detailed enough to reflect the volatile nature of the weather. Abrupt weather changes, such as gusts, usually last less than 20 seconds; daily weather data, with a time interval of 24 hours, are not fine-grained enough to inform the model about such sudden changes. In addition, human activity can be a major contributor to fire growth, and simply ignoring its effect can hinder the model's predictions. It is therefore reasonable that the models fail to keep up with rapid changes.

In Figure 5, WildfireNet outputs a reasonable prediction for a moderate fire. The prediction nearly matches the overall shape of the ground truth, and the model predicts the bottommost part of the fire body to enlarge, which is what happened in the real fire. However, in Figure 6, WildfireNet cannot keep up with the large change of a rapid fire. The model predicts the fire to increase only at the bottom, whereas the actual fire increased in every direction.

6 LIMITATIONS
Compared to many other deep learning applications, WildfireNet is trained on a small dataset. The lack of variability in the training data could limit the model's accuracy when new weather conditions, land cover, and fire profiles are introduced. Furthermore, wildfire is highly influenced by coupled climatic and human factors, several of which, such as human-induced fire growth, were not considered; neglecting them could limit the performance of the model. Moreover, a key assumption was made when fire perimeters were preprocessed to create binary maps: spots within the boundary that were not on fire were nevertheless labeled as fire. Due to this modification, the binary maps may not reflect the actual shape of the fire. In addition, fire perimeters were recorded daily, but it is uncertain whether they were obtained at the same time each day. If the interval varies between days, the measured fire growth does not reflect changes over a 24-hour window. Finally, the remote sensing data had different resolutions (wind speed: 1/24 degree, elevation: 1/512 degree, NDVI: 1/200 degree) and contained missing values. Interpolation was therefore used to complete the data, but this could introduce errors when a large amount of data is missing.

7 CONCLUSION AND FUTURE WORK
WildfireNet was built and evaluated to establish a footprint for using a 3D CNN to predict wildfire spread profiles. Unlike current traditional, physics-based, and empirically based fire spreading models, WildfireNet does not require extensive inputs or complex computations; given fire profiles and accessible remote sensing data, it can output an upcoming fire profile within a second. Statistical analysis shows that WildfireNet is capable of learning patterns from historical fire spread along with land cover, topography, and weather.
The model shows promising results, obtaining higher scores in IoU, expanded IoU, and recall than the baseline models. Future work includes obtaining more fire incidents and remote sensing data. Currently, WildfireNet is trained solely on fires that occurred in California; we believe the model can flexibly predict other types of fires once it has been trained on a wider variety of wildfire types and conditions.
1. What is the focus of the paper regarding wildfire prediction? 2. What are the strengths of the proposed approach, particularly in terms of its application significance? 3. What are the weaknesses of the paper, especially regarding its novelty and technical components? 4. How does the reviewer assess the clarity and definition of the paper's content, such as the term "wildfire profiles"?
Review
Review
This paper studies the wildfire prediction problem using remote sensing data. The authors propose a 3D-CNN based method named WildfireNet that learns from historical data in both the temporal and spatial dimensions to predict the upcoming fire of the next day. The study is of great application significance. My main concern lies in the novelty of this paper. In my opinion, almost all the technical components used by the authors, including the binary cross-entropy loss, the U-Net-like architecture, and 3D convolutions, have been widely studied before. The integration of the weather data and temporal data into the network flow is also intuitive and trivial. I would consider this paper a very good application report, but it may not be the best fit for an ICLR venue. Others: The term "wildfire profiles" is not clearly defined. Does it mean the binary pixel map of the wildfire regions?
ICLR
Title
A 3D Convolutional Neural Network for Predicting Wildfire Profiles

Abstract
Wildfire has become an unavoidable natural disaster that continues to threaten fire-prone communities, and its frequency is expected to increase due to climate change. Predicting a wildfire's spread profile is therefore an essential tool for firefighters when planning an evacuation strategy. Current traditional, physics-based, and empirically based fire spread models require extensive inputs, which are often difficult to obtain. Thus, we propose a 3D Convolutional Neural Network (CNN), named WildfireNet, that can predict the profile of the next day's wildfire given historical wildfire profiles and accessible remote-sensing data. WildfireNet utilizes 3-dimensional spaces to extract features from both the temporal and spatial dimensions to better understand the relationship between historical fires and upcoming fires. The motivation behind WildfireNet is to locate fires precisely and accurately predict fire profiles. Pixels that were labeled as fire on the next day but not on the previous days were extracted to calculate Intersection over Union (IoU) and recall. WildfireNet outperformed a 2D CNN and a logistic regression model in both IoU and recall.

1 INTRODUCTION
In recent years, the magnitude and intensity of wildfires have become challenging for fire-prone communities to withstand. Even worse, global warming and the consequent fuel drying will increase the frequency of devastating wildfires (Halofsky et al., 2020). If this trend continues, upcoming wildfires will be more destructive than any fires in the past. The consequences of massive wildfires are brutal. For instance, in 2003, wildfires in San Diego County, California, burned over 376,000 acres and 3,241 households, with total economic costs estimated at $2.45 billion (Diaz, 2012). Traditional, physics-based, and empirically based wildfire spread models have been continuously studied to mitigate losses resulting from wildfire. However, these models often require extensive inputs, which are difficult or even impossible to obtain. Convolutional Neural Networks (CNNs) have been widely used with remote sensing data for various applications and have shown promising results. We therefore adapted one of the most commonly used CNN architectures, U-Net, to create WildfireNet. U-Net was developed for biomedical image segmentation; it extracts global and local features to learn patterns and output a segmented image. The method has a relatively modest need for data collection and is capable of predicting a wildfire profile within a second. Thus, we present a novel deep learning method to determine dynamic wildfire profiles from basic input data: wildfire perimeters, land cover, topography, and weather data.

2 RELATED WORKS
Modeling wildfire has been an active research topic. Work has been done on predicting the occurrence of wildfires from spatial susceptibility (Ghorbanzadeh et al., 2019; Sayad et al., 2019). FARSITE is a two-dimensional model that utilizes a vector propagation approach to depict fire perimeter growth. The model is sophisticated enough to adjust to different fire types and behaviors, such as surface fire, crown fire, spotting, and point-source fire acceleration (Finney, 1998). The model shows promising results in basic conditions, as its predictions match the actual fire boundary very well. However, it is computationally expensive and requires an extensive number of inputs.
Also, the model's accuracy varies widely across wildfires in different regions. In recent years, artificial intelligence has been used to approach the problem. Subramanian and Crowley presented a novel approach for utilizing Reinforcement Learning to learn forest wildfire spread dynamics directly from readily available satellite images (Subramanian & Crowley, 2018). FireCast combined artificial intelligence with GIS data collection to predict which areas are at high risk in the future. It utilizes a 2D CNN and is trained to predict areas that are expected to burn during the next 24 hours given an initial fire perimeter (Radke et al., 2019). These studies suggest that AI is a promising way to predict wildfire spread while avoiding the complicated physical computations that many traditional methods require. We thus propose WildfireNet, which can predict the profile of the next day's wildfire. To our knowledge, WildfireNet is the first 3D CNN to be used as a fire spreading model.

3 WILDFIRENET ALGORITHM
3.1 TRAINING METHOD
There are two categories, fire or no fire, to classify for each pixel. Binary cross-entropy (BCE) loss is therefore an appropriate loss function for training the model:

\[ \mathrm{BCE} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i) \right] \tag{1} \]

As shown in the equation above, BCE loss is informative for our model because it minimizes the distance between the predicted and ground-truth probability distributions, which is the ultimate goal of our problem. For instance, if the model predicts a pixel to contain fire where it does not, BCE outputs a high loss, penalizing the model with a large cost. Through this process, the model converges toward a local minimum of the loss function, as its goal is to achieve a smaller loss at every training step. Thus, with BCE, the model is optimized to maximize the likelihood of outputting the correct shape of the wildfire profile. Adam optimization with a learning rate of 1e-4 was used to train the model.

3.2 MODEL AND IMPLEMENTATION
U-Net was first introduced solely for the purpose of image segmentation of biomedical images. Its output differs from the typical use of CNNs for classification tasks, which output a single class label for an input image: instead of assigning a single label per input image, the model localizes a label for each pixel of the input image (Ronneberger et al., 2015). The model has two major paths. It begins with contraction, which consists of convolutions and max-pooling to extract features. Next, the model undergoes expansion, in which the image is resized back to the original input size to enable precise localization. The sequence of contraction and expansion yields a u-shaped architecture. A unique aspect of this model is that the outputs from the contraction path are concatenated with the expansion outputs. This helps the model maintain precise localization of high-resolution features at the output (Ronneberger et al., 2015). We decided to build on the U-Net model because 1) it has the capacity to take in a raw image and output a segmented image, 2) it works well with small datasets, and 3) it is capable of predicting a wildfire profile within a second.
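To make the training objective of Equation 1 concrete, below is a minimal NumPy sketch of BCE computed over a flattened probability map; the clipping constant and toy values are illustrative assumptions.

```python
import numpy as np

def bce_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Binary cross-entropy (Eq. 1), averaged over all N pixels.
    y_true: ground-truth binary map (0 = no fire, 1 = fire).
    y_pred: sigmoid outputs in [0, 1], clipped away from 0 and 1
    so the logarithms stay finite."""
    y_true = y_true.ravel().astype(np.float64)
    y_pred = np.clip(y_pred.ravel(), eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

# Toy check: confident correct predictions give a small loss,
# while a single confident wrong prediction is penalized heavily.
truth = np.array([1, 1, 0, 0])
print(bce_loss(truth, np.array([0.9, 0.8, 0.1, 0.2])))   # small
print(bce_loss(truth, np.array([0.9, 0.05, 0.1, 0.2])))  # much larger
```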
The U-Net model is adjusted so that it becomes more applicable to our study. Similar to the U-Net, WildfireNet is composed of two major paths: contraction and expansion. The full architecture of the model is shown in Figure 1. In contrast to the U-Net, WildfireNet includes fully connected layers at the bottom of the architecture. After the last downsampling, the 3D feature map is flattened into a 1D array, and the weather data is concatenated to it. The model is then trained with dense layers to learn the effect of the weather variables on its prediction. Furthermore, past wildfire profiles can play a dominant role in the future shape of the wildfire. Therefore, a 3D CNN was used instead of a 2D one. A 3D CNN extracts features from both the temporal and spatial dimensions, whereas a 2D CNN focuses only on spatial features (Tran et al., 2015). In this study, 3 previous days of wildfire profiles are stacked to convert the input images from 2D to 3D. This gives the model a better sense of how historical fires are correlated with the fire on the next day.

3.3 INPUT DATA
Topography, weather, and available fuels are major contributors to fire growth (Estes et al., 2017). Thus, along with three days of historical fire profiles, we added land cover, topography, and weather data to the model.

3.3.1 DYNAMIC WILDFIRE PERIMETERS
A total of 302 daily fire perimeters were retrieved. The size of the dataset is limited compared to other deep learning studies; however, U-Net, from which WildfireNet is derived, has been shown to perform well with small datasets (Ronneberger et al., 2015). Wildfire perimeters were obtained from the National Interagency Fire Center FTP server 1. Perimeters are in the .kmz format, which contains an array of boundary coordinates. In this paper, only fires that occurred in California from 2013 to 2019 were considered. For each wildfire perimeter, as shown in Figure 2, the array of coordinates was used to fill the inside of the perimeter to create a binary map reflecting the overall shape of the wildfire. In other words, if a given pixel is within the perimeter, the pixel is labeled 1 to indicate fire; otherwise, the pixel is assigned 0 to reflect no fire. An important assumption was made when creating the binary maps: spots within the boundary that were not on fire were considered risk zones and filled in as well. Each binary map consists of 256x256 pixels, covering 0.5 degrees in both latitude and longitude. It is important to maintain the same spatial scale to distinguish fires by their true sizes. Overall, a preprocessed binary map is used as the input representation of the wildfire profile.

3.3.2 LAND COVER AND VEGETATION
The Normalized Difference Vegetation Index (NDVI) is a remote sensing product that combines measurements of the wavelengths and intensity of visible and near-infrared light to estimate the concentration of green leaf vegetation (Zaitunah et al., 2018). NDVI is useful to fire spreading models because it indicates the water content of vegetation, and fire is more likely to spread on dry vegetation than on wet crops.

1 Wildfire perimeter source: https://ftp.nifc.gov/

Advanced Very High Resolution Radiometer (AVHRR) NDVI was collected from the National Oceanic and Atmospheric Administration (NOAA) 2. The data are projected on a 0.05-degree x 0.05-degree global grid. The mean NDVI was used as an input.
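Returning to the model of Section 3.2, the sketch below illustrates the two ideas that distinguish WildfireNet from a 2D U-Net: 3D convolutions over a stack of three daily fire maps, and weather scalars concatenated to the flattened bottleneck before dense layers. All sizes here (64x64 toy maps, channel widths, three weather features) are hypothetical simplifications, and the expansion path of Figure 1 is omitted for brevity; this is not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class TinyWildfireNet(nn.Module):
    """Toy 3D encoder with weather features injected at the bottleneck."""
    def __init__(self, n_weather: int = 3):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool space, keep the 3 days
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        bottleneck = 16 * 3 * 16 * 16              # for 3 x 64 x 64 inputs
        self.dense = nn.Sequential(
            nn.Linear(bottleneck + n_weather, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64),               # one logit per output pixel
        )

    def forward(self, fire_days: torch.Tensor, weather: torch.Tensor) -> torch.Tensor:
        # fire_days: (B, 1, 3, H, W) binary maps for the 3 previous days.
        z = self.encode(fire_days).flatten(start_dim=1)  # flatten 3D features
        z = torch.cat([z, weather], dim=1)               # concatenate weather scalars
        logits = self.dense(z).view(-1, 1, 64, 64)
        return torch.sigmoid(logits)                     # per-pixel fire probability

model = TinyWildfireNet()
out = model(torch.zeros(2, 1, 3, 64, 64), torch.zeros(2, 3))
print(out.shape)  # torch.Size([2, 1, 64, 64])
# Training would minimize nn.BCELoss()(out, target), as in Eq. 1.
```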
3.3.3 TOPOGRAPHY
Topography has a direct effect on fire behavior (Rothermel, 1972). For example, the rate of fire spread rapidly increases on steeper slopes. Studies have shown that there is a strong correlation between topography and fire severity (Estes et al., 2017). A 1/3 arc-second digital elevation model (DEM) was retrieved from the USGS national map 3 to represent the topography at the locations of fires. The DEM contains an elevation value for each pixel, which allows the model to learn fire behavior that responds directly to topography, such as slope changes. For each wildfire, the four corner coordinates of the binary map were used to crop the DEM to the matching spatial location.

3.3.4 WEATHER
Weather is an important factor that can significantly contribute to the spread of wildfires. Simplified daily weather data, including mean wind speed, mean temperature, and mean relative humidity, were used to represent atmospheric conditions. Weather data were retrieved from the Climatology Lab 4. The resolution of the data is 1/24 degree x 1/24 degree.

3.4 OUTPUT
After three days of historical fire profiles and remote-sensing data are processed by WildfireNet, it outputs a probability distribution of fire for the next day. A sigmoid activation is used at the last layer to ensure that the value of each pixel lies within the range [0, 1], representing the probability of fire occurrence. The output is further processed to create a predicted binary map, which is evaluated against the ground truth in the loss function to train the model. To create the binary map, an optimal threshold is picked and the state of each pixel is classified as fire (1) or not (0) with the following rule:

\[ \text{State of Pixel} = \begin{cases} 1, & \text{if } p \geq \text{threshold} \\ 0, & \text{if } p < \text{threshold} \end{cases} \tag{2} \]

Various thresholds were evaluated, and the selected threshold yielded the highest metric scores.

2 NDVI source: https://www.ncdc.noaa.gov/cdr/terrestrial/normalized-difference-vegetation-index
3 Elevation data source: https://www.usgs.gov/core-science-systems/national-geospatial-program/national-map
4 Weather data source: http://www.climatologylab.org/gridmet.html

4 BASELINE MODEL
We use the U-Net and a logistic regression model as baselines to compare against and evaluate the performance of WildfireNet. Similar to WildfireNet, the U-Net extracts features through downsampling and upsampling pathways; however, it does not consider historical fires, since its input is limited to two dimensions. The U-Net therefore serves as a baseline for assessing the temporal aspects of WildfireNet in predicting the wildfire shape. The logistic regression model is a commonly used baseline due to its simplicity. However, it lacks convolutions to extract features, so it provides a suitable starting point for understanding the task and validating the use of a CNN for predicting wildfire spread profiles. The input to the logistic regression model includes historical binary maps, elevation data, wind speed, wind direction, and the states of the surrounding pixels. The states of the surrounding pixels are an essential feature, since a pixel has a higher probability of catching fire if one of its neighboring pixels is on fire. In total, 8 neighboring pixels contribute to the state of a pixel, as shown in Figure 2. Baseline models were trained and tested with the same data as WildfireNet.
5 RESULTS
As shown in Figure 4(a,c), the output of WildfireNet is a probability distribution over fire occurring at each pixel. If the model is confident that there is fire in a certain pixel, it assigns a high probability to that pixel. The model also projects how a fire will spread in the future. For example, in the Mad River Complex fire, there are two major fire bodies. WildfireNet predicts that the fire will expand at the bottom of the left body and that no change will occur in the right body. Indeed, comparing the current day with the next day shows that the actual fire expanded nowhere except at the bottom of the left body. Similarly, in the Rey fire, the model predicts that the fire will enlarge on the right side of the boundary but not much on the left side. The comparison between the current day and the next day shows that the actual fire did expand to its right. These examples demonstrate the model's capacity to predict the growth pattern of wildfires.

5.1 EVALUATION
Intersection over Union (IoU) was calculated to evaluate the model's performance in predicting the profile of the wildfire. IoU is a commonly used metric for evaluating object segmentation and is defined as

\[ \text{IoU} = \frac{\text{Area of Overlap}}{\text{Area of Union}} \tag{3} \]

The metric is straightforward once a predicted image and a ground-truth image are defined. In this study, both images are binary maps in which each pixel is either 0 or 1. The area of overlap is then the number of pixels that have the same value in both images, and the union is the area encompassed by both images. A predicted image that matches the ground truth perfectly therefore scores 1.

As shown in Table 1, WildfireNet achieved an IoU score of 0.997 on the test set, while the U-Net and the logistic regression model scored 0.995 and 0.913, respectively. This result indicates that WildfireNet precisely labels each pixel with the presence or absence of fire and performs better than the baseline models. However, in the test set only 5 percent of the pixels in the binary maps are labeled as fire, a sign of class imbalance in the dataset. IoU is therefore not the best metric for evaluating a model's ability to predict new fires, because a model can obtain a high IoU score simply by predicting every pixel to be 0. Consequently, models are also evaluated only on the pixels that were labeled fire on the next day but not on the current day. We define such pixels as changed pixels. Considering only the changed pixels truly measures the model's performance in predicting changes in the wildfire profile. We thus define expanded IoU as the area of intersection between the prediction and the ground truth over the changed pixels, divided by the area of their union over the changed pixels:

\[ \text{Expanded IoU} = \frac{\text{Area of Overlap of Changed Pixels}}{\text{Area of Union of Changed Pixels}} \tag{4} \]

Expanded recall is formulated as the true positives among the changed pixels over the total positives among the changed pixels:

\[ \text{Expanded Recall} = \frac{\text{True Positives of Changed Pixels}}{\text{Total Positives of Changed Pixels}} \tag{5} \]

On the test set, WildfireNet performed better than the baseline models in both expanded IoU and expanded recall. All models scored lowest on expanded IoU because the metric further penalizes predicted fire that is not present in the actual fire. WildfireNet achieved 0.541 on expanded recall, while the U-Net and the logistic regression model scored 0.458 and 0.154, respectively. This implies that WildfireNet correctly predicts more than half of the actual fire expansion, while the logistic regression model is correct only about 15% of the time. These results indicate that WildfireNet is superior to the baseline models.
Wildfires are further categorized into three classes based on the percentage of fire growth relative to the previous day. A fire is defined as rapid if the fire body expands by more than 7% relative to the previous day, moderate if the growth is between 3% and 7%, and subtle if the growth is under 3%. Table 2 shows that all models score highest on subtle fires, followed by moderate and then rapid fires. It is expected that models perform worse when a fire changes rapidly from one day to the next. Wildfire is affected by various dynamic factors, including but not limited to the inputs used to train the model. For instance, unstable weather conditions are often the major contributor to rapid fire growth, but daily weather data are not detailed enough to reflect the volatile nature of the weather. Abrupt weather changes, such as gusts, usually last less than 20 seconds; daily weather data, with a time interval of 24 hours, are not fine-grained enough to inform the model about such sudden changes. In addition, human activity can be a major contributor to fire growth, and simply ignoring its effect can hinder the model's predictions. It is therefore reasonable that the models fail to keep up with rapid changes.

In Figure 5, WildfireNet outputs a reasonable prediction for a moderate fire. The prediction nearly matches the overall shape of the ground truth, and the model predicts the bottommost part of the fire body to enlarge, which is what happened in the real fire. However, in Figure 6, WildfireNet cannot keep up with the large change of a rapid fire. The model predicts the fire to increase only at the bottom, whereas the actual fire increased in every direction.

6 LIMITATIONS
Compared to many other deep learning applications, WildfireNet is trained on a small dataset. The lack of variability in the training data could limit the model's accuracy when new weather conditions, land cover, and fire profiles are introduced. Furthermore, wildfire is highly influenced by coupled climatic and human factors, several of which, such as human-induced fire growth, were not considered; neglecting them could limit the performance of the model. Moreover, a key assumption was made when fire perimeters were preprocessed to create binary maps: spots within the boundary that were not on fire were nevertheless labeled as fire. Due to this modification, the binary maps may not reflect the actual shape of the fire. In addition, fire perimeters were recorded daily, but it is uncertain whether they were obtained at the same time each day. If the interval varies between days, the measured fire growth does not reflect changes over a 24-hour window. Finally, the remote sensing data had different resolutions (wind speed: 1/24 degree, elevation: 1/512 degree, NDVI: 1/200 degree) and contained missing values. Interpolation was therefore used to complete the data, but this could introduce errors when a large amount of data is missing.

7 CONCLUSION AND FUTURE WORK
WildfireNet was built and evaluated to establish a footprint for using a 3D CNN to predict wildfire spread profiles. Unlike current traditional, physics-based, and empirically based fire spreading models, WildfireNet does not require extensive inputs or complex computations; given fire profiles and accessible remote sensing data, it can output an upcoming fire profile within a second. Statistical analysis shows that WildfireNet is capable of learning patterns from historical fire spread along with land cover, topography, and weather.
The model shows promising results, obtaining higher scores in IoU, expanded IoU, and recall than the baseline models. Future work includes obtaining more fire incidents and remote sensing data. Currently, WildfireNet is trained solely on fires that occurred in California; we believe the model can flexibly predict other types of fires once it has been trained on a wider variety of wildfire types and conditions.
1. What is the focus of the reviewed paper? 2. What are the strengths and weaknesses of the proposed approach in the paper? 3. How does the reviewer assess the novelty and motivation of the selected architecture? 4. Are there any concerns regarding the experimental setup and results presented in the paper? 5. How does the reviewer evaluate the contribution and performance of the proposed WildfireNet model?
Review
Review
Decision: Reject

Review: The authors propose WildfireNet, a 3D CNN architecture based on the popular U-Net model for forecasting the progression of wildfires. Specifically, the authors address the problem of day-ahead wildfire profile forecasting.

WildfireNet input data (D):
- Daily weather data, including mean wind speed, mean temperature, and mean relative humidity for the region of interest.
- Topography data (i.e., elevation, slopes, and other terrain characteristics).
- Normalized Difference Vegetation Index (NDVI), indicating the water content of the crop in the region of interest.
- Daily fire perimeters. Fire perimeters are constructed as a binary map (1 indicates fire, 0 indicates no fire). These binary maps are input to the model as fire perimeter profiles.

Given the data D for the current day and the past 2 days, the model is tasked with predicting the fire perimeter binary map on the next day. The problem is formulated as pixel-wise binary classification using a binary cross-entropy loss. The authors present results comparing their method, WildfireNet, with a logistic-regression based model and a 2D CNN based U-Net model.

Positives: The authors adapt the U-Net architecture to the task of wildfire profile growth estimation. Through the course of the paper, the authors provide a good review of state-of-the-art models used for the wildfire profile growth estimation task and also elucidate the types of fires and their properties, enabling the readers to appreciate the complexity of the task at hand.

Concerns:

Lack of state-of-the-art model comparisons: In the related work section (Sec. 2), the authors mention a model, FARSITE [1], which uses a vector propagation approach to estimate the growth of the fire perimeter. The authors acknowledge that the approach is effective in certain conditions. However, they fail to include a performance comparison with the FARSITE model in the current paper. Further, the authors cite multiple models for wildfire profile spread estimation but fail to compare with any of them. The only claim of novelty the authors make is that they utilize 3D CNNs in their U-Net architecture (which is not in itself a novel contribution).

Lack of sufficient motivation for the selected architecture: At the outset of Section 3.2, the authors appear to (somewhat) justify their use of a U-Net architecture by claiming that the output of a U-Net is "different than the typical use of CNN", and go on to say that CNNs typically used for classification tasks output a single label for input images whereas U-Nets output multiple labels (i.e., per pixel of the input image). Such pixel-wise classification is not unique to U-Net architectures (nor something that CNNs or other architectures don't enable). Hence this motivation for using a U-Net architecture is weak at best. Detailing the effectiveness of the U-Net architecture (which in itself has proven fairly effective in representation learning across various tasks due to its unique architectural properties) as the motivation for its usage in the current task would have been slightly more appropriate, although this change by itself still does not make the paper any stronger.

Experimental setup not fully detailed: In Section 3.4, it is stated that to obtain the binary map of the future fire perimeter, the authors binarize (based on a threshold) the per-pixel probabilistic output of WildfireNet.
However, the only commentary about how this threshold is chosen is the following: "Various thresholds were evaluated and selected threshold yielded the highest metric scores". How exactly did this parameter selection occur? Such hyperparameters often have a significant effect on the results, and hence this information is critical for effective use of the model by readers.

Insufficient results (not enough strong baselines), which don't allow clear contextualization of WildfireNet's performance: In the results section (Sec. 5), the authors present results for the proposed WildfireNet model alongside logistic regression and a U-Net model (with 2D convolutions, sans the weather data, and including only the most recent day of historical fire profile data). Why is logistic regression a competitive baseline for WildfireNet? The authors themselves acknowledge that the logistic regression model is "simple" and that it lacks "convolution" to extract features. However, the motivation for the selection of logistic regression is not detailed. In Table 1, for the overall expanded recall (i.e., prediction of how fires expand), the authors mention that WildfireNet achieves a 54% recall whereas the other models achieve lower recalls. However, since other state-of-the-art models (designed specifically for, or previously applied to, wildfire profile evolution forecasting) have not been evaluated, it is hard to contextualize how good or bad these results are for this task.

Minor details: There is a discrepancy between the result values included in the text (Page 6, last paragraph: "WildfireNet achieved 0.541 on recall while the U-Net and baseline model scored 0.458 and 0.154, respectively") and in Table 1 (where the corresponding values for the U-Net and the baseline model - which I'm presuming is logistic regression - are 0.498 and 0.154, respectively). There are also other places in the text where such errors are made (e.g., Page 5, penultimate paragraph).

Summary: Overall, the paper presents weak (and incomplete) results. It also does not have a strong novel contribution, either in terms of a novel model architecture or in terms of showcasing previously unknown facets or characteristics of popular (U-Net-like) models. The only potential novelty (or strength) of the paper could have been the effectiveness (i.e., superior performance) of the adapted (slightly modified) U-Net architecture (i.e., the proposed WildfireNet model), but unfortunately the authors do not contextualize the performance of the proposed WildfireNet model against the previous state-of-the-art learning models employed for this task.

References:
[1] Finney, Mark A. 1998. FARSITE: Fire Area Simulator - model development and evaluation. Res. Pap. RMRS-RP-4, Revised 2004, Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station. 47 p.
ICLR
Title
A 3D Convolutional Neural Network for Predicting Wildfire Profiles

Abstract
Wildfire has become an unavoidable natural disaster that continues to threaten fire-prone communities, and its frequency is expected to increase due to climate change. Predicting a wildfire's spread profile is therefore an essential tool for firefighters when planning an evacuation strategy. Current traditional, physics-based, and empirically based fire spread models require extensive inputs, which are often difficult to obtain. Thus, we propose a 3D Convolutional Neural Network (CNN), named WildfireNet, that can predict the profile of the next day's wildfire given historical wildfire profiles and accessible remote-sensing data. WildfireNet utilizes 3-dimensional spaces to extract features from both the temporal and spatial dimensions to better understand the relationship between historical fires and upcoming fires. The motivation behind WildfireNet is to locate fires precisely and accurately predict fire profiles. Pixels that were labeled as fire on the next day but not on the previous days were extracted to calculate Intersection over Union (IoU) and recall. WildfireNet outperformed a 2D CNN and a logistic regression model in both IoU and recall.

1 INTRODUCTION
In recent years, the magnitude and intensity of wildfires have become challenging for fire-prone communities to withstand. Even worse, global warming and the consequent fuel drying will increase the frequency of devastating wildfires (Halofsky et al., 2020). If this trend continues, upcoming wildfires will be more destructive than any fires in the past. The consequences of massive wildfires are brutal. For instance, in 2003, wildfires in San Diego County, California, burned over 376,000 acres and 3,241 households, with total economic costs estimated at $2.45 billion (Diaz, 2012). Traditional, physics-based, and empirically based wildfire spread models have been continuously studied to mitigate losses resulting from wildfire. However, these models often require extensive inputs, which are difficult or even impossible to obtain. Convolutional Neural Networks (CNNs) have been widely used with remote sensing data for various applications and have shown promising results. We therefore adapted one of the most commonly used CNN architectures, U-Net, to create WildfireNet. U-Net was developed for biomedical image segmentation; it extracts global and local features to learn patterns and output a segmented image. The method has a relatively modest need for data collection and is capable of predicting a wildfire profile within a second. Thus, we present a novel deep learning method to determine dynamic wildfire profiles from basic input data: wildfire perimeters, land cover, topography, and weather data.

2 RELATED WORKS
Modeling wildfire has been an active research topic. Work has been done on predicting the occurrence of wildfires from spatial susceptibility (Ghorbanzadeh et al., 2019; Sayad et al., 2019). FARSITE is a two-dimensional model that utilizes a vector propagation approach to depict fire perimeter growth. The model is sophisticated enough to adjust to different fire types and behaviors, such as surface fire, crown fire, spotting, and point-source fire acceleration (Finney, 1998). The model shows promising results in basic conditions, as its predictions match the actual fire boundary very well. However, it is computationally expensive and requires an extensive number of inputs.
Also, the model's accuracy varies widely across wildfires in different regions. In recent years, artificial intelligence has been used to approach the problem. Subramanian and Crowley presented a novel approach for utilizing Reinforcement Learning to learn forest wildfire spread dynamics directly from readily available satellite images (Subramanian & Crowley, 2018). FireCast combined artificial intelligence with GIS data collection to predict which areas are at high risk in the future. It utilizes a 2D CNN and is trained to predict areas that are expected to burn during the next 24 hours given an initial fire perimeter (Radke et al., 2019). These studies suggest that AI is a promising way to predict wildfire spread while avoiding the complicated physical computations that many traditional methods require. We thus propose WildfireNet, which can predict the profile of the next day's wildfire. To our knowledge, WildfireNet is the first 3D CNN to be used as a fire spreading model.

3 WILDFIRENET ALGORITHM
3.1 TRAINING METHOD
There are two categories, fire or no fire, to classify for each pixel. Binary cross-entropy (BCE) loss is therefore an appropriate loss function for training the model:

\[ \mathrm{BCE} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i) \right] \tag{1} \]

As shown in the equation above, BCE loss is informative for our model because it minimizes the distance between the predicted and ground-truth probability distributions, which is the ultimate goal of our problem. For instance, if the model predicts a pixel to contain fire where it does not, BCE outputs a high loss, penalizing the model with a large cost. Through this process, the model converges toward a local minimum of the loss function, as its goal is to achieve a smaller loss at every training step. Thus, with BCE, the model is optimized to maximize the likelihood of outputting the correct shape of the wildfire profile. Adam optimization with a learning rate of 1e-4 was used to train the model.

3.2 MODEL AND IMPLEMENTATION
U-Net was first introduced solely for the purpose of image segmentation of biomedical images. Its output differs from the typical use of CNNs for classification tasks, which output a single class label for an input image: instead of assigning a single label per input image, the model localizes a label for each pixel of the input image (Ronneberger et al., 2015). The model has two major paths. It begins with contraction, which consists of convolutions and max-pooling to extract features. Next, the model undergoes expansion, in which the image is resized back to the original input size to enable precise localization. The sequence of contraction and expansion yields a u-shaped architecture. A unique aspect of this model is that the outputs from the contraction path are concatenated with the expansion outputs. This helps the model maintain precise localization of high-resolution features at the output (Ronneberger et al., 2015). We decided to build on the U-Net model because 1) it has the capacity to take in a raw image and output a segmented image, 2) it works well with small datasets, and 3) it is capable of predicting a wildfire profile within a second.
The U-Net model is adjusted so that it becomes more applicable to our study. Similar to the U-Net, WildfireNet is composed of two major paths: contraction and expansion. The full architecture of the model is shown in Figure 1. In contrast to the U-Net, WildfireNet includes fully connected layers at the bottom of the architecture. After the last downsampling, the 3D feature map is flattened into a 1D array, and the weather data is concatenated to it. The model is then trained with dense layers to learn the effect of the weather variables on its prediction. Furthermore, past wildfire profiles can play a dominant role in the future shape of the wildfire. Therefore, a 3D CNN was used instead of a 2D one. A 3D CNN extracts features from both the temporal and spatial dimensions, whereas a 2D CNN focuses only on spatial features (Tran et al., 2015). In this study, 3 previous days of wildfire profiles are stacked to convert the input images from 2D to 3D. This gives the model a better sense of how historical fires are correlated with the fire on the next day.

3.3 INPUT DATA
Topography, weather, and available fuels are major contributors to fire growth (Estes et al., 2017). Thus, along with three days of historical fire profiles, we added land cover, topography, and weather data to the model.

3.3.1 DYNAMIC WILDFIRE PERIMETERS
A total of 302 daily fire perimeters were retrieved. The size of the dataset is limited compared to other deep learning studies; however, U-Net, from which WildfireNet is derived, has been shown to perform well with small datasets (Ronneberger et al., 2015). Wildfire perimeters were obtained from the National Interagency Fire Center FTP server 1. Perimeters are in the .kmz format, which contains an array of boundary coordinates. In this paper, only fires that occurred in California from 2013 to 2019 were considered. For each wildfire perimeter, as shown in Figure 2, the array of coordinates was used to fill the inside of the perimeter to create a binary map reflecting the overall shape of the wildfire. In other words, if a given pixel is within the perimeter, the pixel is labeled 1 to indicate fire; otherwise, the pixel is assigned 0 to reflect no fire. An important assumption was made when creating the binary maps: spots within the boundary that were not on fire were considered risk zones and filled in as well. Each binary map consists of 256x256 pixels, covering 0.5 degrees in both latitude and longitude. It is important to maintain the same spatial scale to distinguish fires by their true sizes. Overall, a preprocessed binary map is used as the input representation of the wildfire profile.

3.3.2 LAND COVER AND VEGETATION
The Normalized Difference Vegetation Index (NDVI) is a remote sensing product that combines measurements of the wavelengths and intensity of visible and near-infrared light to estimate the concentration of green leaf vegetation (Zaitunah et al., 2018). NDVI is useful to fire spreading models because it indicates the water content of vegetation, and fire is more likely to spread on dry vegetation than on wet crops.

1 Wildfire perimeter source: https://ftp.nifc.gov/

Advanced Very High Resolution Radiometer (AVHRR) NDVI was collected from the National Oceanic and Atmospheric Administration (NOAA) 2. The data are projected on a 0.05-degree x 0.05-degree global grid. The mean NDVI was used as an input.
3.3.3 TOPOGRAPHY
Topography has a direct effect on fire behavior (Rothermel, 1972). For example, the rate of fire spread rapidly increases on steeper slopes. Studies have shown that there is a strong correlation between topography and fire severity (Estes et al., 2017). A 1/3 arc-second digital elevation model (DEM) was retrieved from the USGS national map 3 to represent the topography at the locations of fires. The DEM contains an elevation value for each pixel, which allows the model to learn fire behavior that responds directly to topography, such as slope changes. For each wildfire, the four corner coordinates of the binary map were used to crop the DEM to the matching spatial location.

3.3.4 WEATHER
Weather is an important factor that can significantly contribute to the spread of wildfires. Simplified daily weather data, including mean wind speed, mean temperature, and mean relative humidity, were used to represent atmospheric conditions. Weather data were retrieved from the Climatology Lab 4. The resolution of the data is 1/24 degree x 1/24 degree.

3.4 OUTPUT
After three days of historical fire profiles and remote-sensing data are processed by WildfireNet, it outputs a probability distribution of fire for the next day. A sigmoid activation is used at the last layer to ensure that the value of each pixel lies within the range [0, 1], representing the probability of fire occurrence. The output is further processed to create a predicted binary map, which is evaluated against the ground truth in the loss function to train the model. To create the binary map, an optimal threshold is picked and the state of each pixel is classified as fire (1) or not (0) with the following rule:

\[ \text{State of Pixel} = \begin{cases} 1, & \text{if } p \geq \text{threshold} \\ 0, & \text{if } p < \text{threshold} \end{cases} \tag{2} \]

Various thresholds were evaluated, and the selected threshold yielded the highest metric scores.

2 NDVI source: https://www.ncdc.noaa.gov/cdr/terrestrial/normalized-difference-vegetation-index
3 Elevation data source: https://www.usgs.gov/core-science-systems/national-geospatial-program/national-map
4 Weather data source: http://www.climatologylab.org/gridmet.html

4 BASELINE MODEL
We use the U-Net and a logistic regression model as baselines to compare against and evaluate the performance of WildfireNet. Similar to WildfireNet, the U-Net extracts features through downsampling and upsampling pathways; however, it does not consider historical fires, since its input is limited to two dimensions. The U-Net therefore serves as a baseline for assessing the temporal aspects of WildfireNet in predicting the wildfire shape. The logistic regression model is a commonly used baseline due to its simplicity. However, it lacks convolutions to extract features, so it provides a suitable starting point for understanding the task and validating the use of a CNN for predicting wildfire spread profiles. The input to the logistic regression model includes historical binary maps, elevation data, wind speed, wind direction, and the states of the surrounding pixels. The states of the surrounding pixels are an essential feature, since a pixel has a higher probability of catching fire if one of its neighboring pixels is on fire. In total, 8 neighboring pixels contribute to the state of a pixel, as shown in Figure 2. Baseline models were trained and tested with the same data as WildfireNet.
5 RESULTS
As shown in Figure 4(a,c), the output of WildfireNet is a probability distribution over fire occurring at each pixel. If the model is confident that there is fire in a certain pixel, it assigns a high probability to that pixel. The model also projects how a fire will spread in the future. For example, in the Mad River Complex fire, there are two major fire bodies. WildfireNet predicts that the fire will expand at the bottom of the left body and that no change will occur in the right body. Indeed, comparing the current day with the next day shows that the actual fire expanded nowhere except at the bottom of the left body. Similarly, in the Rey fire, the model predicts that the fire will enlarge on the right side of the boundary but not much on the left side. The comparison between the current day and the next day shows that the actual fire did expand to its right. These examples demonstrate the model's capacity to predict the growth pattern of wildfires.

5.1 EVALUATION
Intersection over Union (IoU) was calculated to evaluate the model's performance in predicting the profile of the wildfire. IoU is a commonly used metric for evaluating object segmentation and is defined as

\[ \text{IoU} = \frac{\text{Area of Overlap}}{\text{Area of Union}} \tag{3} \]

The metric is straightforward once a predicted image and a ground-truth image are defined. In this study, both images are binary maps in which each pixel is either 0 or 1. The area of overlap is then the number of pixels that have the same value in both images, and the union is the area encompassed by both images. A predicted image that matches the ground truth perfectly therefore scores 1.

As shown in Table 1, WildfireNet achieved an IoU score of 0.997 on the test set, while the U-Net and the logistic regression model scored 0.995 and 0.913, respectively. This result indicates that WildfireNet precisely labels each pixel with the presence or absence of fire and performs better than the baseline models. However, in the test set only 5 percent of the pixels in the binary maps are labeled as fire, a sign of class imbalance in the dataset. IoU is therefore not the best metric for evaluating a model's ability to predict new fires, because a model can obtain a high IoU score simply by predicting every pixel to be 0. Consequently, models are also evaluated only on the pixels that were labeled fire on the next day but not on the current day. We define such pixels as changed pixels. Considering only the changed pixels truly measures the model's performance in predicting changes in the wildfire profile. We thus define expanded IoU as the area of intersection between the prediction and the ground truth over the changed pixels, divided by the area of their union over the changed pixels:

\[ \text{Expanded IoU} = \frac{\text{Area of Overlap of Changed Pixels}}{\text{Area of Union of Changed Pixels}} \tag{4} \]

Expanded recall is formulated as the true positives among the changed pixels over the total positives among the changed pixels:

\[ \text{Expanded Recall} = \frac{\text{True Positives of Changed Pixels}}{\text{Total Positives of Changed Pixels}} \tag{5} \]

On the test set, WildfireNet performed better than the baseline models in both expanded IoU and expanded recall. All models scored lowest on expanded IoU because the metric further penalizes predicted fire that is not present in the actual fire. WildfireNet achieved 0.541 on expanded recall, while the U-Net and the logistic regression model scored 0.458 and 0.154, respectively. This implies that WildfireNet correctly predicts more than half of the actual fire expansion, while the logistic regression model is correct only about 15% of the time. These results indicate that WildfireNet is superior to the baseline models.
Wildfires are further categorized into three classes based on the percentage of fire growth relative to the previous day. A fire is defined as rapid if the fire body expands by more than 7% relative to the previous day, moderate if the growth is between 3% and 7%, and subtle if the growth is under 3%. Table 2 shows that all models score highest on subtle fires, followed by moderate and then rapid fires. It is expected that models perform worse when a fire changes rapidly from one day to the next. Wildfire is affected by various dynamic factors, including but not limited to the inputs used to train the model. For instance, unstable weather conditions are often the major contributor to rapid fire growth, but daily weather data are not detailed enough to reflect the volatile nature of the weather. Abrupt weather changes, such as gusts, usually last less than 20 seconds; daily weather data, with a time interval of 24 hours, are not fine-grained enough to inform the model about such sudden changes. In addition, human activity can be a major contributor to fire growth, and simply ignoring its effect can hinder the model's predictions. It is therefore reasonable that the models fail to keep up with rapid changes.

In Figure 5, WildfireNet outputs a reasonable prediction for a moderate fire. The prediction nearly matches the overall shape of the ground truth, and the model predicts the bottommost part of the fire body to enlarge, which is what happened in the real fire. However, in Figure 6, WildfireNet cannot keep up with the large change of a rapid fire. The model predicts the fire to increase only at the bottom, whereas the actual fire increased in every direction.

6 LIMITATIONS
Compared to many other deep learning applications, WildfireNet is trained on a small dataset. The lack of variability in the training data could limit the model's accuracy when new weather conditions, land cover, and fire profiles are introduced. Furthermore, wildfire is highly influenced by coupled climatic and human factors, several of which, such as human-induced fire growth, were not considered; neglecting them could limit the performance of the model. Moreover, a key assumption was made when fire perimeters were preprocessed to create binary maps: spots within the boundary that were not on fire were nevertheless labeled as fire. Due to this modification, the binary maps may not reflect the actual shape of the fire. In addition, fire perimeters were recorded daily, but it is uncertain whether they were obtained at the same time each day. If the interval varies between days, the measured fire growth does not reflect changes over a 24-hour window. Finally, the remote sensing data had different resolutions (wind speed: 1/24 degree, elevation: 1/512 degree, NDVI: 1/200 degree) and contained missing values. Interpolation was therefore used to complete the data, but this could introduce errors when a large amount of data is missing.

7 CONCLUSION AND FUTURE WORK
WildfireNet was built and evaluated to establish a footprint for using a 3D CNN to predict wildfire spread profiles. Unlike current traditional, physics-based, and empirically based fire spreading models, WildfireNet does not require extensive inputs or complex computations; given fire profiles and accessible remote sensing data, it can output an upcoming fire profile within a second. Statistical analysis shows that WildfireNet is capable of learning patterns from historical fire spread along with land cover, topography, and weather.
The model shows promising results, obtaining higher scores in IoU, expanded IoU, and recall than the baseline models. Future work includes obtaining more fire incidents and remote sensing data. Currently, WildfireNet is trained solely on fires that occurred in California; we believe the model can flexibly predict other types of fires once it has been trained on a wider variety of wildfire types and conditions.
1. What is the focus of the paper regarding wildfire prediction? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the novelty and incremental nature of the method? 4. What concerns does the reviewer raise regarding the experimental validation and related work? 5. How does the reviewer evaluate the reproducibility of the paper's content? 6. What suggestions does the reviewer provide for improving the paper's quality?
Review
Review
Strengths: the paper proposes a sensible approach to an interesting problem.

Weaknesses:
- There is no clear motivation for why a fast approach to wildfire prediction is desired (in comparison with Finney et al. 98).
- The experimental validation lacks comparison to any state-of-the-art method.
- Coverage of related work should be much more extensive; at least of prior works focusing on the prediction of wildfire, but also perhaps of other applications of deep learning in the modeling of physical processes, e.g., for weather nowcasting.
- While the proposed approach is a sensible baseline for the problem at hand, it is not novel, or at best it is incremental. U-Net has also been previously employed for modeling other spatiotemporal processes such as weather nowcasting, e.g., [1] (and several others - in fact, [1] refers to this architecture as the "ubiquitous U-Net").
- In terms of reproducibility, there are no apparent plans to publish the code; data preprocessing and dataset splits are insufficiently described.
- The concern raised by the authors about the IoU metric ("...sign of imbalance in the dataset. Therefore, IoU is not the best metric...") does not sound valid. The IoU should be averaged over the relevant classes, which means that the metric is unaffected by imbalanced classes. The background class is certainly generally easy; however, one could simply report the IoU of the positive class, or report per-class IoU for each class. In any case, the extremely high reported numbers, in light of this invalid concern, look suspicious; perhaps they have been reported for the background class instead of the positive class. Furthermore, the qualitative samples (although they are hard to interpret) do not seem to indicate the perfect segmentation ability that these scores imply. Finally, the authors do not mention a validation set for setting the thresholds they employ ("Various thresholds were evaluated and selected threshold yielded the highest metric scores"), implying that the reported results might not be a good estimate of expected performance.

Comments: qualitative results are difficult to interpret: colour and superimposed predictions and ground truths should be used to improve in this respect. AI is an overloaded and imprecise term - I would encourage the authors to name specific methods whenever possible.

[1] Machine Learning for Precipitation Nowcasting from Radar Images, Agrawal et al. NeurIPS 2019
ICLR
Title Repository-Level Prompt Generation for Large Language Models of Code

Abstract With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex (Chen et al., 2021) used in GitHub Copilot, https://copilot.github.com/), techniques for introducing domain-specific knowledge in the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn't require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines.

1 INTRODUCTION

Large Language Models (LLMs) have demonstrated remarkable performance in natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022), text-to-image generation (Ramesh et al., 2022; Rombach et al., 2021), protein-sequencing (Rives et al., 2019) and even as a generalized agent (Reed et al., 2022). As opposed to the pretrain-finetune paradigm, prompting these LLMs has been found to yield good performance even with few examples (Liu et al., 2021a). A prompt is an input to the LM such that the desired task can be expressed as predictions generated from the LM. Besides providing a mechanism to control and evaluate a LM, prompts have been shown to elicit emergent behaviour as well. Examples of this behavior include GPT-3 (Brown et al., 2020) doing better in tasks it has never seen during training, and improved reasoning capabilities with few-shot (Wei et al., 2022) and zero-shot (Kojima et al., 2022) prompts that encourage a chain of thoughts. These factors highlight the importance of designing an effective task-specific prompt (platforms such as PromptBase, https://promptbase.com/, even allow buying and selling of prompts). However, currently we have limited understanding of how to do this (Reynolds & McDonell, 2021). LLMs have also been used for modeling source code with impressive results (Austin et al., 2021; Fried et al., 2022; Xu et al., 2022a). In particular, one of the best-performing LLMs, Codex (Chen et al., 2021), has been deployed as part of GitHub Copilot, a state-of-the-art in-IDE code assistant. Despite the growing popularity of LLMs of code, there is no work that systematically tackles different aspects of prompt generation in relation to source code. One such aspect is that when it comes to code, the relevant context to be put in the prompt can come not just from the current file, but also from outside it, such as imports and parent classes. Also, depending on the scenario, the relevant context can be scattered across multiple locations. Since LLMs have a limited context length available for the prompt, it becomes increasingly crucial for our domain-specific understanding to guide the selection of relevant context. Currently, it is not clear how to integrate this domain knowledge of what constitutes a relevant context into the process of creating prompts.
Addressing this question has potential benefits in other domains such as question answering (Liu et al., 2022) and multi-document summarization (Xiao et al., 2022), where domain-specific structured retrieval of context can be useful. In this work, we address this problem by proposing the Repo-Level Prompt Generator (RLPG), a framework that, while generating the prompt, incorporates both the structure of the repository and the relevant context in all the files of the repository. In RLPG, the choice of where to take context from, and what to take, is specified by a set of prompt proposals. For example, one of the prompt proposals could be to take all the identifiers used in the first import file. These prompt proposals allow prompt engineers to inject their domain expertise into the prompt-design process. Given the increasing use of LLMs as assistive agents for humans, the demand for transparency, and the desire of software engineers to take an active part in tailoring prompts to their requirements (Jiang et al., 2022; Sun et al., 2022), this capability becomes important. As suggested in some previous works in NLP (Shin et al., 2020; Schick & Schütze, 2021), our prompt proposals are discrete. However, rather than fixing one particular prompt proposal for all examples, we instead predict the best prompt proposal conditioned on the example. We do this via a neural network called the Prompt Proposal Classifier (PPC) that, given an example, learns to select a prompt proposal such that the resulting prompt is likely to produce the desired output. Therefore, RLPG allows the introduction of domain expertise and at the same time facilitates automatic example-specific prompt generation via a learned neural network. Note that there are some techniques for automatic prompt generation in NLP (Li & Liang, 2021; Shin et al., 2020; Lester et al., 2021) that require updating some or all of the weights of the LLM. However, the strongest LLMs are not publicly available (e.g. OpenAI provides access only to the generated output from Codex via an API, https://openai.com/blog/openai-codex/, with no access to model weights or data), making these techniques less useful under this scenario. RLPG addresses this limitation by generating prompts assuming only black-box access to the LLM. We focus on the task of single-line code-autocompletion in an IDE, where the objective is to predict the blanked-out portion (or target hole) starting from the position of an imagined cursor to the end of the line. We operate under the line-level maintenance setting (Shrivastava et al., 2020; Hellendoorn & Devanbu, 2017) that reflects the scenario where a user is editing an existing file; this means that there can be code following the line. Figure 1 provides an illustration of our approach. The prompt proposal classifier takes as input the hole position (position of the cursor) in the current file, the repository to which the current file belongs, and a set of repo-level prompt proposals, and predicts a prompt proposal. In our illustrated example, the predicted prompt proposal corresponds to taking the method names and bodies from MaximizingGibbsSampler.java (mg. before the hole position indicates that a method from the imported file is likely to be invoked).
The Prompt Composer uses the context from the predicted prompt proposal and combines it with the default Codex context, i.e., the code prior to the position of the hole in the current file. The resulting prompt consists of the method name InitializeToAssignment (from the prompt proposal context) and the method CurrentAssignments() (from the default Codex context), resulting in a successful prediction (brown box on the top) of the target hole. Our key contributions are as follows:
• We propose a framework called the Repo-Level Prompt Generator (RLPG) that learns to generate prompts conditioned on the example, without requiring access to the weights of the LLM.
• To incorporate domain knowledge in the prompt design process, RLPG uses a set of repository-level prompt proposals. These prompt proposals are designed to incorporate both the structure of the repository and the relevant context from all files in the repository.
• On the task of single-line code-autocompletion, we show that an oracle constructed from our proposed prompt proposals gives up to 36% relative improvement over Codex. This improvement is pleasantly surprising, as Codex has never seen prompts made from these prompt proposals during training. Further, we show that when we use our prompt proposal classifier to predict the best prompt proposal, we can achieve up to 17% relative improvement over Codex.

2 REPO-LEVEL PROMPT GENERATOR (RLPG)

In this section, we provide the details of our framework. We start by describing our prompt proposals, then discuss our prompt proposal classifier, followed by a description of the prompt composer.

2.1 REPO-LEVEL PROMPT PROPOSALS

The core idea of RLPG consists of substituting part of the default context used by Codex with context coming from somewhere else in the repository. The decision of what to take from the repository, and from where, is governed by a set of prompt proposals. These prompt proposals were designed based on manual inspection of our training data and are intended to capture common coding patterns (but more generally they can also encode project- or organization-specific coding practices). A prompt proposal can be thought of as a function that takes as input a target hole's position and the repository that the hole is part of, and returns the prompt proposal context (a string constituted by the context from the prompt proposal). A prompt proposal is specified by a prompt source and a prompt context type. We describe each of these, along with their motivation, below.

Prompt Source: For a target hole position, a prompt source determines where we take the code that will be part of the prompt proposal context from. We propose ten different prompt sources:
1. Current: take code from the current file, excluding the contents of the target hole. The current file is the file that contains the target hole. The code in the current file (e.g., the lines after the hole position) can be very useful in predicting the target hole.
2. Parent Class: take code from the file that contains the parent of the class to which the target hole belongs. The intuition behind this is to account for cases where a method present in the parent class is invoked in the current file (i.e., the child class).
3. Import: take code from the import files used in the current file. The dependencies specified via imports can provide useful cues to predict the target hole.
4. Sibling: take code from the files that are in the same directory as the current file.
Files in the same directory tend to share code variables (e.g., identifiers).
5. Similar Name: take code from files that have a name similar to that of the current file. Similar names are determined by splitting the file name on underscores or camel-case boundaries and then matching the parts; if one or more parts match, two files are considered to have similar names. The intuition behind this is that software developers tend to name files based on the functionality of the code written in them. Therefore, a similar-name file might contain some code in common with the current file and hence might be useful for predicting the target hole.
6. Child Class: take code from files that have the current file as their parent class file.
7. Import of Parent Class: take code from the import files used in the parent class files.
8. Import of Sibling: take code from the import files used in the sibling files.
9. Import of Similar Name: take code from the import files used in the similar-name files.
10. Import of Child Class: take code from the import files used in the child class files.
The last four prompt sources are useful when the target hole occurs at the very beginning of the current file; in these cases, there would be little context coming from the other prompt sources. For each prompt source, we get either a single file or a ranked list of files (see Appendix B.1). In the latter case, we take context from these files until we exhaust the maximum context length allocated to the prompt proposal.

Prompt Context Type: The prompt context type determines what code to take from the prompt source. We propose seven different prompt context types (Appendix B.2 has examples of each type):
1. Post Lines (PL): take all the lines after the target hole line until we reach the end of the file. This context type is applicable only when the prompt source is the current file (see Appendix D.2 for a variant that starts from the fourth line after the hole).
2. Identifiers (I): take all the identifiers used in the prompt source.
3. Type Identifiers (TI): take all the type identifiers used in the prompt source.
4. Field Declarations (FD): take all the field declarations used in the prompt source.
5. String Literals (SL): take all the string literals used in the prompt source.
6. Method Names (MN): take all the method names, along with their signatures, used in the prompt source.
7. Method Names and Bodies (MNB): take all the method names, along with their signatures and corresponding bodies, used in the prompt source.
By combining prompt sources with prompt context types, we get a total of 63 prompt proposals (see Appendix B.4 for details, and the sketch below for a simplified enumeration). Note that depending on the target hole, not all prompt proposals will be applicable (e.g., if the current file has no parent class, prompt proposals with the parent class file as prompt source are not applicable). In Figure 1, the predicted prompt proposal corresponds to taking the prompt source Import and the prompt context type MNB. We aimed for a set of prompt proposals that offers diversity, rather than one in which every proposal is individually strong; this in turn ensures that for any hole position, a significant number of prompt proposals are applicable.
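As a rough sketch of how this proposal space can be enumerated, the snippet below crosses the ten prompt sources with the seven context types, restricting Post Lines to the Current source. Note that this simplification yields 61 pairs; the paper's count of 63 additionally depends on applicability rules and context-allocation variants detailed in Appendix B.4, so the names and filtering here are our own assumption.

```python
from itertools import product

PROMPT_SOURCES = [
    "current", "parent_class", "import", "sibling", "similar_name",
    "child_class", "import_of_parent_class", "import_of_sibling",
    "import_of_similar_name", "import_of_child_class",
]
# PL = post lines, I = identifiers, TI = type identifiers,
# FD = field declarations, SL = string literals,
# MN = method names, MNB = method names and bodies
CONTEXT_TYPES = ["PL", "I", "TI", "FD", "SL", "MN", "MNB"]

def enumerate_proposals():
    """Cross sources with context types; post lines pair only with
    the current file (a simplified version of the paper's rules)."""
    for source, ctx in product(PROMPT_SOURCES, CONTEXT_TYPES):
        if ctx == "PL" and source != "current":
            continue
        yield (source, ctx)

proposals = list(enumerate_proposals())
print(len(proposals))  # 61 under this simplification; the paper lists 63
```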
2.2 PROMPT PROPOSAL CLASSIFIER (PPC)

Given a hole position, the goal of the prompt proposal classifier is to predict the prompt proposal p that will lead to success, where success happens when the predicted hole ĥ exactly matches the target hole h. This task is formulated as a multi-label binary classification problem, since for a given target hole more than one prompt proposal can lead to success. In this formulation, we treat the default Codex context as one of the prompt proposals. Next, we describe the training procedure for PPC.

Training: For each target hole h, we generate a ground-truth vector Y^h = [y^h_p], a multi-hot vector of size M, where M is the total number of prompt proposals. This vector is obtained by feeding the prompt generated from prompt proposal p into Codex and checking whether ĥ = h. If there is a match, we say that prompt proposal p is successful. For hole h, y^h_p = 1 if prompt proposal p is applicable and leads to success, and zero otherwise. For each hole h, we also obtain a mask T^h, where T^h_p = 1 if p is applicable and zero otherwise. The overall training loss L can be expressed as the sum of the individual hole losses L_h:

L = \frac{1}{N} \sum_{h=1}^{N} L_h = \frac{1}{N} \sum_{h=1}^{N} \frac{1}{M_h} \sum_{p=1}^{M_h} \mathrm{BCE}(\hat{y}^h_p, y^h_p) \cdot T^h_p, \quad \text{where } M_h = \sum_{p} T^h_p \qquad (1)

In the equation above, N is the total number of holes encountered during training, M_h denotes the number of applicable prompt proposals for h, and BCE is the binary cross-entropy loss. Masking ensures that we consider only the prompt proposals that are applicable.
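A minimal PyTorch sketch of this masked multi-label objective is shown below; the tensor names and shapes are our assumptions for illustration, not the paper's code (we use the logits form of BCE for numerical stability).

```python
import torch
import torch.nn.functional as F

def ppc_loss(logits: torch.Tensor, targets: torch.Tensor,
             mask: torch.Tensor) -> torch.Tensor:
    """Masked multi-label BCE, as in Equation (1).

    logits:  (N, M) raw scores for M prompt proposals per hole
    targets: (N, M) multi-hot ground truth y^h_p (1 if proposal succeeds)
    mask:    (N, M) applicability mask T^h_p (1 if proposal applies)
    """
    per_proposal = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")         # BCE(ŷ^h_p, y^h_p)
    per_hole = (per_proposal * mask).sum(dim=1)    # sum over applicable p
    m_h = mask.sum(dim=1).clamp(min=1)             # M_h, guard against 0
    return (per_hole / m_h).mean()                 # average over N holes
```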
Next, we describe our two variants of PPC that can be used to obtain the prediction ŷ^h_p.

RLPG-H: Let H^h be the hole window, which includes the code present around the hole h, excluding the hole itself. In our work, we take the two lines before the hole position, the code up to the hole position, and the two lines after the hole position. We use a pretrained model F_ϕ to obtain a context representation vector of size Z, where Z is the dimension of the hidden state of the model. Specifically, we take the hidden state at the first position, i.e., the representation of the [CLS] token. To make training of PPC computationally efficient, the parameters ϕ are frozen during training. The RLPG-H model takes the context representation of the hole window and projects it to the prompt proposal space of size M via two dense layers with a non-linearity in between (Equation 2). Taking the sigmoid of this output gives the predicted probability for each prompt proposal:

\hat{y}^h_p = P(y^h_p = 1 \mid H^h) = \mathrm{sigmoid}\big(W^2\,\mathrm{relu}(W^1 F_\phi(H^h) + b^1) + b^2\big) \qquad (2)

RLPG-R: The motivation behind this variant is to use the similarity between the hole window and the prompt proposal context to determine which prompt proposal can be useful. Given a particular hole h, let C^h_p denote the prompt proposal context from prompt proposal p. Intuitively, if the hole window contains variables (e.g., identifiers) that are similar to the variables in the prompt proposal context, then there is a chance that h occurs somewhere in C^h_p. The similarity is modeled using a multi-headed attention mechanism (Vaswani et al., 2017), by treating the projected hole window representation as a query Q^h and the projected prompt proposal context representation K^h_p as a key (Equation 3). The value V^h_p is the same as the key.

Q^h = F_\phi(H^h), \quad K^h_p = F_\phi(C^h_p), \quad V^h_p = F_\phi(C^h_p) \qquad (3)

\mathrm{Att}(Q^h, K^h_p, V^h_p) = V^h_p\, \mathrm{softmax}\!\left(\frac{Q^{h\top} K^h_p}{\sqrt{d_k}}\right) \qquad (4)

\mathrm{MultiHead}(Q^h, K^h_p, V^h_p) = W^O \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_\tau), \quad \text{where } \mathrm{head}_i = \mathrm{Att}(W^Q_i Q^h, W^K_i K^h_p, W^V_i V^h_p) \qquad (5)

\hat{y}^h_p = P(y^h_p = 1 \mid H^h, C^h_p) = \mathrm{sigmoid}\big(W_p\, G(\mathrm{MultiHead}(Q^h, K^h_p, V^h_p)) + b_p\big) \qquad (6)

In the equations above, d_k is the dimension of the key, W^Q_i, W^K_i, and W^V_i are the query, key, and value projection matrices, τ is the number of heads, and W^O is the linear projection that combines the heads. The output from Equation 5 is fed to a module G consisting of a two-layer feedforward network with a relu activation in between (see Appendix C for more details). The resulting output is then linearly projected, and a sigmoid is applied to obtain the predicted probability for the prompt proposal (Equation 6).
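Below is a compact PyTorch sketch of the RLPG-R scoring head described by Equations 3-6, using the library's built-in multi-head attention as a stand-in for the per-head projections. Dimensions follow Appendix C.2 (768-dimensional CodeBERT representations, 4 heads), but the variable names and exact wiring of G are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class RLPGR(nn.Module):
    """Score one (hole window, proposal context) pair, Eqs. 3-6 (sketch)."""
    def __init__(self, d_model: int = 768, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.g = nn.Sequential(                      # module G: 2-layer FFN
            nn.Linear(d_model, 2048), nn.ReLU(), nn.Linear(2048, d_model))
        self.norm = nn.LayerNorm(d_model)
        self.out = nn.Linear(d_model, 1)             # W_p, b_p

    def forward(self, hole_repr: torch.Tensor, ctx_repr: torch.Tensor):
        # hole_repr, ctx_repr: (batch, d_model) frozen [CLS] embeddings
        q = hole_repr.unsqueeze(1)                   # query  Q^h
        kv = ctx_repr.unsqueeze(1)                   # key/value K^h_p, V^h_p
        attended, _ = self.attn(q, kv, kv)           # MultiHead(Q, K, V)
        h = self.norm(attended + self.g(attended))   # G with residual + norm
        return torch.sigmoid(self.out(h)).squeeze(-1).squeeze(-1)  # ŷ^h_p
```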
2.3 PROMPT COMPOSER

The prompt composer combines the context from the selected prompt proposal (given by PPC) with the context normally used by Codex (the default Codex context) to generate the prompt. Since the total length that can be used for a prompt is fixed, we adopted a dynamic context allocation strategy: if the prompt proposal context is shorter than its allocated length, we assign the remaining portion of its allocation to the default Codex context. The prompt proposal context is always added before the default Codex context. For all prompt proposals, we assign half of the total context length to the prompt proposal context and the remainder to the default Codex context. For post lines, we additionally consider assigning one-fourth and three-fourths of the total context length to the prompt proposal context. If the prompt proposal context or the default Codex context is longer than the context length allocated to it, we truncate it (see Appendix B.3 for our truncation strategies).

3 EXPERIMENTS AND RESULTS

In this section, we describe our process of dataset creation and the details of our experiments, along with their results and ablation studies.

3.1 DATASET CREATION

To mitigate the effects caused by potential memorization of the code present in the dataset used for training Codex, we avoided code repositories from GitHub (Chen et al., 2021). Instead, we scraped Google Code (https://code.google.com/archive/) for repositories in Java, removing the ones that matched a repository on GitHub with the same name. We selected the repositories that had a permissive license, giving us a total of 47 repositories. We divided the repositories into train, validation, and test splits, where each repository in its entirety is part of one split. In each file within a repository, we remove blank lines and comments, and set the hole position to be the middle character of each remaining line; all the characters from the middle position to the end of the line constitute the target hole. Since code duplication has been shown to have adverse effects (Allamanis, 2018), within a repository we look for files that are exact replicas of each other but placed in different folders. We mark all such copies as duplicates and omit all of them when creating target holes for our dataset. Note that the prompt proposal context can still come from the duplicate files. We felt comfortable with this choice since, while we would not want to predict a target hole in a duplicate file, we can still use the context from a duplicate file to predict a hole in a file that is not its duplicate (e.g., in a sibling file). Further, we found that the repositories were quite uneven in size. To avoid large repositories dominating the training of PPC, we capped the maximum contribution of holes from a repository at 10,000, i.e., if the total number of holes in a repository exceeded 10,000, we selected 10,000 holes at random. Please see the left part of Figure 2 for statistics of our dataset; #Holes represents the holes after deduplication and capping. For some of our prompt proposals, we require semantic information that can be obtained from a parse tree. We used the tree-sitter API for Java (https://github.com/tree-sitter/tree-sitter-java), which enables us to obtain the AST of a file and query it. Since our prompt proposals need information at the repository level, we stored extra information that allowed us to collate the information from individual files according to the directory structure of the repository (see Appendix A for more details).

3.2 EXPERIMENTAL DETAILS

Prompt Generation: We used the OpenAI Codex Completions API for generating the predicted hole from the Codex model. In particular, we used the code-davinci-001 engine with temperature set to 1.0 and newline as the stop criterion. The completion length was 24 tokens and the maximum prompt length was 4072 tokens. Tokenization was done using the suggested tokenizer (https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast). To allow for fast computation, we used lightweight models such as CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2020) as our pretrained models. One limitation of these pretrained models is that their maximum input length is much smaller than the maximum context length allowed by Codex. Therefore, when obtaining the representation of the prompt proposal context used by PPC, we need to truncate the prompt proposal context, which may omit important parts of it in certain cases. Using pretrained models that allow larger context lengths, or models that augment the context (Wu et al., 2022), offers an avenue for future work. See Appendix D.4 for results with a smaller Codex context length.

Computational Complexity and Scalability of RLPG: To collect the ground-truth data for training our prompt proposal classifier, we queried the Codex API for each applicable prompt proposal per hole (maximum rate limit of 400 holes per minute). The computational cost of training our larger RLPG-R variant (3.6M parameters, 141,269 holes, and 9.19 minutes per epoch on a single Tesla V100 GPU) is much smaller than finetuning all or part of Codex (12B parameters). During inference, we need to calculate the repository-level statistics just once; all subsequent hole completions in the repository can use this cached information, incurring no additional cost. Apart from training the PPC, all our experiments were performed on a CPU with 8GB RAM. Our prompt proposals are based on concepts such as post lines, imports, similar-name files, method names, and identifiers that are quite general and applicable to other programming languages. In addition to the existing prompt proposals, our framework provides the flexibility to incorporate new ones. Since the cost of retraining RLPG with extended prompt proposals is extremely low (much lower than finetuning Codex on them), our framework can be used to make interventions on the LLM to address observed weaknesses, as long as the intervention can be expressed as a prompt proposal that adds the missing context to the LLM.
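To make this extension point concrete, here is a minimal sketch of the interface a new prompt proposal could satisfy, together with the dynamic context allocation described in Section 2.3. The names, the token-list representation, and the budget handling are our own simplifications, not the paper's implementation.

```python
from typing import Callable

# A prompt proposal maps (hole position, repository) to a context string,
# mirroring the definition in Section 2.1. (Hypothetical interface.)
PromptProposal = Callable[[int, "Repo"], str]

def compose_prompt(proposal_ctx: list[str], default_ctx: list[str],
                   total_budget: int, proposal_share: float = 0.5) -> list[str]:
    """Dynamic allocation: unused proposal budget flows to the default
    Codex context; the proposal context always comes first."""
    proposal_budget = int(total_budget * proposal_share)
    proposal_part = proposal_ctx[-proposal_budget:] if proposal_budget else []
    leftover = proposal_budget - len(proposal_part)   # unused allocation
    default_budget = total_budget - proposal_budget + leftover
    default_part = default_ctx[-default_budget:]      # keep code nearest hole
    return proposal_part + default_part
```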
As opposed to techniques that perform prompt engineering in the latent space and require access to the weights of the LLM, such as Li & Liang (2021), RLPG facilitates expressing intent in the form of prompt proposals that are intuitive for humans, easy to understand, and do not require access to the weights of the LLM.

Methods: We experimented with the following methods for generating the prompt:
1. Codex: using the default context from Codex as the entire prompt.
2. Oracle: using the ground-truth vector Y^h (Section 2.2). The prompt generated corresponds to using any of the successful prompt proposals (i.e., y^h_p = 1). Since this information is not available at inference time, the oracle performance represents an upper bound.
3. Fixed Prompt Proposal: using the most successful prompt proposal for all target holes. This was chosen based on performance on the validation set and corresponds to taking 75% of the total context length from post lines in the current file.
4. RLPG-H and RLPG-R: using the prompt proposal predicted by the RLPG-H and RLPG-R variants of PPC. The selected prompt proposal corresponds to the argmax of the predicted probabilities over prompt proposals.
5. RLPG-BM25: instead of using PPC to rank prompt proposals, using the scores obtained by BM25 (Jones et al., 2000) to select the best prompt proposal. The scores are calculated with the hole window as the query and the prompt proposal contexts as the search documents. This serves as a non-learned retrieval method that makes use of our prompt proposals (a small sketch of this ranking is given at the end of this subsection).
6. File-level BM25: same as above, except that instead of our prompt proposal contexts, the search documents consist of the full context of other files in the repository.
7. Random: for each target hole, select a context randomly from anywhere in the repository.
8. Random NN: same as Random, except that among the randomly chosen contexts, we take the nearest neighbours of the hole window in the representation space of a pretrained model. This is analogous to the technique used in Liu et al. (2022).
9. Identifier Usage: for each target hole, we take the closest identifier and collect usage windows of that identifier from everywhere in the repository; a usage window consists of the usage line plus the two lines above and below it. We rank the usage windows either randomly (random) or by nearest-neighbour distance to the hole window in the representation space (NN).
The last four methods help us understand the performance when a context other than the prompt proposal context is used. To generate a prompt using these methods, we take 50% of the context from them, followed by the default Codex context, which takes up the remaining context length. For the NN baselines, we use CodeBERT (Feng et al., 2020) as the pretrained model; the contexts are taken in increasing order of nearest-neighbour distance until we exhaust the allocated context length. RLPG-BM25 helps us understand the role of PPC. See Appendix C.3 for more details on the implementation of these methods.

Evaluation Metric: As mentioned in Section 2.2, to measure success we use exact match between the predicted hole string generated by Codex and the target hole string. In our experiments, we report the percentage of successful holes out of the total number of holes for each split; we call this the success rate (SR) going forward.
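For intuition on the RLPG-BM25 baseline (method 5 above), the snippet below ranks prompt proposal contexts against a hole window with the rank_bm25 package (BM25Okapi with default parameters, as noted in Appendix C.3). The whitespace tokenization and variable names are our simplification.

```python
from rank_bm25 import BM25Okapi

def best_proposal_bm25(hole_window: str, proposal_contexts: list[str]) -> int:
    """Return the index of the proposal context that BM25 scores highest
    for the hole window (a non-learned stand-in for the PPC)."""
    corpus = [ctx.split() for ctx in proposal_contexts]  # naive tokenization
    bm25 = BM25Okapi(corpus)
    scores = bm25.get_scores(hole_window.split())
    return max(range(len(scores)), key=lambda i: scores[i])
```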
3.3 RESULTS

In this section, we present the results for the two research questions explored in this paper:
[RQ1] Is it useful to generate a prompt composed of code context different from the default Codex context? If yes, what context can be useful?
[RQ2] For each target hole, is there a way of automatically selecting the prompt? If yes, how does this system perform relative to Codex?

RQ1 - Performance of Prompt Proposals: We found that combining the prompt proposal context (context from other files in the repository) with the default Codex context led to a substantial improvement in performance. The right part of Figure 2 shows the performance of an oracle constructed from our prompt proposals. Across all data splits, the prompt proposals contribute significantly large improvements over Codex (up to 36% for the test split). These results might seem surprising, as Codex has not been trained on prompts that consist of context other than the default Codex context. What makes this result more surprising is that in most cases the prompt consists of mashed-up context without logical ordering that may not even look like a semantically meaningful chunk of code (e.g., a list of string literals from a sibling file followed by the default Codex context, or post lines placed before the default Codex context as opposed to after). This suggests that as long as the relevant context (in our case, repo-level knowledge in the form of prompt proposals) is present in any form in the prompt, it can be quite effective.

RQ2 - Performance of PPC: Having seen promise in our prompt proposals, we next present the results of RLPG, which for each target hole predicts a single best prompt proposal. Table 1 presents the success rates along with the percentage of relative improvement for the test data. The second and third columns correspond to averages across all holes in the test data; the last two columns correspond to the average success rate over individual repositories, a metric that does not account for the size of a repository. As can be seen from the table, all the RLPG variants as well as the fixed prompt proposal improve performance significantly over Codex. The random baselines are either worse than or on par with Codex. Identifier usage is a good baseline but still performs worse than either the fixed prompt proposal or RLPG. The improved performance of RLPG-BM25 compared with the fixed prompt proposal shows the value of generating example-specific prompts with RLPG; however, both learned variants of RLPG, i.e., RLPG-H and RLPG-R, outperform RLPG-BM25, highlighting the importance of learning the PPC. See Appendix D.5 for the performance of all methods on individual repositories. Note that even though we consider identifier usage a separate baseline, one could treat it as one of the prompt proposals, leading to further improved performance of RLPG. Despite our efforts to avoid overlap, since the training data for Codex is not exactly known, there is a slight possibility that part of our Google Code data was part of the training data for Codex. Even if there were an overlap, we point out that since Codex has seen the default Codex context during training, it would be more beneficial to use the default Codex context in the prompt than the context from the prompt proposals or any other context from the other baselines.
Therefore, even under this scenario, our evaluation would if anything be biased in favour of the Codex baseline over the other methods we have used.

Variation with #attempts: Imagine a scenario with a human in the loop who is given k attempts to prompt the LLM and can then choose one of the k hole predictions. We wanted to see how the performance of our framework varies with the number of attempts in this setting. This corresponds to using k prompts generated from the top-k prompt proposals (one prompt per proposal) and marking success if any of the k prompts leads to success. The left part of Figure 3 shows the variation of SR over the validation data with the value of k. For RLPG, the top-k prompt proposals were chosen in decreasing order of the probabilities given by PPC. For the fixed prompt proposal, the top-k prompt proposals were chosen in decreasing order of the success rates of the individual prompt proposals on the validation dataset. From the figure, we notice that as we increase the value of k, the performance increases gradually at first and then saturates towards the oracle performance (79.05% for the validation data). This behaviour is observed for both the fixed prompt proposal and RLPG. However, for the same value of k, the success rate of RLPG is higher, indicating that PPC learns a useful ranking of the prompt proposal contexts that scales well with the number of attempts.

Performance based on Prompt Proposals: The right part of Figure 3 shows the mean success rate of prompt sources, counting success only when the corresponding prompt source is applicable. From the figure, we see that the current file is the most important prompt source, followed closely by sibling files and similar-name files. All prompt sources have non-zero chances of success, highlighting the usefulness of each. See Appendix D.1 for a similar breakdown by prompt context type and Appendix E for an analysis of successful and failed sample cases.

4 RELATED WORK

LLMs for Code: Recently, there has been a lot of work on large language models of code. One class consists of decoder-only models that generate code from left to right: Codex (Chen et al., 2021), Google's model (Austin et al., 2021), GPT-J-6B (Wang & Komatsuzaki, 2021), GPT-Neo (Black et al., 2021b), GPT-NeoX (Black et al., 2021a), CodeParrot (Tunstall et al., 2022), PolyCoder (Xu et al., 2022a), and InCoder (Fried et al., 2022) are some examples. There are also encoder-only models that use a masked language modelling objective, such as CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), and CuBERT (Kanade et al., 2020). Lastly, there are encoder-decoder models that generally use a bidirectional encoding of a context to decode a series of masked tokens; CodeT5 (Wang et al., 2021) and AlphaCode (Li et al., 2022) are examples of such models.

Repo-Level Info: Fewer works use information from outside the current file. Hellendoorn & Devanbu (2017) propose a nested n-gram model that utilizes a locality-based cache, where the locality consists of all directories from the root of the project (inclusive of the current file). Zhang et al. (2021) use the parent class to generate the comments for the child class. Pashakhanloo et al. (2022b;a) capture the structure and semantics of the repository by converting it into a relational database and propose a graph-walk-based mechanism for pruning unrelated context.
Lyu et al. (2021) incorporate the API-dependency graph in an LSTM-based Seq2Seq model to assist code generation. Xu et al. (2022b) incorporate three types of structural locality features while training a kNN-LM (Khandelwal et al., 2020). These features are binary variables that correspond to the presence or absence of a similar hierarchy; the three levels of hierarchy are (a) sibling file, (b) file in the same repo, and (c) no hierarchy. In contrast, we have a much richer set of prompt proposals incorporating the semantics and structure of the repository. Also, we assume black-box access to the actual LM and restrict ourselves to generating a prompt for the LLM without performing any finetuning of the LLM.

Prompt Generation: There have been promising works on prompt generation techniques in NLP. Broadly, there are two categories of automatic prompt generation. The first category produces continuous/soft prompts, where the prompt is described in the latent space of a language model (Li & Liang, 2021; Qin & Eisner, 2021; Bragg et al., 2021; Lester et al., 2021; Liu et al., 2021b). For example, Prefix-Tuning (Li & Liang, 2021) adds a prefix to the LM that can be learned by finetuning on examples from the downstream task. The second category produces discrete prompts, where the prompt is a text string that can be interpreted by a human (Shin et al., 2020; Gao et al., 2021; Schick & Schütze, 2021). For example, AutoPrompt (Shin et al., 2020) generates prompts using a fixed template consisting of trigger tokens; the trigger tokens are shared across all inputs and determined by a gradient-guided search involving the LM. Our work falls in the category of discrete prompt generation, as we produce a prompt consisting of code tokens that can be easily interpreted by a human. However, in contrast to prior works that use a set of fixed templates for all examples, we learn to produce prompts conditioned on each example. Another important distinction is that we do not require access to the weights of the LM. Work concurrent with ours (Wang et al., 2022) studies the role of prompt-tuning compared with fine-tuning for code translation, defect localization, and code summarization; however, their technique requires access to the weights of the LLM, and they experiment with models that are much smaller in scale than Codex. To the best of our knowledge, our work is the first to explore automatic prompt generation in a black-box access setting in the domain of source code.

5 CONCLUSIONS AND FUTURE DIRECTIONS

We present RLPG, a framework that learns to automatically generate prompts conditioned on the example, without requiring access to the weights of the LLM. RLPG utilizes the structure of the repository as well as the context from other files in the repository using a set of easy-to-understand prompt proposals. Note that even though we have scoped and worded our prompt proposals to be repository-level, the idea of RLPG and prompt proposals is quite general and need not be scoped to a repository. Taking context from other repositories, as well as external knowledge such as API dependencies, offers an interesting direction to explore in the future. In this work, we take context from only one prompt proposal at a time; for future work, we want to learn a model that can automatically compose a prompt from multiple prompt proposals (see Appendix D.3 for promising initial results).
Other interesting directions include incorporating the user's feedback in RLPG and extending RLPG to multi-line code autocompletion.

A DATASET CREATION DETAILS

A.1 CREATION OF HOLE COMPLETION DATA

To collect the hole completion data, we scraped Google Code for repositories tagged with the language "Java". We then deduplicated repositories by searching for a matching repository with the same name on GitHub. For the repositories with zero matching names on GitHub, we downloaded the archive and extracted the source code (preserving the directory structure). Next, we tried to determine the licenses of all repositories by either looking for a LICENSE file or matching keywords such as "license", "copyright", and "mit". For the repositories for which our process identified a known license, we selected the ones with a permissive license, i.e., MIT, ApacheV2, or BSD. This was followed by removing files that are exact duplicates of each other within a repository. One reason for this intra-repository duplication may be that developers sometimes adopt the lousy practice of copy-pasting a desired file into the current folder instead of declaring a package and importing functions. Target holes coming from any of the duplicate files do not form part of the hole completion dataset; however, these files may still contribute prompt proposal context for completing a target hole in a non-duplicate file. For the remaining files, we took each line that is not a blank line or a comment and chose the middle character as the hole position, i.e., all the characters from the middle of the line to the end of the line form the target hole. To avoid large repositories strongly biasing our prompt proposal classifier, we capped the contribution from each repository at a maximum of 10,000 holes; if the number of holes in a repository exceeds 10,000, we randomly select 10,000 of them.

A.2 CREATION OF DATA FOR REPO-LEVEL PROMPT PROPOSALS

We used the tree-sitter API for Java to get the parse tree of each individual file in a repository. To get information at the repository level, for each file in the repository we stored the following information:
1. the list of all class names in the file. This helps us find the parent or child class file corresponding to a given parent or child class.
2. the file corresponding to each import statement.
3. for each import statement in the file, the position in the file where the import is used. This is used for ranking files based on the heuristics mentioned in Table 2.
4. the list of sibling files.
5. the list of similar-name files. This is done by splitting the filenames on camel-case boundaries or underscores; if the sub-parts of two files match, they are said to have similar names (a sketch of this matching is given at the end of this subsection).
The above metadata is calculated only once per repository; subsequent hole completions can use the same cached information. In practice, we can use a hash to store and retrieve this information efficiently. For a prompt proposal, given the prompt source, we first obtain a single file or a ranked list of files (see Table 2) using the information in the parse tree in conjunction with the above repo-level metadata. All the prompt proposal context type information (MN, MNB, SL, I, TI, FD) can then be obtained by querying the parse tree of the selected file.
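As an illustration of the similar-name heuristic above, the following sketch splits filenames on camel-case boundaries and underscores and checks for shared parts; the regex and function names are our own choices, not the paper's code.

```python
import re

def name_parts(filename: str) -> set[str]:
    """Split a file name into lowercase parts on underscores and
    camel-case boundaries, e.g. 'MaximizingGibbsSampler.java' ->
    {'maximizing', 'gibbs', 'sampler'}."""
    stem = filename.rsplit(".", 1)[0]
    parts = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", stem)
    return {p.lower() for p in parts}

def similar_names(a: str, b: str) -> bool:
    """Two files have similar names if any name part matches."""
    return bool(name_parts(a) & name_parts(b))

print(similar_names("GibbsSampler.java", "MaximizingGibbsSampler.java"))  # True
```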
B PROMPT PROPOSAL DETAILS

B.1 RANKING OF FILES BASED ON PROMPT SOURCE

In Table 2, we provide details of how we select files for a given prompt source. Depending on the prompt proposal, we get either a single file or a list of files ranked based on some criterion. For example, if the prompt source is Import, we take all the import statements used in the current file and identify the locations in the current file where the corresponding imports are used. According to our heuristic, the closer the import usage is to the hole position, the more relevant the prompt proposal context from the corresponding import file is likely to be for predicting the target hole. We thus get a ranked list of import files, sorted in increasing order of distance (i.e., number of lines) between the import usage and the hole position. We start by taking all of the prompt proposal context from the first file in the ranked list and keep iterating through the list until either the total context length allocated to the prompt proposal is exhausted or we reach the end of the list.

B.2 EXAMPLES OF PROMPT CONTEXT TYPE

We provide examples of each of our prompt context types below:
1. Post Lines (PL): For the example shown in Figure 1 of the main paper, post lines will take all the lines after the line mg.InitializeToAssignment(CurrentAssignments()) until we reach the end of the file (AffinityPropagation.java).
2. Identifiers (I): Identifiers are the names of variables used in the code. For example, for the prompt proposal context taken from the imported file shown in Figure 1 of the main paper (highlighted in violet), the identifiers are InitializeToAssignment (line 1), a (line 1), currentAssignment_ (line 2), a (line 2), clone (line 2), alreadyInitialized_ (line 3), and justOneRound_ (line 4).
3. Type Identifiers (TI): Type identifiers define the type of an identifier. For example, in the code snippet class DPAffinityPropagation extends AffinityPropagation, AffinityPropagation is labeled as a type identifier. Similarly, in the snippet DPAPParameters parameters_;, DPAPParameters is a type identifier.
4. Field Declarations (FD): The variables of a class type are introduced by field declarations. For example, double[][] mHijMujT_; and MessageValuePair[][] sortedMHijMujTs_; are examples of field declarations.
5. String Literals (SL): A string literal is a sequence of characters enclosed in double quotes. For example, in the code snippet System.err.println("DPAP load Warning: unknown parameter " + entries[0] + ", value = " + entries[1]);, we have two string literals: (a) "DPAP load Warning: unknown parameter " and (b) ", value = ".
6. Method Names (MN): For the example shown in Figure 1 of the main paper, public void InitializeToAssignment(int[] a) is the method name prompt context type.
7. Method Names and Bodies (MNB): For the example shown in Figure 1 of the main paper, the part highlighted in violet represents the method names and bodies.

B.3 TRUNCATION STRATEGIES FOR PROMPT PROPOSAL CONTEXT

If the prompt proposal context is longer than the context length allocated to it, we need to truncate it. We follow the two schemes below:
• front: we truncate the context from the front. This is used for all prompt sources except Parent Class, and when we take PL from Current.
• back: we truncate the context from the back. This is used when the prompt source is Parent Class, and when we take prompt context types other than PL from Current.
The truncation strategy for each case was selected based on results on a small validation set; a small sketch of both schemes follows below.
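A small sketch of these two truncation schemes over token sequences, with a hypothetical budget parameter:

```python
def truncate(tokens: list[str], budget: int, scheme: str = "front") -> list[str]:
    """Fit a context into `budget` tokens.

    'front' drops the oldest tokens (keeps the end of the context);
    'back' drops the last tokens (keeps the beginning), as used for
    the Parent Class source and non-PL context from the current file.
    """
    if len(tokens) <= budget:
        return tokens
    return tokens[-budget:] if scheme == "front" else tokens[:budget]
```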
For the prompt source Current, except when the prompt context type is PL, we always start by taking code of the given context type from after the hole position. This makes sense, as the default Codex context already contains the code before the hole. Only if this turns out to be blank do we use code of that context type from before the hole.

B.4 LIST OF PROMPT PROPOSALS

B.5 OTHER PROMPT PROPOSAL VARIATIONS

We experimented with other variations, including: (a) appending class names at the beginning of the prompt proposal context, (b) using a newline or a space to join the prompt proposal context and the default Codex context, (c) taking all or the top-k of the prompt context types, and (d) the ordering of the top-k.
• Context Separator: This defines how we join the prompt proposal context string to the default Codex context string. We experimented with space and newline as context separators.
• Prompt Proposal Context Formatting: We can format the prompt proposal context before giving it to the Prompt Composer. We experimented with the following options:
1. class_name: append [class name of the file] at the beginning of the prompt proposal context taken from each file that is part of the prompt source. For example, if we are taking prompt proposal context from two import files f1 and f2, the prompt proposal context is formatted as: [class name of f1] prompt proposal context from f1 + space + [class name of f2] prompt proposal context from f2. We use this when the prompt proposal context types are MN, I, TI, FD, and SL.
2. class_method_name: we apply this only when the prompt proposal context type is MNB. We append method names at the beginning of each of the corresponding method bodies. We also prepend the prompt proposal context from a file with the name of the class, as described in the previous item.
3. comment: adding the prompt proposal context as a comment, i.e., formatting it as /** prompt proposal context */. This was not found to be very useful.
4. none: passing the prompt proposal context as is. We use this when the prompt proposal context type is PL.
• Top-k Type: For each of the prompt proposal context types except PL, we experimented with taking (a) the first, (b) the last, and (c) all of the prompt context items, e.g., the first 10 identifiers. We found 'all' to be the best option.
• Top-k: We experimented with k values of (a) 10, (b) 20, and (c) all. We found 'all' to work best for all prompt context types.

C IMPLEMENTATION DETAILS

C.1 RLPG-H

We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model F_ϕ to obtain the representation of the hole window. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768, with W^1 ∈ R^{512×768}, b^1 ∈ R^{512}, W^2 ∈ R^{63×512}, and b^2 ∈ R^{63}.
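Using the hyperparameters just listed, a direct PyTorch rendering of the RLPG-H head (Equation 2) could look as follows; this is a sketch under the stated dimensions, not the authors' released code.

```python
import torch
import torch.nn as nn

class RLPGH(nn.Module):
    """Two dense layers over the frozen [CLS] embedding, Eq. (2)."""
    def __init__(self, d_in: int = 768, d_hidden: int = 512,
                 n_proposals: int = 63):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_hidden)          # W^1, b^1
        self.w2 = nn.Linear(d_hidden, n_proposals)   # W^2, b^2

    def forward(self, cls_repr: torch.Tensor) -> torch.Tensor:
        # cls_repr: (batch, 768) CodeBERT [CLS] representation of the
        # hole window; returns per-proposal probabilities ŷ^h_p.
        return torch.sigmoid(self.w2(torch.relu(self.w1(cls_repr))))
```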
C.2 RLPG-R

We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model F_ϕ to obtain the representations of the hole window and the prompt proposal context. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. In Equations 3-5 of Section 2.2, the projection matrices are W^Q_i ∈ R^{d_q×d_model}, W^K_i ∈ R^{d_k×d_model}, W^V_i ∈ R^{d_v×d_model}, and W^O ∈ R^{d_model×τd_v}. For the multi-head attention, we used d_k = d_q = d_v = 32, τ = 4, and d_model = 768, with W_p ∈ R^{63×768} and b_p ∈ R^{63}. For each head, we perform scaled dot-product attention (Equation 4). The module G consists of a dropout layer (Srivastava et al., 2014), a residual connection (He et al., 2016), and a layernorm (Ba et al., 2016), followed by a sequence of (a) a dense layer with weights 2048×768 and bias 768, (b) a relu activation, (c) a dense layer with weights 768×2048 and bias 2048, (d) a dropout layer, (e) a residual connection, and (f) a layernorm. A dropout value of 0.25 was used during training. Our model resembles one layer of the transformer encoder block (Vaswani et al., 2017).

C.3 BASELINES

The Random baseline first selects a file at random from the current repository and then selects a random line within that file. We choose all the lines from that line to the end of the chosen file as context (excluding the hole window if the chosen file is the current file). The nearest-neighbour similarity is based on the dot product between the representation of the hole window and the representation of the context, where we use a pretrained CodeBERT (Feng et al., 2020) model to obtain the representations. For the Identifier Usage baseline, if the nearest identifier to the hole does not return any usage window, we proceed to the next nearest identifier. For faster computation, and to avoid memory issues on our hardware, for the NN baselines we collect 64 random neighbours and then rank them by nearest-neighbour distance. The BM25-based baselines use the Okapi BM25 implementation with default parameters from the pip package rank-bm25 0.2.2 (https://pypi.org/project/rank-bm25/). For file-level BM25, if the file context exceeds the allocated context length, we truncate from the back.

D ADDITIONAL RESULTS

D.1 ABLATION ON PERFORMANCE BASED ON PROMPT PROPOSAL

Figure 4 shows the mean success rate of prompt context types, counting success only for the cases in which these prompt context types are applicable. As can be seen from the figure, post lines is the most useful prompt context type on average. The contributions from the other prompt context types, though smaller than that of post lines, are still significant, highlighting the importance of each prompt context type. Figure 5 shows the normalized success rates, where the normalization is performed across prompt proposals. This helps us understand the relative performance of prompt sources and context types: the left part of the figure breaks down performance by prompt source, and the right part by prompt context type. One thing to note from the plot of prompt context types is that, when we consider relative performance, post lines is no longer the most dominant context type. This is because post lines is tied to only one prompt source (the current file), and thereby contributes lower numbers compared with most of the other context types, which are tied to all prompt sources.

D.2 PERFORMANCE ON NON-IMMEDIATE POST LINES

Table 4 shows the performance of post lines when starting from the fourth line after the target hole line (i.e., skipping three lines after the target hole), as opposed to starting from the line that immediately follows the target hole line. This experiment helps us understand the performance on the much harder task of multi-line code autocompletion, where the objective is to predict not just the blanked-out portion of the current line but also, say, the next three lines. This can correspond to completing a block of code such as a function body.
As can be seen from the table, when starting from the fourth line we see a very slight deterioration in performance. This is expected: the farther we move from the target hole, the less relevant the post lines context becomes. However, the performance drop is not significant, suggesting that post lines is still a very useful prompt context type under the multi-line code-autocompletion setting. Alternatively, we can include this variant as one of the prompt proposals in our framework, alongside the current version of post lines.

D.3 COMPOSITION OF PROMPT PROPOSALS

Table 5 shows the performance of the two versions of RLPG when we compose the prompt proposal context from l prompt proposals. We take the top-l prompt proposals given by RLPG in decreasing order of probability. To decide how much context should be used for each prompt proposal, we divide the total context length in proportion to the normalized probabilities of the top-l prompt proposals. As can be seen from the table, even though PPC is not explicitly trained to perform composition (both the ground-truth vector and the representation of the prompt proposal context involve a single prompt proposal), all the compositions lead to significant improvements over Codex. However, as expected, the best results correspond to taking context from a single prompt proposal (i.e., the training setting). The drop in success rate with l = 2 and l = 5 is not substantial, which suggests that explicitly training RLPG to compose contexts from different prompt proposals could lead to promising results and hence offers an interesting future direction.

D.4 EFFECT OF CONTEXT LENGTH

To understand the effect of context length on the performance of our prompt proposals, we took half of the context length available for a prompt in Codex and observed the performance of the oracle and the fixed prompt proposal. As before, we saw that an oracle constructed from our prompt proposals shows a remarkable improvement over Codex, highlighting the value of our prompt proposals. However, compared with the larger context length, the relative gains are smaller. This is expected, as a smaller context length means that the relevant context coming from a prompt proposal needs to be truncated to fit inside the prompt, leading to loss of information.

D.5 PERFORMANCE ON INDIVIDUAL REPOSITORIES

Table 7, Table 8, and Table 9 present the success rates of the different methods over individual repositories in the training, validation, and test splits, respectively. The repo-wise averages in Table 1 of the main paper were calculated by taking the average of the numbers in each column. The hole-wise averages correspond to multiplying the repo-wise numbers of each method by the total number of holes in the repo to get the total number of successful holes by that method for that repo; we then add the number of successful holes across repos and divide by the total number of holes in the data split.

E ANALYSIS OF SAMPLE CASES

In Figure 1, RLPG selects the prompt proposal that corresponds to taking method names and bodies from the imported file (i.e., MaximizingGibbsSampler.java). Note that mg. before the hole position indicates that a method from the imported file is likely to be invoked. In this case, the prompt proposal context (highlighted in violet) contains the method name InitializeToAssignment (part of the target hole).
This, in conjunction with the default Codex context, which contains the method CurrentAssignments() (part of the target hole), leads to the generation of a successful prompt. On the other hand, the prompt created from the default Codex context alone fails to predict the target hole in this case. In general, we observed that in the absence of a strong signal, Codex tends to give preference to natural language comments occurring before the hole position, e.g., naming a method based on the comment; in certain cases this can hurt. We provide instances of positive and negative sample cases for RLPG below:

E.1 POSITIVE CASES

We provide some examples of cases where RLPG led to the correct prediction and Codex failed.
1. Cases where part of the target hole is found exactly in the prompt proposal context.
• RLPG = Propagation(int numVars) vs Codex = Propagation()
• RLPG = tersFromFile(String filename) { vs Codex = ters(String filename) {
• RLPG = als("dampingFactor")) { vs Codex = als("numVars")) {
• RLPG = ] + ", value = " + entries[1]); vs Codex = ]);
• RLPG = stem.exit(1); vs Codex = stem.err.println("DPAP load error: " + ex.get
2. Cases where Codex takes a strong hint from the preceding natural language comment, thereby producing incorrect predictions.
• RLPG = d PassMessages() vs Codex = d DoOneRoundOfMessagePassing()
• RLPG = teger> CurrentExemplars() { vs Codex = teger> ChooseExemplars() {
• RLPG = ring FileName() { vs Codex = ring GetAlgorithmFilename() {

E.2 NEGATIVE CASES

In certain cases, extra information from the prompt proposal context might lead to confusion and produce incorrect predictions.
• RLPG = an hasConverged_; vs Codex = an converged_;
• RLPG = _[i][j] = -Double.MAX_VALUE; vs Codex = _[i][j] = 0;
1. What is the focus and contribution of the paper regarding prompt proposals for LLMs? 2. What are the strengths of the proposed approach, particularly in terms of exploring different sources and types of prompts? 3. What are the weaknesses of the paper, such as taking the whole file as context and limiting the number of prompt proposals? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method that makes prompt proposals from repository-level context and concatenates the prompt proposal context with the normal context given to the LLM, in order to feed in more related information and achieve better generations. The paper explores different sources of repository-level prompts, as well as prompt context types. Experimental results show that the additional repository-level prompts help improve the performance significantly. Strengths And Weaknesses Strength: The paper is well-written. The method is well explained with examples. Experiment results are extensive with analysis. The paper explores several variants on the prompt sources and prompt types, and makes a nice figure to demonstrate which prompt source works the best, though only one is used in an experiment at a time. Weakness: Taking the whole imported file as context seems unnecessarily large. Rather, if the import statement only imports one class or one function from the module, it makes sense to only consider that function's implementation, rather than the whole file where the function resides. This especially applies to the 7) Method Names and Bodies prompt type. The paper only considers one prompt proposal at a time, which is a large limitation of the paper. Why don't the authors take the top-k prompt proposals and put them into the context instead of just one? Besides, I'm curious whether there is a better choice of prompt proposal; for example, intuitively the function names, docstrings, and variable names of the imported function could help inform the model of the functionality of the imported function. It seems that RLPG-H is more like classification, while RLPG-R is more like retrieval. Could a lexical search method like BM25 already retrieve relevant prompts, and has that been compared? That way no training for the PPC is needed, and I would expect it to work reasonably well. Clarity, Quality, Novelty And Reproducibility This paper is well-written and is of good quality. The proposed method is somewhat novel. And I expect that the results should be easy to reproduce.
ICLR
Title Repository-Level Prompt Generation for Large Language Models of Code Abstract With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex (Chen et al., 2021) used in GitHub Copilot1), techniques for introducing domain-specific knowledge in the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn’t require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines. 1 INTRODUCTION Large Language Models (LLMs) have demonstrated remarkable performance in natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022), text-to-image generation (Ramesh et al., 2022; Rombach et al., 2021), protein-sequencing (Rives et al., 2019) and even as a generalized agent (Reed et al., 2022). As opposed to the pretrain-finetune paradigm, prompting these LLMs has been found to yield good performance even with few examples (Liu et al., 2021a). A prompt is an input to the LM such that the desired task can be expressed as predictions generated from the LM. Besides providing a mechanism to control and evaluate an LM, prompts have been shown to elicit emergent behaviour as well. Examples of this behavior include GPT-3 (Brown et al., 2020) doing better in tasks it has never seen during training and improved reasoning capabilities with few-shot (Wei et al., 2022) and zero-shot (Kojima et al., 2022) prompts that encourage a chain of thoughts. These factors highlight the importance of designing an effective task-specific prompt 2. However, currently we have limited understanding of how to do this (Reynolds & McDonell, 2021). LLMs have also been used for modeling source code with impressive results (Austin et al., 2021; Fried et al., 2022; Xu et al., 2022a). In particular, one of the best-performing LLMs, Codex (Chen et al., 2021), has been deployed as part of GitHub Copilot 1, a state-of-the-art in-IDE code assistant. Despite the growing popularity of LLMs of code, there is no work that systematically tackles different aspects of prompt generation in relation to source code. One such aspect is that when it comes to code, the relevant context to be put in the prompt can come from not just the current file, but also from outside, such as imports and parent classes. Also, depending on the scenario, the relevant context can be scattered across multiple locations. Since the LLMs have a limited context length available for the prompt, it becomes increasingly crucial for our domain-specific understanding to guide the selection of relevant context. Currently, it is not clear how to integrate this domain knowledge of what constitutes a relevant context into the process of creating prompts.
Addressing this question has potential benefits in other domains such as question answering (Liu et al., 2022) and multi-document summarization (Xiao et al., 2022), where domain-specific structured retrieval of context can be useful. 1https://copilot.github.com/ 2Platforms such as PromptBase https://promptbase.com/ allow buying and selling of prompts. In this work, we address this problem by proposing the Repo-Level Prompt Generator (RLPG), a framework that, while generating the prompt, incorporates both the structure of the repository as well as the relevant context in all the files of the repository. In RLPG, the choice of what to take from the repository, and where to take it from, is specified by a set of prompt proposals. For example, one of the prompt proposals can be to take all the identifiers used in the first import file. These prompt proposals allow prompt engineers to inject their domain expertise into the prompt-design process. With the increasing use of LLMs as assistive agents to humans, the demand for transparency, and the desire of software engineers to take an active part in tailoring prompts to suit their requirements (Jiang et al., 2022; Sun et al., 2022), this capability becomes important. As suggested in some previous works in NLP (Shin et al., 2020; Schick & Schütze, 2021), our prompt proposals are discrete. However, rather than fixing one particular prompt proposal for each example, we instead predict the best prompt proposal conditioned on the example. We do this by coming up with a neural network called the Prompt Proposal Classifier (PPC) that, given an example, learns to select a prompt proposal such that the resulting prompt is likely to produce the desired output. Therefore, RLPG allows the introduction of domain expertise, and at the same time facilitates automatic example-specific prompt generation via a learned neural network. Note that there are some techniques for automatic prompt generation in NLP (Li & Liang, 2021; Shin et al., 2020; Lester et al., 2021) that require updating some or all of the weights of the LLM. However, the strongest LLMs are not publicly available (e.g. OpenAI provides access only to the generated output from Codex via an API 3 and no access to model weights and data is provided), making these techniques less useful under this scenario. RLPG addresses this limitation by generating prompts assuming only black-box access to the LLM. We focus on the task of single-line code-autocompletion in an IDE, where the objective is to predict the blanked-out portion (or target hole) starting from the position of an imagined cursor to the end of the line. We operate under the line-level maintenance setting (Shrivastava et al., 2020; Hellendoorn & Devanbu, 2017) that reflects the scenario where a user is editing an existing file. This means that there can be code following the line. Figure 1 provides an illustration of our approach. The prompt proposal classifier takes in the hole position (position of the cursor) in the current file, the repository to which the current file belongs and a set of repo-level prompt proposals as input, and predicts a prompt proposal. In our illustrated example, the predicted prompt proposal corresponds to taking the method names and bodies from MaximizingGibbsSampler.java (mg. before the hole position indicates that a method from the imported file is likely to be invoked).
The Prompt Composer uses the context from the predicted prompt proposal and combines it with the default Codex context, i.e., code prior to the position of the hole in the current file. 3https://openai.com/blog/openai-codex/ The resulting prompt consists of the method name InitializeToAssignment (from the prompt proposal context) and the method CurrentAssignments() (from the default Codex context), resulting in a successful prediction (brown box on the top) of the target hole. Our key contributions are as follows: • We propose a framework called the Repo-Level Prompt Generator (RLPG) that learns to generate prompts conditioned on the example, without requiring access to the weights of the LLM. • To incorporate domain knowledge in the prompt design process, RLPG uses a set of repository-level prompt proposals. These prompt proposals are designed to incorporate both the structure of the repository as well as the relevant context from all files in the repository. • On the task of single-line code-autocompletion, we show that an oracle constructed from our prompt proposals gives up to 36% relative improvement over Codex. This improvement is pleasantly surprising as Codex has never seen prompts made from these prompt proposals during training. Further, we show that when we use our prompt proposal classifier to predict the best prompt proposal, we can achieve up to 17% relative improvement over Codex. 2 REPO-LEVEL PROMPT GENERATOR (RLPG) In this section, we provide details of our framework. We start by describing our prompt proposals and then discuss our prompt proposal classifier, which is followed by a description of the prompt composer. 2.1 REPO-LEVEL PROMPT PROPOSALS The core idea of RLPG consists of substituting part of the default context used by Codex with context coming from somewhere else in the repository. The decision of what to take and from where in the repository to take it is governed by a set of prompt proposals. These prompt proposals were decided based on manual inspection of our training data and are intended to capture common coding patterns (but more generally can also include project/organization-specific coding practices). A prompt proposal can be thought of as a function that takes as input a target hole’s position and the repository that the hole is a part of, and that returns the prompt proposal context (a string constituted by the context from the prompt proposal). A prompt proposal is specified by a prompt source and a prompt context type. We mention each of these along with their motivation below. Prompt Source: For a target hole position, a prompt source determines from where we should take code that will be part of the prompt proposal context. We propose ten different prompt sources: 1. Current: take code from the current file excluding the contents of the target hole. The current file is the file that contains the target hole. The code in the current file (e.g. the lines after the hole position) can be very useful in predicting the target hole. 2. Parent Class: take code from the file that contains the parent of the class to which the target hole belongs. The intuition behind this is to account for cases where a method present in the parent class is invoked in the current file (i.e. the child class). 3. Import: take code from the import files used in the current file. The dependencies specified via imports can provide useful cues to predict the target hole. 4. Sibling: take code from the files that are in the same directory as the current file.
Files in the same directory tend to share code variables (e.g. identifiers). 5. Similar Name: take code from files that have a similar name to the current file. Similar names are determined by splitting the file name based on underscore or camel-case formatting and then matching parts of the filename. If one or more parts match, two files are considered to have similar names. The intuition behind this is that software developers tend to name files based on the functionality of the code written in that file. Therefore, a similar name file might contain some portion of the code that is common with the current file and hence might be useful for predicting the target hole. 6. Child Class: take code from files that have the current file as their parent class file. 7. Import of Parent Class: take code from the import files used in the parent class files. 8. Import of Sibling: take code from the import files used in the sibling files. 9. Import of Similar Name: take code from the import files used in the similar name files. 10. Import of Child Class: take code from the import files used in the child class files. The last four prompt sources are useful when the target hole occurs at the very beginning of the current file. In these cases, there would be less context coming from other prompt sources. For each prompt source, we can get either a single file or a ranked list of files (see Appendix B.1). In the latter case, we will take context from these files until we exhaust the maximum context length allocated to the prompt proposal. Prompt Context Type: The prompt context type determines what code to take from the prompt source. We propose seven different prompt context types (Appendix B.2 has examples of each type): 1. Post Lines (PL): Take all the lines after the target hole line till we reach the end of the file. This context type is applicable only when the prompt source is the current file 4. 2. Identifiers (I): Take all the identifiers used in the prompt source. 3. Type Identifiers (TI): Take all the type identifiers used in the prompt source. 4. Field Declarations (FD): Take all the field declarations used in the prompt source. 5. String Literals (SL): Take all the string literals used in the prompt source. 6. Method Names (MN): Take all the method names along with their signatures that are used in the prompt source. 7. Method Names and Bodies (MNB): Take all the method names along with their signatures and corresponding bodies used in the prompt source. By combining prompt sources with prompt context types, we get a total of 63 prompt proposals (see Appendix B.4 for details). Note that depending on the target hole, not all prompt proposals would be applicable (e.g. if there are no parent classes in the current file, prompt proposals with prompt source as parent class file won’t be applicable). In Figure 1, the predicted prompt proposal corresponds to taking prompt source Import and prompt context type MNB. We aimed for a set of prompt proposals that offer more diversity rather than a set of prompt proposals that are all good. This in turn ensures that for any hole position, a significant number of prompt proposals are applicable. 2.2 PROMPT PROPOSAL CLASSIFIER (PPC) Given a hole position, the goal of the prompt proposal classifier is to predict the prompt proposal p that will lead to success, where success happens when the predicted hole ĥ exactly matches the target hole h.
This task is formulated as a multi-label binary classification problem since, for a given target hole, more than one prompt proposal can lead to success. In this formulation, we treat the default Codex context as one of the prompt proposals. Next, we describe the training procedure for PPC. Training: For each target hole h, we generate a ground-truth vector Y^h = [y^h_p]_{p=1}^{M}, which is a multi-hot vector of size M, where M is the total number of prompt proposals. This vector is obtained by feeding the prompt generated from prompt proposal p into Codex and then seeing whether ĥ = h. If there is a match, we say that the prompt proposal p is successful. For hole h, if a prompt proposal p is applicable and leads to success, y^h_p = 1, and it will be zero otherwise. For each hole h, we obtain a mask T^h, where T^h_p = 1 when p is applicable and zero otherwise. The overall training loss L can be expressed as the sum of individual hole losses L_h as follows:

L = \frac{1}{N}\sum_{h=1}^{N} L_h = \frac{1}{N}\sum_{h=1}^{N}\frac{1}{M_h}\sum_{p=1}^{M} \mathrm{BCE}(\hat{y}_p^h, y_p^h)\, T_p^h, \quad \text{where } M_h = \sum_{p} T_p^h \qquad (1)

In the above equation, N is the total number of holes encountered while training, M_h denotes the total number of applicable prompt proposals for h, and BCE corresponds to the binary cross-entropy loss. Masking ensures that we consider only the prompt proposals that are applicable. Next, we describe our two variants of PPC that can be used to obtain the prediction ŷ^h_p. RLPG-H: Let H^h be the hole window that includes code present around the hole h, excluding the hole itself. In our work, we take two lines before the hole position, the code up to the hole position, and two lines after the hole position. We use a pretrained model F_φ to obtain a context representation vector of size Z, where Z is the dimension of the hidden state of the model. Specifically, we take the hidden state at the first position, i.e. the representation of the [CLS] token. To make training of PPC computationally efficient, the parameters φ are frozen during training. 4We also conducted experiments (Appendix D.2) where we take lines starting from the 4th line after the hole. The RLPG-H model takes the context representation of the hole window and projects it to the prompt proposal space of size M via two dense layers with a non-linearity in between (see Equation 2). Taking the sigmoid of this output gives the prediction of the prompt proposal.

\hat{y}_p^h = P(y_p^h = 1 \mid H^h) = \mathrm{sigmoid}\!\left(W^2\,\mathrm{relu}\!\left(W^1 F_\phi(H^h) + b^1\right) + b^2\right) \qquad (2)
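As an illustration, here is a minimal PyTorch-style sketch of the masked multi-label loss (Equation 1) and the RLPG-H head (Equation 2). It assumes the multi-hot labels Y and applicability masks T have been precomputed by querying Codex as described above; the class and function names are ours, not from the released code, and the 768 -> 512 -> 63 layer sizes follow Appendix C.1.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RLPGHead(nn.Module):
    # Two dense layers with a relu in between (Equation 2); the input is the
    # [CLS] representation of the hole window from the frozen model F_phi.
    def __init__(self, hidden=768, mid=512, num_proposals=63):
        super().__init__()
        self.fc1 = nn.Linear(hidden, mid)
        self.fc2 = nn.Linear(mid, num_proposals)

    def forward(self, hole_repr):                    # hole_repr: [batch, 768]
        return self.fc2(F.relu(self.fc1(hole_repr)))  # logits over prompt proposals

def masked_bce_loss(logits, Y, T):
    # logits, Y, T: [batch, M]; T zeroes out inapplicable proposals (Equation 1).
    # The sigmoid of Equation 2 is folded into the numerically stable loss here.
    per_proposal = F.binary_cross_entropy_with_logits(logits, Y, reduction="none")
    per_hole = (per_proposal * T).sum(dim=1) / T.sum(dim=1).clamp(min=1.0)
    return per_hole.mean()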
RLPG-R: The motivation behind this variant is to use the similarity of the hole window and the prompt proposal context to determine which prompt proposal can be useful. Given a particular hole h, let C^h_p denote the prompt proposal context from prompt proposal p. Intuitively, if the hole window contains variables (e.g. identifiers) that are similar to the variables in the prompt proposal context, then there are chances that h might occur somewhere in C^h_p. The similarity is modeled using a multi-headed attention mechanism (Vaswani et al., 2017), by treating the projected hole window representation as a query Q^h and the projected prompt proposal context representation K^h_p as a key (Equation 3). The value V^h_p is the same as the key.

Q^h = F_\phi(H^h), \quad K_p^h = F_\phi(C_p^h), \quad V_p^h = F_\phi(C_p^h) \qquad (3)

\mathrm{Att}(Q^h, K_p^h, V_p^h) = V_p^h\, \mathrm{softmax}\!\left(\frac{Q^{h\top} K_p^h}{\sqrt{d_k}}\right) \qquad (4)

\mathrm{MultiHead}(Q^h, K_p^h, V_p^h) = W^O \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_\tau), \quad \text{where } \mathrm{head}_i = \mathrm{Att}(W_i^Q Q^h, W_i^K K_p^h, W_i^V V_p^h) \qquad (5)

\hat{y}_p^h = P(y_p^h = 1 \mid H^h, C_p^h) = \mathrm{sigmoid}\!\left(W_p\, G\!\left(\mathrm{MultiHead}(Q^h, K_p^h, V_p^h)\right) + b_p\right) \qquad (6)

In the equations above, d_k is the dimension of the key, W^Q_i, W^K_i, W^V_i are the query, key and value projection matrices, τ is the number of heads, and W^O is the linear projection that combines the heads. The output from Equation 5 is fed to the module G, consisting of a two-layer feedforward network with a relu activation in between (see Appendix C for more details). The resulting output is then linearly projected and a sigmoid is applied to get the predicted prompt proposal (Equation 6). 2.3 PROMPT COMPOSER The prompt composer combines the context from the selected prompt proposal (given by PPC) with the context normally used by Codex (default Codex context) to generate the prompt. Since the total length that can be used for a prompt is fixed, we adopted a dynamic context allocation strategy where, if the prompt proposal context is shorter than its allocated length, we assign the remaining portion from the prompt proposal context to the default Codex context. The prompt proposal context is always added before the default Codex context. For all prompt proposals, we assign half of the total context length to the prompt proposal context and the remaining to the default Codex context. For post lines, in addition, we also assign one-fourth and three-fourths of the total context length to the prompt proposal context. If the prompt proposal context or the default Codex context is greater than the context length allocated to it, we truncate it (see Appendix B.3 for our truncation strategies). 3 EXPERIMENTS AND RESULTS In this section, we describe our process of dataset creation, details of experiments along with their results, and interesting ablation studies. 3.1 DATASET CREATION To mitigate the effects caused by potential memorization of the code present in the dataset used for training Codex, we avoided code repositories from GitHub (Chen et al., 2021). Instead, we scraped Google Code 5 for repositories in Java (removing the ones that matched with a repository on GitHub with the same name). 5https://code.google.com/archive/ We selected the repositories that had a permissive license, giving us a total of 47 repositories. We divided the repositories into train, validation and test splits, where each repository in its entirety is part of a split. In each file within a repository, we remove lines that are blank and comments, and set the hole position to be the middle character in the line. All the characters from the middle position to the end of the line constitute the target hole. Since code duplication has been shown to have adverse effects (Allamanis, 2018), within a repository, we look for files that are exact replicas of each other, but placed in a different folder. We mark all such copies as duplicates and omit all of them when creating target holes for our dataset. Note that the prompt proposal context can still come from the duplicate files. We felt comfortable with this choice since we wouldn’t want to predict a target hole in a duplicate file, but we can still use the context from the duplicate file to predict the hole in a file that is not its duplicate (e.g. in a sibling file). Further, we found that the repositories were quite uneven in terms of their size. To avoid large repositories dominating the training of PPC, we capped the maximum contribution of holes from a repository to 10,000, i.e.
if the total number of holes in the repository exceeded 10,000, we selected 10,000 holes randomly from the total holes. Please see the left part of Figure 2 for statistics of our dataset. The #Holes represents the holes after deduplication and capping. For some of our prompt proposals, we require semantic information that can be obtained with a parse tree. We used the tree-sitter API for Java 6 that enables us to get the AST of a file and query it. Since our prompt proposals need information at a repository level, we stored some extra information that allowed us to collate the information from individual files according to the directory structure inside the repository (see Appendix A for more details). 3.2 EXPERIMENTAL DETAILS Prompt Generation: We used the OpenAI Codex Completions API for generating the predicted hole from the Codex model. In particular, we used the code-davinci-001 engine with temperature set to 1.0 and stop criteria as newline. The completion length was kept to be 24 and the maximum prompt length was 4072. Tokenization was done using the suggested tokenizer 7. To allow for fast computation, we used simple models like CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2020) as our pretrained models. One of the limitations of these pretrained models is that the maximum context length that can be taken as input by these models is much smaller than the maximum context length allowed by Codex. Therefore, when getting the representation of the prompt proposal context that is used by PPC, we need to truncate the prompt proposal context that might lead to omitting important parts of the prompt proposal context in certain cases. Using pretrained models that allow larger context length or models that augment the context (Wu et al., 2022) offer avenues for future work. See Appendix D.4 for results when using a smaller context length from Codex. Computational Complexity and Scalability of RLPG: To collect the ground-truth data for training our prompt proposal classifier, we queried the Codex API for each applicable prompt proposal per hole (maximum rate limit of 400 holes per minute). The computational complexity of training our larger RLPG-R variant (3.6M parameters, 141269 holes and 9.19 minutes per epoch on a single Tesla V100 GPU) is much smaller than finetuning all or some part of Codex (12B parameters). During inference, we need to calculate the repo-level statistics just once and all the subsequent hole completions in the repo can utilize this cached information, incurring no additional computational complexity. Besides training the PPC, all our experiments were performed on a CPU with 8GB RAM. Our prompt proposals are based on concepts such as post lines, imports, similar name files, method names and identifiers that are quite general and are applicable to other programming languages. In addition to the existing prompt proposals, our framework provides the flexibility to incorporate new prompt proposals. Since the cost of retraining RLPG with the extended prompt proposals is extremely low (much lower than finetuning Codex with the new prompt proposals), our framework can be used to make interventions on the LLM to address observed weaknesses as long as the intervention can be expressed as a prompt proposal that adds the missing context to the LLM. 
As opposed to techniques such as Li & Liang (2021) that perform prompt engineering in the latent space and require access to the weights of the LLM, RLPG facilitates expressing intent in the form of prompt proposals that are intuitive for humans, easy to understand, and do not require access to the weights of the LLM. 6https://github.com/tree-sitter/tree-sitter-java 7https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast Methods: We experimented with the following methods for generating the prompt: 1. Codex: Using the default context from Codex as the entire prompt. 2. Oracle: Using the ground-truth vector Y^h (mentioned in Section 2.2). The prompt generated corresponds to using any of the successful prompt proposals (i.e., y^h_p = 1). Since this information is not available at inference, the oracle performance represents an upper bound. 3. Fixed Prompt Proposal: Using the most successful prompt proposal for all target holes. This was chosen based on the performance on the validation set and corresponded to taking 75% of the total context length from post lines in the current file. 4. RLPG-H and RLPG-R: Using the prompt proposal predicted by the RLPG-H and RLPG-R variants of PPC. The selected prompt proposal corresponds to taking the argmax of the predicted probabilities over different prompt proposals. 5. RLPG-BM25: Instead of using PPC to rank prompt proposals, using the scores obtained by BM25 (Jones et al., 2000) to select the best prompt proposal. The scores are calculated with the hole window being the query and the prompt proposal contexts being the search documents. This serves as a non-learned retrieval method that makes use of our prompt proposals. 6. File-level BM25: Same as above, except that instead of using our prompt proposal contexts, the search documents consist of the full context from other files in the repository. 7. Random: For each target hole, select a context randomly from anywhere in the repository. 8. Random NN: Same as Random, except that amongst the randomly chosen contexts, we take the nearest neighbours of the hole window in the representation space of a pretrained model. This is analogous to the technique used in Liu et al. (2022). 9. Identifier Usage: For each target hole, we take the closest identifier and take usage windows of that identifier from everywhere in the repository. We take two lines above, two lines below and the usage line as the usage window. We can rank the usage windows either randomly (random) or based on the nearest neighbour distance to the hole window in the representation space (NN). The last four methods help us understand the performance when a context other than the prompt proposal context is used. To generate a prompt using these methods, we take 50% of the context from these methods, followed by the default Codex context that takes up the remaining context length. For the NN baselines, we use CodeBERT (Feng et al., 2020) as the pretrained model. The contexts are taken in increasing order of the nearest neighbour distances, until we exhaust the allocated context length. RLPG-BM25 helps us understand the role of PPC. See Appendix C.3 for more details on the implementation of these methods. Evaluation Metric: As mentioned in Section 2.2, to measure success, we used exact match between the predicted hole string generated by Codex and the target hole string. In our experiments, we report the number of successful holes divided by the total number of holes for each split, expressed as a percentage. We will call this success rate (SR) going forward; a short sketch of the metric follows.
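For concreteness, the metric can be sketched in a few lines of Python (the function name is illustrative, not from the released code):

def success_rate(predicted_holes, target_holes):
    # Exact string match between the Codex completion and the target hole.
    assert len(predicted_holes) == len(target_holes)
    hits = sum(p == t for p, t in zip(predicted_holes, target_holes))
    return 100.0 * hits / len(target_holes)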
3.3 RESULTS In this section, we present the results of the following two research questions explored in this paper: [RQ1] Is it useful to generate a prompt that is composed of code context that is different from the default Codex context? If yes, what context can be useful? [RQ2] For each target hole, is there a way of automatically selecting the prompt? If yes, how does this system perform relative to Codex? RQ1 - Performance of Prompt Proposals: We found that combining the prompt proposal context (context from other files in the repository) with the default Codex context led to substantial improvement in performance. The right part of Figure 2 shows the performance of an oracle constructed from our prompt proposals. We see that across all data splits, the prompt proposals contribute to significantly large improvements over Codex (up to 36% for the test split). These results might seem surprising as Codex has not been trained on prompts that consist of context other than the default Codex context. What makes this result more surprising is that in most of the cases, the prompt consists of mashed-up context without logical ordering that may not even look like a semantically meaningful chunk of code (e.g. a list of string literals from a sibling file followed by the default Codex context, or post lines placed before the default Codex context as opposed to after). These results might suggest that as long as the relevant context (in our case, repo-level knowledge in the form of prompt proposals) is present in any form in the prompt, it can be quite effective. RQ2 - Performance of PPC: Having seen promise in our prompt proposals, next we present the results of RLPG, which for each target hole predicts a single best prompt proposal. Table 1 presents the success rates along with the percentage of relative improvement for the test data. The second and third columns correspond to the averages across all holes in the test data. The last two columns correspond to the average success rate of individual repositories. The latter metric doesn’t account for the size of the repository. As can be seen from the table, all the RLPG variants as well as the fixed prompt proposal improve the performance significantly over Codex. The random baselines are either worse or on par with Codex. Identifier usage is a good baseline but still performs worse than either the fixed prompt proposal or RLPG. The improved performance of RLPG-BM25 as compared to the fixed prompt proposal shows the value of generating example-specific prompts using RLPG. However, both the learned variants of RLPG, i.e., RLPG-H and RLPG-R, outperform RLPG-BM25, highlighting the importance of learning PPC. See Appendix D.5 for the performance of all methods on individual repositories. Note that even though we consider identifier usage as a separate baseline, one could consider it as one of the prompt proposals, leading to further improved performance of RLPG. Despite our efforts to avoid overlap, since the training data for Codex is not exactly known, there might be a slight possibility that part of our Google Code data is part of the training data for Codex. Even if there were an overlap, we want to point out that since Codex has seen the default Codex context during training, it would be more beneficial to use the default Codex context in the prompt rather than the context from the prompt proposals or any other context from the other baselines.
Therefore, under this scenario, our evaluation would if anything be generous to the Codex baseline, with results biased in favour of the Codex baseline rather than the other methods we have used. Variation with #attempts: Imagine a scenario where we have a human-in-the-loop who has been given k attempts to prompt the LLM and can then choose one of the k hole predictions. We wanted to see how the performance of our framework varies with the number of attempts under this setting. This corresponds to using k prompts generated with the top-k prompt proposals (one prompt per proposal) and marking success if any of the k prompts leads to success. The left part of Figure 3 shows the variation of SR over the validation data with the value of k. For RLPG, the top-k prompt proposals were chosen based on the decreasing order of probabilities given by PPC. For the fixed prompt proposal, the top-k prompt proposals were decided based on the decreasing order of success rate of the individual prompt proposals on the validation dataset. From the figure, we notice that as we increase the value of k, the performance increases gradually at first and then saturates towards the oracle performance (79.05% for val data). This behaviour is observed for both the fixed prompt proposal as well as RLPG. However, we see that for the same value of k, the success rate for RLPG is higher, indicating that PPC learns a useful ranking of the prompt proposal contexts that can scale well with the #attempts. Performance based on Prompt Proposals: The right part of Figure 3 shows the mean success rate of prompt sources when we count success only when the corresponding prompt source is applicable. From the figure, we see that the current file is the most important prompt source. Closely following are sibling files and similar name files. We see that all prompt sources have non-zero chances of success, highlighting the usefulness of each prompt source. See Appendix D.1 for a similar breakdown based on prompt context type and Appendix E for an analysis of successful and failed sample cases. 4 RELATED WORK LLMs for Code: Recently, there has been a lot of work around large language models of code. One class of models are the decoder-only models that correspond to generating code from left to right. Codex (Chen et al., 2021), Google’s model (Austin et al., 2021), GPT-J-6B (Wang & Komatsuzaki, 2021), GPT-Neo (Black et al., 2021b), GPT-Neo-X (Black et al., 2021a), CodeParrot (Tunstall et al., 2022), PolyCoder (Xu et al., 2022a) and InCoder (Fried et al., 2022) are some examples. We also have some encoder-only models that use a masked language modelling objective. CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020) and CuBERT (Kanade et al., 2020) are examples of such models. Lastly, we have the class of encoder-decoder models that generally use a bidirectional encoding of a context to decode a series of masked tokens. Code-T5 (Wang et al., 2021) and AlphaCode (Li et al., 2022) are examples of such models. Repo-Level Info: Fewer works use information from outside the current file. Hellendoorn & Devanbu (2017) propose a nested n-gram model that utilizes a locality-based cache where the locality consists of all directories from the root of the project (inclusive of the current file). Zhang et al. (2021) use the parent class to generate the comments for the child class. Pashakhanloo et al. (2022b;a) capture the structure and semantics of the repository by converting it into a relational database and propose a graph-walk based mechanism for pruning the unrelated context.
Lyu et al. (2021) incorporate the API-dependency graph in an LSTM-based Seq2Seq model to assist in code generation. Xu et al. (2022b) incorporate three types of structural locality features while training the kNN-LM (Khandelwal et al., 2020). These features are binary variables that correspond to the presence or absence of a similar hierarchy. The three levels of hierarchy are (a) sibling file, (b) file in the same repo, and (c) no hierarchy. In contrast, we have a much richer set of prompt proposals incorporating the semantics and structure of the repository. Also, we assume black-box access to the actual LM and restrict ourselves to generating a prompt for the LLM without performing any finetuning of the LLM. Prompt Generation: There have been promising works around prompt generation techniques in NLP. Broadly, there are two categories of automatic prompt generation techniques. The first category corresponds to producing continuous/soft prompts, where the prompt is described in the latent space of a language model (Li & Liang, 2021; Qin & Eisner, 2021; Bragg et al., 2021; Lester et al., 2021; Liu et al., 2021b). For example, Prefix-Tuning (Li & Liang, 2021) adds a prefix to the LM that can be learned by finetuning on examples from the downstream task. The second category produces discrete prompts, where the prompt is a text string that can be interpreted by a human (Shin et al., 2020; Gao et al., 2021; Schick & Schütze, 2021). For example, AutoPrompt (Shin et al., 2020) generates prompts using a fixed template consisting of trigger tokens. The trigger tokens are shared across all inputs and determined by a gradient-guided search involving the LM. Our work falls in the category of discrete prompt generation techniques, as we produce a prompt consisting of code tokens that can be easily interpreted by a human. However, in contrast to prior works that use a set of fixed templates for all examples, we learn to produce prompts conditioned on each example. Another important distinction is that we do not require access to the weights of the LM. A work concurrent with ours (Wang et al., 2022) studies the role of prompt-tuning when compared to fine-tuning for code translation, defect localization and code summarization. However, their technique requires access to the weights of the LLM, and they perform experiments over models that are much smaller in scale than Codex. To the best of our knowledge, our work is the first to explore automatic prompt generation in a black-box access setting in the domain of source code. 5 CONCLUSIONS AND FUTURE DIRECTIONS We present RLPG, a framework that learns to automatically generate prompts conditioned on the example, without requiring access to the weights of the LLM. RLPG utilizes the structure of the repository as well as the context from other files in the repository using a set of easy-to-understand prompt proposals. Note that even though we have scoped and worded our prompt proposals to be repository-level, the idea of RLPG and prompt proposals in itself is quite universal and need not be scoped to a repository. Taking context from other repositories as well as external knowledge such as API dependencies offers an interesting direction to explore in the future. In this work, we are taking context from only one prompt proposal. For future work, we want to learn a model that can automatically compose a prompt from multiple prompt proposals (see Appendix D.3 for promising initial results).
Other interesting directions include incorporating the user’s feedback in RLPG and extending RLPG to multi-line code autocompletion. A DATASET CREATION DETAILS A.1 CREATION OF HOLE COMPLETION DATA To collect the hole completion data, we scraped Google Code 8 for repositories tagged with the language “Java”. Then we deduplicated repositories by searching for a matching repository with the same name on GitHub. For those repositories with zero matching names on GitHub, we downloaded the archive and extracted the source code (preserving the directory structure). Next, we tried to determine the licenses of all repositories by either looking for a LICENSE file or matching with keywords "license", "copyright", "mit", etc. For repos for which our process was able to come up with a known license, we selected the ones having a permissive license, i.e., MIT, ApacheV2 and BSD. This was followed by removing files that are exact duplicates of each other within a repo. One of the reasons for this intra-repository duplication may be that developers sometimes adopt lousy practices where, instead of declaring a package and importing functions, they simply copy-paste the desired file into the current folder. The target holes coming from any of the duplicate files do not form part of the hole completion dataset. However, these files might be used to contribute to the prompt proposal context for completing a target hole in a non-duplicate file. For the remaining files, we took each line that is not a blank line or a comment, and chose the middle character as the hole position, i.e., all the characters from the middle of the line to the end of the line form the target hole. To avoid large repos having a strong bias on our prompt proposal classifier, we capped the contribution from each repo to be a maximum of 10000 holes. If the number of holes in the repo exceeds 10000, we randomly select 10000 holes. A.2 CREATION OF DATA FOR REPO-LEVEL PROMPT PROPOSALS We used the tree-sitter API for Java 9 to get the parse tree of an individual file in a repo. To get information at a repo level, for each file in the repo, we stored the following information: 1. list of all class names in the file. This helped us to get the parent or child class file corresponding to a given parent or child class. 2. the file corresponding to each import statement. 3. for each import statement in the file, the position in the file where the import is used. This is used for ranking the files based on the heuristics mentioned in Table 2. 4. list of sibling files 5. list of similar name files. This was done by splitting the filenames based on either camel-case or underscore. If the sub-parts of two files match, then they are said to have similar names. The above meta-data was calculated only once for each repo. The subsequent hole completions can use the same cached information. In practice, we can use a hash to store and retrieve this info efficiently (a small sketch is given below). For a prompt proposal, given the prompt source, we first obtain a single file or a ranked list of files (see Table 2) using the info in the parse tree in conjunction with the above repo-level meta-data. All the prompt proposal context type information (MN, MNB, SL, I, TI, FD) can then be obtained by querying the parse tree of the selected file. 8https://code.google.com/archive/ 9https://github.com/tree-sitter/tree-sitter-java
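As an illustration, the cached per-file metadata can be sketched as below; the field and variable names are hypothetical, not from the released code.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FileMetadata:
    # Computed once per repository from the tree-sitter parse trees.
    class_names: List[str] = field(default_factory=list)
    import_to_file: Dict[str, str] = field(default_factory=dict)            # import statement -> resolved file
    import_usage_lines: Dict[str, List[int]] = field(default_factory=dict)  # import -> positions where it is used
    sibling_files: List[str] = field(default_factory=list)
    similar_name_files: List[str] = field(default_factory=list)

# The hash mentioned above: file path -> cached metadata for that file.
repo_metadata: Dict[str, FileMetadata] = {}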
B PROMPT PROPOSAL DETAILS B.1 RANKING OF FILES BASED ON PROMPT SOURCE In Table 2, we provide details of how we select files for a given prompt source. Depending on the prompt proposal, we get either a single file or a list of files ranked based on some criteria. For example, if the prompt source is Import, we take all the import statements used in the current file and identify the locations in the current file where the corresponding imports have been used. According to our heuristic, the closer the import usage is to the hole position, the more likely it is that the prompt proposal context coming from the corresponding import file is relevant (for predicting the target hole). We get a ranked list of import files sorted based on increasing order of distance (i.e., number of lines) between the import usage and the hole position. We start by taking all of the prompt proposal context from the first file in the ranked list and then keep iterating through the ranked list until either the total context length allocated to the prompt proposal gets exhausted or we reach the end of the ranked list. B.2 EXAMPLES OF PROMPT CONTEXT TYPE We provide examples of each of our prompt context types below: 1. Post Lines (PL): For the example shown in Figure 1 of the main paper, post lines will take all the lines after the line mg.InitializeToAssignment(CurrentAssignments()) till we reach the end of the file (AffinityPropagation.java). 2. Identifiers (I): Identifiers are the names of variables used in the code. For example, for the prompt proposal context taken from the imported file shown in Figure 1 in the main paper (highlighted in violet), identifiers are InitializeToAssignment (line 1), a (line 1), currentAssignment_ (line 2), a (line 2), clone (line 2), alreadyInitialized_ (line 3), justOneRound_ (line 4). 3. Type Identifiers (TI): Type identifiers define the type of an identifier. For example, in the code snippet class DPAffinityPropagation extends AffinityPropagation, AffinityPropagation is labeled as a type identifier. Similarly, in the snippet DPAPParameters parameters_;, DPAPParameters is a type identifier. 4. Field Declarations (FD): The variables of a class type are introduced by field declarations. For example, double[][] mHijMujT_; and MessageValuePair[][] sortedMHijMujTs_; are examples of field declarations. 5. String Literals (SL): A string literal is the sequence of characters enclosed in double quotes. For example, in the code snippet System.err.println("DPAP load Warning: unknown parameter " + entries[0] + ", value = " + entries[1]);, we have two string literals: (a) "DPAP load Warning: unknown parameter "; (b) ", value = ". 6. Method Names (MN): For the example shown in Figure 1 of the main paper, public void InitializeToAssignment(int[] a) is the method name prompt context type. 7. Method Names and Bodies (MNB): For the example shown in Figure 1 of the main paper, the part highlighted in violet represents the method names and bodies. B.3 TRUNCATION STRATEGIES FOR PROMPT PROPOSAL CONTEXT If the prompt proposal context is greater than the context length allocated to it, then we need to truncate the prompt proposal context. We followed the two schemes below for truncating context: • front: We truncate the context from the front. This is used for all prompt sources except Parent Class and when we take PL from Current. • back: We truncate the context from the back. This is used when the prompt source is Parent Class and when we take prompt context types other than PL from Current. The truncation strategies for each case were selected based on results on a small validation set; a minimal sketch of the two schemes follows.
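The sketch below assumes the context has already been tokenized; the function name is ours, not from the released code.

def truncate(tokens, budget, scheme="front"):
    # "front" drops tokens from the front (keeps the end of the context);
    # "back" drops tokens from the back (keeps the beginning).
    if len(tokens) <= budget:
        return tokens
    if scheme == "front":
        return tokens[-budget:]
    if scheme == "back":
        return tokens[:budget]
    raise ValueError("unknown truncation scheme: " + scheme)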
For the prompt source Current, except when the prompt context type is PL, we always start by taking code of the given prompt context type from after the hole position. This makes sense, as the default Codex context will anyway contain code before the hole. Only if this turns out to be blank do we use the code of that context type from before the hole. B.4 LIST OF PROMPT PROPOSALS B.5 OTHER PROMPT PROPOSAL VARIATIONS We experimented with other variations that include: (a) appending class names at the beginning of the prompt proposal context, (b) using newline or space to join the prompt proposal context and the default Codex context, (c) taking all or the top-k of the prompt context types, (d) ordering of top-k. • Context Separator: This defines how we join the prompt proposal context string to the default Codex context string. We experimented with space and newline as context separators. • Prompt Proposal Context Formatting: We can format the prompt proposal context before giving it to the Prompt Composer. We experimented with the following options: 1. class_name: append [class name of the file] at the beginning of the prompt proposal context taken from each file that is part of the prompt source. For example, if we are taking prompt proposal context from two import files f1 and f2, the prompt proposal context will be formatted as: [class name of f1] prompt proposal context from f1 + space + [class name of f2] prompt proposal context from f2. We use this when the prompt proposal context types are MN, I, TI, FD and SL. 2. class_method_name: we apply this only when the prompt proposal context type is MNB. We append method names at the beginning of each of the corresponding method bodies. We also append the prompt proposal context from a file with the name of the class as described in the previous item. 3. comment: Adding the prompt proposal context as a comment, i.e., formatting it as: /** prompt proposal context */. This wasn’t found to be very useful. 4. none: passing the prompt proposal context as it is. We use this when the prompt proposal context type is PL. • Top-k Type: For each of the prompt proposal context types, except PL, we experimented with taking the (a) first, (b) last, and (c) all of the prompt proposal context types, i.e., we can take the first 10 identifiers. We found ’all’ to be the best among all. • Top-k: We experiment with k values of (a) 10, (b) 20, and (c) all. We found ’all’ to work best for all prompt context types. C IMPLEMENTATION DETAILS C.1 RLPG-H We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model F_φ to obtain the representation of the hole window. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. W^1 ∈ R^{512×768}, b^1 ∈ R^{512}, W^2 ∈ R^{63×512}, b^2 ∈ R^{63}. C.2 RLPG-R We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model F_φ to obtain the representations of the hole window and the prompt proposal context. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. In Equations 3, 4 and 5 in Section 2.2, the projection matrices W^Q_i ∈ R^{d_q×d_model}, W^K_i ∈ R^{d_k×d_model}, W^V_i ∈ R^{d_v×d_model}, and W^O ∈ R^{d_model×τd_v}. For the multi-head attention, we used d_k = d_q = d_v = 32, τ = 4 and d_model = 768, with W_p ∈ R^{63×768} and b_p ∈ R^{63}. For each head, we perform a scaled dot-product attention (Equation 4); a minimal sketch of this scoring path is given below.
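The sketch is an illustrative PyTorch-style reconstruction of the RLPG-R scoring path (Equations 3-6), not the released implementation: PyTorch's built-in multi-head attention splits d_model evenly across heads rather than using the per-head d_k = d_q = d_v = 32 above, the score is produced per (hole, proposal) pair for clarity, and the feedforward module G (described next) is elided.

import torch
import torch.nn as nn

class RLPGRScorer(nn.Module):
    # Multi-head attention between the hole-window representation (query)
    # and a prompt proposal context representation (key/value), followed by
    # a linear layer and sigmoid, as in Equation 6 (module G elided here).
    def __init__(self, d_model=768, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, hole_repr, proposal_repr):
        # hole_repr: [batch, 1, d_model] = F_phi(H^h)
        # proposal_repr: [batch, 1, d_model] = F_phi(C^h_p)
        out, _ = self.attn(hole_repr, proposal_repr, proposal_repr)
        return torch.sigmoid(self.score(out[:, 0]))  # P(y^h_p = 1 | H^h, C^h_p)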
G module consists of a dropout (Srivastava et al., 2014) layer, a residual connection (He et al., 2016), a layernorm (Ba et al., 2016), followed by a sequence of (a) dense layer of weights=2048 × 768, bias=768, (b) relu activation, (c) dense layer of weights=768 × 2048, bias=2048, (d) dropout layer, (e) residual connection, (f) layernorm. A dropout value of 0.25 was used while training. Our model resembles one layer of the transformer encoder block (Vaswani et al., 2017). C.3 BASELINES Random baseline first selects a file randomly from the current repository followed by selecting a random line within that file. We choose all the lines starting from that line to the end line of the chosen file as context (excluding the hole window if the chosen file is the current file). The nearest neighbour similarity is based on the dot product between the representation of the hole window and the representation of the context, where we use a pretrained CodeBERT (Feng et al., 2020) model to obtain the representations. For the Identifier Usage baseline, if the nearest identifier to the hole doesn’t return any usage window, we proceed to the next nearest identifier. For faster computation and to avoid memory issues when running on our hardware, for NN baselines, we collect 64 random neighbours and then rank based on the nearest neighbour distance. The BM25-based baselines use the Okapi BM25 implementation with default parameters given by the pip package rank-bm25 0.2.2 10. For file-level BM25, if the file context exceeds the allocated context length, we truncate from the back. 10https://pypi.org/project/rank-bm25/ D ADDITIONAL RESULTS D.1 ABLATION ON PERFORMANCE BASED ON PROMPT PROPOSAL Figure 4 shows the mean success rate of prompt context types when success is counted only for the cases when these prompt contexts are applicable. As can be seen from the figure, post lines is the most useful prompt context type on an average. The contribution from other prompt context types though smaller than post lines is still significant highlighting the importance of each prompt context type. Figure 5 shows the normalized success rates where the normalization is performed across the prompt proposals. This helps us understand the relative performance of prompt proposal sources and context types. The left part of the figure breaks down the performance based on prompt sources and the right part breaks down based on prompt context types. One thing to note from the plot of prompt context types is that when we consider relative performance, post lines is no longer the most dominant context type. This is because post lines is tied to only when the prompt source corresponds to the current file, thereby contributing to lower numbers when compared to most of the other context types that are tied to all prompt sources. D.2 PERFORMANCE ON NON-IMMEDIATE POST LINES Table 4 shows the performance of post lines when starting the fourth line after the target hole line (i.e., skipping three lines after the target hole) as opposed to starting from the line that immediately follows the target hole line. This experiment helps us understand the performance when we are interested in doing a much harder task of multi-line code autocompletion, wherein the objective is to predict not just the blanked out portion in the current line but also say the next three lines. This can correspond to completing a block of code like a function body. 
As can be seen from the table, when starting from the fourth line, we see a very slight deterioration in performance. This is expected because the farther away we move from the target hole, the less relevant the post lines context would be. However, the performance drop is not significant, suggesting that post lines is still a very useful prompt context type that can be used under the setting of multi-line code-autocompletion. Alternatively, we can include this as one of the prompt proposals in our framework along with the current version of post lines. D.3 COMPOSITION OF PROMPT PROPOSALS Table 5 shows the performance of the two versions of RLPG when we compose the prompt proposal context from l prompt proposals. We take the top-l prompt proposals given by RLPG based on decreasing order of probability. To decide how much context should be used for each prompt proposal, we divide the total context length in proportion to the normalized probabilities of the top-l prompt proposals. As can be seen from the table, even though PPC is not explicitly trained to perform composition (both the ground-truth vector and the representation of prompt proposal context involve a single prompt proposal), all the compositions lead to significant improvements over Codex. However, as expected, the best results correspond to taking context from a single prompt proposal (i.e., the training setting). The drop in success rate with l = 2 and l = 5 is not that significant, which suggests that explicitly training RLPG to learn to compose contexts from different prompt proposals can lead to promising results and hence offers an interesting future direction. D.4 EFFECT OF CONTEXT LENGTH To understand the effect of context length on the performance of our prompt proposals, we took half of the context length available for a prompt in Codex and observed the performance of the oracle and fixed prompt proposal. As before, we saw that an oracle constructed from our prompt proposals shows remarkable improvement over Codex, highlighting the value of our prompt proposals. However, when compared to a larger context length, the relative gains are smaller. This is expected, as a smaller context length means that the relevant context coming from a prompt proposal needs to be truncated to make it fit inside the prompt, thereby leading to loss of information. D.5 PERFORMANCE ON INDIVIDUAL REPOSITORIES Table 7, Table 8 and Table 9 present the success rates of different methods over individual repositories in the training, validation and test splits, respectively. The repo-wise averages in Table 1 in the main paper were calculated by taking the average of the numbers corresponding to each column. The hole-wise averages correspond to multiplying the repo-wise numbers of each method by the total holes in the repo to get the total number of successful holes by that method for that repo. We then add the total number of successful holes across repos and divide it by the total number of holes in the entire data split to get the hole-wise averages; a short sketch of this procedure is given below.
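To make the two averaging schemes concrete, here is a minimal Python sketch; it assumes per-repository success rates (in percent) and per-repository hole counts are available, and the function name is ours, not from the released code.

def repo_and_hole_wise_averages(repo_success_rates, repo_hole_counts):
    # repo_success_rates: per-repository success rates (in percent) for one method
    # repo_hole_counts: total number of holes in each repository (same order)
    repo_wise = sum(repo_success_rates) / len(repo_success_rates)
    # Successful holes per repo = success rate * total holes in that repo.
    successful = sum(r / 100.0 * n for r, n in zip(repo_success_rates, repo_hole_counts))
    hole_wise = 100.0 * successful / sum(repo_hole_counts)
    return repo_wise, hole_wise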
This, in conjunction with the default Codex context, which contains the method CurrentAssignments() (part of the target hole), leads to the generation of a successful prompt. On the other hand, the prompt created from the default Codex context alone fails to predict the target hole in this case. In general, we observed that in the absence of a strong signal, Codex has a tendency to give preference to natural language comments occurring before the hole position, e.g. naming the method based on the comment. In certain cases this might hurt. We provide instances of positive and negative sample cases for RLPG below: E.1 POSITIVE CASES We provide some examples of cases where RLPG led to the correct prediction and Codex failed.
1. Cases where part of the target hole is found exactly in the prompt proposal context.
• RLPG = Propagation(int numVars) vs Codex = Propagation()
• RLPG = tersFromFile(String filename) { vs Codex = ters(String filename) {
• RLPG = als("dampingFactor")) { vs Codex = als("numVars")) {
• RLPG = ] + ", value = " + entries[1]); vs Codex = ]);
• RLPG = stem.exit(1); vs Codex = stem.err.println("DPAP load error: " + ex.get
2. Cases where Codex takes a strong hint from the preceding natural language comment, thereby producing incorrect predictions.
• RLPG = d PassMessages() vs Codex = d DoOneRoundOfMessagePassing()
• RLPG = teger> CurrentExemplars() { vs Codex = teger> ChooseExemplars() {
• RLPG = ring FileName() { vs Codex = ring GetAlgorithmFilename() {
E.2 NEGATIVE CASES In certain cases, extra information from the prompt proposal context might lead to confusion and produce incorrect predictions.
• RLPG = an hasConverged_; vs Codex = an converged_;
• RLPG = _[i][j] = -Double.MAX_VALUE; vs Codex = _[i][j] = 0;
1. What is the focus of the paper regarding generating code from prompts?
2. What are the strengths of the proposed approach, particularly in terms of novelty and investigating multiple baselines?
3. What are the weaknesses of the paper, especially concerning the overlap with pretraining data and modest improvements?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper is part of the stream of papers that try to generate code given a prompt. This paper tries to assess whether performance can improve if additional context is given as part of the prompt. In particular, they predefine different types of contexts and then train a classifier to predict which context should be prepended to the prompt. To get the training data, they give different types of contexts to Codex and if a particular context is able to make the completion match the ground truth completion, they label that context as 1. They then train a multi-label classifier and use its predictions at inference time to decide what context to add on to the prompt. They compare against several baselines - no additional context, oracle context (using the ground truth correct contexts) which is the upper bound, random context, nearest neighbors in representation space from the random contexts, lines using the closest identifier throughout the repo, and context to the right of the line to be completed. The strongest baseline is the right context baseline, against which they show a 1-4% relative improvement, and a 14-16% relative improvement over no additional context (Table 1).
Strengths And Weaknesses
Strengths: (1) The idea of generating training labels for the correct contexts and training a classifier to predict what the correct contexts should be is novel and interesting. (2) They investigate multiple possible baselines.
Weaknesses: (1) Overlap with pretraining data -- Github is not the only source of training data for Codex (https://openai.com/blog/openai-codex/ -- "OpenAI Codex is a descendant of GPT-3; its training data contains both natural language and billions of lines of source code from publicly available sources, including code in public GitHub repositories"). Thus it is very possible that Google Code -- which they use for their test data -- is part of the training data for Codex. Furthermore, deduping based on just repo name match is highly imperfect. A much better way would be file-level deduping or suffix-array-based deduping. Unless there's a better understanding of what the pretraining data consists of, the results might be invalid. The authors could consider using a model like CodeGen for which the pretraining dataset is known. Alternatively they could use code published after June 2021 (the training data cutoff for the Codex davinci-002 model), dedupe it against the code data available up to June 2021 and use that as their test data. (2) The improvements over both the right context and identifier usage baselines are modest. (3) They don't explore how different prompt proposals could be combined with each other.
Clarity, Quality, Novelty And Reproducibility
Clarity: It's not clear how they are selecting the prompt proposal at inference time. Are they taking the argmax over the probabilities for the different prompt proposals?
Quality: Error bars are not given. It's unclear how much overlap the test set has with the pretraining data.
ICLR
Title Repository-Level Prompt Generation for Large Language Models of Code Abstract With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex (Chen et al., 2021) used in GitHub Copilot1), techniques for introducing domain-specific knowledge into the prompt design process become important. In this work, we propose a framework called the Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn't require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines. 1 INTRODUCTION Large Language Models (LLMs) have demonstrated remarkable performance in natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022), text-to-image generation (Ramesh et al., 2022; Rombach et al., 2021), protein sequencing (Rives et al., 2019) and even as a generalized agent (Reed et al., 2022). As opposed to the pretrain-finetune paradigm, prompting these LLMs has been found to yield good performance even with few examples (Liu et al., 2021a). A prompt is an input to the LM such that the desired task can be expressed as predictions generated from the LM. Besides providing a mechanism to control and evaluate an LM, prompts have been shown to elicit emergent behaviour as well. Examples of this behavior include GPT-3 (Brown et al., 2020) doing better on tasks it has never seen during training, and improved reasoning capabilities with few-shot (Wei et al., 2022) and zero-shot (Kojima et al., 2022) prompts that encourage a chain of thoughts. These factors highlight the importance of designing an effective task-specific prompt 2. However, currently we have limited understanding of how to do this (Reynolds & McDonell, 2021). LLMs have also been used for modeling source code with impressive results (Austin et al., 2021; Fried et al., 2022; Xu et al., 2022a). In particular, one of the best performing LLMs, Codex (Chen et al., 2021), has been deployed as part of GitHub Copilot 1, a state-of-the-art in-IDE code assistant. Despite the growing popularity of LLMs of code, there is no work that systematically tackles the different aspects of prompt generation in relation to source code. One such aspect is that when it comes to code, the relevant context to be put in the prompt can come not just from the current file, but also from outside it, such as imports and parent classes. Also, depending on the scenario, the relevant context can be scattered across multiple locations. Since LLMs have a limited context length available for the prompt, it becomes increasingly crucial for our domain-specific understanding to guide the selection of relevant context. Currently, it is not clear how to integrate this domain knowledge of what constitutes a relevant context into the process of creating prompts.
Addressing this question has potential benefits in other domains such as question answering (Liu et al., 2022) and multi-document summarization (Xiao et al., 2022), where domain-specific structured retrieval of context can be useful. 1https://copilot.github.com/ 2Platforms such as PromptBase https://promptbase.com/ allow buying and selling of prompts. In this work, we address this problem by proposing the Repo-Level Prompt Generator (RLPG), a framework that, while generating the prompt, incorporates both the structure of the repository and the relevant context in all the files of the repository. In RLPG, the choice of where in the repository to take context from, and what to take, is specified by a set of prompt proposals. For example, one of the prompt proposals can be to take all the identifiers used in the first import file. These prompt proposals allow prompt engineers to inject their domain expertise into the prompt-designing process. With the increasing use of LLMs as assistive agents to humans, the demand for transparency, and the desire of software engineers to take an active part in tailoring prompts to suit their requirements (Jiang et al., 2022; Sun et al., 2022), this capability becomes important. As suggested in some previous works in NLP (Shin et al., 2020; Schick & Schütze, 2021), our prompt proposals are discrete. However, rather than fixing one particular prompt proposal for each example, we instead predict the best prompt proposal conditioned on the example. We do this with a neural network called the Prompt Proposal Classifier (PPC) that, given an example, learns to select a prompt proposal such that the resulting prompt is likely to produce the desired output. Therefore, RLPG allows the introduction of domain expertise, and at the same time facilitates automatic example-specific prompt generation via a learned neural network. Note that there are some techniques for automatic prompt generation in NLP (Li & Liang, 2021; Shin et al., 2020; Lester et al., 2021) that require updating some or all of the weights of the LLM. However, the strongest LLMs are not publicly available (e.g. OpenAI provides access only to the generated output from Codex via an API 3 and no access to model weights and data is provided), making these techniques less useful under this scenario. RLPG addresses this limitation by generating prompts assuming only black-box access to the LLM. We focus on the task of single-line code-autocompletion in an IDE, where the objective is to predict the blanked-out portion (or target hole) starting from the position of an imagined cursor to the end of the line. We operate under the line-level maintenance setting (Shrivastava et al., 2020; Hellendoorn & Devanbu, 2017) that reflects the scenario where a user is editing an existing file. This means that there can be code following the line. Figure 1 provides an illustration of our approach. The prompt proposal classifier takes in the hole position (position of the cursor) in the current file, the repository to which the current file belongs and a set of repo-level prompt proposals as input, and predicts a prompt proposal. In our illustrated example, the predicted prompt proposal corresponds to taking the method names and bodies from MaximizingGibbsSampler.java (mg. before the hole position indicates that a method from the imported file is likely to be invoked).
The Prompt Composer uses the context from the predicted prompt proposal and combines it with the default Codex context, i.e., code prior to the position of the hole in the current file. The resulting prompt consists of the method name InitializeToAssignment (from the prompt proposal context) and the method CurrentAssignments() (from the default Codex context), resulting in a successful prediction (brown box on the top) of the target hole. 3https://openai.com/blog/openai-codex/ Our key contributions are as follows: • We propose a framework called the Repo-Level Prompt Generator (RLPG) that learns to generate prompts conditioned on the example, without requiring access to the weights of the LLM. • To incorporate domain knowledge in the prompt design process, RLPG uses a set of repository-level prompt proposals. These prompt proposals are designed to incorporate both the structure of the repository as well as the relevant context from all files in the repository. • On the task of single-line code-autocompletion, we show that an oracle constructed from our proposed prompt proposals gives up to 36% relative improvement over Codex. This improvement is pleasantly surprising as Codex has never seen prompts made from these prompt proposals during training. Further, we show that when we use our prompt proposal classifier to predict the best prompt proposal, we can achieve up to 17% relative improvement over Codex. 2 REPO-LEVEL PROMPT GENERATOR (RLPG) In this section, we provide details of our framework. We start by describing our prompt proposals, then discuss our prompt proposal classifier, and conclude with a description of the prompt composer. 2.1 REPO-LEVEL PROMPT PROPOSALS The core idea of RLPG consists of substituting part of the default context used by Codex with context coming from somewhere else in the repository. The decision of what to take and from where in the repository to take it is governed by a set of prompt proposals. These prompt proposals were decided based on manual inspection of our training data and intend to capture common coding patterns (but more generally can also include project/organization-specific coding practices). A prompt proposal can be thought of as a function that takes as input a target hole's position and the repository that the hole is a part of, and that returns the prompt proposal context (a string constituted by the context from the prompt proposal). A prompt proposal is specified by a prompt source and a prompt context type. We mention each of these along with their motivation below. Prompt Source: For a target hole position, a prompt source determines from where we should take code that will be part of the prompt proposal context. We propose ten different prompt sources: 1. Current: take code from the current file excluding the contents of the target hole. The current file is the file that contains the target hole. The code in the current file (e.g. the lines after the hole position) can be very useful in predicting the target hole. 2. Parent Class: take code from the file that contains the parent of the class to which the target hole belongs. The intuition behind this is to account for cases where a method present in the parent class is invoked in the current file (i.e. the child class). 3. Import: take code from the import files used in the current file. The dependencies specified via imports can provide useful cues to predict the target hole. 4. Sibling: take code from the files that are in the same directory as the current file.
Files in the same directory tend to share code variables (e.g. identifiers). 5. Similar Name: take code from files that have a similar name to the current file. Similar names are determined by splitting the file name based on underscore or camel-case formatting and then matching parts of the filename. If one or more parts match, two files are considered to have similar names. The intuition behind this is that software developers tend to name files based on the functionality of the code written in that file. Therefore, a similar name file might contain some portion of the code that is common with the current file and hence might be useful for predicting the target hole. 6. Child Class: take code from files that have the current file as their parent class file. 7. Import of Parent Class: take code from the import files used in the parent class files. 8. Import of Sibling: take code from the import files used in the sibling files. 9. Import of Similar Name: take code from the import files used in the similar name files. 10. Import of Child Class: take code from the import files used in the child class files. The last four prompt sources are useful when the target hole occurs at the very beginning of the current file. In these cases, there would be less context coming from other prompt sources. For each prompt source, we can get either a single file or a ranked list of files (see Appendix B.1). In the latter case, we take context from these files until we exhaust the maximum context length allocated to the prompt proposal. Prompt Context Type: The prompt context type determines what code to take from the prompt source. We propose seven different prompt context types (Appendix B.2 has examples of each type): 1. Post Lines (PL): Take all the lines after the target hole line until we reach the end of the file. This context type is applicable only when the prompt source is the current file 4. 2. Identifiers (I): Take all the identifiers used in the prompt source. 3. Type Identifiers (TI): Take all the type identifiers used in the prompt source. 4. Field Declarations (FD): Take all the field declarations used in the prompt source. 5. String Literals (SL): Take all the string literals used in the prompt source. 6. Method Names (MN): Take all the method names along with their signatures that are used in the prompt source. 7. Method Names and Bodies (MNB): Take all the method names along with their signatures and corresponding bodies used in the prompt source. By combining prompt sources with prompt context types, we get a total of 63 prompt proposals (see Appendix B.4 for details). Note that depending on the target hole, not all prompt proposals would be applicable (e.g. if there are no parent classes in the current file, prompt proposals with the prompt source as the parent class file won't be applicable). In Figure 1, the predicted prompt proposal corresponds to taking prompt source Import and prompt context type MNB. We aimed for a set of prompt proposals that offers diversity rather than a set of prompt proposals that are all individually strong. This in turn ensures that for any hole position, a significant number of prompt proposals are applicable. 2.2 PROMPT PROPOSAL CLASSIFIER (PPC) Given a hole position, the goal of the prompt proposal classifier is to predict the prompt proposal p that will lead to success, where success happens when the predicted hole ĥ exactly matches the target hole h.
This task is formulated as a multi-label binary classification problem since, for a given target hole, more than one prompt proposal can lead to success. In this formulation, we treat the default Codex context as one of the prompt proposals. Next, we describe the training procedure for PPC. Training: For each target hole h, we generate a ground-truth vector $Y^h = [y_p^h]_{p=1}^{M}$, which is a multi-hot vector of size M, where M is the total number of prompt proposals. This vector is obtained by feeding the prompt generated from prompt proposal p into Codex and then seeing whether ĥ = h. If there is a match, we say that the prompt proposal p is successful. For hole h, if a prompt proposal p is applicable and leads to success, $y_p^h = 1$, and it is zero otherwise. For each hole h, we obtain a mask $T^h$ where $T_p^h = 1$ when p is applicable and zero otherwise. The overall training loss L can be expressed as the sum of individual hole losses $L_h$ as follows:

$L = \frac{1}{N}\sum_{h=1}^{N} L_h = \frac{1}{N}\sum_{h=1}^{N} \frac{1}{M_h} \sum_{p=1}^{M} \mathrm{BCE}(\hat{y}_p^h, y_p^h) \cdot T_p^h, \quad \text{where } M_h = \sum_p T_p^h$ (1)

In the above equation, N is the total number of holes encountered while training, $M_h$ denotes the total number of applicable prompt proposals for h, and BCE corresponds to the binary cross entropy loss. Masking ensures that we consider only the prompt proposals that are applicable. Next, we describe our two variants of PPC that can be used to obtain the prediction $\hat{y}_p^h$. RLPG-H: Let $H^h$ be the hole window that includes code present around the hole h, excluding the hole itself. In our work, we take two lines before the hole position, the code up to the hole position, and two lines after the hole position. We use a pretrained model $F_\phi$ to obtain a context representation vector of size Z, where Z is the dimension of the hidden state of the model. Specifically, we take the hidden state at the first position, i.e. the representation of the [CLS] token. To make training of PPC computationally efficient, the parameters $\phi$ are frozen during training. The RLPG-H model takes the context representation of the hole window and projects it to the prompt proposal space of size M via two dense layers with a non-linearity in between (see Equation 2). Taking the sigmoid of this output gives the prediction of the prompt proposal.

$\hat{y}_p^h = P(y_p^h = 1 \mid H^h) = \mathrm{sigmoid}\big(W^2\,\mathrm{relu}(W^1 F_\phi(H^h) + b^1) + b^2\big)$ (2)

4We also conducted experiments (Appendix D.2) where we take lines starting from the 4th line after the hole.

RLPG-R: The motivation behind this variant is to use the similarity of the hole window and the prompt proposal context to determine which prompt proposal can be useful. Given a particular hole h, let $C_p^h$ denote the prompt proposal context from prompt proposal p. Intuitively, if the hole window contains variables (e.g. identifiers) that are similar to the variables in the prompt proposal context, then there are chances that h might occur somewhere in $C_p^h$. The similarity is modeled using a multi-headed attention mechanism (Vaswani et al., 2017), by treating the projected hole window representation as a query $Q^h$ and the projected prompt proposal context representation $K_p^h$ as a key (Equation 3). The value $V_p^h$ is the same as the key.

$Q^h = F_\phi(H^h), \quad K_p^h = F_\phi(C_p^h), \quad V_p^h = F_\phi(C_p^h)$ (3)

$\mathrm{Att}(Q^h, K_p^h, V_p^h) = V_p^h \, \mathrm{softmax}\!\left(\frac{Q^{h\top} K_p^h}{\sqrt{d_k}}\right)$ (4)

$\mathrm{MultiHead}(Q^h, K_p^h, V_p^h) = W^O \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_\tau), \quad \text{where } \mathrm{head}_i = \mathrm{Att}(W_i^Q Q^h, W_i^K K_p^h, W_i^V V_p^h)$ (5)

$\hat{y}_p^h = P(y_p^h = 1 \mid H^h, C_p^h) = \mathrm{sigmoid}\big(W_p\, G(\mathrm{MultiHead}(Q^h, K_p^h, V_p^h)) + b_p\big)$ (6)

In the equations above, $d_k$ is the dimension of the key, $W_i^Q$, $W_i^K$, $W_i^V$ are the query, key and value projection matrices, $\tau$ is the number of heads, and $W^O$ is the linear projection that combines the heads. The output from Equation 5 is fed to a module G consisting of a two-layer feedforward network with relu activation in between (see Appendix C for more details). The resulting output is then linearly projected and a sigmoid is applied to get the predicted prompt proposal (Equation 6). 2.3 PROMPT COMPOSER The prompt composer combines the context from the selected prompt proposal (given by PPC) with the context normally used by Codex (the default Codex context) to generate the prompt. Since the total length that can be used for a prompt is fixed, we adopted a dynamic context allocation strategy where, if the prompt proposal context is shorter than its allocated length, we assign the remaining portion of its allocation to the default Codex context. The prompt proposal context is always added before the default Codex context. For all prompt proposals, we assign half of the total context length to the prompt proposal context and the remainder to the default Codex context. For post lines, in addition, we also assign one-fourth and three-fourths of the total context length to the prompt proposal context. If the prompt proposal context or the default Codex context is greater than the context length allocated to it, we truncate it (see Appendix B.3 for our truncation strategies). 3 EXPERIMENTS AND RESULTS In this section, we describe our process of dataset creation and the details of our experiments along with their results and interesting ablation studies. 3.1 DATASET CREATION To mitigate the effects caused by potential memorization of the code present in the dataset used for training Codex, we avoided code repositories from GitHub (Chen et al., 2021). Instead, we scraped Google Code 5 for repositories in Java (removing the ones that matched a repository on GitHub with the same name). 5https://code.google.com/archive/ We selected the repositories that had a permissive license, giving us a total of 47 repositories. We divided the repositories into train, validation and test splits, where each repository in its entirety is part of a split. In each file within a repository, we remove lines that are blank or comments, and set the hole position to be the middle character in the line. All the characters from the middle position to the end of the line constitute the target hole. Since code duplication has been shown to have adverse effects (Allamanis, 2018), within a repository, we look for files that are exact replicas of each other but placed in a different folder. We mark all such copies as duplicates and omit all of them when creating target holes for our dataset. Note that the prompt proposal context can still come from the duplicate files. We felt comfortable with this choice since we wouldn't want to predict a target hole in a duplicate file, but we can still use the context from the duplicate file to predict the hole in a file that is not its duplicate (e.g. in a sibling file). Further, we found that the repositories were quite uneven in terms of their size. To avoid large repositories dominating the training of PPC, we capped the maximum contribution of holes from a repository to 10,000, i.e.
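To make Equations 3-6 concrete, here is a minimal PyTorch sketch of the RLPG-R scorer under one reading of Appendix C.2: both the hole window and a prompt proposal context are single 768-dimensional [CLS] vectors, so attention runs over length-1 sequences. All names are ours and this is an illustration, not the authors' code; note also that PyTorch's MultiheadAttention ties the per-head dimension to embed_dim/num_heads, so the head size differs slightly from the paper's $d_k = 32$.

```python
import torch
import torch.nn as nn

class RLPGRScorer(nn.Module):
    """Sketch of RLPG-R (Equations 3-6); an illustration, not the authors' code."""

    def __init__(self, d_model=768, n_heads=4, n_proposals=63, p_drop=0.25):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.norm1 = nn.LayerNorm(d_model)
        # Two-layer feedforward network of module G (inner size 2048 per Appendix C.2).
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 2048), nn.ReLU(),
            nn.Linear(2048, d_model), nn.Dropout(p_drop),
        )
        self.norm2 = nn.LayerNorm(d_model)
        self.out = nn.Linear(d_model, n_proposals)  # W_p and b_p of Equation 6

    def forward(self, hole_repr, ctx_repr):
        # hole_repr, ctx_repr: (batch, 768) [CLS] vectors of H^h and C_p^h.
        q = hole_repr.unsqueeze(1)           # query, shape (batch, 1, 768)
        kv = ctx_repr.unsqueeze(1)           # key = value, shape (batch, 1, 768)
        attended, _ = self.attn(q, kv, kv)   # multi-head attention, Equations 4-5
        x = self.norm1(q + self.drop(attended))  # dropout, residual, layernorm of G
        x = self.norm2(x + self.ffn(x))          # FFN, residual, layernorm of G
        probs = torch.sigmoid(self.out(x)).squeeze(1)  # (batch, 63), Equation 6
        return probs  # entry p is read off as the probability for proposal p
```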
if the total number of holes in the repository exceeded 10,000, we selected 10,000 holes randomly from the total holes. Please see the left part of Figure 2 for statistics of our dataset. The #Holes represents the holes after deduplication and capping. For some of our prompt proposals, we require semantic information that can be obtained with a parse tree. We used the tree-sitter API for Java 6, which enables us to get the AST of a file and query it. Since our prompt proposals need information at a repository level, we stored some extra information that allowed us to collate the information from individual files according to the directory structure inside the repository (see Appendix A for more details). 3.2 EXPERIMENTAL DETAILS Prompt Generation: We used the OpenAI Codex Completions API for generating the predicted hole from the Codex model. In particular, we used the code-davinci-001 engine with temperature set to 1.0 and the stop criterion set to newline. The completion length was kept at 24 and the maximum prompt length was 4072. Tokenization was done using the suggested tokenizer 7. To allow for fast computation, we used simple models like CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2020) as our pretrained models. One of the limitations of these pretrained models is that the maximum context length that can be taken as input by these models is much smaller than the maximum context length allowed by Codex. Therefore, when getting the representation of the prompt proposal context that is used by PPC, we need to truncate the prompt proposal context, which might lead to omitting important parts of it in certain cases. Using pretrained models that allow a larger context length, or models that augment the context (Wu et al., 2022), offers avenues for future work. See Appendix D.4 for results when using a smaller context length from Codex. Computational Complexity and Scalability of RLPG: To collect the ground-truth data for training our prompt proposal classifier, we queried the Codex API for each applicable prompt proposal per hole (maximum rate limit of 400 holes per minute). The computational complexity of training our larger RLPG-R variant (3.6M parameters, 141269 holes and 9.19 minutes per epoch on a single Tesla V100 GPU) is much smaller than finetuning all or some part of Codex (12B parameters). During inference, we need to calculate the repo-level statistics just once and all the subsequent hole completions in the repo can utilize this cached information, incurring no additional computational complexity. Besides training the PPC, all our experiments were performed on a CPU with 8GB RAM. Our prompt proposals are based on concepts such as post lines, imports, similar name files, method names and identifiers that are quite general and are applicable to other programming languages. In addition to the existing prompt proposals, our framework provides the flexibility to incorporate new prompt proposals. Since the cost of retraining RLPG with the extended prompt proposals is extremely low (much lower than finetuning Codex with the new prompt proposals), our framework can be used to make interventions on the LLM to address observed weaknesses, as long as the intervention can be expressed as a prompt proposal that adds the missing context to the LLM.
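For concreteness, the per-hole query described under Prompt Generation above might look roughly as follows. This is a minimal sketch using the legacy OpenAI Completions API; the helper name is ours, and API-key setup, rate limiting and tokenizer-based prompt truncation are omitted.

```python
import openai  # legacy OpenAI SDK exposing the Completions endpoint

def predict_hole(prompt: str) -> str:
    """Query Codex once for a hole completion with the paper's settings."""
    response = openai.Completion.create(
        engine="code-davinci-001",  # engine used in the paper
        prompt=prompt,
        temperature=1.0,
        max_tokens=24,              # completion length from Section 3.2
        stop="\n",                  # stop criterion: newline
    )
    return response["choices"][0]["text"]
```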
As opposed to techniques that perform prompt engineering in the latent space and require access to the weights of the LLM, such as Li & Liang (2021), RLPG facilitates expressing intent in the form of prompt proposals that are intuitive for humans, easy to understand, and do not require access to the weights of the LLM. 6https://github.com/tree-sitter/tree-sitter-java 7https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast Methods: We experimented with the following methods for generating the prompt: 1. Codex: Using the default context from Codex as the entire prompt. 2. Oracle: Using the ground-truth vector $Y^h$ (mentioned in Section 2.2). The prompt generated corresponds to using any of the successful prompt proposals (i.e., $y_p^h = 1$). Since this information is not available at inference, the oracle performance represents an upper bound. 3. Fixed Prompt Proposal: Using the most successful prompt proposal for all target holes. This was chosen based on the performance on the validation set and corresponded to taking 75% of the total context length from post lines in the current file. 4. RLPG-H and RLPG-R: Using the prompt proposal predicted by the RLPG-H and RLPG-R variants of PPC. The selected prompt proposal corresponds to taking the argmax of the predicted probabilities over different prompt proposals. 5. RLPG-BM25: Instead of using PPC to rank prompt proposals, using the scores obtained by BM25 (Jones et al., 2000) to select the best prompt proposal. The scores are calculated with the hole window being the query and the prompt proposal contexts being the search documents. This serves as a non-learned retrieval method that makes use of our prompt proposals. 6. File-level BM25: Same as above, except that instead of using our prompt proposal contexts, the search documents consist of the full context from other files in the repository. 7. Random: For each target hole, select a context randomly from anywhere in the repository. 8. Random NN: Same as Random, except that amongst the randomly chosen contexts, we take the nearest neighbours of the hole window in the representation space of a pretrained model. This is analogous to the technique used in Liu et al. (2022). 9. Identifier Usage: For each target hole, we take the closest identifier and take usage windows of that identifier from everywhere in the repository. We take two lines above, two lines below and the usage line as the usage window. We can rank the usage windows either randomly (random) or based on the nearest neighbour distance to the hole window in the representation space (NN). The last four methods help us understand the performance when a context other than the prompt proposal context is used. To generate a prompt using these methods, we take 50% of the context from them, followed by the default Codex context that takes up the remaining context length. For the NN baselines, we use CodeBERT (Feng et al., 2020) as the pretrained model. The contexts are taken in increasing order of the nearest neighbour distances, until we exhaust the allocated context length. RLPG-BM25 helps us understand the role of PPC. See Appendix C.3 for more details on the implementation of these methods. Evaluation Metric: As mentioned in Section 2.2, to measure success, we used exact match between the predicted hole string generated by Codex and the target hole string. In our experiments, we report the number of successful holes divided by the total number of holes for each split, expressed as a percentage. We will call this success rate (SR) going forward.
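As a small illustration of the inference rule in method 4 and of the evaluation metric, consider the following sketch (NumPy-based; all names are ours):

```python
import numpy as np

def select_prompt_proposal(probs, applicable_mask):
    """Argmax of PPC probabilities restricted to applicable proposals.

    probs: shape (M,) array of PPC output probabilities.
    applicable_mask: shape (M,) boolean array, True where the proposal
    is applicable for this hole.
    """
    masked = np.where(applicable_mask, probs, -np.inf)
    return int(np.argmax(masked))

def success_rate(predicted_holes, target_holes):
    """Exact-match success rate (SR), in percent, over a data split."""
    hits = sum(p == t for p, t in zip(predicted_holes, target_holes))
    return 100.0 * hits / len(target_holes)
```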
3.3 RESULTS In this section, we present the results for the following two research questions explored in this paper: [RQ1] Is it useful to generate a prompt that is composed of code context that is different from the default Codex context? If yes, what context can be useful? [RQ2] For each target hole, is there a way of automatically selecting the prompt? If yes, how does this system perform relative to Codex? RQ1 - Performance of Prompt Proposals: We found that combining the prompt proposal context (context from other files in the repository) with the default Codex context led to a substantial improvement in performance. The right part of Figure 2 shows the performance of an oracle constructed from our prompt proposals. We see that across all data splits, the prompt proposals contribute to significantly large improvements over Codex (up to 36% for the test split). These results might seem surprising as Codex has not been trained on prompts that consist of context other than the default Codex context. What makes this result more surprising is that in most of the cases, the prompt consists of mashed-up context without logical ordering that may not even look like a semantically meaningful chunk of code (e.g. a list of string literals from a sibling file followed by the default Codex context, or post lines placed before the default Codex context as opposed to after). These results might suggest that as long as the relevant context (in our case repo-level knowledge in the form of prompt proposals) is present in any form in the prompt, it can be quite effective. RQ2 - Performance of PPC: Having seen promise in our prompt proposals, next we present the results of RLPG, which for each target hole predicts a single best prompt proposal. Table 1 presents the success rates along with the percentage of relative improvements for the test data. The second and third columns correspond to the averages across all holes in the test data. The last two columns correspond to the average success rate of individual repositories. The latter metric doesn't account for the size of the repository. As can be seen from the table, all the RLPG variants as well as the fixed prompt proposal improve the performance significantly over Codex. The random baselines are either worse than or on par with Codex. Identifier usage is a good baseline but still performs worse than either the fixed prompt proposal or RLPG. The improved performance of RLPG-BM25 as compared to the fixed prompt proposal shows the value of generating example-specific prompts using RLPG. However, both the learned variants of RLPG, i.e., RLPG-H and RLPG-R, outperform RLPG-BM25, highlighting the importance of learning PPC. See Appendix D.5 for the performance of all methods on individual repositories. Note that even though we consider identifier usage as a separate baseline, one could consider it as one of the prompt proposals, leading to further improved performance of RLPG. Despite our efforts to avoid overlap, since the training data for Codex is not exactly known, there might be a slight possibility that part of our Google Code data is part of the training data for Codex. Even if there were an overlap, we want to point out that since Codex has seen the default Codex context during training, it would be more beneficial to use the default Codex context in the prompt rather than the context from the prompt proposals or any other context from other baselines.
Therefore, under this scenario, our evaluation would be more generous to the Codex baseline, with results biased more in favour of the Codex baseline than of the other methods we have used. Variation with #attempts: Imagine a scenario where we have a human-in-the-loop who has been given k attempts to prompt the LLM and can then choose one of the k hole predictions. We wanted to see how the performance of our framework varies with #attempts under this setting. This corresponds to using k prompts generated with the top-k prompt proposals (one prompt per proposal) and marking success if any of the k prompts leads to success. The left part of Figure 3 shows the variation of SR over the validation data with the value of k. For RLPG, the top-k prompt proposals were chosen based on the decreasing order of probabilities given by PPC. For the fixed prompt proposal, the top-k prompt proposals were decided based on the decreasing order of success rate of the individual prompt proposals on the validation dataset. From the figure, we notice that as we increase the value of k, the performance increases gradually at first and then saturates towards the oracle performance (79.05% for val data). This behaviour is observed for both the fixed prompt proposal and RLPG. However, we see that for the same value of k, the success rate for RLPG is higher, indicating that PPC learns a useful ranking of the prompt proposal contexts that scales well with the #attempts. Performance based on Prompt Proposals: The right part of Figure 3 shows the mean success rate of prompt sources when we count success only when the corresponding prompt source is applicable. From the figure, we see that the current file is the most important prompt source. Closely following are sibling files and similar name files. We see that all prompt sources have non-zero chances of success, highlighting the usefulness of each prompt source. See Appendix D.1 for a similar breakdown based on prompt context type and Appendix E for an analysis of successful and failed sample cases. 4 RELATED WORK LLMs for Code: Recently, there has been a lot of work around large language models of code. One class of models is the decoder-only models that generate code from left to right. Codex (Chen et al., 2021), Google's model (Austin et al., 2021), GPT-J-6B (Wang & Komatsuzaki, 2021), GPT-Neo (Black et al., 2021b), GPT-Neo-X (Black et al., 2021a), CodeParrot (Tunstall et al., 2022), PolyCoder (Xu et al., 2022a) and InCoder (Fried et al., 2022) are some examples. We also have some encoder-only models that use a masked language modelling objective. CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020) and CuBERT (Kanade et al., 2020) are examples of such models. Lastly, we have the class of encoder-decoder models that generally use a bidirectional encoding of a context to decode a series of masked tokens. Code-T5 (Wang et al., 2021) and AlphaCode (Li et al., 2022) are examples of such models. Repo-Level Info: Fewer works use information from outside the current file. Hellendoorn & Devanbu (2017) propose a nested n-gram model that utilizes a locality-based cache where the locality consists of all directories from the root of the project (inclusive of the current file). Zhang et al. (2021) use the parent class to generate the comments for the child class. Pashakhanloo et al. (2022b;a) capture the structure and semantics of the repository by converting it into a relational database and propose a graph-walk based mechanism for pruning the unrelated context.
Lyu et al. (2021) incorporate the API-dependency graph in an LSTM-based Seq2Seq model to assist in code generation. Xu et al. (2022b) incorporate three types of structural locality features while training the kNN-LM (Khandelwal et al., 2020). These features are binary variables that correspond to the presence or absence of a similar hierarchy. The three levels of hierarchy are (a) sibling file, (b) file in the same repo, and (c) no hierarchy. In contrast, we have a much richer set of prompt proposals incorporating the semantics and structure of the repository. Also, we assume black-box access to the actual LM and restrict ourselves to generating a prompt for the LLM without performing any finetuning of the LLM. Prompt Generation: There have been promising works around prompt generation techniques in NLP. Broadly, there are two categories of automatic prompt generation techniques. The first category corresponds to producing continuous/soft prompts where the prompt is described in the latent space of a language model (Li & Liang, 2021; Qin & Eisner, 2021; Bragg et al., 2021; Lester et al., 2021; Liu et al., 2021b). For example, Prefix-Tuning (Li & Liang, 2021) adds a prefix to the LM that can be learned by finetuning on examples from the downstream task. The second category produces discrete prompts where the prompt is a text string that can be interpreted by a human (Shin et al., 2020; Gao et al., 2021; Schick & Schütze, 2021). For example, Autoprompt (Shin et al., 2020) generates prompts using a fixed template consisting of trigger tokens. The trigger tokens are shared across all inputs and determined by a gradient-guided search involving the LM. Our work falls in the category of discrete prompt generation techniques as we produce a prompt consisting of code tokens that can be easily interpreted by a human. However, in contrast to prior works that use a set of fixed templates for all examples, we learn to produce prompts conditioned on each example. Another important distinction is that we do not require access to the weights of the LM. A work concurrent with ours (Wang et al., 2022) studies the role of prompt-tuning when compared to fine-tuning for code translation, defect localization and code summarization. However, their technique requires access to the weights of the LLM and they perform experiments on models that are much smaller in scale than Codex. To the best of our knowledge, our work is the first to explore automatic prompt generation in a black-box access setting in the domain of source code. 5 CONCLUSIONS AND FUTURE DIRECTIONS We present RLPG, a framework that learns to automatically generate prompts conditioned on the example, without requiring access to the weights of the LLM. RLPG utilizes the structure of the repository as well as the context from other files in the repository using a set of easy-to-understand prompt proposals. Note that even though we have scoped and worded our prompt proposals to be repository-level, the idea of RLPG and prompt proposals in itself is quite universal and need not be scoped to a repository. Taking context from other repositories as well as from external knowledge such as API dependencies offers an interesting direction to explore in the future. In this work, we take context from only one prompt proposal. For future work, we want to learn a model that can automatically compose a prompt from multiple prompt proposals (see Appendix D.3 for promising initial results).
Other interesting directions include incorporating the user's feedback in RLPG and extending RLPG to multi-line code autocompletion. A DATASET CREATION DETAILS A.1 CREATION OF HOLE COMPLETION DATA To collect the hole completion data, we scraped Google Code 8 for repositories tagged with the language "Java". Then we deduplicated repositories by searching for a matching repository with the same name on GitHub. For those repositories with zero matching names on GitHub, we downloaded the archive and extracted the source code (preserving the directory structure). Next, we tried to determine the licenses of all repositories by either looking for a LICENSE file or matching with keywords "license", "copyright", "mit", etc. For repos for which our process was able to come up with a known license, we selected the ones having a permissive license, i.e., MIT, ApacheV2 and BSD. This was followed by removing files that are exact duplicates of each other within a repo. One of the reasons for this intra-repository duplication may be that developers sometimes adopt lousy practices where, instead of declaring a package and importing functions, they simply copy-paste the desired file into the current folder. The target holes coming from any of the duplicate files do not form part of the hole completion dataset. However, these files might be used to contribute to the prompt proposal context for completing a target hole in a non-duplicate file. For the remaining files, we took each line that is not a blank line or a comment, and chose the middle character as the hole position, i.e., all the characters from the middle of the line to the end of the line form the target hole. To avoid large repos having a strong bias on our prompt proposal classifier, we capped the contribution from each repo to be a maximum of 10000 holes. If the number of holes in the repo exceeds 10000, we randomly select 10000 holes. A.2 CREATION OF DATA FOR REPO-LEVEL PROMPT PROPOSALS We used the tree-sitter API for Java 9 to get the parse tree of an individual file in a repo. To get information at a repo level, for each file in the repo, we stored the following information: 1. list of all class names in the file. This helped us to get the parent or child class file corresponding to a given parent or child class. 2. the file corresponding to each import statement. 3. for each import statement in the file, the position in the file where the import is used. This is used for ranking the files based on the heuristics mentioned in Table 2. 4. list of sibling files 5. list of similar name files. This was done by splitting the filenames based on either camel-case or underscore. If the sub-parts of two files match, then they are said to have similar names. The above meta-data was calculated only once for each repo. The subsequent hole completions can use the same cached information. In practice, we can use a hash to store and retrieve this info efficiently. For a prompt proposal, given the prompt source, we first obtain a single file or a ranked list of files (see Table 2) using the info in the parse tree in conjunction with the above repo-level meta-data. All the prompt proposal context type information (MN, MNB, SL, I, TI, FD) can then be obtained by querying the parse tree of the selected file. B PROMPT PROPOSAL DETAILS B.1 RANKING OF FILES BASED ON PROMPT SOURCE In Table 2, we provide details of how we select files for a given prompt source. Depending on the prompt proposal, we get either a single file or a list of files ranked based on some criteria.
For example, if the prompt source is Import, we take all the import statements used in the current file and identify the locations in the current file where the corresponding imports have been used. According to our heuristic, the closer the import usage is to the hole position, the more likely it is that the prompt proposal context coming from the corresponding import file is relevant (for predicting the target hole). We get a ranked list of import files sorted based on increasing order of distance (i.e., number of lines) between the import usage and the hole position. We start by taking all of the prompt proposal context from the first file in the ranked list and then keep iterating through the ranked list until either the total context length allocated to the prompt proposal gets exhausted or we reach the end of the ranked list. 8https://code.google.com/archive/ 9https://github.com/tree-sitter/tree-sitter-java B.2 EXAMPLES OF PROMPT CONTEXT TYPE We provide examples of each of our prompt context types below: 1. Post Lines (PL): For the example shown in Figure 1 of the main paper, post lines will take all the lines after the line mg.InitializeToAssignment(CurrentAssignments()) until we reach the end of the file (AffinityPropagation.java). 2. Identifiers (I): Identifiers are the names of variables used in the code. For example, for the prompt proposal context taken from the imported file shown in Figure 1 in the main paper (highlighted in violet), the identifiers are InitializeToAssignment (line 1), a (line 1), currentAssignment_ (line 2), a (line 2), clone (line 2), alreadyInitialized_ (line 3), justOneRound_ (line 4). 3. Type Identifiers (TI): Type identifiers define the type of an identifier. For example, in the code snippet class DPAffinityPropagation extends AffinityPropagation, AffinityPropagation is labeled as a type identifier. Similarly, in the snippet DPAPParameters parameters_;, DPAPParameters is a type identifier. 4. Field Declarations (FD): The variables of a class type are introduced by field declarations. For example, double[][] mHijMujT_; and MessageValuePair[][] sortedMHijMujTs_; are examples of field declarations. 5. String Literals (SL): A string literal is a sequence of characters enclosed in double quotes. For example, in the code snippet System.err.println("DPAP load Warning: unknown parameter " + entries[0] + ", value = " + entries[1]);, we have two string literals: (a) "DPAP load Warning: unknown parameter "; (b) ", value = ". 6. Method Names (MN): For the example shown in Figure 1 of the main paper, public void InitializeToAssignment(int[] a) is the method name prompt context type. 7. Method Names and Bodies (MNB): For the example shown in Figure 1 of the main paper, the part highlighted in violet represents the method names and bodies. B.3 TRUNCATION STRATEGIES FOR PROMPT PROPOSAL CONTEXT If the prompt proposal context is greater than the context length allocated to it, then we need to truncate the prompt proposal context. We followed the two schemes below for truncating context: • front: We truncate the context from the front. This is used for all prompt sources except Parent Class, and when we take PL from Current. • back: We truncate the context from the back. This is used when the prompt source is Parent Class and when we take prompt context types other than PL from Current. The truncation strategies for each case were selected based on results on a small validation set.
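As a minimal sketch of these two truncation schemes (our own illustration; the function name is ours), over a token sequence:

```python
def truncate_context(tokens, max_len, scheme="front"):
    """Truncate a prompt proposal context to max_len tokens.

    scheme="front" drops tokens from the front, keeping the end of the
    context; scheme="back" drops tokens from the back, keeping the
    beginning. Which scheme applies to which prompt source follows the
    rules described above.
    """
    if len(tokens) <= max_len:
        return tokens
    return tokens[-max_len:] if scheme == "front" else tokens[:max_len]
```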
For the prompt source Current, except when the prompt context type is PL, we always start by taking code of the prompt context type from after the hole position. This makes sense as the default Codex context will anyway contain code before the hole. Only if this turns out to be blank do we use the code of the context type from before the hole. B.4 LIST OF PROMPT PROPOSALS B.5 OTHER PROMPT PROPOSAL VARIATIONS We experimented with other variations that include: (a) appending class names at the beginning of the prompt proposal context, (b) using newline or space to join the prompt proposal context and the default Codex context, (c) taking all or the top-k of the prompt context types, (d) ordering of the top-k. • Context Separator: This defines how we join the prompt proposal context string to the default Codex context string. We experimented with space and newline as context separators. • Prompt Proposal Context Formatting: We can format the prompt proposal context before giving it to the Prompt Composer. We experimented with the following options: 1. class_name: append [class name of the file] at the beginning of the prompt proposal context taken from each file that is part of the prompt source. For example, if we are taking prompt proposal context from two import files f1 and f2, the prompt proposal context will be formatted as: [class name of f1] prompt proposal context from f1 + space + [class name of f2] prompt proposal context from f2. We use this when the prompt proposal context types are MN, I, TI, FD and SL. 2. class_method_name: we apply this only when the prompt proposal context type is MNB. We append method names at the beginning of each of the corresponding method bodies. We also prepend the prompt proposal context from a file with the name of the class as described in the previous item. 3. comment: Adding in the prompt proposal context as a comment, i.e., formatting it as: /** prompt proposal context */. This wasn't found to be very useful. 4. none: passing the prompt proposal context as it is. We use this when the prompt proposal context type is PL. • Top-k Type: For each of the prompt proposal context types, except PL, we experimented with taking the (a) first, (b) last, and (c) all of the prompt proposal context types, e.g., we can take the first 10 identifiers. We found 'all' to be the best among these. • Top-k: We experimented with k values of (a) 10, (b) 20 and (c) all. We found 'all' to work best for all prompt context types. C IMPLEMENTATION DETAILS C.1 RLPG-H We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model $F_\phi$ to obtain the representation of the hole window. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. $W^1 \in \mathbb{R}^{512 \times 768}$, $b^1 \in \mathbb{R}^{512}$, $W^2 \in \mathbb{R}^{63 \times 512}$, $b^2 \in \mathbb{R}^{63}$. C.2 RLPG-R We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model $F_\phi$ to obtain the representations of the hole window and the prompt proposal context. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. In Equations 3, 4 and 5 in Section 2.2, the projection matrices are $W_i^Q \in \mathbb{R}^{d_q \times d_{model}}$, $W_i^K \in \mathbb{R}^{d_k \times d_{model}}$, $W_i^V \in \mathbb{R}^{d_v \times d_{model}}$ and $W^O \in \mathbb{R}^{d_{model} \times \tau d_v}$. For the multihead attention, we used $d_k = d_q = d_v = 32$, $\tau = 4$ and $d_{model} = 768$, with $W_p \in \mathbb{R}^{63 \times 768}$ and $b_p \in \mathbb{R}^{63}$. For each head, we perform scaled dot-product attention (Equation 4).
The G module consists of a dropout (Srivastava et al., 2014) layer, a residual connection (He et al., 2016), a layernorm (Ba et al., 2016), followed by a sequence of (a) dense layer of weights=2048 × 768, bias=768, (b) relu activation, (c) dense layer of weights=768 × 2048, bias=2048, (d) dropout layer, (e) residual connection, (f) layernorm. A dropout value of 0.25 was used while training. Our model resembles one layer of the transformer encoder block (Vaswani et al., 2017). C.3 BASELINES The Random baseline first selects a file randomly from the current repository, followed by selecting a random line within that file. We choose all the lines starting from that line to the end of the chosen file as context (excluding the hole window if the chosen file is the current file). The nearest neighbour similarity is based on the dot product between the representation of the hole window and the representation of the context, where we use a pretrained CodeBERT (Feng et al., 2020) model to obtain the representations. For the Identifier Usage baseline, if the nearest identifier to the hole doesn't return any usage window, we proceed to the next nearest identifier. For faster computation and to avoid memory issues when running on our hardware, for the NN baselines, we collect 64 random neighbours and then rank them based on the nearest neighbour distance. The BM25-based baselines use the Okapi BM25 implementation with default parameters given by the pip package rank-bm25 0.2.2 10. For file-level BM25, if the file context exceeds the allocated context length, we truncate from the back. 10https://pypi.org/project/rank-bm25/ D ADDITIONAL RESULTS D.1 ABLATION ON PERFORMANCE BASED ON PROMPT PROPOSAL Figure 4 shows the mean success rate of prompt context types when success is counted only for the cases where these prompt contexts are applicable. As can be seen from the figure, post lines is the most useful prompt context type on average. The contribution from the other prompt context types, though smaller than that of post lines, is still significant, highlighting the importance of each prompt context type. Figure 5 shows the normalized success rates, where the normalization is performed across the prompt proposals. This helps us understand the relative performance of prompt proposal sources and context types. The left part of the figure breaks down the performance based on prompt sources and the right part based on prompt context types. One thing to note from the plot of prompt context types is that when we consider relative performance, post lines is no longer the most dominant context type. This is because post lines applies only when the prompt source is the current file, thereby contributing lower numbers when compared to most of the other context types, which apply to all prompt sources. D.2 PERFORMANCE ON NON-IMMEDIATE POST LINES Table 4 shows the performance of post lines when starting from the fourth line after the target hole line (i.e., skipping three lines after the target hole) as opposed to starting from the line that immediately follows the target hole line. This experiment helps us understand the performance on the much harder task of multi-line code autocompletion, wherein the objective is to predict not just the blanked out portion in the current line but also, say, the next three lines. This can correspond to completing a block of code such as a function body.
As can be seen from the table, when starting from the fourth line, we see a very slight deterioration in performance. This is expected because the farther away we move from the target hole, the less relevant the post lines context would be. However, the performance drop is not significant suggesting that post lines is still a very useful prompt context type that can be used under the setting of multi-line code-autocompletion. Equivalently, we can include this as one of the prompt proposals in our framework along with the current version of post lines. D.3 COMPOSITION OF PROMPT PROPOSALS Table 5 shows the performance of the two versions of RLPG when we compose the prompt proposal context from l prompt proposals. We take the top-l prompt proposals given by RLPG based on decreasing order of probability. To decide how much context should be used for each prompt proposal, we divide the total context length in proportion to the normalized probabilities of the top-l prompt proposals. As can be seen from the table, even though PPC is not explicitly trained to perform composition (both the ground-truth vector and the representation of prompt proposal context involve a single prompt proposal), all the compositions lead to significant improvements over Codex. However, as expected the best results correspond to taking context from a single prompt proposal (i.e., the training setting). The drop in success rate with l = 2 and l = 5 is not that significant, which suggests that explicitly training RLPG to learn to compose contexts from different prompt proposals can lead to promising results and hence offers an interesting future direction. D.4 EFFECT OF CONTEXT LENGTH To understand the effect of context length on the performance of our prompt proposals, we took half of the context length available for a prompt in Codex and observed the performance of the oracle and fixed prompt proposal. As before, we saw that an oracle constructed from our prompt proposals shows remarkable improvement over Codex highlighting the value of our prompt proposals. However, when compared to a larger context length, the relative gains are smaller. This is expected as a smaller context length means that the relevant context coming from a prompt proposal needs to be truncated to make it fit inside the prompt, thereby leading to loss of information. D.5 PERFORMANCE ON INDIVIDUAL REPOSITORIES Table 7, Table 8 and Table 9 present the success rates of different methods over individual repositories in the training, validation and test splits, respectively. The repo-wise averages in Table 2 in the main paper were calculated by taking the average of numbers corresponding to each column. The hole-wise averages correspond to multiplying the repo-wise numbers of each method by the total holes in the repo to get the total number of successful holes by that method for that repo. We then add the total number of successful holes across repos and divide it by the total number of holes in the entire data split to get the hole-wise averages. E ANALYSIS OF SAMPLE CASES In Figure 1, RLPG selects the prompt proposal that corresponds to taking method names and bodies from the imported file (i.e. MaximizingGibbsSampler.java ). Note that mg. before the hole position indicates that a method used in the imported file is likely to be invoked. In this case, the prompt proposal context (highlighted in violet) contains the method name InitializeToAssignment (part of target hole). 
This in conjunction with the default Codex context which contains the method CurrentAssignments() (part of target hole) leads to generation of a successful prompt. On the other hand, the prompt created from the default Codex context fails to predict the target hole in this case. In general, we observed that in the absence of a strong signal, Codex has a tendency to give preference to natural language comments occurring before the hole position, e.g. naming the method based on the comment. This in certain cases might hurt. We provide insatnces of positive and negative samples cases for RLPG below: E.1 POSITIVE CASES We provide some examples of cases where RLPG led to the correct prediction and Codex failed. 1. Cases where part of the target hole is found exactly in the prompt proposal context. • RLPG = Propagation(int numVars) vs Codex = Propagation() • RLPG = tersFromFile(String filename) { vs Codex = ters(String filename) { • RLPG = als("dampingFactor")) { vs Codex = als("numVars")) { • RLPG = ] + ", value = " + entries[1]); vs Codex = ]); • RLPG = stem.exit(1); vs Codex = stem.err.println("DPAP load error: " + ex.get 2. Cases where Codex takes strong hint from the preceding natural language comment, thereby producing incorrect predictions. • RLPG = d PassMessages() vs Codex = d DoOneRoundOfMessagePassing() • RLPG = teger> CurrentExemplars() { vs Codex = teger> ChooseExemplars() { • RLPG = ring FileName() { vs Codex = ring GetAlgorithmFilename() { E.2 NEGATIVE CASES In certain cases, extra information from prompt proposal-context might lead to confusion and produce incorrect predictions. • RLPG = an hasConverged_; vs Codex = an converged_; • RLPG = _[i][j] = -Double.MAX_VALUE; vs Codex = _[i][j] = 0;
1. What is the focus and contribution of the paper on language models for source code?
2. What are the strengths of the proposed approach, particularly in terms of its efficiency and novelty?
3. What are the weaknesses of the paper, especially regarding its suitability for a different conference and the limited consideration of context in prompt proposals?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a repository-level prompt generation (RLPG) approach for language models of source code. Without having access to the weights of LLMs, RLPG can improve the performance of the LM on the code completion task. By training the prompt proposal classifier, the prompt composer can generate high-quality prompts for the LM, increasing the accuracy of code completion.
Strengths And Weaknesses
Strengths:
- A novel way of generating prompts.
- Extensive experiments.
- RLPG is efficient for both training and inference.
Weaknesses:
- The paper is better suited for a software engineering/programming language conference.
- Only the structure of the repository and the context from other relevant files are considered in the prompt proposals.
Clarity, Quality, Novelty And Reproducibility
The paper is generally well written. The implementation of the model is not available for a replication study. The proposed work appears to be novel.
ICLR
Title
Repository-Level Prompt Generation for Large Language Models of Code
Abstract
With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex (Chen et al., 2021) used in GitHub Copilot1), techniques for introducing domain-specific knowledge in the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn't require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines.
1 INTRODUCTION
Large Language Models (LLMs) have demonstrated remarkable performance in natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022), text-to-image generation (Ramesh et al., 2022; Rombach et al., 2021), protein-sequencing (Rives et al., 2019) and even as a generalized agent (Reed et al., 2022). As opposed to the pretrain-finetune paradigm, prompting these LLMs has been found to yield good performance even with few examples (Liu et al., 2021a). A prompt is an input to the LM such that the desired task can be expressed as predictions generated from the LM. Besides providing a mechanism to control and evaluate an LM, prompts have been shown to elicit emergent behaviour as well. Examples of this behavior include GPT-3 (Brown et al., 2020) doing better in tasks it has never seen during training, and improved reasoning capabilities with few-shot (Wei et al., 2022) and zero-shot (Kojima et al., 2022) prompts that encourage a chain of thoughts. These factors highlight the importance of designing an effective task-specific prompt2. However, currently we have limited understanding of how to do this (Reynolds & McDonell, 2021). LLMs have also been used for modeling source code with impressive results (Austin et al., 2021; Fried et al., 2022; Xu et al., 2022a). In particular, one of the best-performing LLMs, Codex (Chen et al., 2021), has been deployed as part of GitHub Copilot1, a state-of-the-art in-IDE code assistant. Despite the growing popularity of LLMs of code, there is no work that systematically tackles different aspects of prompt generation in relation to source code. One such aspect is that when it comes to code, the relevant context to be put in the prompt can come not just from the current file, but also from outside, such as imports and parent classes. Also, depending on the scenario, the relevant context can be scattered across multiple locations. Since LLMs have a limited context length available for the prompt, it becomes increasingly crucial for our domain-specific understanding to guide the selection of relevant context. Currently, it is not clear how to integrate this domain knowledge of what constitutes a relevant context into the process of creating prompts.
Addressing this question has potential benefits in other domains such as question answering (Liu et al., 2022) and multi-document summarization (Xiao et al., 2022), where domain-specific structured retrieval of context can be useful.
1https://copilot.github.com/
2Platforms such as PromptBase https://promptbase.com/ allow buying and selling of prompts.
In this work, we address this problem by proposing the Repo-Level Prompt Generator (RLPG), a framework that, while generating the prompt, incorporates both the structure of the repository as well as the relevant context in all the files of the repository. In RLPG, the choice of what to take from the repository and where to take it from is specified by a set of prompt proposals. For example, one of the prompt proposals can be to take all the identifiers used in the first import file. These prompt proposals allow prompt engineers to inject their domain expertise into the prompt-design process. With the increasing use of LLMs as assistive agents to humans, the demand for transparency, and the desire of software engineers to take an active part in tailoring prompts to suit their requirements (Jiang et al., 2022; Sun et al., 2022), this capability becomes important. As suggested in some previous works in NLP (Shin et al., 2020; Schick & Schütze, 2021), our prompt proposals are discrete. However, rather than fixing one particular prompt proposal for each example, we instead predict the best prompt proposal conditioned on the example. We do this with a neural network called the Prompt Proposal Classifier (PPC) that, given an example, learns to select a prompt proposal such that the resulting prompt is likely to produce the desired output. Therefore, RLPG allows the introduction of domain expertise and at the same time facilitates automatic example-specific prompt generation via a learned neural network. Note that there are some techniques for automatic prompt generation in NLP (Li & Liang, 2021; Shin et al., 2020; Lester et al., 2021) that require updating some or all of the weights of the LLM. However, the strongest LLMs are not publicly available (e.g. OpenAI provides access only to the generated output from Codex via an API3, and no access to model weights and data is provided), making these techniques less useful under this scenario. RLPG addresses this limitation by generating prompts assuming only black-box access to the LLM. We focus on the task of single-line code-autocompletion in an IDE, where the objective is to predict the blanked-out portion (or target hole) starting from the position of an imagined cursor to the end of the line. We operate under the line-level maintenance setting (Shrivastava et al., 2020; Hellendoorn & Devanbu, 2017) that reflects the scenario where a user is editing an existing file. This means that there can be code following the line. Figure 1 provides an illustration of our approach. The prompt proposal classifier takes in the hole position (position of the cursor) in the current file, the repository to which the current file belongs, and a set of repo-level prompt proposals as input, and predicts a prompt proposal. In our illustrated example, the predicted prompt proposal corresponds to taking the method names and bodies from MaximizingGibbsSampler.java (mg. before the hole position indicates that a method from the imported file is likely to be invoked).
The Prompt Composer uses the context from the predicted prompt proposal and combines it with the default Codex context, i.e., code prior to the position of the hole in the current file.
3https://openai.com/blog/openai-codex/
The resulting prompt consists of the method name InitializeToAssignment (from the prompt proposal context) and the method CurrentAssignments() (from the default Codex context), resulting in a successful prediction (brown box on the top) of the target hole. Our key contributions are as follows:
• We propose a framework called the Repo-Level Prompt Generator (RLPG) that learns to generate prompts conditioned on the example, without requiring access to the weights of the LLM.
• To incorporate domain knowledge in the prompt design process, RLPG uses a set of repository-level prompt proposals. These prompt proposals are designed to incorporate both the structure of the repository as well as the relevant context from all files in the repository.
• On the task of single-line code-autocompletion, we show that an oracle constructed from our proposed prompt proposals gives up to 36% relative improvement over Codex. This improvement is pleasantly surprising as Codex has never seen prompts made from these prompt proposals during training. Further, we show that when we use our prompt proposal classifier to predict the best prompt proposal, we can achieve up to 17% relative improvement over Codex.
2 REPO-LEVEL PROMPT GENERATOR (RLPG)
In this section, we provide details of our framework. We start by describing our prompt proposals and then discuss our prompt proposal classifier, which is followed by a description of the prompt composer.
2.1 REPO-LEVEL PROMPT PROPOSALS
The core idea of RLPG consists of substituting part of the default context used by Codex with context coming from somewhere else in the repository. The decision of what to take, and from where in the repository to take it, is governed by a set of prompt proposals. These prompt proposals were decided based on manual inspection of our training data and intend to capture common coding patterns (but more generally can also include project/organization-specific coding practices). A prompt proposal can be thought of as a function that takes as input a target hole's position and the repository that the hole is a part of, and that returns the prompt proposal context (a string constituted by the context from the prompt proposal). A prompt proposal is specified by a prompt source and a prompt context type. We mention each of these along with their motivation below.
Prompt Source: For a target hole position, a prompt source determines from where we should take code that will be part of the prompt proposal context. We propose ten different prompt sources:
1. Current: take code from the current file excluding the contents of the target hole. The current file is the file that contains the target hole. The code in the current file (e.g. the lines after the hole position) can be very useful in predicting the target hole.
2. Parent Class: take code from the file that contains the parent of the class to which the target hole belongs. The intuition behind this is to account for cases where a method present in the parent class is invoked in the current file (i.e. the child class).
3. Import: take code from the import files used in the current file. The dependencies specified via imports can provide useful cues to predict the target hole.
4. Sibling: take code from the files that are in the same directory as the current file.
Files in the same directory tend to share code variables (e.g. identifiers).
5. Similar Name: take code from files that have a similar name as the current file. Similar names are determined by splitting the file name based on underscore or camel-case formatting and then matching parts of the filename. If one or more parts match, two files are considered to have similar names. The intuition behind this is that software developers tend to name files based on the functionality of the code written in that file. Therefore, a similar name file might contain some portion of the code that is common with the current file and hence might be useful for predicting the target hole.
6. Child Class: take code from files that have the current file as their parent class file.
7. Import of Parent Class: take code from the import files used in the parent class files.
8. Import of Sibling: take code from the import files used in the sibling files.
9. Import of Similar Name: take code from the import files used in the similar name files.
10. Import of Child Class: take code from the import files used in the child class files.
The last four prompt sources are useful when the target hole occurs at the very beginning of the current file. In these cases, there would be less context coming from other prompt sources. For each prompt source, we can get either a single file or a ranked list of files (see Appendix B.1). In the latter case, we take context from these files until we exhaust the maximum context length allocated to the prompt proposal.
Prompt Context Type: The prompt context type determines what code to take from the prompt source. We propose seven different prompt context types (Appendix B.2 has examples of each type):
1. Post Lines (PL): Take all the lines after the target hole line till we reach the end of the file. This context type is applicable only when the prompt source is the current file4.
2. Identifiers (I): Take all the identifiers used in the prompt source.
3. Type Identifiers (TI): Take all the type identifiers used in the prompt source.
4. Field Declarations (FD): Take all the field declarations used in the prompt source.
5. String Literals (SL): Take all the string literals used in the prompt source.
6. Method Names (MN): Take all the method names along with their signatures that are used in the prompt source.
7. Method Names and Bodies (MNB): Take all the method names along with their signatures and corresponding bodies used in the prompt source.
By combining prompt sources with prompt context types, we get a total of 63 prompt proposals (see Appendix B.4 for details). Note that depending on the target hole, not all prompt proposals would be applicable (e.g. if there are no parent classes in the current file, prompt proposals with the prompt source as parent class file won't be applicable). In Figure 1, the predicted prompt proposal corresponds to taking prompt source Import and prompt context type MNB. We aimed for a set of prompt proposals that offers more diversity rather than a set of prompt proposals that are all individually good. This in turn ensures that for any hole position, a significant number of prompt proposals are applicable.
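To make the combinatorics concrete, here is a small Python sketch of how the proposal space could be enumerated. The names are illustrative (not the paper's code), and the count of 63 is our reading of the paper: ten sources times the six non-PL context types, plus the three post-lines context-length allocations described in Section 2.3; the exact bookkeeping in Appendix B.4 may differ.

```python
from itertools import product

# Illustrative names for the ten prompt sources and six non-PL context types.
SOURCES = ["current", "parent_class", "import", "sibling", "similar_name",
           "child_class", "import_of_parent_class", "import_of_sibling",
           "import_of_similar_name", "import_of_child_class"]
CONTEXT_TYPES = ["identifiers", "type_identifiers", "field_declarations",
                 "string_literals", "method_names", "method_names_and_bodies"]

# post_lines applies only to the current file, and comes in three
# context-length allocations (1/4, 1/2, 3/4 of the prompt; see Section 2.3).
POST_LINES = [("current", "post_lines", frac) for frac in (0.25, 0.5, 0.75)]
OTHERS = [(src, ctx, 0.5) for src, ctx in product(SOURCES, CONTEXT_TYPES)]
PROPOSALS = POST_LINES + OTHERS   # 3 + 10 * 6 = 63 prompt proposals
assert len(PROPOSALS) == 63
```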
2.2 PROMPT PROPOSAL CLASSIFIER (PPC)
Given a hole position, the goal of the prompt proposal classifier is to predict the prompt proposal p that will lead to success, where success happens when the predicted hole ĥ exactly matches the target hole h. This task is formulated as a multi-label binary classification problem since for a given target hole, more than one prompt proposal can lead to success. In this formulation, we treat the default Codex context as one of the prompt proposals. Next, we describe the training procedure for PPC.
Training: For each target hole h, we generate a ground-truth vector $Y^h = [y^h_p]_{p=1}^{M}$, which is a multi-hot vector of size M, where M is the total number of prompt proposals. This vector is obtained by feeding the prompt generated from prompt proposal p into Codex and then seeing whether ĥ = h. If there is a match, we say that the prompt proposal p is successful. For hole h, if a prompt proposal p is applicable and leads to success, $y^h_p = 1$, and it will be zero otherwise. For each hole h, we obtain a mask $T^h$ where $T^h_p = 1$ when p is applicable and zero otherwise. The overall training loss L can be expressed as the sum of individual hole losses $L_h$ as follows:
$$L = \frac{1}{N}\sum_{h=1}^{N} L_h = \frac{1}{N}\sum_{h=1}^{N}\frac{1}{M_h}\sum_{p=1}^{M} \mathrm{BCE}(\hat{y}^h_p, y^h_p) * T^h_p \quad \text{where } M_h = \sum_p T^h_p \qquad (1)$$
In the above equation, N is the total number of holes encountered while training, $M_h$ denotes the total number of applicable prompt proposals for h, and BCE corresponds to the binary cross entropy loss. Masking ensures that we consider only the prompt proposals that are applicable. Next, we describe our two variants of PPC that can be used to obtain the prediction $\hat{y}^h_p$.
RLPG-H: Let $H^h$ be the hole window that includes code present around the hole h, excluding the hole itself. In our work, we take two lines before the hole position, the code up to the hole position, and two lines after the hole position. We use a pretrained model $F_\phi$ to obtain a context representation vector of size Z, where Z is the dimension of the hidden state of the model. Specifically, we take the hidden state at the first position, i.e. the representation of the [CLS] token. To make training of PPC computationally efficient, the parameters $\phi$ are frozen during training. The RLPG-H model takes the context representation of the hole window and projects it to the prompt proposal space of size M via two dense layers with a non-linearity in between (see Equation 2). Taking the sigmoid of this output gives the prediction of the prompt proposal.
$$\hat{y}^h_p = P(y^h_p = 1 \mid H^h) = \mathrm{sigmoid}\big(W^2(\mathrm{relu}(W^1 F_\phi(H^h) + b^1)) + b^2\big) \qquad (2)$$
4We also conducted experiments (Appendix D.2) where we take lines starting from the 4th line after the hole.
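For concreteness, a minimal PyTorch sketch of the RLPG-H head and the masked loss of Equation 1 is given below; the dimensions follow Appendix C.1, while the module and variable names are our own illustrative choices, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RLPGH(nn.Module):
    """Projects the frozen [CLS] representation of the hole window to the
    prompt proposal space via two dense layers with a relu in between."""
    def __init__(self, z=768, hidden=512, num_proposals=63):
        super().__init__()
        self.fc1 = nn.Linear(z, hidden)               # W^1, b^1
        self.fc2 = nn.Linear(hidden, num_proposals)   # W^2, b^2

    def forward(self, cls_repr):                      # cls_repr: (batch, z)
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(cls_repr))))

def masked_bce_loss(pred, target, mask):
    """Equation 1: BCE averaged over only the applicable proposals per hole."""
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    per_hole = (bce * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return per_hole.mean()
```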
RLPG-R: The motivation behind this variant is to use the similarity of the hole window and the prompt proposal context to determine which prompt proposal can be useful. Given a particular hole h, let $C^h_p$ denote the prompt proposal context from prompt proposal p. Intuitively, if the hole window contains variables (e.g. identifiers) that are similar to the variables in the prompt proposal context, then there are chances that h might occur somewhere in $C^h_p$. The similarity is modeled using a multi-headed attention mechanism (Vaswani et al., 2017), by treating the projected hole window representation as a query $Q^h$ and the projected prompt proposal context representation $K^h_p$ as a key (Equation 3). The value $V^h_p$ is the same as the key.
$$Q^h = F_\phi(H^h), \quad K^h_p = F_\phi(C^h_p), \quad V^h_p = F_\phi(C^h_p) \qquad (3)$$
$$\mathrm{Att}(Q^h, K^h_p, V^h_p) = V^h_p\,\mathrm{softmax}\!\left(\frac{Q^{h\top} K^h_p}{\sqrt{d_k}}\right) \qquad (4)$$
$$\mathrm{MultiHead}(Q^h, K^h_p, V^h_p) = W^O \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_\tau) \quad \text{where } \mathrm{head}_i = \mathrm{Att}(W^Q_i Q^h, W^K_i K^h_p, W^V_i V^h_p) \qquad (5)$$
$$\hat{y}^h_p = P(y^h_p = 1 \mid H^h, C^h_p) = \mathrm{sigmoid}\big(W_p\, G(\mathrm{MultiHead}(Q^h, K^h_p, V^h_p)) + b_p\big) \qquad (6)$$
In the equations above, $d_k$ is the dimension of the key, $W^Q_i$, $W^K_i$ and $W^V_i$ are the query, key and value projection matrices, $\tau$ is the number of heads, and $W^O$ is the linear projection that combines the heads. The output from Equation 5 is fed to a module G consisting of a two-layer feedforward network with a relu activation in between (see Appendix C for more details). The resulting output is then linearly projected and a sigmoid is applied to get the predicted prompt proposal (Equation 6).
2.3 PROMPT COMPOSER
The prompt composer combines the context from the selected prompt proposal (given by PPC) with the context normally used by Codex (default Codex context) to generate the prompt. Since the total length that can be used for a prompt is fixed, we adopted a dynamic context allocation strategy: if the prompt proposal context is shorter than its allocated length, we assign the remaining portion of its allocation to the default Codex context. The prompt proposal context is always added before the default Codex context. For all prompt proposals, we assign half of the total context length to the prompt proposal context and the remaining to the default Codex context. For post lines, in addition, we also assign one-fourth and three-fourths of the total context length to the prompt proposal context. If the prompt proposal context or the default Codex context is longer than the context length allocated to it, we truncate it (see Appendix B.3 for our truncation strategies).
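A simplified sketch of this allocation logic, operating on token lists, is shown below; the function name and the exact truncation behaviour are our illustrative assumptions (Appendix B.3 describes the truncation schemes actually used).

```python
def compose_prompt(proposal_tokens, codex_tokens, total_budget, proposal_frac=0.5):
    """Prepend the prompt proposal context to the default Codex context,
    reassigning any unused proposal budget to the Codex context."""
    proposal_budget = int(total_budget * proposal_frac)
    proposal_tokens = proposal_tokens[:proposal_budget]   # truncation simplified here
    codex_budget = total_budget - len(proposal_tokens)    # dynamic reallocation
    codex_tokens = codex_tokens[-codex_budget:] if codex_budget > 0 else []
    return proposal_tokens + codex_tokens                 # keep code nearest the hole
```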
3 EXPERIMENTS AND RESULTS
In this section, we describe our process of dataset creation, details of experiments along with their results, and interesting ablation studies.
3.1 DATASET CREATION
To mitigate the effects caused by potential memorization of the code present in the dataset used for training Codex, we avoided code repositories from GitHub (Chen et al., 2021). Instead, we scraped Google Code5 for repositories in Java (removing the ones that matched a repository on GitHub with the same name). We selected the repositories that had a permissive license, giving us a total of 47 repositories. We divided the repositories into train, validation and test splits, where each repository in its entirety is part of a split. In each file within a repository, we remove blank lines and comments, and set the hole position to be the middle character in the line. All the characters from the middle position to the end of the line constitute the target hole. Since code duplication has been shown to have adverse effects (Allamanis, 2018), within a repository, we look for files that are exact replicas of each other but placed in different folders. We mark all such copies as duplicates and omit all of them when creating target holes for our dataset. Note that the prompt proposal context can still come from the duplicate files. We felt comfortable with this choice since we wouldn't want to predict a target hole in a duplicate file, but we can still use the context from the duplicate file to predict the hole in a file that is not its duplicate (e.g. in a sibling file). Further, we found that the repositories were quite uneven in terms of their size. To avoid large repositories dominating the training of PPC, we capped the maximum contribution of holes from a repository at 10,000, i.e. if the total number of holes in the repository exceeded 10,000, we selected 10,000 holes randomly from the total holes.
5https://code.google.com/archive/
Please see the left part of Figure 2 for statistics of our dataset. The #Holes represents the holes after deduplication and capping. For some of our prompt proposals, we require semantic information that can be obtained with a parse tree. We used the tree-sitter API for Java6 that enables us to get the AST of a file and query it. Since our prompt proposals need information at a repository level, we stored some extra information that allowed us to collate the information from individual files according to the directory structure inside the repository (see Appendix A for more details).
3.2 EXPERIMENTAL DETAILS
Prompt Generation: We used the OpenAI Codex Completions API for generating the predicted hole from the Codex model. In particular, we used the code-davinci-001 engine with temperature set to 1.0 and the stop criterion set to newline. The completion length was kept at 24 and the maximum prompt length was 4072. Tokenization was done using the suggested tokenizer7. To allow for fast computation, we used simple models like CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2020) as our pretrained models. One of the limitations of these pretrained models is that the maximum context length that can be taken as input by these models is much smaller than the maximum context length allowed by Codex. Therefore, when getting the representation of the prompt proposal context that is used by PPC, we need to truncate the prompt proposal context, which might lead to omitting important parts of it in certain cases. Using pretrained models that allow a larger context length or models that augment the context (Wu et al., 2022) offers avenues for future work. See Appendix D.4 for results when using a smaller context length from Codex.
Computational Complexity and Scalability of RLPG: To collect the ground-truth data for training our prompt proposal classifier, we queried the Codex API for each applicable prompt proposal per hole (maximum rate limit of 400 holes per minute). The computational complexity of training our larger RLPG-R variant (3.6M parameters, 141269 holes and 9.19 minutes per epoch on a single Tesla V100 GPU) is much smaller than finetuning all or some part of Codex (12B parameters). During inference, we need to calculate the repo-level statistics just once, and all the subsequent hole completions in the repo can utilize this cached information, incurring no additional computational complexity. Besides training the PPC, all our experiments were performed on a CPU with 8GB RAM. Our prompt proposals are based on concepts such as post lines, imports, similar name files, method names and identifiers that are quite general and applicable to other programming languages. In addition to the existing prompt proposals, our framework provides the flexibility to incorporate new prompt proposals. Since the cost of retraining RLPG with the extended prompt proposals is extremely low (much lower than finetuning Codex with the new prompt proposals), our framework can be used to make interventions on the LLM to address observed weaknesses, as long as the intervention can be expressed as a prompt proposal that adds the missing context to the LLM.
As opposed to techniques that perform prompt engineering in the latent space and require access to the weights of the LLM, such as Li & Liang (2021), RLPG facilitates expressing intent in the form of prompt proposals that are intuitive for humans, easy to understand, and do not require access to the weights of the LLM.
6https://github.com/tree-sitter/tree-sitter-java
7https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast
Methods: We experimented with the following methods for generating the prompt:
1. Codex: Using the default context from Codex as the entire prompt.
2. Oracle: Using the ground-truth vector $Y^h$ (mentioned in Section 2.2). The prompt generated corresponds to using any of the successful prompt proposals (i.e., $y^h_p = 1$). Since this information is not available at inference, the oracle performance represents an upper bound.
3. Fixed Prompt Proposal: Using the most successful prompt proposal for all target holes. This was chosen based on the performance on the validation set and corresponded to taking 75% of the total context length from post lines in the current file.
4. RLPG-H and RLPG-R: Using the prompt proposal predicted by the RLPG-H and RLPG-R variants of PPC. The selected prompt proposal corresponds to taking the argmax of the predicted probabilities over different prompt proposals.
5. RLPG-BM25: Instead of using PPC to rank prompt proposals, using the scores obtained by BM25 (Jones et al., 2000) to select the best prompt proposal. The scores are calculated with the hole window being the query and prompt proposal contexts being the search documents. This serves as a non-learned retrieval method that makes use of our prompt proposals.
6. File-level BM25: Same as above, except that instead of using our prompt proposal contexts, the search documents consist of the full context from other files in the repository.
7. Random: For each target hole, select a context randomly from anywhere in the repository.
8. Random NN: Same as Random, except that amongst the randomly chosen contexts, we take the nearest neighbours of the hole window in the representation space of a pretrained model. This is analogous to the technique used in Liu et al. (2022).
9. Identifier Usage: For each target hole, we take the closest identifier and take usage windows of that identifier from everywhere in the repository. We take two lines above, two lines below and the usage line as the usage window. We can rank the usage windows either randomly (random) or based on the nearest neighbour distance to the hole window in the representation space (NN).
The last four methods help us understand the performance when a context other than the prompt proposal context is used. To generate a prompt using these methods, we take 50% of the context from these, followed by the default Codex context that takes up the remaining context length. For the NN baselines, we use CodeBERT (Feng et al., 2020) as the pretrained model. The contexts are taken in increasing order of the nearest neighbour distances, until we exhaust the allocated context length. RLPG-BM25 helps us understand the role of PPC. See Appendix C.3 for more details on the implementation of these methods.
Evaluation Metric: As mentioned in Section 2.2, to measure success, we used exact match between the predicted hole string generated by Codex and the target hole string. In our experiments, we report the number of successful holes divided by the total number of holes for each split, expressed as a percentage. We will call this success rate (SR) going forward.
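As a small illustration, the metric can be computed as below (a sketch; predictions and targets are assumed to be parallel lists of hole strings):

```python
def success_rate(predictions, targets):
    """Exact-match success rate (SR), as a percentage."""
    assert len(predictions) == len(targets) and targets
    hits = sum(pred == target for pred, target in zip(predictions, targets))
    return 100.0 * hits / len(targets)
```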
3.3 RESULTS
In this section, we present the results for the following two research questions explored in this paper:
[RQ1] Is it useful to generate a prompt that is composed of code context that is different from the default Codex context? If yes, what context can be useful?
[RQ2] For each target hole, is there a way of automatically selecting the prompt? If yes, how does this system perform relative to Codex?
RQ1 - Performance of Prompt Proposals: We found that combining the prompt proposal context (context from other files in the repository) with the default Codex context led to substantial improvement in performance. The right part of Figure 2 shows the performance of an oracle constructed from our prompt proposals. We see that across all data splits, the prompt proposals contribute to significantly large improvements over Codex (up to 36% for the test split). These results might seem surprising as Codex has not been trained on prompts that consist of context other than the default Codex context. What makes this result more surprising is that in most of the cases, the prompt consists of mashed-up context without logical ordering that may not even look like a semantically meaningful chunk of code (e.g. a list of string literals from a sibling file followed by the default Codex context, or post lines placed before the default Codex context as opposed to after). These results might suggest that as long as the relevant context (in our case, repo-level knowledge in the form of prompt proposals) is present in any form in the prompt, it can be quite effective.
RQ2 - Performance of PPC: Having seen promise in our prompt proposals, next we present the results of RLPG, which for each target hole predicts a single best prompt proposal. Table 1 presents the success rates along with the percentage of relative improvements for the test data. The second and third columns correspond to the averages across all holes in the test data. The last two columns correspond to the average success rate of individual repositories. The latter metric doesn't account for the size of the repository. As can be seen from the table, all the RLPG variants as well as the fixed prompt proposal improve the performance significantly over Codex. The random baselines are either worse or on par with Codex. Identifier usage is a good baseline but still performs worse than either the fixed prompt proposal or RLPG. The improved performance of RLPG-BM25 as compared to the fixed prompt proposal shows the value of generating example-specific prompts using RLPG. However, both the learned variants of RLPG, i.e., RLPG-H and RLPG-R, outperform RLPG-BM25, highlighting the importance of learning PPC. See Appendix D.5 for the performance of all methods on individual repositories. Note that even though we consider identifier usage as a separate baseline, one could consider it as one of the prompt proposals, leading to further improved performance of RLPG. Despite our efforts to avoid overlap, since the training data for Codex is not exactly known, there might be a slight possibility that part of our Google Code data is part of the training data for Codex. Even if there were an overlap, we want to point out that since Codex has seen the default Codex context during training, it would be more beneficial to use the default Codex context in the prompt rather than the context from the prompt proposals or any other context from other baselines.
Therefore, under this scenario, our evaluation would be more generous to the Codex baseline, with results biased in its favour relative to the other methods we have used.
Variation with #attempts: Imagine a scenario where we have a human-in-the-loop who has been given k attempts to prompt the LLM and can then choose one of the k hole predictions. We wanted to see how the performance of our framework varies with the number of attempts under this setting. This corresponds to using k prompts generated with the top-k prompt proposals (one prompt per proposal) and marking success if any of the k prompts leads to success. The left part of Figure 3 shows the variation of SR over the validation data with the value of k. For RLPG, the top-k prompt proposals were chosen based on the decreasing order of probabilities given by PPC. For the fixed prompt proposal, the top-k prompt proposals were decided based on the decreasing order of success rate of the individual prompt proposals on the validation dataset. From the figure, we notice that as we increase the value of k, the performance increases gradually at first and then saturates towards the oracle performance (79.05% for val data). This behaviour is observed for both the fixed prompt proposal and RLPG. However, we see that for the same value of k, the success rate for RLPG is higher, indicating that PPC learns a useful ranking of the prompt proposal contexts that can scale well with the number of attempts.
Performance based on Prompt Proposals: The right part of Figure 3 shows the mean success rate of prompt sources when we count success only when the corresponding prompt source is applicable. From the figure, we see that the current file is the most important prompt source. Closely following are sibling files and similar name files. We see that all prompt sources have non-zero chances of success, highlighting the usefulness of each prompt source. See Appendix D.1 for a similar breakdown based on prompt context type and Appendix E for an analysis of successful and failed sample cases.
4 RELATED WORK
LLMs for Code: Recently, there has been a lot of work around large language models of code. One class of models is the decoder-only models, which generate code from left to right. Codex (Chen et al., 2021), Google's model (Austin et al., 2021), GPT-J-6B (Wang & Komatsuzaki, 2021), GPT-Neo (Black et al., 2021b), GPT-Neo-X (Black et al., 2021a), CodeParrot (Tunstall et al., 2022), PolyCoder (Xu et al., 2022a) and InCoder (Fried et al., 2022) are some examples. We also have some encoder-only models that use a masked language modelling objective. CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020) and CuBERT (Kanade et al., 2020) are examples of such models. Lastly, we have the class of encoder-decoder models that generally use a bidirectional encoding of a context to decode a series of masked tokens. Code-T5 (Wang et al., 2021) and AlphaCode (Li et al., 2022) are examples of such models.
Repo-Level Info: Fewer works use information from outside the current file. Hellendoorn & Devanbu (2017) propose a nested n-gram model that utilizes a locality-based cache where the locality consists of all directories from the root of the project (inclusive of the current file). Zhang et al. (2021) use the parent class to generate the comments for the child class. Pashakhanloo et al. (2022b;a) capture the structure and semantics of the repository by converting it into a relational database and propose a graph-walk based mechanism for pruning the unrelated context.
Lyu et al. (2021) incorporate the API-dependency graph in an LSTM-based Seq2Seq model to assist in code generation. Xu et al. (2022b) incorporate three types of structural locality features while training the kNN-LM (Khandelwal et al., 2020). These features are binary variables that correspond to the presence or absence of a similar hierarchy. The three levels of hierarchy are (a) sibling file, (b) file in the same repo, and (c) no hierarchy. In contrast, we have a much richer set of prompt proposals incorporating the semantics and structure of the repository. Also, we assume black-box access to the actual LM and restrict ourselves to generating a prompt for the LLM without performing any finetuning of the LLM.
Prompt Generation: There have been promising works around prompt generation techniques in NLP. Broadly, there are two categories of automatic prompt generation techniques. The first category corresponds to producing continuous/soft prompts, where the prompt is described in the latent space of a language model (Li & Liang, 2021; Qin & Eisner, 2021; Bragg et al., 2021; Lester et al., 2021; Liu et al., 2021b). For example, Prefix-Tuning (Li & Liang, 2021) adds a prefix to the LM that can be learned by finetuning on examples from the downstream task. The second category produces discrete prompts, where the prompt is a text string that can be interpreted by a human (Shin et al., 2020; Gao et al., 2021; Schick & Schütze, 2021). For example, Autoprompt (Shin et al., 2020) generates prompts using a fixed template consisting of trigger tokens. The trigger tokens are shared across all inputs and determined by a gradient-guided search involving the LM. Our work falls in the category of discrete prompt generation techniques as we produce a prompt consisting of code tokens that can be easily interpreted by a human. However, in contrast to prior works that use a set of fixed templates for all examples, we learn to produce prompts conditioned on each example. Another important distinction is that we do not require access to the weights of the LM. A work concurrent with ours (Wang et al., 2022) studies the role of prompt-tuning compared to fine-tuning for code translation, defect localization and code summarization. However, their technique requires access to the weights of the LLM, and they perform experiments on models that are much smaller in scale than Codex. To the best of our knowledge, our work is the first to explore automatic prompt generation in a black-box access setting in the domain of source code.
5 CONCLUSIONS AND FUTURE DIRECTIONS
We present RLPG, a framework that learns to automatically generate prompts conditioned on the example, without requiring access to the weights of the LLM. RLPG utilizes the structure of the repository as well as the context from other files in the repository using a set of easy-to-understand prompt proposals. Note that even though we have scoped and worded our prompt proposals to be repository-level, the idea of RLPG and prompt proposals in itself is quite universal and need not be scoped to a repository. Taking context from other repositories as well as external knowledge such as API dependencies offers an interesting direction to explore in the future. In this work, we take context from only one prompt proposal at a time. For future work, we want to learn a model that can automatically compose a prompt from multiple prompt proposals (see Appendix D.3 for promising initial results).
Other interesting directions include incorporating the user's feedback in RLPG and extending RLPG to multi-line code autocompletion.
A DATASET CREATION DETAILS
A.1 CREATION OF HOLE COMPLETION DATA
To collect the hole completion data, we scraped Google Code8 for repositories tagged with the language "Java". Then we deduplicated repositories by searching for a matching repository with the same name on GitHub. For those repositories with zero matching names on GitHub, we downloaded the archive and extracted the source code (preserving the directory structure). Next, we tried to determine the licenses of all repositories by either looking for a LICENSE file or matching with keywords "license", "copyright", "mit", etc. For repos for which our process was able to come up with a known license, we selected the ones having a permissive license, i.e., MIT, ApacheV2 and BSD. This was followed by removing files that are exact duplicates of each other within a repo. One of the reasons for this intra-repository duplication may be that developers sometimes adopt lousy practices where, instead of declaring a package and importing functions, they simply copy-paste the desired file into the current folder. The target holes coming from any of the duplicate files do not form part of the hole completion dataset. However, these files might be used to contribute to the prompt proposal context for completing a target hole in a non-duplicate file. For the remaining files, we took each line that is not a blank line or a comment, and chose the middle character as the hole position, i.e., all the characters from the middle of the line to the end of the line form the target hole. To avoid large repos exerting a strong bias on our prompt proposal classifier, we capped the contribution from each repo at a maximum of 10000 holes. If the number of holes in the repo exceeds 10000, we randomly select 10000 holes.
A.2 CREATION OF DATA FOR REPO-LEVEL PROMPT PROPOSALS
We used the tree-sitter API for Java9 to get the parse tree of an individual file in a repo. To get information at the repo level, for each file in the repo, we stored the following information:
1. list of all class names in the file. This helped us to get the parent or child class file corresponding to a given parent or child class.
2. the file corresponding to each import statement.
3. for each import statement in the file, the position in the file where the import is used. This is used for ranking the files based on the heuristics mentioned in Table 2.
4. list of sibling files
5. list of similar name files. This was done by splitting the filenames based on either camel-case or underscore. If the sub-parts of two files match, then they are said to have similar names.
The above meta-data was calculated only once for each repo. The subsequent hole completions can use the same cached information. In practice, we can use a hash to store and retrieve this info efficiently. For a prompt proposal, given the prompt source, we first obtain a single file or a ranked list of files (see Table 2) using the info in the parse tree in conjunction with the above repo-level meta-data. All the prompt proposal context type information (MN, MNB, SL, I, TI, FD) can then be obtained by querying the parse tree of the selected file.
B PROMPT PROPOSAL DETAILS
B.1 RANKING OF FILES BASED ON PROMPT SOURCE
In Table 2, we provide details of how we select files for a given prompt source. Depending on the prompt proposal, we get either a single file or a list of files ranked based on some criteria.
For example, if the prompt source is Import, we take all the import statements used in the current file and identify the location in the current file where the corresponding imports have been used. According to our heuristic, the closer the import usage is to the hole position, the more likely it is that the prompt proposal context coming from the corresponding import file is relevant (for predicting the target hole). We get a ranked list of import files sorted in increasing order of distance (i.e., number of lines) between the import usage and the hole position. We start by taking all of the prompt proposal context from the first file in the ranked list and then keep iterating over the ranked list until either the total context length allocated to the prompt proposal gets exhausted or we reach the end of the ranked list.
8https://code.google.com/archive/
9https://github.com/tree-sitter/tree-sitter-java
B.2 EXAMPLES OF PROMPT CONTEXT TYPE
We provide examples of each of our prompt context types below:
1. Post Lines (PL): For the example shown in Figure 1 of the main paper, post lines will take all the lines after the line mg.InitializeToAssignment(CurrentAssignments()) till we reach the end of the file (AffinityPropagation.java).
2. Identifiers (I): Identifiers are the names of variables used in the code. For example, for the prompt proposal context taken from the imported file shown in Figure 1 in the main paper (highlighted in violet), the identifiers are InitializeToAssignment (line 1), a (line 1), currentAssignment_ (line 2), a (line 2), clone (line 2), alreadyInitialized_ (line 3), justOneRound_ (line 4).
3. Type Identifiers (TI): Type identifiers define the type of an identifier. For example, in the code snippet class DPAffinityPropagation extends AffinityPropagation, AffinityPropagation is labeled as a type identifier. Similarly, in the snippet DPAPParameters parameters_;, DPAPParameters is a type identifier.
4. Field Declarations (FD): The variables of a class type are introduced by field declarations. For example, double[][] mHijMujT_; and MessageValuePair[][] sortedMHijMujTs_; are examples of field declarations.
5. String Literals (SL): A string literal is a sequence of characters enclosed in double quotes. For example, in the code snippet System.err.println("DPAP load Warning: unknown parameter " + entries[0] + ", value = " + entries[1]);, we have two string literals: (a) "DPAP load Warning: unknown parameter "; (b) ", value = ".
6. Method Names (MN): For the example shown in Figure 1 of the main paper, public void InitializeToAssignment(int[] a) is the method name prompt context type.
7. Method Names and Bodies (MNB): For the example shown in Figure 1 of the main paper, the part highlighted in violet represents the method names and bodies.
B.3 TRUNCATION STRATEGIES FOR PROMPT PROPOSAL CONTEXT
If the prompt proposal context is longer than the context length allocated to it, then we need to truncate the prompt proposal context. We followed the two schemes below for truncating context:
• front: We truncate the context from the front. This is used for all prompt sources except Parent Class, and when we take PL from Current.
• back: We truncate the context from the back. This is used when the prompt source is Parent Class, and when we take prompt context types other than PL from Current.
The truncation strategies for each case were selected based on results on a small validation set.
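A small sketch of these two schemes over a token list follows; the function name and the token-level granularity are our simplifying assumptions.

```python
def truncate(tokens, budget, scheme="front"):
    """Fit a context into its allocated budget.
    "front" drops tokens from the front (keeps the end, nearest the hole);
    "back" drops tokens from the back (keeps the beginning)."""
    if budget <= 0:
        return []
    if len(tokens) <= budget:
        return tokens
    return tokens[-budget:] if scheme == "front" else tokens[:budget]
```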
For the prompt source Current, except when the prompt context type is PL, we always start by taking code of the prompt context type from after the hole position. This makes sense as the default Codex context will anyway contain code before the hole. Only if this turns out to be blank do we use the code of the context type from before the hole.
B.4 LIST OF PROMPT PROPOSALS
B.5 OTHER PROMPT PROPOSAL VARIATIONS
We experimented with other variations that include: (a) appending class names at the beginning of the prompt proposal context, (b) using newline or space to join the prompt proposal context and the default Codex context, (c) taking all or the top-k of the prompt context types, (d) ordering of the top-k.
• Context Separator: This defines how we join the prompt proposal context string to the default Codex context string. We experimented with space and newline as context separators.
• Prompt Proposal Context Formatting: We can format the prompt proposal context before giving it to the Prompt Composer. We experimented with the following options:
1. class_name: append [class name of the file] at the beginning of the prompt proposal context taken from each file that is part of the prompt source. For example, if we are taking prompt proposal context from two import files f1 and f2, the prompt proposal context will be formatted as: [class name of f1] prompt proposal context from f1 + space + [class name of f2] prompt proposal context from f2. We use this when the prompt proposal context types are MN, I, TI, FD and SL.
2. class_method_name: we apply this only when the prompt proposal context type is MNB. We append method names at the beginning of each of the corresponding method bodies. We also append the prompt proposal context from a file with the name of the class as described in the previous item.
3. comment: adding in the prompt proposal context as a comment, i.e., formatting it as: /** prompt proposal context */. This wasn't found to be very useful.
4. none: passing the prompt proposal context as it is. We use this when the prompt proposal context type is PL.
• Top-k Type: For each of the prompt proposal context types, except PL, we experimented with taking the (a) first, (b) last and (c) all of the prompt proposal context types, e.g., we can take the first 10 identifiers. We found 'all' to be the best among all.
• Top-k: We experimented with k values of (a) 10, (b) 20 and (c) all. We found 'all' to work best for all prompt context types.
C IMPLEMENTATION DETAILS
C.1 RLPG-H
We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model $F_\phi$ to obtain the representation of the hole window. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. $W^1 \in \mathbb{R}^{512 \times 768}$, $b^1 \in \mathbb{R}^{512}$, $W^2 \in \mathbb{R}^{63 \times 512}$, $b^2 \in \mathbb{R}^{63}$.
C.2 RLPG-R
We used the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 3e-4 and a batch size of 64. We used CodeBERT (Feng et al., 2020) as our pretrained model $F_\phi$ to obtain the representations of the hole window and the prompt proposal context. The size of the representation (corresponding to the hidden dimension of the [CLS] token) is 768. In Equations 3, 4 and 5 in Section 2.2, the projection matrices $W^Q_i \in \mathbb{R}^{d_q \times d_{model}}$, $W^K_i \in \mathbb{R}^{d_k \times d_{model}}$, $W^V_i \in \mathbb{R}^{d_v \times d_{model}}$, $W^O \in \mathbb{R}^{d_{model} \times \tau d_v}$. For the multi-head attention, we used $d_k = d_q = d_v = 32$, $\tau = 4$ and $d_{model} = 768$, with $W_p \in \mathbb{R}^{63 \times 768}$ and $b_p \in \mathbb{R}^{63}$. For each head, we perform a scaled dot-product attention (Equation 4).
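A minimal PyTorch sketch of this attention step with the dimensions above is given below. It is written generically over sequences (in the paper both inputs are single [CLS] vectors, i.e. length-one sequences), and the class and argument names are our illustrative choices, not the released code.

```python
import math
import torch
import torch.nn as nn

class ProposalAttention(nn.Module):
    """Multi-head scaled dot-product attention between the hole-window query
    and prompt-proposal-context keys/values (Equations 3-5)."""
    def __init__(self, d_model=768, n_heads=4, d_head=32):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.q_proj = nn.Linear(d_model, n_heads * d_head, bias=False)  # stacked W_i^Q
        self.k_proj = nn.Linear(d_model, n_heads * d_head, bias=False)  # stacked W_i^K
        self.v_proj = nn.Linear(d_model, n_heads * d_head, bias=False)  # stacked W_i^V
        self.o_proj = nn.Linear(n_heads * d_head, d_model, bias=False)  # W^O

    def forward(self, hole, context):
        # hole: (batch, 1, d_model); context: (batch, n, d_model)
        b = hole.size(0)
        split = lambda x: x.view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.q_proj(hole)), split(self.k_proj(context)), split(self.v_proj(context))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)   # Equation 4
        out = torch.softmax(scores, dim=-1) @ v                      # per-head attention
        out = out.transpose(1, 2).reshape(b, -1, self.n_heads * self.d_head)
        return self.o_proj(out)                                      # Equation 5 (W^O)
```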
The G module consists of a dropout (Srivastava et al., 2014) layer, a residual connection (He et al., 2016) and a layernorm (Ba et al., 2016), followed by a sequence of (a) a dense layer with weights 2048 × 768 and bias 2048, (b) a relu activation, (c) a dense layer with weights 768 × 2048 and bias 768, (d) a dropout layer, (e) a residual connection, and (f) a layernorm. A dropout value of 0.25 was used while training. Our model resembles one layer of the transformer encoder block (Vaswani et al., 2017).
C.3 BASELINES
The Random baseline first selects a file randomly from the current repository and then selects a random line within that file. We choose all the lines starting from that line to the end line of the chosen file as context (excluding the hole window if the chosen file is the current file). The nearest neighbour similarity is based on the dot product between the representation of the hole window and the representation of the context, where we use a pretrained CodeBERT (Feng et al., 2020) model to obtain the representations. For the Identifier Usage baseline, if the nearest identifier to the hole doesn't return any usage window, we proceed to the next nearest identifier. For faster computation and to avoid memory issues when running on our hardware, for the NN baselines, we collect 64 random neighbours and then rank based on the nearest neighbour distance. The BM25-based baselines use the Okapi BM25 implementation with default parameters given by the pip package rank-bm25 0.2.210. For file-level BM25, if the file context exceeds the allocated context length, we truncate from the back.
10https://pypi.org/project/rank-bm25/
D ADDITIONAL RESULTS
D.1 ABLATION ON PERFORMANCE BASED ON PROMPT PROPOSAL
Figure 4 shows the mean success rate of prompt context types when success is counted only for the cases when these prompt contexts are applicable. As can be seen from the figure, post lines is the most useful prompt context type on average. The contribution from other prompt context types, though smaller than post lines, is still significant, highlighting the importance of each prompt context type. Figure 5 shows the normalized success rates where the normalization is performed across the prompt proposals. This helps us understand the relative performance of prompt proposal sources and context types. The left part of the figure breaks down the performance based on prompt sources and the right part breaks it down based on prompt context types. One thing to note from the plot of prompt context types is that when we consider relative performance, post lines is no longer the most dominant context type. This is because post lines applies only when the prompt source corresponds to the current file, thereby contributing lower numbers compared to most of the other context types, which are tied to all prompt sources.
D.2 PERFORMANCE ON NON-IMMEDIATE POST LINES
Table 4 shows the performance of post lines when starting from the fourth line after the target hole line (i.e., skipping three lines after the target hole) as opposed to starting from the line that immediately follows the target hole line. This experiment helps us understand the performance when we are interested in the much harder task of multi-line code autocompletion, wherein the objective is to predict not just the blanked-out portion in the current line but also, say, the next three lines. This can correspond to completing a block of code like a function body.
D ADDITIONAL RESULTS

D.1 ABLATION ON PERFORMANCE BASED ON PROMPT PROPOSAL

Figure 4 shows the mean success rate of the prompt context types when success is counted only for the cases where these prompt contexts are applicable. As can be seen from the figure, post lines is the most useful prompt context type on average. The contribution from the other prompt context types, though smaller than that of post lines, is still significant, highlighting the importance of each prompt context type. Figure 5 shows the normalized success rates, where the normalization is performed across the prompt proposals. This helps us understand the relative performance of prompt proposal sources and context types. The left part of the figure breaks down the performance based on prompt sources, and the right part based on prompt context types. One thing to note from the plot of prompt context types is that, when we consider relative performance, post lines is no longer the most dominant context type. This is because post lines is tied only to the case where the prompt source corresponds to the current file, thereby contributing lower numbers when compared to most of the other context types, which are tied to all prompt sources.

D.2 PERFORMANCE ON NON-IMMEDIATE POST LINES

Table 4 shows the performance of post lines when starting from the fourth line after the target hole line (i.e., skipping three lines after the target hole), as opposed to starting from the line that immediately follows the target hole line. This experiment helps us understand the performance on the much harder task of multi-line code autocompletion, where the objective is to predict not just the blanked-out portion of the current line but also, say, the next three lines. This can correspond to completing a block of code such as a function body. As can be seen from the table, when starting from the fourth line, we see a very slight deterioration in performance. This is expected, because the farther away we move from the target hole, the less relevant the post lines context becomes. However, the performance drop is not significant, suggesting that post lines is still a very useful prompt context type under the multi-line code autocompletion setting. Accordingly, we can include this as one of the prompt proposals in our framework alongside the current version of post lines.

D.3 COMPOSITION OF PROMPT PROPOSALS

Table 5 shows the performance of the two versions of RLPG when we compose the prompt proposal context from l prompt proposals. We take the top-l prompt proposals given by RLPG in decreasing order of probability. To decide how much context should be used for each prompt proposal, we divide the total context length in proportion to the normalized probabilities of the top-l prompt proposals (a sketch of this allocation is given below). As can be seen from the table, even though PPC is not explicitly trained to perform composition (both the ground-truth vector and the representation of the prompt proposal context involve a single prompt proposal), all the compositions lead to significant improvements over Codex. However, as expected, the best results correspond to taking context from a single prompt proposal (i.e., the training setting). The drop in success rate with l = 2 and l = 5 is not significant, which suggests that explicitly training RLPG to learn to compose contexts from different prompt proposals could lead to promising results, and hence offers an interesting future direction.
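A minimal sketch of the proportional budget allocation just described; the proposal names, probabilities and rounding policy here are hypothetical:

```python
def allocate_context(proposal_probs, total_len, l=2):
    """Split total_len tokens across the top-l proposals in proportion to
    their normalized RLPG probabilities (hypothetical helper)."""
    top = sorted(proposal_probs.items(), key=lambda kv: kv[1], reverse=True)[:l]
    z = sum(p for _, p in top)
    return {name: int(total_len * p / z) for name, p in top}

probs = {"post_lines": 0.55, "import_method_names": 0.25, "sibling_ids": 0.20}
print(allocate_context(probs, total_len=2048, l=2))
# {'post_lines': 1408, 'import_method_names': 640}
```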
D.4 EFFECT OF CONTEXT LENGTH

To understand the effect of context length on the performance of our prompt proposals, we took half of the context length available for a prompt in Codex and observed the performance of the oracle and the fixed prompt proposal. As before, we saw that an oracle constructed from our prompt proposals shows a remarkable improvement over Codex, highlighting the value of our prompt proposals. However, when compared to the larger context length, the relative gains are smaller. This is expected, as a smaller context length means that the relevant context coming from a prompt proposal needs to be truncated to fit inside the prompt, thereby leading to loss of information.

D.5 PERFORMANCE ON INDIVIDUAL REPOSITORIES

Table 7, Table 8 and Table 9 present the success rates of different methods over individual repositories in the training, validation and test splits, respectively. The repo-wise averages in Table 2 in the main paper were calculated by taking the average of the numbers corresponding to each column. The hole-wise averages correspond to multiplying the repo-wise numbers of each method by the total number of holes in the repo to get the total number of successful holes by that method for that repo. We then add the total number of successful holes across repos and divide it by the total number of holes in the entire data split to get the hole-wise averages.

E ANALYSIS OF SAMPLE CASES

In Figure 1, RLPG selects the prompt proposal that corresponds to taking method names and bodies from the imported file (i.e., MaximizingGibbsSampler.java). Note that mg. before the hole position indicates that a method from the imported file is likely to be invoked. In this case, the prompt proposal context (highlighted in violet) contains the method name InitializeToAssignment (part of the target hole). This, in conjunction with the default Codex context, which contains the method CurrentAssignments() (part of the target hole), leads to the generation of a successful prompt. On the other hand, the prompt created from the default Codex context alone fails to predict the target hole in this case. In general, we observed that in the absence of a strong signal, Codex has a tendency to give preference to natural-language comments occurring before the hole position, e.g., naming the method based on the comment. This can hurt in certain cases. We provide instances of positive and negative sample cases for RLPG below.

E.1 POSITIVE CASES

We provide some examples of cases where RLPG led to the correct prediction and Codex failed.

1. Cases where part of the target hole is found exactly in the prompt proposal context.
• RLPG = Propagation(int numVars) vs Codex = Propagation()
• RLPG = tersFromFile(String filename) { vs Codex = ters(String filename) {
• RLPG = als("dampingFactor")) { vs Codex = als("numVars")) {
• RLPG = ] + ", value = " + entries[1]); vs Codex = ]);
• RLPG = stem.exit(1); vs Codex = stem.err.println("DPAP load error: " + ex.get

2. Cases where Codex takes a strong hint from the preceding natural-language comment, thereby producing incorrect predictions.
• RLPG = d PassMessages() vs Codex = d DoOneRoundOfMessagePassing()
• RLPG = teger> CurrentExemplars() { vs Codex = teger> ChooseExemplars() {
• RLPG = ring FileName() { vs Codex = ring GetAlgorithmFilename() {

E.2 NEGATIVE CASES

In certain cases, extra information from the prompt proposal context might lead to confusion and produce incorrect predictions.
• RLPG = an hasConverged_; vs Codex = an converged_;
• RLPG = _[i][j] = -Double.MAX_VALUE; vs Codex = _[i][j] = 0;
1. What is the focus and contribution of the paper regarding prompt generation for large language models? 2. What are the strengths of the proposed approach, particularly in its ability to improve code completion tasks? 3. What are the weaknesses of the paper, especially regarding the repository-specific prompt proposals? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors propose a prompt generator based on the whole code repository for code completion tasks. They show a significant improvement over baseline Codex results by using their prompt generator.
Strengths And Weaknesses
Strengths:
- Addresses an important and novel area: good prompt generation for LLM tasks.
- The approach does not require access to the weights of the LLM.
- Suggests and implements a prompt generator that uses information from the whole repository. The prompt generator also uses repository-level prompt "proposals" (rules/suggestions).
- Using the generated prompts significantly improves Codex results.
Weaknesses:
- It is not clear whether prompt proposals are really per-repository. It seems that they will be pretty universal and should not vary much, so the value of repository-specific prompt proposals is not really established or proven. This is a minor issue though.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. I think the paper presents a novel approach for improving results on tasks such as code completion by generating prompts from the whole repository using a trained framework. The results of the paper should be reproducible, assuming that the dataset used by the authors to train the PPC is available.
ICLR
Title FORK: A FORward-looKing Actor for Model-Free Reinforcement Learning

Abstract In this paper, we propose a new type of Actor, named forward-looking Actor, or FORK for short, for Actor-Critic algorithms. FORK can be easily integrated into a model-free Actor-Critic algorithm. Our experiments on six Box2D and MuJoCo environments with continuous state and action spaces demonstrate the significant performance improvement FORK can bring to state-of-the-art algorithms. A variation of FORK can further solve BipedalWalkerHardcore in as few as four hours using a single GPU.

1 INTRODUCTION

Deep reinforcement learning has had tremendous successes, and sometimes even superhuman performance, in a wide range of applications including board games (Silver et al., 2016), video games (Vinyals et al., 2019), and robotics (Haarnoja et al., 2018a). A key to these recent successes is the use of deep neural networks as high-capacity function approximators that can harvest a large amount of data samples to approximate high-dimensional state or action value functions, which tackles one of the most challenging issues in reinforcement learning problems with very large state and action spaces. Many modern reinforcement learning algorithms are model-free, so they are applicable in different environments and can readily react to new and unseen states. This paper considers model-free reinforcement learning for problems with continuous state and action spaces, in particular the Actor-Critic method, where Critic evaluates the state or action values of the Actor's policy and Actor improves the policy based on the value estimation from Critic.

To draw an analogy between Actor-Critic algorithms and human decision making, consider the scenario where a high school student is deciding which college to attend after graduation. The student, like Actor, is likely to make her/his decision based on the perceived values of the colleges, where the value of a college is based on many factors, including (i) the quality of education it offers, its culture, and diversity, which can be viewed as instantaneous rewards of attending the college; and (ii) the career opportunities after finishing the college, which can be thought of as the future cumulative reward. We now take this analogy one step further: in human decision making, we often not only consider the "value" of the current state and action, but also forecast the outcome of the current decision and the value of the next state. In the example above, a student often explicitly takes into consideration the first job she/he may have after finishing college, and the "value" of that first job.

Since such forward-looking reasoning is common in human decision making, we are interested in understanding whether it can help Actor; in particular, whether it is useful for Actor to forecast the next state and use the value of future states to improve the policy. To our great surprise, a relatively straightforward implementation of a forward-looking Actor, as an add-on to existing Actor algorithms, improves Actor's performance by a large margin. Our new Actor, named FOrward-looKing Actor, or FORK for short, mimics human decision making where we think multiple steps ahead. In particular, FORK includes a neural network that forecasts the next state given the current state and current action, called the system network, and a neural network that forecasts the reward given a (state, action) pair, called the reward network.
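To illustrate, the two predictors could be implemented as small MLPs along the following lines. This is a hedged sketch with illustrative sizes, not the paper's actual architecture (the actual hyperparameters are listed in Appendix A.4), and the reward network here matches the initial $(s_t, a_t)$ definition; the appendix later adds $s_{t+1}$ as an extra input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SystemNetwork(nn.Module):
    """F_theta: predicts s_{t+1} from (s_t, a_t). Sizes are illustrative."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class RewardNetwork(nn.Module):
    """R_eta: predicts r_t from (s_t, a_t)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def train_models(system, reward, opt_f, opt_r, s, a, r, s_next):
    # Supervised updates on a replay-buffer minibatch (s, a, r, s_next).
    loss_f = F.smooth_l1_loss(system(s, a), s_next)    # L(theta)
    loss_r = F.mse_loss(reward(s, a).squeeze(-1), r)   # L(eta)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
```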
With the system network and reward network, FORK can forecast the next state and consider the value of the next state when improving the policy. For example, consider the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), which updates the parameters of Actor as follows: $\phi \leftarrow \phi + \beta \nabla_\phi Q_\psi(s_t, A_\phi(s_t))$, where $s_t$ is the state at time $t$, $\phi$ are Actor's parameters, $\beta$ is the learning rate, $Q_\psi(s, a)$ is the Critic network, and $A_\phi(s)$ is the Actor network. With DDPG-FORK, the parameters can be updated as follows:

$\phi \leftarrow \phi + \beta\big(\nabla_\phi Q_\psi(s_t, A_\phi(s_t)) + \nabla_\phi R_\eta(s_t, A_\phi(s_t)) + \gamma \nabla_\phi R_\eta(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1})) + \gamma^2 \nabla_\phi Q_\psi(\tilde{s}_{t+2}, A_\phi(\tilde{s}_{t+2}))\big), \quad (1)$

where $R_\eta$ is the reward network, and $\tilde{s}_{t+1}$ and $\tilde{s}_{t+2}$ are the future states forecast by the system network $F_\theta$. We will see that FORK can be easily incorporated into most deep Actor-Critic algorithms, by adding two additional neural networks (the system network and the reward network), and by adding extra terms to the loss function when training Actor, e.g., adding the term $R_\eta(s_t, A_\phi(s_t)) + \gamma R_\eta(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1})) + \gamma^2 Q_\psi(\tilde{s}_{t+2}, A_\phi(\tilde{s}_{t+2}))$ for each sampled state $s_t$ to implement (1). We remark that Equation (1) is just one example of FORK; FORK can have different implementations (a detailed discussion can be found in Section 3).

We further remark that learning the system model is not a new idea and has a long history in reinforcement learning, under the name of model-based reinforcement learning (some state-of-the-art model-based reinforcement learning algorithms and benchmarks can be found in (Wang et al., 2019)). Model-based reinforcement learning uses the model in a sophisticated way, often based on deterministic or stochastic optimal control theory, to optimize the policy based on the model. FORK only uses the system network as a blackbox to forecast future states, and does not use it as a mathematical model for optimizing control actions. With this key distinction, any model-free Actor-Critic algorithm with FORK remains model-free.

In our experiments, we added FORK to two state-of-the-art model-free algorithms, according to recent benchmark studies (Duan et al., 2016a; Wang et al., 2019): TD3 (Fujimoto et al., 2018) (for deterministic policies) and SAC (Haarnoja et al., 2018b) (for stochastic policies). The evaluations on six challenging environments with continuous state and action spaces show significant improvement when adding FORK. In particular, TD3-FORK performs the best among all algorithms we tested. For Ant-v3, it improves the average cumulative reward by more than 50% over TD3, and achieves TD3's best performance using only 35% of the training samples. BipedalWalker-v3 is considered "solved" when the agent obtains an average cumulative reward of at least 300 (see https://github.com/openai/gym/blob/master/gym/envs/box2d/bipedal_walker.py). TD3-FORK needs only 0.23 million actor training steps to solve the problem, half of that under TD3. Furthermore, a variation of TD3-FORK solves BipedalWalkerHardcore, a well-known difficult environment, in as few as four hours using a single GPU.

1.1 RELATED WORK

The idea of using learned models in reinforcement learning is not new, and actually has a long history in reinforcement learning. At a high level, FORK shares a similar spirit with model-based reinforcement learning and rollout. However, in terms of implementation, FORK is very different and much simpler.
Rollout in general requires the Monte-Carlo method (Silver et al., 2017) to simulate a finite number of future states from the current state, and then combines that with value function approximations to decide the action to take at the current time. FORK does not require any high-fidelity simulation. The key distinction between FORK and model-based reinforcement learning is that model-based reinforcement learning uses the learned model in a sophisticated manner. For example, in SVG (Heess et al., 2015), the learned system model is integrated as a part of the calculation of the value gradient; in (Gu et al., 2016), a refitted local linear model and rollout are used to derive a linear-Gaussian controller; and (Bansal et al., 2017) uses a learned dynamical model to compute the trajectory distribution of a given policy and consequently estimates the corresponding cost using a Bayesian-optimization-based policy search. More model-based reinforcement learning algorithms and related benchmarking can be found in (Wang et al., 2019). FORK, on the other hand, only uses the system network to predict future states, and does not use the system model beyond that.

Other related work that accelerates reinforcement learning algorithms includes acceleration through exploration strategies (Gupta et al., 2018), optimizers (Duan et al., 2016b), and intrinsic reward (Zheng et al., 2018), just to name a few. These approaches are complementary to ours; FORK can be added to further accelerate learning.

2 BACKGROUND

Reinforcement learning algorithms aim at learning policies that maximize the cumulative reward by interacting with the environment. We consider a standard reinforcement learning setting defined by a Markov decision process (MDP) $(S, A, p_0, p, r, \gamma)$, where $S$ is a set of states, $A$ is the action space, $p_0(s)$ is the probability distribution of the initial state, $p : S \times S \times A \to [0, \infty)$ is the transition density function, which represents the distribution of the next state $s_{t+1}$ given current state $s_t$ and action $a_t$, $r : S \times A \to [r_{min}, r_{max}]$ is the bounded reward function on each transition, and $\gamma \in (0, 1]$ is the discount factor.

We consider a discrete-time system. At each time step $t$, given the current state $s_t \in S$, the agent selects an action $a_t \in A$ based on a (deterministic or stochastic) policy $\pi(a_t | s_t)$, which moves the environment to the next state $s_{t+1}$ and yields a reward $r_t = r(s_t, a_t)$ to the agent. We consider stationary policies in this paper, under which the action is taken based on $s_t$ and is independent of other historical information. Starting from time 0, the return of a given policy $\pi$ is the discounted cumulative reward

$J_\pi(i) = \sum_{t=0}^{T} \gamma^t r(s_t, a_t)$, given $s_0 = i$.

$J_\pi(i)$ is also called the state-value function. Our goal is to learn a policy $\pi^*$ that maximizes this cumulative reward: $\pi^* \in \arg\max_\pi J_\pi(i)$ for all $i$. We assume our policy is parameterized by parameter $\phi$, denoted by $\pi_\phi$, e.g., by the Actor network in Actor-Critic algorithms. In this case, our goal is to identify the optimal parameter $\phi^*$ such that $\phi^* \in \arg\max_\phi J_{\pi_\phi}(i)$.

Instead of the state-value function, it is often convenient to work with the action-value function (Q-function), which is defined as follows: $Q_\pi(s, a) = \mathbb{E}[r(s, a) + \gamma J_\pi(s')]$, where $s'$ is the next state given current state $s$ and action $a$. The optimal policy is a policy that satisfies the following Bellman equation (Bellman, 1957):

$Q_{\pi^*}(s, a) = \mathbb{E}\big[r(s, a) + \gamma \max_{a' \in A} Q_{\pi^*}(s', a')\big]$.
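As a tiny worked example of the discounted return defined above (illustrative numbers only):

```python
def discounted_return(rewards, gamma=0.9):
    # J_pi = sum_t gamma^t * r_t along a single trajectory starting at s_0
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 2.0]))  # 1 + 0 + 0.81 * 2 = 2.62
```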
When neural networks are used to approximate action-value functions, we denote the action-value function by $Q_\psi(s, a)$, where $\psi$ are the parameters of the neural network.

3 FORK — FORWARD-LOOKING ACTOR

This paper focuses on Actor-Critic algorithms, where Critic estimates the state or action value functions of the current policy, and Actor improves the policy based on the value functions. We propose a new type of Actor, FORK: more precisely, a new training algorithm that improves the policy by considering not only the action value of the current state (or the states of the current mini-batch), but also future states and actions forecast using a learned system model and a learned reward model. This forward-looking Actor is illustrated in Figure 1. In FORK, we introduce two additional neural networks.

The system network $F_\theta$. The network is used to predict the next state of the environment, i.e., given current state $s_t$ and action $a_t$, it predicts the next state $\tilde{s}_{t+1} = F_\theta(s_t, a_t)$. With experiences $(s_t, a_t, s_{t+1})$, training the system network is a supervised learning problem. The neural network can be trained using mini-batches from the replay buffer and the smooth-L1 loss $L(\theta) = \|s_{t+1} - F_\theta(s_t, a_t)\|_{\text{smooth L1}}$.

The reward network $R_\eta$. This network predicts the reward given current state $s_t$ and action $a_t$, i.e., $\tilde{r}_t = R_\eta(s_t, a_t)$. The network can be trained from experiences $(s_t, a_t, r_t)$ with the MSE loss $L(\eta) = (r_t - R_\eta(s_t, a_t))^2$.

FORK. With the system network and the reward network, the agent can forecast the next state, the next-next state, and so on. Actor can then use the forecast to improve the policy. For example, we consider the following loss function:

$L(\phi) = \mathbb{E}\big[-Q_\psi(s_t, A_\phi(s_t)) - R_\eta(s_t, A_\phi(s_t)) - \gamma R_\eta(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1})) - \gamma^2 Q_\psi(\tilde{s}_{t+2}, A_\phi(\tilde{s}_{t+2}))\big]. \quad (2)$

In the loss function above, $s_t$ comes from data samples (e.g., the replay buffer), and $\tilde{s}_{t+1}$ and $\tilde{s}_{t+2}$ are calculated from the system network as shown below:

$\tilde{s}_{t+1} = F_\theta(s_t, A_\phi(s_t))$ and $\tilde{s}_{t+2} = F_\theta(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1})). \quad (3)$

Note that when training Actor $A_\phi$ with the loss function $L(\phi)$, all parameters in $L(\phi)$ other than $\phi$ are regarded as constants (see the PyTorch code in the supplemental materials). The action-value function $Q$, without function approximation, under the current policy $A_\phi$ satisfies

$Q(s_t, A_\phi(s_t)) = \mathbb{E}\big[r(s_t, A_\phi(s_t)) + \gamma r(s_{t+1}, A_\phi(s_{t+1})) + \gamma^2 Q(s_{t+2}, A_\phi(s_{t+2}))\big]$,

where $r$, $s_{t+1}$ and $s_{t+2}$ are the actual rewards and states under the current policy, not estimated values. Therefore, the loss function $L(\phi)$ can be viewed as the average of two estimators. Given action values from Critic and with a mini-batch of size $N$, FORK updates its parameters as $\phi \leftarrow \phi - \beta_t \nabla_\phi L(\phi)$, where $\beta_t$ is the learning rate and

$\nabla_\phi L(\phi) = -\frac{1}{N} \sum_{i=1}^{N} \Big( \nabla_a Q_\psi(s_i, a)\big|_{a=A_\phi(s_i)} \nabla_\phi A_\phi(s_i) + \nabla_a R_\eta(s_i, a)\big|_{a=A_\phi(s_i)} \nabla_\phi A_\phi(s_i) + \gamma \nabla_a R_\eta(\tilde{s}'_i, a)\big|_{a=A_\phi(\tilde{s}'_i)} \nabla_\phi A_\phi(\tilde{s}'_i) + \gamma^2 \nabla_a Q_\psi(\tilde{s}''_i, a)\big|_{a=A_\phi(\tilde{s}''_i)} \nabla_\phi A_\phi(\tilde{s}''_i) \Big)$,

where $\tilde{s}'_i$ and $\tilde{s}''_i$ are the next state and the next-next state estimated from the system network. We note that it is important to use the system network to generate future states as in Equation (3), because they mimic the states under the current policy. If we were to sample a sequence of consecutive states from the replay buffer, the sequence would come from an old policy, which does not help the learning.
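A hedged PyTorch sketch of the actor loss in Equation (2), with the two-step rollout of Equation (3); `actor`, `critic`, `system` and `reward` are stand-ins for $A_\phi$, $Q_\psi$, $F_\theta$ and $R_\eta$:

```python
def fork_actor_loss(actor, critic, system, reward, s, gamma=0.99):
    """Hedged sketch of Equations (2)-(3). Only the actor's optimizer should
    step on this loss, so the other networks act as constants."""
    a = actor(s)
    s1 = system(s, a)        # s~_{t+1} = F_theta(s_t, A_phi(s_t))
    a1 = actor(s1)
    s2 = system(s1, a1)      # s~_{t+2} = F_theta(s~_{t+1}, A_phi(s~_{t+1}))
    a2 = actor(s2)
    return -(critic(s, a)
             + reward(s, a)
             + gamma * reward(s1, a1)
             + gamma ** 2 * critic(s2, a2)).mean()

# Typical usage in the actor update step:
# actor_opt.zero_grad(); fork_actor_loss(...).backward(); actor_opt.step()
```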
Figure 2 compares TD3-FORK, TD3, and TD3-MT (which samples a sequence of three consecutive states) on the BipedalWalker-v3 environment. We can clearly see that simply using consecutive states from experiences does not help improve learning; in fact, it significantly hurts the learning.

Modified Reward Model: We found from our experiments that the reward network can more accurately predict the reward $r_t$ when the next state $s_{t+1}$ is included as an input to the reward network (an example can be found in Appendix A.1). Therefore, we use a modified reward network $R_\eta(s_t, a_t, s_{t+1})$ in FORK.

Adaptive Weight: The loss function $L(\phi)$ in our algorithm uses the system network and the reward network to boost learning. In our experiments, we found that the forecasting can significantly improve the performance, except at the end of learning. Since the system and reward networks are not perfect, the errors in prediction can introduce errors/noise. To overcome this issue, we found it helpful to use an adaptive weight $w$ so that FORK accelerates learning at the beginning, but its weight decreases gradually as the agent gets close to the learning goal. A comparison between fixed weights and adaptive weights can be found in Appendix A.2. We use a simple adaptive weight

$w = \big(1 - \bar{r}/r_0\big)_0^1\, w_0$,

where $\bar{r}$ is the moving average of the cumulative reward (per episode), $r_0$ is a predefined goal, $w_0$ is the initial weight, and $(a)_0^1 = a$ if $0 \le a \le 1$, $= 0$ if $a < 0$, and $= 1$ if $a > 1$. The loss function with the adaptive weight becomes

$L(\phi) = \mathbb{E}\big[-Q_\psi(s_t, A_\phi(s_t)) - w R_\eta(s_t, A_\phi(s_t)) - w\gamma R_\eta(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1})) - w\gamma^2 Q_\psi(\tilde{s}_{t+2}, A_\phi(\tilde{s}_{t+2}))\big]. \quad (4)$

Furthermore, we set a threshold and let $w = 0$ if the loss of the system network is larger than the threshold. This is to avoid using FORK when the system and reward networks are very noisy. We note that in our experiments, the thresholds were chosen such that $w = 0$ for around 20,000 steps at the beginning of each instance, which includes the first 10,000 random exploration steps.

Different Implementations of FORK: It is easy to see that FORK can be implemented in different forms. For example, instead of looking two steps ahead, we can look one step ahead as follows:

$L(\phi) = \mathbb{E}\big[-Q_\psi(s_t, A_\phi(s_t)) - w R_\eta(s_t, A_\phi(s_t)) - w\gamma Q_\psi(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1}))\big], \quad (5)$

or only use future action values:

$L(\phi) = \mathbb{E}\big[-Q_\psi(s_t, A_\phi(s_t)) - w\big(Q_\psi(\tilde{s}_{t+1}, A_\phi(\tilde{s}_{t+1})) + w' Q_\psi(\tilde{s}_{t+2}, A_\phi(\tilde{s}_{t+2}))\big)\big]. \quad (6)$

We compared these two versions with FORK. The performance comparison can be found in Appendix B.3.
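A small sketch of the adaptive weight and threshold gating just described; the clipping follows $(1 - \bar{r}/r_0)_0^1 w_0$, and the numbers are illustrative:

```python
def adaptive_weight(r_bar, r0, w0, system_loss, threshold):
    """w = (1 - r_bar / r0) clipped to [0, 1], scaled by w0; forced to zero
    while the system-network loss exceeds the threshold."""
    if system_loss > threshold:
        return 0.0
    return w0 * min(max(1.0 - r_bar / r0, 0.0), 1.0)

print(adaptive_weight(r_bar=150.0, r0=300.0, w0=0.4,
                      system_loss=0.01, threshold=0.05))  # 0.2
```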
4 EXPERIMENTS

In this section, we evaluate FORK as an add-on to existing algorithms. We name an algorithm with FORK as algorithm-FORK, e.g., TD3-FORK or SAC-FORK. As an example, a detailed description of TD3-FORK can be found in Appendix A.3. We focused on two algorithms, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018b), because they were found to have the best performance among model-free reinforcement learning algorithms in recent benchmarking studies (Duan et al., 2016a; Wang et al., 2019). We compared the performance of TD3-FORK and SAC-FORK with TD3, SAC and DDPG (Lillicrap et al., 2015).

4.1 BOX2D AND MUJOCO ENVIRONMENTS

We selected six environments: BipedalWalker-v3 from Box2D (Catto, 2011), and Ant-v3, Hopper-v3, HalfCheetah-v3, Humanoid-v3 and Walker2d-v3 from MuJoCo (Todorov et al., 2012), as shown in Figure 3. All these environments have continuous state spaces and action spaces.

4.2 IMPLEMENTATION DETAILS

Terminology. Step (or time): one operation, e.g., training Actor with a mini-batch. Episode: a single run of the environment from the beginning to the end, consisting of many steps. Instance: the entire training consisting of multiple episodes.

Hyperparameters. Because FORK is an add-on, for TD3 we used the authors' implementation (https://github.com/sfujim/TD3); for SAC, we used a PyTorch version (https://github.com/vitchyr/rlkit) recommended by the authors, without any change except adding FORK. The hyperparameters of both TD3 and SAC are summarized in Table 3 in Appendix A.4, and the hyperparameters related to FORK are summarized in Table 4 in the same appendix. We can see that TD3-FORK does not require much hyperparameter tuning. The system network and reward network used in the environments are the same, except for Humanoid-v3, for which we use larger system and reward networks because the dimension of the system is higher than in the other systems. The base weight $w_0$ is the same for all environments, the base rewards are the typical cumulative rewards under TD3 after a successful training, and the system thresholds are the typical estimation errors after about 20,000 steps. SAC-FORK requires slightly more hyperparameter tuning. The base weights were chosen to be smaller values, the base rewards are the typical cumulative rewards under SAC, and the system thresholds are the same as those under TD3-FORK.

Initial Exploration. For each task and each algorithm, we use a random policy for exploration for the first 10,000 steps. Each step is one interaction with the environment.

Duration of Experiments. For each environment and each algorithm, we ran five different instances with different random seeds. Since we focus on Actor performance, Actor was trained 0.5 million times for each instance. Since TD3 uses a delayed Actor with frequency 2 (i.e., Actor and Critic are trained at a 1:2 ratio), Critic was trained one million times under TD3 and TD3-FORK. For SAC, SAC-FORK and DDPG, Critic was trained 0.5 million times. The performance with the same amount of total training, including Critic training and Actor training, can be found in Appendix B.2, where for each algorithm, Critic and Actor together were trained 1.5 million times.

4.3 RESULTS

Figure 4 shows the average cumulative rewards, where we evaluated the policies every 5,000 steps without exploration noise during the training process. Each evaluation was averaged over 10 episodes. We trained five different instances for each algorithm, using the same set of random seeds across algorithms. The solid curves show the average cumulative rewards (per episode), and the shaded regions represent the standard deviations. The best average cumulative rewards (the definition can be found in Appendix B.1) are summarized in Table 1. We can see that TD3-FORK outperforms all other algorithms. For Ant-v3, TD3-FORK improves the best average cumulative reward by more than 50% (5699.37 for TD3-FORK versus 3652.11 for TD3).

We also studied the improvement in terms of sample complexity. In Table 2, we summarize the number of Actor training steps required under TD3-FORK (SAC-FORK) to achieve the best average cumulative reward under TD3 (SAC). For example, for BipedalWalker-v3, TD3 achieved its best average cumulative reward with 0.4925 million steps of Actor training; TD3-FORK achieved the same value with only 0.225 million steps of Actor training, reducing the required samples by more than half.
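For concreteness, a hedged sketch of the evaluation protocol described above (a noise-free rollout averaged over 10 episodes, run every 5,000 training steps); `env`, `actor.select_action` and the old-style Gym step API are assumptions here:

```python
def evaluate(env, actor, episodes=10):
    """Average undiscounted return over noise-free episodes; assumes the
    old-style Gym API and a hypothetical actor.select_action method."""
    total = 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = actor.select_action(s)        # deterministic, no noise
            s, r, done, _ = env.step(a)
            total += r
    return total / episodes
```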
In summary, FORK improves the performance of both TD3 and SAC when included as an add-on. The improvement is more significant when added to TD3 than when added to SAC. FORK improves TD3 in all six environments, and improves SAC in three of the six environments. Furthermore, TD3-FORK performs the best in all six environments. More statistics about this set of experiments can be found in Appendix B.1. In Appendix B.2, we also present experimental results where Actor and Critic together receive the same amount of training across all algorithms (i.e., under TD3 and TD3-FORK, Actor was trained 0.5 million times and Critic was trained 1 million times; under the other algorithms, both Actor and Critic were trained 0.75 million times). In this case, TD3-FORK performs the best in four of the six environments, and SAC-FORK performs the best in the remaining two.

4.4 BIPEDALWALKER-HARDCORE-V3

A variation of TD3-FORK can also solve a well-known difficult environment, BipedalWalkerHardcore-v3, in as few as four hours using a single GPU. To the best of our knowledge, the best previously known algorithm needs to train for days on a 72-CPU AWS EC2 instance with 64 worker processes taking raw frames as input (https://github.com/dgriff777/a3c_continuous). The performance on BipedalWalkerHardcore-v3 during and after training can be viewed at https://youtu.be/0nYQpXtxh-Q. The implementation details can be found in Appendix C.

5 CONCLUSIONS

This paper proposes FORK, a forward-looking Actor, as an add-on to Actor-Critic algorithms. The evaluation on six environments demonstrated the significant performance improvements obtained by adding FORK to two state-of-the-art model-free reinforcement learning algorithms. A variation of TD3-FORK further solved BipedalWalkerHardcore in as few as four hours with a single GPU.

A ADDITIONAL DETAILS OF FORK

A.1 REVISED REWARD NETWORK

We found from our experiments that the reward network can more accurately predict the reward $r_t$ when the next state $s_{t+1}$ is included as an input to the reward network. Figure 5 shows the mean-square errors (MSE) of the reward network with $(s_t, a_t)$ as the input versus with $(s_t, a_t, s_{t+1})$ as the input for BipedalWalker-v3 during the first 10,000 steps. We can clearly see that the MSE is lower for the revised reward network.

A.2 ADAPTIVE WEIGHTS VERSUS FIXED WEIGHTS

We compared TD3-FORK with a fixed-weight version, named TD3-FORK-F, where the weight is chosen to be 0.4 (the corresponding figure plots the average return versus timesteps for TD3-FORK-F, TD3-FORK and TD3). TD3-FORK performs the best in four out of the six environments, while TD3-FORK-F has a worse performance than TD3 on Walker2d-v3. We therefore proposed and used the adaptive weight because of this observation.

A.3 TD3-FORK

A detailed description of TD3-FORK can be found in Algorithm 1, and the code is also submitted as supplemental material.

A.4 HYPERPARAMETERS

Table 3 lists the hyperparameters used in DDPG, SAC, SAC-FORK and TD3-FORK. We kept the same hyperparameter values used in the SAC and TD3 codes provided or recommended by the authors. We did not tune these parameters because the goal is to show that FORK is a simple yet powerful add-on to existing Actor-Critic algorithms. Table 4 summarizes the environment-specific parameters: in particular, the base weight and base cumulative reward used in implementing the adaptive weight, and the threshold for adding FORK. The base cumulative rewards for TD3-FORK are the typical cumulative rewards under TD3 after training Actor for 0.5 million steps.
The base cumulative rewards for SAC-FORK are similarly chosen, but with more careful tuning. The thresholds are the typical loss values after training the system networks for about 20,000 steps, including the first 10,000 exploration steps. In the implementation, FORK is added to Actor training only after the system network can predict the next state reasonably well. We observed that TD3-FORK with our intuitive choices of hyperparameters worked well across different environments and required little tuning, while SAC-FORK required some careful tuning in choosing the base weights and the base cumulative rewards.

Algorithm 1 TD3-FORK
1: Initialize critic networks $Q_{\psi_1}$, $Q_{\psi_2}$, system network $F_\theta$, reward network $R_\eta$ and actor network $A_\phi$ with random parameters $\psi_1, \psi_2, \theta, \eta, \phi$
2: Initialize target networks $\phi' \leftarrow \phi$, $\psi'_1 \leftarrow \psi_1$, $\psi'_2 \leftarrow \psi_2$
3: Initialize replay buffer $B$ and soft update parameter $\tau$
4: Initialize base reward $r_0$, base weight $w_0$, threshold $\bar{l}$ and moving average reward $\bar{r} \leftarrow 0$
5: Initialize noise clip bound $c$ and state bounds $(o_{min}, o_{max})$
6: for episode $e = 1, \ldots, M$ do
7:   Initialize observation state $s_0$ and episode reward $r = 0$
8:   for $t = 1, \ldots, T$ do
9:     Select action $a_t$ according to the current policy with exploration noise: $a_t \sim A_\phi(s_t) + \epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0, \sigma)$
10:    Execute action $a_t$, observe reward $r_t$ and new state $s_{t+1}$, and update $r \leftarrow r + r_t$
11:    Store transition tuple $(s_t, a_t, r_t, s_{t+1})$ in replay buffer $B$
12:    Sample a random minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from $B$
13:    $\tilde{a}_i \leftarrow \pi_{\phi'}(s_{i+1}) + \epsilon$, $\epsilon \sim \text{clip}(\mathcal{N}(0, \tilde{\sigma}), -c, c)$
14:    Set $y_i = r_i + \gamma \min_{j=1,2} Q_{\psi'_j}(s_{i+1}, \tilde{a}_i)$
15:    Update the critic networks by minimizing the loss $L(\psi) = \frac{1}{N} \sum_{j=1,2} \sum_i \big(y_i - Q_{\psi_j}(s_i, a_i)\big)^2$
16:    Update the system network by minimizing the loss $L(\theta) = \|s_{i+1} - F_\theta(s_i, a_i)\|_{\text{smooth L1}}$
17:    Update the reward network by minimizing the loss $L(\eta) = \frac{1}{N} \sum_i \big(r_i - R_\eta(s_i, a_i, s_{i+1})\big)^2$
18:    if $t \bmod d = 0$ then
19:      Update $\phi$ by the sampled policy gradient:
20:      if $L(\theta) > \bar{l}$ then
21:        $\nabla_\phi L(\phi) = \frac{1}{N} \sum_i \nabla_a Q_{\psi_1}(s_i, a)\big|_{a=A_\phi(s_i)} \nabla_\phi A_\phi(s_i)$
22:      else
23:        $s'_{i+1} = \text{clip}(F_\theta(s_i, A_\phi(s_i)), o_{min}, o_{max})$, $s'_{i+2} = \text{clip}(F_\theta(s'_{i+1}, A_\phi(s'_{i+1})), o_{min}, o_{max})$
24:        $\nabla_\phi L(\phi) = \frac{1}{N} \sum_i \big( \nabla_a Q_{\psi_1}(s_i, a)\big|_{a=A_\phi(s_i)} \nabla_\phi A_\phi(s_i) + w \nabla_a R_\eta(s_i, a, s'_{i+1})\big|_{a=A_\phi(s_i)} \nabla_\phi A_\phi(s_i) + w\gamma \nabla_a R_\eta(s'_{i+1}, a, s'_{i+2})\big|_{a=A_\phi(s'_{i+1})} \nabla_\phi A_\phi(s'_{i+1}) + w\gamma^2 \nabla_a Q_{\psi_1}(s'_{i+2}, a)\big|_{a=A_\phi(s'_{i+2})} \nabla_\phi A_\phi(s'_{i+2}) \big)$
25:      end if
26:      Update the target networks: $\phi' \leftarrow \tau\phi + (1-\tau)\phi'$, $\psi'_j \leftarrow \tau\psi_j + (1-\tau)\psi'_j$
27:    end if
28:  end for
29:  Update $\bar{r} \leftarrow ((e-1)\bar{r} + r)/e$
30:  Update the adaptive weight $w \leftarrow \min(\max(1 - \bar{r}/r_0,\, 0),\, 1)\, w_0$
31: end for

B ADDITIONAL EXPERIMENTAL RESULTS

B.1 BEST AVERAGE CUMULATIVE REWARD, STANDARD-DEVIATION, AND BEST INSTANCE CUMULATIVE REWARD

Table 5 summarizes the best average cumulative rewards, the associated standard deviations, and the best instance cumulative rewards. They are defined as follows. Recall that each algorithm is trained for five instances, where each instance includes 0.5 million steps of Actor training. During the training process, we evaluated the algorithm every 5,000 steps without exploration noise. For each evaluation, we calculated the average cumulative reward (without discount) over 10 episodes, where each episode has length 0∼1,600 under BipedalWalker-v3; 0∼1,000 under Ant-v3, Walker2d-v3, Hopper-v3 and Humanoid-v3; and exactly 1,000 under HalfCheetah-v3. Now let $X_\tau^{(l)}$ denote the average cumulative reward at the $\tau$-th evaluation of the $l$-th instance. Then:

Best Average Cumulative Reward (Best Average): $\max_\tau \frac{1}{5} \sum_{l=1}^{5} X_\tau^{(l)}$

Standard-Deviation: $\sqrt{\frac{1}{5} \sum_{l=1}^{5} \big(X_\tau^{(l)} - \bar{X}_\tau\big)^2}$, where $\bar{X}_\tau = \frac{1}{5} \sum_{l=1}^{5} X_\tau^{(l)}$

Best Instance Cumulative Reward (Best Instance): $\max_l \max_\tau X_\tau^{(l)}$
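A small NumPy sketch of these three statistics, with $X$ stored as an (instances × evaluations) array of illustrative values:

```python
import numpy as np

def summarize(X):
    """X[l, tau]: average return at the tau-th evaluation of instance l."""
    mean_over_instances = X.mean(axis=0)      # average across the 5 instances
    best_tau = mean_over_instances.argmax()
    best_average = mean_over_instances[best_tau]
    std_at_best = X[:, best_tau].std()        # population std (1/5 inside sqrt)
    best_instance = X.max()
    return best_average, std_at_best, best_instance

X = np.random.rand(5, 100) * 3000             # illustrative values only
print(summarize(X))
```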
B.2 COMPARISON WITH THE SAME AMOUNT OF TOTAL TRAINING

In Section 4, the algorithms were compared assuming the same amount of Actor training, since our focus is on the performance of Actor. Because TD3 uses delayed Actor training, Critic under TD3 and TD3-FORK is trained twice as much as Critic under SAC and SAC-FORK when Actor is trained for the same number of steps, which gives an advantage to TD3 and TD3-FORK. To further compare the performance of TD3-FORK and SAC-FORK, we present results where, for each algorithm, Actor and Critic together were trained 1.5 million steps. In particular, Actor was trained 0.5 million steps and Critic 1 million steps under TD3 and TD3-FORK, while Actor and Critic were trained 0.75 million steps each under SAC and SAC-FORK. The results can be found in Figure 7, which plots the average return versus actor training steps for DDPG, SAC, SAC-FORK, TD3-FORK and TD3.

Table 6 summarizes the best average cumulative rewards, standard deviations, and best instance cumulative rewards. We can see that in terms of the best average cumulative rewards, TD3-FORK performed the best in four out of the six environments (BipedalWalker, Ant, Hopper and HalfCheetah), and SAC-FORK performed the best in the remaining two (Humanoid and Walker2d).

B.3 DIFFERENT IMPLEMENTATIONS OF FORK

As mentioned in Section 3, FORK can have different implementations. We considered two such examples (Equations (5) and (6)) and compared their performance as add-ons to TD3. We call FORK with the loss function in Equation (5) FORK-S, standing for single-step FORK; FORK with the loss function in Equation (6) and $w' = 0.5$ FORK-DQ, standing for Double-Q FORK; and FORK with the loss function in Equation (6) and $w' = 0$ FORK-Q, standing for Q FORK. The corresponding learning curves plot the average cumulative reward versus timesteps for TD3-FORK-S, TD3-FORK-Q, TD3-FORK-DQ, TD3-FORK and TD3. From Table 7, we can see that in terms of the best average cumulative reward, TD3-FORK performs the best in four out of the six environments and TD3-FORK-S performs the best in the remaining two. This is the reason we selected the current form of FORK.

C BIPEDALWALKERHARDCORE

TD3-FORK-DQ can solve the difficult BipedalWalker-Hardcore-v3 environment in as few as four hours. The hardcore version is much more difficult than BipedalWalker. For example, a known algorithm needs to train for days on a 72-CPU AWS EC2 instance with 64 worker processes taking raw frames as input (https://github.com/dgriff777/a3c_continuous). TD3-FORK-DQ, a variation of TD3-FORK, can solve the problem in as few as four hours using the default GPU setting provided by Google Colab (https://colab.research.google.com/notebooks/intro.ipynb) and sensory data (not images). The performance on BipedalWalkerHardcore-v3 during and after training can be viewed at https://youtu.be/0nYQpXtxh-Q. The code has been submitted as supplementary material. To solve BipedalWalkerHardcore, we made several additional changes: (i) We changed the −100 reward to −5. (ii) We increased the other rewards by a factor of 5.
(iii) We implemented a replay buffer in which failed episodes (those where the bipedal walker fell down at the end) and successful episodes are added to the replay buffer at a 5:1 ratio. The changes to the rewards, (i) and (ii), were suggested in a blog post (https://mp.weixin.qq.com/s?__biz=MzA5MDMwMTIyNQ==&mid=2649294554&idx=1&sn=9f893801b8917575779430cae89829fb&scene=21#wechat_redirect). Using reward scaling to improve performance has also been reported in (Henderson et al., 2017). We made change (iii) because we found failed episodes to be more useful for learning than successful ones. The reason, we believe, is that once the bipedal walker already knows how to handle a terrain, there is no need to further train on the same type of terrain. When the training is near the end, most episodes are successful, so adding these successful episodes overwhelms the more useful episodes (the failed ones), which slows down the learning.
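As an illustration of changes (i) and (ii), a hedged sketch of a Gym reward wrapper; the wrapper name is ours, and the 5:1 replay scheme of change (iii) is only indicated in a comment since its exact bookkeeping may differ from the authors' code:

```python
import gym

class HardcoreRewardWrapper(gym.RewardWrapper):
    # Change (i): replace the -100 fall penalty with -5.
    # Change (ii): scale all other rewards by a factor of 5.
    def reward(self, r):
        return -5.0 if r <= -100.0 else 5.0 * r

env = HardcoreRewardWrapper(gym.make("BipedalWalkerHardcore-v3"))

# Change (iii) would additionally bias the replay buffer so that transitions
# from failed episodes are inserted about five times as often as transitions
# from successful ones.
```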
1. What is the focus of the paper regarding reinforcement learning? 2. What are the strengths and weaknesses of the proposed method compared to prior works? 3. How does the reviewer assess the novelty and significance of the proposed approach? 4. What are the limitations of the experimental setup and comparisons in the paper? 5. Are there any concerns regarding the bias in the policy gradient estimation?
Review
Review Summary
This paper focuses on the field of off-policy reinforcement learning. Specifically, the authors propose a model-based reinforcement learning method on top of actor-critic methods. The proposed method trains a dynamics model and a reward function on the off-policy data with supervised learning, and then uses the trained model to generate synthetic future states and rewards during the actor update. During the policy update for a given state, the method computes the sum of the Q value estimate of the state and a Q value expansion for a few steps using the learned dynamics model and reward function. The authors implement the proposed method on top of SAC and TD3, and evaluate its performance on several MuJoCo and Box2D environments. The experiment results show that the proposed method outperforms the model-free baselines in terms of sample efficiency.
Comments
The paper is well written and the idea proposed in this paper is really easy to understand. The authors also include a wide suite of experiments to demonstrate the sample efficiency of the proposed method. Despite these advantages, I cannot recommend acceptance of this paper due to the lack of novelty and the absence of fair baseline comparisons, which I will elaborate on next. First of all, despite the title of the paper, the proposed method is really a model-based reinforcement learning method, since the system network and reward network are just a dynamics model and a reward model. The proposed objective for the policy (Eqn 2) is merely a sum of the current Q value estimate and a Q value expansion for a few steps using the learned model, which has been proposed before in various papers such as [1] and [2]. The only difference is that when computing the gradient with respect to the policy, the authors leave out the gradient that passes through the learned model, which results in a biased estimate of the policy gradient. Therefore, I'm not convinced about the novelty of the proposed method. Moreover, while proposing a model-based method, the authors do not include baseline comparisons with other model-based RL methods. It is widely known that on low-dimensional control tasks, model-based methods outperform model-free methods ([3]), and therefore merely comparing to model-free baselines is unfair. It would be important to include comparisons to model-based methods ([3]). Due to the lack of novelty and fair comparison to existing model-based methods, I cannot recommend acceptance of this paper.
References
[1] Heess, Nicolas, et al. "Learning continuous control policies by stochastic value gradients." Advances in Neural Information Processing Systems. 2015.
[2] Clavera, Ignasi, Yao Fu, and Pieter Abbeel. "Model-Augmented Actor-Critic: Backpropagating through Paths." International Conference on Learning Representations. 2019.
[3] Langlois, Eric, et al. "Benchmarking model-based reinforcement learning." arXiv preprint arXiv:1907.02057 (2019).
ICLR
Title FORK: A FORward-looKing Actor for Model-Free Reinforcement Learning Abstract In this paper, we propose a new type of Actor, named forward-looking Actor or FORK for short, for Actor-Critic algorithms. FORK can be easily integrated into a model-free ActorCritic algorithm. Our experiments on six Box2D and MuJoCo environments with continuous state and action spaces demonstrate significant performance improvement FORK can bring to the state-of-the-art algorithms. A variation of FORK can further solve BipedalWalkerHardcore in as few as four hours using a single GPU. 1 INTRODUCTION Deep reinforcement learning has had tremendous successes, and sometimes even superhuman performance, in a wide range of applications including board games (Silver et al., 2016), video games (Vinyals et al., 2019), and robotics (Haarnoja et al., 2018a). A key to these recent successes is the use of deep neural networks as high-capacity function approximators that can harvest a large amount of data samples to approximate high-dimensional state or action value functions, which tackles one of the most challenging issues in reinforcement learning problems with very large state and action spaces. Many modern reinforcement learning algorithms are model-free, so they are applicable in different environments and can readily react to new and unseen states. This paper considers model-free reinforcement learning for problems with continuous state and action spaces, in particular, the Actor-Critic method, where Critic evaluates the state or action values of the Actor’s policy and Actor improves the policy based on the value estimation from Critic. To draw an analogy between Actor-Critic algorithms and human decision making, consider the scenario where a high school student is deciding on which college to attend after graduation. The student, like Actor, is likely to make her/his decision based on the perceived values of the colleges, where the value of a college is based on many factors including (i) the quality of education it offers, its culture, and diversity, which can be viewed as instantaneous rewards of attending the college; and (ii) the career opportunities after finishing the college, which can be thought as the future cumulative reward. We now take this analogy one step further, in human decision making, we often not only consider the “value” of current state and action, but also further forecast the outcome of the current decision and the value of the next state. In the example above, a student often explicitly takes into consideration the first job she/he may have after finishing college, and the “value” of the first job. Since forward-looking is common in human decision making, we are interested in understanding whether such forward-looking decision making can help Actor; in particular, whether it is useful for Actor to forecast the next state and use the value of future states to improve the policy. To our great surprise, a relative straightforward implementation of forward-looking Actor, as an add-on to existing Actor algorithms, improves Actor’s performance by a large margin. Our new Actor, named FOrward-looKing Actor or FORK for short, mimics human decision making where we think multi-step ahead. In particular, FORK includes a neural network that forecasts the next state given the current state and current action, called system network; and a neural network that forecasts the reward given a (state, action) pair, called reward network. 
With the system network and reward network, Under review as a conference paper at ICLR 2021 FORK can forecast the next state and consider the value of the next state when improving the policy. For example, consider the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), which updates the parameters of Actor as follows: φ← φ+ β∇φQψ(st, Aφ(st)), where st is the state at time t, φ are Actor’s parameters, β is the learning rate, Qψ(s, a) is the Critic network, and Aφ(s) is the Actor network. With DDPG-FORK, the parameters can be updated as follows: φ←φ+ β (∇φQψ(st, Aφ(st)) +∇φRη(st, Aφ(st)) + γ∇φRη(s̃t+1, Aφ(s̃t+1))+ γ2∇φQψ(s̃t+2, Aφ(s̃t+2)) ) , (1) where Rη is the reward network, and s̃t+1 and s̃t+2 are the future states forecast by the system network Fθ. We will see that FORK can be easily incorporated into most deep Actor-Critic algorithms, by adding two additional neural networks (the system network and the reward network), and by adding extra terms to the loss function when training Actor, e.g. adding term Rη(st, Aφ(st)) + γRη(s̃t+1, Aφ(s̃t+1)) + γ 2Qψ(s̃t+2, Aφ(s̃t+2)) for each sampled state st to implement (1). We remark that Equation (1) is just one example of FORK, FORK can have different implementations (a detailed discussion can be found in Section 3). We further remark that learning the system model is not a new idea and has a long history in reinforcement learning, called model-based reinforcement learning (some state-of-the-art model-based reinforcement learning algorithms and the benchmark can be found in (Wang et al., 2019)). Model-based reinforcement learning uses the model in a sophisticated way, often based on deterministic or stochastic optimal control theory to optimize the policy based on the model. FORK only uses the system network as a blackbox to forecast future states, and does not use it as a mathematical model for optimizing control actions. With this key distinction, any model-free Actor-Critic algorithm with FORK remains to be model-free. In our experiments, we added FORK to two state-of-the-art model-free algorithms, according to recent benchmark studies (Duan et al., 2016a; Wang et al., 2019): TD3 (Fujimoto et al., 2018) (for deterministic policies) and SAC (Haarnoja et al., 2018b) (for stochastic policies). The evaluations on six challenging environments with continuous state space and action space show significant improvement when adding FORK. In particular, TD3-FORK performs the best among the all we tested. For Ant-v3, it improves the average cumulative reward by more than 50% than TD3, and achieves TD3’s best performance using only 35% of training samples. BipedalWalker-v3 is considered “solved” when the agent obtains an average cumulative reward of at least 3001. TD3-FORK only needs 0.23 million actor training steps to solve the problem, half of that under TD3. Furthermore, a variation of TD3-FORK solves BipedalWalkerHardcore, a well known difficult environment, with as few as four hours using a single GPU. 1.1 RELATED WORK The idea of using learned models in reinforcement learning is not new, and actually has a long history in reinforcement learning. At a high level, FORK shares a similar spirit as model-based reinforcement learning and rollout. However, in terms of implementation, FORK is very different and much simpler. 
Rollout in general requires the Monte-Carlo method (Silver et al., 2017) to simulate a finite number of future states from the current state and then combines that with value function approximations to decide the action to take at the current time. FORK does not require any high-fidelity simulation. The key distinction between FORK and model-based reinforcement learning is that model-based reinforcement learning uses the learned 1https://github.com/openai/gym/blob/master/gym/envs/box2d/bipedal_walker.py Under review as a conference paper at ICLR 2021 model in a sophisticated manner. For example, in SVG (Heess et al., 2015), the learned system model is integrated as a part of the calculation of the value gradient, in (Gu et al., 2016), refitted local linear model and rollout are used to derive linear-Gaussian controller, and (Bansal et al., 2017) uses a learned dynamical model to compute the trajectory distribution of a given policy and consequently estimates the corresponding cost using a Bayesian optimization-based policy search. More model-based reinforcement learning algorithms and related benchmarking can be found in (Wang et al., 2019). FORK, on the other hand, only uses the system network to predict future states, and does not use the system model beyond that. Other related work that accelerates reinforcement learning algorithms includes: acceleration through exploration strategies (Gupta et al., 2018), optimizers (Duan et al., 2016b), and intrinsic reward (Zheng et al., 2018), just to name a few. These approaches are complementary to ours. FORK can be added to further accelerate learning. 2 BACKGROUND Reinforcement Learning algorithms aim at learning policies that maximize the cumulative reward by interacting with the environment. We consider a standard reinforcement learning setting defined by a Markov decision process (MDP) (S,A, p0, p, r, γ), where S is a set of states, A is the action space, p0(s) is the probability distribution of the initial state, p : S × S ×A → [0,∞) is the transition density function, which represents the distribution of the next state st+1 given current state st and action at, r : S×A → [rmin, rmax] is the bounded reward function on each transition, and γ ∈ (0, 1] is the discount factor. We consider a discrete-time system. At each time step t, given the current st ∈ S, the agent selects an action at ∈ A based on a (deterministic or stochastic) policy π(at|st), which moves the environment to the next state st+1, and yields a reward rt = r(st, at) to the agent. We consider stationary policies in this paper under which the action is taken based on st, and is independent of other historical information. Starting from time 0, the return of given policy π is the discounted cumulative reward Jπ(i) = T∑ t=0 γtr(st, at), given s0 = i. Jπ(i) is also called the state-value function. Our goal is to learn a policy π∗ that maximizes this cumulative reward π∗ ∈ arg max π Jπ(i) ∀i. We assume our policy is parameterized by parameter φ, denoted by πφ, e.g. by the Actor network in ActorCritic Algorithms. In this case, our goal is to identify the optimal parameter φ∗ that maximizes φ∗ ∈ arg maxJπφ(i). Instead of state-value function, it is often convenient to work with action-value function, Q-function, which is defined as follows: Qπ(s, a) = E [rπ(s, a) + γJπ(s ′)] , where s′ is the next state given current state s and action a. The optimal policy is a policy that satisfies the following Bellman equation (Bellman, 1957): Qπ∗(s, a) = E [ r(s, a) + γ max a′∈A Qπ∗(s ′, a′) ] . 
When neural networks are used to approximate action-value functions, we denote the action-value function by Qψ(s, a), where ψ is the parameters of the neural network. Under review as a conference paper at ICLR 2021 3 FORK — FORWARD-LOOKING ACTOR This paper focuses on Actor-Critic algorithms, where Critic estimates the state or action value functions of the current policy, and Actor improves the policy based on the value functions. We propose a new type of Actor, FORK. More precisely, a new training algorithm that improves the policy by considering not only the action-value of the current state (or states of the current mini-batch), but also future states and actions forecast using a learned system model and a learned reward model. This forward-looking Actor is illustrated in Figure 1. In FORK, we introduce two additional neural networks: The system network Fθ. The network is used to predict the next state of the environment, i.e., given current state st and action at, it predicts the next state s̃t+1 = Fθ(st, at). With experiences (st, at, st+1), training the system network is a supervised learning problem. The neural network can be trained using mini-batch from replaybuffer and smooth-L1 loss L(θ) = ‖st+1 − Fθ(st, at)‖smooth L1. The reward network Rη. This network predicts the reward given current state st and action at, i.e. r̃t = Rη(st, at). The network can be trained from experience (st, at, rt), with MSE loss L(η) = ‖rt −Rη(st, at)‖2. FORK. With the system network and the reward network, the agent can forecast the next state, the next next stat and so on. Actor can then use the forecast to improve the policy. For example, we consider the following loss function L(φ) = E [ −Qψ(st, Aφ(st))−Rη(st, Aφ(st))− γRη(s̃t+1, Aφ(s̃t+1))− γ2Qψ (s̃t+2, Aφ(s̃t+2)) ] . (2) In the loss function above, st are from data samples (e.g. replay buffer), s̃t+1 and s̃t+2 are calculated from the system network as shown below: s̃t+1 = Fθ(st, Aφ(st)) and s̃t+2 = Fθ(s̃t+1, Aφ(s̃t+1)). (3) Note that when training Actor Aφ with loss function L(φ), all other parameters in L(φ) are regarded as constants except φ (see the PyTorch code in the supplemental materials). The action-function Q, without function approximation, under current policy Aφ satisfies Q(st, Aφ(s)) = E [ r(st, Aφ(st)) + γr(st+1, Aφ(st+1)) + γ 2Q (st+2, Aφ(st+2)) ] , where r, st+1 and st+2 are the actual rewards and states under the current policy, not estimated values. Therefore, the loss function L(φ) can be viewed as the average of two estimators. Given action values from Critic and with a mini-batch of size N, FORK updates its parameters as follows: φ← φ− βtOθL(φ), where βt is the learning rate and OφL(φ) = 1 N N∑ i=1 ( ∇aQψ(si, a)|a=Aφ(si)∇φAφ(si) +∇aRη(si, a)|a=Aφ(si)∇φAφ(si) +γ∇aRη(s̃′i, a)|a=Aφ(s̃′i)∇φAφ(s̃ ′ i).+ γ 2∇aQψ(s̃′′i , a)|a=Aφ(s̃′′i )∇φAφ(s̃ ′′ i ) ) , Under review as a conference paper at ICLR 2021 where s̃′i and s̃ ′′ i are the next state and the next next state estimated from the system network. We note that it is important to use the system network to generate future states as in Equation (3) because they mimic the states under the current policy. If we would sample a sequence of consecutive states from the replay buffer, then the sequence is from the old policy, which does not help the learning. Figure 2 compares TD3-FORK, TD3, and TD3-MT, which samples a sequence of three consecutive states, on the BipedalWalker-v3 environment. 
We can clearly see that simply using consecutive states from experiences does not help improve learning. In fact, it significantly hurts the learning. Modified Reward Model: We found from our experiments that the reward network can more accurately predict reward rt when including the next state st+1 as input into the reward network (an example can found in Appendix A.1). Therefore, we use a modified reward network Rη(st, at, st+1) in FORK. Adaptive Weight: Loss function L(φ) in our algorithm uses the system network and the reward network to boost learning. In our experiments, we found that the forecasting can significantly improve the performance, except at the end of learning. Since the system and reward networks are not perfect, the errors in prediction can introduce errors/noises. To overcome this issue, we found it is helpful to use an adaptive weight w so that FORK accelerates learning at the beginning but its weight decreases gradually as it gets close to the learning goal. A comparison between fixed weights and adaptive weights can be found in Appendix A.2. We use a simple adaptive weight w = ( r̄ r0 )1 0 w0, where r̄ is the moving average of cumulative reward (per episode), and r0 is a predefined goal, w0 is the initial weight, and (a)10 = a if 0 ≤ a ≤ 1, = 0 if a < 0 and = 1 if a > 1. The loss function with adaptive weight becomes L(φ) = E [ −Qψ(st, Aφ(st))− wRη(st, Aφ(st))− wγRη(s̃t+1, Aφ(s̃t+1))− wγ2Qψ (s̃t+2, Aφ(s̃t+2)) ] . (4) Furthermore, we set a threshold and let w = 0 if the loss of the system network is larger than the threshold. This is to avoid using FORK when the system and reward networks are very noisy. We note that in our experiments, the thresholds were chosen such that w = 0 for around 20, 000 steps at the beginning of each instance, which includes the first 10,000 random exploration steps. Different Implementations of FORK: It is easy to see FORK can be implemented in different forms. For example, instead of two-step ahead, we can use one-step ahead as follows: L(φ) = E [−Qψ(st, Aφ(st))− wRη(st, Aφ(st))− wγQψ (s̃t+1, Aφ(s̃t+1))] , (5) or only use future action values: L(φ) = E [−Qψ(st, Aφ(st))− w (Qψ (s̃t+1, Aφ(s̃t+1)) + w′Qψ (s̃t+2, Aφ(s̃t+2)))] . (6) We compared these two versions with FORK. The performance comparison can be found in Appendix B.3. 4 EXPERIMENTS In this section, we evaluate FORK as an add-on to existing algorithms. We name an algorithm with FORK as algorithm-FORK, e.g. TD3-FORK or SAC-FORK. As an example, a detailed description of TD3-FORK can be found in Appendix A.3. We focused on two algorithms: TD3 (Fujimoto et al., 2018) and SAC (Haarnoja Under review as a conference paper at ICLR 2021 et al., 2018b) because they were found to have the best performance among model-free reinforcement learning algorithms in recent benchmarking studies (Duan et al., 2016a; Wang et al., 2019). We compared the performance of TD3-FORK and SAC-FORK with TD3, SAC and DDPG (Lillicrap et al., 2015). 4.1 BOX2D AND MUJOCO ENVIRONMENTS We selected six environments: BipedalWalker-v3 from Box2D (Catto, 2011), Ant-v3, Hooper-v3, HalfCheetah-v3, Humanoid-v3 and Walker2d-v3 from MuJoCo (Todorov et al., 2012) as shown in Figure 3. All these environments have continuous state spaces and action spaces. 4.2 IMPLEMENTATION DETAILS Terminology. step (or time): one operation, e.g. training Actor with a mini-batch; episode: a single-run of the environment from the beginning to the end, consisting of many steps; and instance: the entire training consisting of multiple episodes. 
4 EXPERIMENTS

In this section, we evaluate FORK as an add-on to existing algorithms. We name an algorithm with FORK algorithm-FORK, e.g., TD3-FORK or SAC-FORK; as an example, a detailed description of TD3-FORK can be found in Appendix A.3. We focused on two algorithms, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018b), because they were found to have the best performance among model-free reinforcement learning algorithms in recent benchmarking studies (Duan et al., 2016a; Wang et al., 2019). We compared the performance of TD3-FORK and SAC-FORK with TD3, SAC, and DDPG (Lillicrap et al., 2015).

4.1 BOX2D AND MUJOCO ENVIRONMENTS

We selected six environments: BipedalWalker-v3 from Box2D (Catto, 2011), and Ant-v3, Hopper-v3, HalfCheetah-v3, Humanoid-v3, and Walker2d-v3 from MuJoCo (Todorov et al., 2012), as shown in Figure 3. All of these environments have continuous state and action spaces.

4.2 IMPLEMENTATION DETAILS

Terminology. Step (or time): one operation, e.g., training Actor with one mini-batch. Episode: a single run of the environment from beginning to end, consisting of many steps. Instance: one entire training run, consisting of multiple episodes.

Hyperparameters. Because FORK is an add-on, for TD3 we used the authors' implementation (https://github.com/sfujim/TD3), and for SAC we used a PyTorch implementation recommended by the authors (https://github.com/vitchyr/rlkit), without any change except adding FORK. The hyperparameters of TD3 and SAC are summarized in Table 3 in Appendix A.4, and the hyperparameters related to FORK are summarized in Table 4 in the same appendix. TD3-FORK does not require much hyperparameter tuning: the system and reward networks are the same across environments, except for Humanoid-v3, where we use larger networks because its state dimension is higher than in the other environments. The base weight w0 is the same for all environments, the base rewards are the typical cumulative rewards under TD3 after successful training, and the system thresholds are the typical estimation errors after about 20,000 steps. SAC-FORK requires slightly more tuning: the base weights were chosen to be smaller, the base rewards are the typical cumulative rewards under SAC, and the system thresholds are the same as those under TD3-FORK.

Initial Exploration. For each task and each algorithm, we use a random policy for exploration during the first 10,000 steps, where each step is one interaction with the environment.

Duration of Experiments. For each environment and each algorithm, we ran five instances with different random seeds. Since we focus on Actor performance, Actor was trained 0.5 million times in each instance. Because TD3 uses a delayed Actor with frequency 2 (i.e., Actor and Critic are trained at a 1:2 ratio), Critic was trained one million times under TD3 and TD3-FORK; under SAC, SAC-FORK, and DDPG, Critic was trained 0.5 million times. Results with the same amount of total training, counting both Critic and Actor training, can be found in Appendix B.2, where for each algorithm Critic and Actor together were trained 1.5 million times.

4.3 RESULTS

Figure 4 shows the average cumulative rewards. We evaluated the policies every 5,000 steps during training, without exploration noise, and averaged each evaluation over 10 episodes (a sketch of this evaluation protocol is given below). We trained five instances per algorithm, using the same set of random seeds across algorithms. The solid curves show the average cumulative rewards (per episode), and the shaded regions represent the standard deviations. The best average cumulative rewards (defined in Appendix B.1) are summarized in Table 1. TD3-FORK outperforms all other algorithms; for Ant-v3, it improves the best average cumulative reward by more than 50% (5699.37 for TD3-FORK versus 3652.11 for TD3).

We also studied the improvement in terms of sample complexity. Table 2 summarizes the amount of Actor training required under TD3-FORK (SAC-FORK) to reach the best average cumulative reward of TD3 (SAC). For example, on BipedalWalker-v3, TD3 reached its best average cumulative reward after 0.4925 million steps of Actor training, whereas TD3-FORK reached the same value after only 0.225 million steps, reducing the required samples by more than half.
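For concreteness, the evaluation protocol described at the start of Section 4.3 amounts to a loop like the following. This is our sketch, not the authors' code; it assumes the older Gym reset/step API, and actor.select_action with a deterministic flag is a hypothetical interface.

```python
import numpy as np

def evaluate_policy(actor, env, episodes=10):
    """Average undiscounted return over several episodes,
    acting deterministically (no exploration noise)."""
    returns = []
    for _ in range(episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            action = actor.select_action(state, deterministic=True)
            state, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return np.mean(returns)

# During training, evaluate every 5,000 steps:
# if step % 5000 == 0:
#     eval_curve.append(evaluate_policy(actor, eval_env))
```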
In summary, FORK improves the performance of both TD3 and SAC when included as an add-on. The improvement is larger for TD3 than for SAC: FORK improves TD3 in all six environments and improves SAC in three of the six. Furthermore, TD3-FORK performs the best in all six environments. More statistics for this set of experiments can be found in Appendix B.1. In Appendix B.2, we also present experimental results in which Actor and Critic together receive the same total amount of training across all algorithms (i.e., under TD3 and TD3-FORK, Actor was trained 0.5 million times and Critic 1 million times, while under the other algorithms Actor and Critic were each trained 0.75 million times). In this case, TD3-FORK performs the best in four of the six environments, and SAC-FORK performs the best in the remaining two.

4.4 BIPEDALWALKER-HARDCORE-V3

A variation of TD3-FORK can also solve a well-known difficult environment, BipedalWalker-Hardcore-v3, in as few as four hours using a single GPU. To the best of our knowledge, the best previously known approach needs to train for days on a 72-CPU AWS EC2 machine with 64 worker processes, taking raw frames as input (https://github.com/dgriff777/a3c_continuous). The performance on BipedalWalker-Hardcore-v3 during and after training can be viewed at https://youtu.be/0nYQpXtxh-Q. Implementation details can be found in Appendix C.

5 CONCLUSIONS

This paper proposes FORK, a forward-looking Actor, as an add-on to Actor-Critic algorithms. The evaluation on six environments demonstrated the significant performance improvements obtained by adding FORK to two state-of-the-art model-free reinforcement learning algorithms. A variation of TD3-FORK further solved BipedalWalker-Hardcore in as few as four hours with a single GPU.

A ADDITIONAL DETAILS OF FORK

A.1 REVISED REWARD NETWORK

We found in our experiments that the reward network predicts the reward rt more accurately when the next state st+1 is included as an input. Figure 5 shows the mean squared error (MSE) of the reward network with (st, at) as input versus with (st, at, st+1) as input on BipedalWalker-v3 during the first 10,000 steps. The MSE is clearly lower for the revised reward network (a module sketch is given below, following A.3).

A.2 ADAPTIVE WEIGHTS VERSUS FIXED WEIGHTS

We compared TD3-FORK with a fixed-weight variant, named TD3-FORK-F, where the weight is fixed at 0.4. Figure 6 compares TD3-FORK-F, TD3-FORK, and TD3. TD3-FORK performs the best in four out of the six environments, while TD3-FORK-F performs worse than TD3 on Walker2d-v3. This observation is why we proposed and use the adaptive weight.

A.3 TD3-FORK

A detailed description of TD3-FORK is given in Algorithm 1, and the code is also submitted as supplemental material.
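Before the hyperparameter details, here is a minimal module sketch of the revised reward model of Appendix A.1, which conditions on the observed next state. The architecture and hidden size are our assumptions, not the paper's exact configuration (which is listed in Tables 3 and 4).

```python
import torch
import torch.nn as nn

class RevisedRewardNet(nn.Module):
    """Revised reward network R_eta(s_t, a_t, s_{t+1}): conditioning on
    the observed next state lowered the prediction MSE in the paper's
    experiments (Appendix A.1)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a, s_next):
        return self.net(torch.cat([s, a, s_next], dim=-1))
```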
A.4 HYPERPARAMETERS

Table 3 lists the hyperparameters used in DDPG, SAC, SAC-FORK, and TD3-FORK. We kept the same hyperparameter values used in the SAC and TD3 code provided or recommended by the authors; we did not tune these parameters, because the goal is to show that FORK is a simple yet powerful add-on to existing Actor-Critic algorithms. Table 4 summarizes the environment-specific parameters, in particular the base weight and base cumulative reward used for the adaptive weight, and the threshold for enabling FORK. The base cumulative rewards for TD3-FORK are the typical cumulative rewards under TD3 after training Actor for 0.5 million steps. The base cumulative rewards for SAC-FORK are chosen similarly, but with more careful tuning. The thresholds are the typical loss values after training the system network for about 20,000 steps, including the first 10,000 exploration steps; in the implementation, FORK is added to Actor training only after the system network can predict the next state reasonably well. We observed that TD3-FORK worked well across environments with our intuitive hyperparameter choices and required little tuning, while SAC-FORK required some careful tuning of the base weights and base cumulative rewards.

Algorithm 1 TD3-FORK
1: Initialize critic networks Qψ1, Qψ2, system network Fθ, reward network Rη, and actor network Aφ with random parameters ψ1, ψ2, θ, η, φ
2: Initialize target networks φ′ ← φ, ψ′1 ← ψ1, ψ′2 ← ψ2
3: Initialize replay buffer B and soft-update parameter τ
4: Initialize base reward r0, base weight w0, threshold l̄, and moving-average reward r̄ ← 0
5: Initialize noise clip bound c and state bounds (omin, omax)
6: for episode e = 1, . . . , M do
7:   Initialize observation state s0 and episode reward r = 0
8:   for t = 1, . . . , T do
9:     Select action at ∼ Aφ(st) + ε according to the current policy with exploration noise ε ∼ N(0, σ)
10:    Execute at; observe reward rt and new state st+1
11:    Store transition (st, at, rt, st+1) in B and update r ← r + rt
12:    Sample a random mini-batch of N transitions (si, ai, ri, si+1) from B
13:    ãi ← Aφ′(si+1) + ε, ε ∼ clip(N(0, σ̃), −c, c)
14:    Set yi = ri + γ min_{j=1,2} Qψ′j(si+1, ãi)
15:    Update the critic networks by minimizing L(ψ) = (1/N) Σ_{j=1,2} Σ_i (yi − Qψj(si, ai))²
16:    Update the system network by minimizing L(θ) = ‖si+1 − Fθ(si, ai)‖smooth L1
17:    Update the reward network by minimizing L(η) = (1/N) Σ_i (ri − Rη(si, ai, si+1))²
18:    if t mod d = 0 then
19:      if L(θ) > l̄ then
20:        Update φ by the standard policy gradient: ∇φL(φ) = (1/N) Σ_i ∇aQψ1(si, a)|a=Aφ(si) ∇φAφ(si)
21:      else
22:        s′i+1 = clip(Fθ(si, Aφ(si)), omin, omax),  s′i+2 = clip(Fθ(s′i+1, Aφ(s′i+1)), omin, omax)
23:        Update φ by ∇φL(φ) = (1/N) Σ_i ( ∇aQψ1(si, a)|a=Aφ(si) ∇φAφ(si) + w ∇aRη(si, a, s′i+1)|a=Aφ(si) ∇φAφ(si) + wγ ∇aRη(s′i+1, a, s′i+2)|a=Aφ(s′i+1) ∇φAφ(s′i+1) + wγ² ∇aQψ1(s′i+2, a)|a=Aφ(s′i+2) ∇φAφ(s′i+2) )
24:      end if
25:      Update the target networks: φ′ ← τφ + (1 − τ)φ′, ψ′j ← τψj + (1 − τ)ψ′j
26:    end if
27:  end for
28:  Update r̄ ← ((e − 1)r̄ + r)/e
29:  Update the adaptive weight w ← (1 − r̄/r0)^1_0 · w0
30: end for

B ADDITIONAL EXPERIMENTAL RESULTS

B.1 BEST AVERAGE CUMULATIVE REWARD, STANDARD DEVIATION, AND BEST INSTANCE CUMULATIVE REWARD

Table 5 summarizes the best average cumulative rewards, the associated standard deviations, and the best instance cumulative rewards, defined as follows. Recall that each algorithm was trained for five instances, each including 0.5 million steps of Actor training. During training we evaluated the algorithm every 5,000 steps without exploration noise, and for each evaluation we computed the average cumulative reward (without discount) over 10 episodes; an episode lasts between 0 and 1,600 steps under BipedalWalker-v3, between 0 and 1,000 steps under Ant-v3, Walker2d-v3, Hopper-v3, and Humanoid-v3, and exactly 1,000 steps under HalfCheetah-v3. Let X^{(l)}_\tau denote the average cumulative reward at the τ-th evaluation of the l-th instance. Then
\text{Best Average Cumulative Reward (Best Average)} = \max_\tau \frac{1}{5}\sum_{l=1}^{5} X^{(l)}_\tau,
\text{Standard Deviation} = \sqrt{\frac{1}{5}\sum_{l=1}^{5}\big(X^{(l)}_\tau - \bar{X}_\tau\big)^2}, \quad \text{where } \bar{X}_\tau = \frac{1}{5}\sum_{l=1}^{5} X^{(l)}_\tau \text{ and } \tau \text{ attains the best average},
\text{Best Instance Cumulative Reward (Best Instance)} = \max_l \max_\tau X^{(l)}_\tau.
A small NumPy sketch of these statistics follows.
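To pin down these definitions, here is a small NumPy sketch computing the three statistics from a matrix X of shape (instances, evaluations). This is our illustration with hypothetical variable names, not the authors' evaluation code.

```python
import numpy as np

def summarize(X):
    """X[l, t]: average cumulative reward of instance l at evaluation t
    (here 5 instances, one evaluation every 5,000 steps)."""
    mean_over_instances = X.mean(axis=0)           # average learning curve
    best_tau = int(np.argmax(mean_over_instances))
    best_average = mean_over_instances[best_tau]   # Best Average
    std = X[:, best_tau].std()                     # divides by 5, matching 1/5 in the formula
    best_instance = X.max()                        # Best Instance
    return best_average, std, best_instance

# Example with random data for 5 instances and 100 evaluations:
X = np.random.rand(5, 100) * 300
print(summarize(X))
```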
B.2 COMPARISON WITH THE SAME AMOUNT OF TOTAL TRAINING

In Section 4, the algorithms were compared with the same amount of Actor training, since our focus is on the performance of Actor. Because TD3 uses delayed Actor training, the Critic of TD3 and TD3-FORK is trained twice as much as the Critic of SAC and SAC-FORK when Actor is trained for the same number of steps, which gives TD3 and TD3-FORK an advantage. To further compare TD3-FORK and SAC-FORK, we present results in which, for each algorithm, Actor and Critic together were trained for 1.5 million steps: under TD3 and TD3-FORK, Actor was trained 0.5 million steps and Critic 1 million steps, while under SAC and SAC-FORK, Actor and Critic were each trained 0.75 million steps. The results are shown in Figure 7, which compares DDPG, SAC, SAC-FORK, TD3-FORK, and TD3. Table 6 summarizes the best average cumulative rewards, standard deviations, and best instance cumulative rewards. In terms of best average cumulative reward, TD3-FORK performed the best in four of the six environments (BipedalWalker, Ant, Hopper, and HalfCheetah), and SAC-FORK performed the best in the remaining two (Humanoid and Walker2d).

B.3 DIFFERENT IMPLEMENTATIONS OF FORK

As mentioned in Section 3, FORK can have different implementations, and we compared two such variants as add-ons to TD3. We call FORK with the loss function of Equation (5) FORK-S, standing for single-step FORK; FORK with the loss function of Equation (6) and w′ = 0.5 FORK-DQ, standing for double-Q FORK; and FORK with the loss function of Equation (6) and w′ = 0 FORK-Q, standing for Q FORK. From Table 7 and the learning curves in Figure 8 (which compares TD3-FORK-S, TD3-FORK-Q, TD3-FORK-DQ, TD3-FORK, and TD3), we see that in terms of best average cumulative reward, TD3-FORK performs the best in four of the six environments and TD3-FORK-S performs the best in the remaining two. This is why we selected the current form of FORK.

C BIPEDALWALKERHARDCORE

TD3-FORK-DQ can solve the difficult BipedalWalker-Hardcore-v3 environment in as few as four hours. The hardcore version is much more difficult than BipedalWalker; for example, a known algorithm needs to train for days on a 72-CPU AWS EC2 machine with 64 worker processes, taking raw frames as input (https://github.com/dgriff777/a3c_continuous). TD3-FORK-DQ, a variation of TD3-FORK, can solve the problem in as few as four hours using the default GPU setting provided by Google Colab² and sensory data (not images). The performance on BipedalWalker-Hardcore-v3 during and after training can be viewed at https://youtu.be/0nYQpXtxh-Q, and the code is submitted as supplementary material. To solve BipedalWalker-Hardcore, we made several additional changes, described below; a sketch of the modified replay buffer follows the discussion.

²https://colab.research.google.com/notebooks/intro.ipynb
(i) We changed the −100 reward to −5. (ii) We increased the other rewards by a factor of 5. (iii) We implemented a replay buffer in which failed episodes, where the BipedalWalker fell down at the end, and successful episodes are added at a 5:1 ratio.

The changes to the rewards, (i) and (ii), were suggested in a blog post³; using reward scaling to improve performance has also been reported in (Henderson et al., 2017). We made change (iii) because we found that failed episodes are more useful for learning than successful ones. We believe the reason is that once the BipedalWalker already knows how to handle a terrain, there is little need to keep training on the same type of terrain; near the end of training, most episodes are successful, so adding all of them overwhelms the more useful (failed) episodes and slows down learning.

³https://mp.weixin.qq.com/s?__biz=MzA5MDMwMTIyNQ==&mid=2649294554&idx=1&sn=9f893801b8917575779430cae89829fb&scene=21#wechat_redirect
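As an illustration of change (iii), here is a minimal sketch of such a buffer. This is one possible reading of the 5:1 rule (biasing the buffer toward failed episodes by admitting only one in five successful episodes), not the authors' code; the class and method names are hypothetical.

```python
import random
from collections import deque

class FailureBiasedBuffer:
    """Replay buffer biased toward failed episodes (walker fell at the
    end): only one out of every five successful episodes is stored,
    one reading of the paper's 5:1 failed-to-successful ratio."""
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)
        self.success_count = 0

    def add_episode(self, transitions, failed):
        if not failed:
            self.success_count += 1
            if self.success_count % 5 != 0:  # drop 4 of every 5 successes
                return
        self.buffer.extend(transitions)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)
```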
1. What is the focus of the paper regarding off-policy algorithms?
2. What are the strengths of the proposed approach, particularly in improving training sample efficiency and policy performance?
3. What are the weaknesses of the paper, especially regarding the necessity of the proposed networks?
4. How does the reviewer suggest addressing the concerns about the networks?
Review
Summary: The authors propose a simple modification to popular off-policy algorithms such as SAC and TD3. By employing a model network and a reward network, they can expand the Bellman update using the predicted next few steps, similar to GAE (generalized advantage estimation). The authors demonstrate that algorithms such as TD3, DDPG, and SAC can all benefit from their approach.

Pros: The paper is clearly written and easy to understand. The concept is simple and the implementation is straightforward. The results indicate that both the training sample efficiency and the final policy performance of the tested algorithms improve on six benchmark tasks, compared with their "vanilla" baselines.

Cons: While it is generally understandable that GAE-like approaches can help balance bias and variance in Q-value estimation, I am not convinced that the proposed networks are needed. The authors use two additional networks: the model network (which they call the "system network") and the reward network. The model network computes the standard transition s_t, a_t -> s_{t+1}, and the reward network estimates the true reward r = R(s_t, a_t, s_{t+1}). However, since the reward function is generally provided, I don't see why an additional reward network is necessary here. Second, in off-policy learning the state transitions and next states are already recorded; instead of using a model network to predict the next states, one could simply sample a short trajectory containing multiple consecutive state-action transitions and use those to estimate the Q-values.

Recommendations: To address the concerns above, I propose that the authors add two more ablation studies: (1) remove the reward network and use only the reward function; (2) remove both the reward network and the model/system network, and use the recorded future states to estimate the Q-values.
1. What are the concerns regarding the correctness of the proposed method in equation 2?
2. How does the reviewer think the method FORK updates the policy, and what is the issue with the greedy reward maximization?
3. Why does the reviewer disagree with the author's claim that FORK is simpler than other model-based algorithms?
4. What is the reviewer's opinion on the experimental evaluation of the method, and what changes would they suggest?
5. What is the novelty and significance of the proposed method, and how does it compare to other model-based RL algorithms?
Review
Correctness Issue: The loss in Equation 2 is the sum of the regular actor loss using the critic and an n-step-return version of the policy value, which would be reasonable for a policy update. However, the expression provided for the gradient (and the calculations in the code) is incorrect in that it is not the gradient of Equation 2 with respect to the policy. In particular, it is not reflective of policy performance, as it fails to account for the fact that the future states in the imagined rollout are also functions of the policy. The resulting policy update in FORK actually consists of:

1. a regular actor update using the critic at the current state;
2. an update to greedily maximize the reward at each intermediate timestep (which does not reflect the policy performance);
3. another regular actor update from the last state in the rollout.

The straightforward way of correcting this so that the update optimizes the stated loss would be to simply differentiate through the model, which may not work well, as differentiating through learned models often leads to instability (though it may be fine for such short rollouts). Alternatively, a REINFORCE estimator can be used for the gradient of the n-step term for stochastic policies. We note that the loss in Eq. 6 (if we keep the dependence of the future states on the policy) is also not reflective of policy returns (since it overweights future returns relative to the present), but it provides a reasonable interpretation of the FORK update if we drop the dependence of the future states on the policy, as the authors do. The gradient of the FORK-Q variant the authors take from Eq. 6 is actually the update we would obtain with the regular actor update (the ∇_θ π_θ(s) Q(s, π(s)) term for deterministic policies) but under a different state distribution: instead of the state distribution in the replay buffer, the update uses the distribution obtained by running the policy for a few steps starting from the buffer distribution. Since running the policy from the buffer distribution yields a distribution closer to the on-policy state distribution, one explanation for why the FORK-Q update improves performance is that the gradient is closer to the on-policy policy gradient thanks to the changed state distribution. The same reasoning applies to the primary FORK update presented, as it includes an actor update on the value at future states (the third term in the list above). The issue is the greedy reward maximization at the inner steps (the second term), but maybe it simply doesn't hurt on the environments tested, or perhaps it exploits a bias-variance tradeoff by greedily maximizing the immediate reward for a few intermediate timesteps. I would like the authors to explicitly address these issues in the paper and present a clear explanation of why the FORK update should be a better policy update.

Relation to Model-Based RL Methods: Overall, I also disagree with the authors' claims that FORK is very different from, and much simpler than, other model-based algorithms. Regarding their comment that their method is somehow simpler than rollout-based methods: their policy update uses a short Monte-Carlo rollout to estimate the policy gradient, the difference being that the rollouts are used to update a policy rather than to explicitly plan at test time. The authors' claim that FORK does not require high-fidelity simulation seems unsupported to me, and the claim that model-based RL algorithms use the model in a sophisticated way is vague.
In particular, Dyna-style algorithms (like STEVE, https://arxiv.org/abs/1807.01675, and MBPO, https://arxiv.org/abs/1906.08253), which use the model to generate experience to help learn the critic, seem to be using the model in the same way as FORK (in the sense that they only use the model to generate samples with short rollouts). There is also no discussion of methods like ME-TRPO (https://arxiv.org/abs/1802.10592), SLBO (https://arxiv.org/abs/1807.03858), or the algorithms in https://arxiv.org/abs/2004.07804, all of which only use the model to generate trajectories for a policy gradient algorithm. Overall, I would appreciate much more discussion of how FORK relates to, and compares empirically against, past RL algorithms that only use the model to generate samples, as well as clarification of the statement that FORK uses the model in a less sophisticated way. On a separate note, using the model to generate n-step-return estimates of the policy value has been done previously, for example in https://arxiv.org/abs/1807.01675. The key difference is that prior work used such estimates to generate target values for learning the Q-function, while here they are used purely for policy updates. Given how similarly the models are used, however, I would recommend discussing this line of work explicitly in the related work, even though the approaches are complementary.

Experimental Evaluation: Despite the aforementioned correctness issue, the method seems to provide improvements when applied to TD3, and seemingly smaller improvements on top of SAC. However, I find it extremely strange that the authors chose, in Figure 4, to plot returns against the number of training steps instead of the number of samples. As acknowledged by the authors, this makes the SAC-vs-TD3 comparisons incomparable, with TD3 and TD3-FORK enjoying the advantage of having seen twice as many samples. Moreover, comparing only on the number of actor/critic updates isn't even a fair comparison between FORK and the baselines, as FORK additionally has to train a dynamics model and a reward predictor. I would highly recommend simply showing learning curves with respect to the actual number of environment samples, rather than arbitrarily using the number of actor updates. I would also like to see comparisons against model-based RL baselines, particularly MBPO, which uses a Dyna-style update with SAC, to compare which way of utilizing the model is better. The authors could also check whether the FORK actor update further improves upon MBPO or other model-based RL methods.

Summary: As it stands, I believe the paper should be rejected due to the correctness issues (and the resulting lack of justification for why FORK should give better actor updates) and the insufficient discussion of how it relates to prior work in model-based RL. To consider accepting the paper, I would at least need to see these issues addressed by the authors. Regarding novelty and significance, using a model to predict n-step value estimates (as the paper claims to be doing), or using the model to explicitly adjust the state distribution of the policy update (as I suspect it might be doing instead), has not, as far as I know, been done before for the policy update in an off-policy actor-critic algorithm. However, this change in how the model is used seems fairly minor, and to be convinced it were useful, I would like to see evidence of how it compares against the other model-based RL algorithms.
In particular, the benefit I imagine it might have over other model-based RL methods is in being more robust to poorly fit models, but I would need to see empirical evidence supporting this.
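To make the correctness point concrete, the following minimal PyTorch sketch (the network handles and names are placeholders of my own, not the authors' code) contrasts the update FORK actually computes with the pathwise gradient of eq. 2 obtained by differentiating through the model:

import torch

def fork_update_as_implemented(s, actor, critic, f_model, r_model, gamma=0.99):
    # What FORK computes: future states are detached, so the gradient
    # ignores that s1 and s2 are themselves functions of the policy.
    a0 = actor(s)
    s1 = f_model(s, a0).detach()
    a1 = actor(s1)
    s2 = f_model(s1, a1).detach()
    return -(critic(s, a0) + r_model(s, a0)
             + gamma * r_model(s1, a1)
             + gamma ** 2 * critic(s2, actor(s2))).mean()

def pathwise_gradient_of_eq2(s, actor, critic, f_model, r_model, gamma=0.99):
    # The straightforward correction: keep the rollout differentiable, so
    # the gradient accounts for the policy's effect on future states (at
    # the cost of the usual instability of backprop through learned models).
    a0 = actor(s)
    s1 = f_model(s, a0)
    a1 = actor(s1)
    s2 = f_model(s1, a1)
    return -(critic(s, a0) + r_model(s, a0)
             + gamma * r_model(s1, a1)
             + gamma ** 2 * critic(s2, actor(s2))).mean()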
ICLR
Title FORK: A FORward-looKing Actor for Model-Free Reinforcement Learning Abstract In this paper, we propose a new type of Actor, named forward-looking Actor, or FORK for short, for Actor-Critic algorithms. FORK can be easily integrated into a model-free Actor-Critic algorithm. Our experiments on six Box2D and MuJoCo environments with continuous state and action spaces demonstrate the significant performance improvement FORK can bring to the state-of-the-art algorithms. A variation of FORK can further solve BipedalWalkerHardcore in as few as four hours using a single GPU.
1 INTRODUCTION
Deep reinforcement learning has had tremendous successes, and sometimes even superhuman performance, in a wide range of applications including board games (Silver et al., 2016), video games (Vinyals et al., 2019), and robotics (Haarnoja et al., 2018a). A key to these recent successes is the use of deep neural networks as high-capacity function approximators that can harvest a large amount of data samples to approximate high-dimensional state or action value functions, which tackles one of the most challenging issues in reinforcement learning problems with very large state and action spaces. Many modern reinforcement learning algorithms are model-free, so they are applicable in different environments and can readily react to new and unseen states. This paper considers model-free reinforcement learning for problems with continuous state and action spaces, in particular the Actor-Critic method, where Critic evaluates the state or action values of the Actor's policy and Actor improves the policy based on the value estimation from Critic. To draw an analogy between Actor-Critic algorithms and human decision making, consider the scenario where a high school student is deciding on which college to attend after graduation. The student, like Actor, is likely to make her/his decision based on the perceived values of the colleges, where the value of a college is based on many factors, including (i) the quality of education it offers, its culture, and diversity, which can be viewed as instantaneous rewards of attending the college; and (ii) the career opportunities after finishing the college, which can be thought of as the future cumulative reward. We now take this analogy one step further: in human decision making, we often not only consider the "value" of the current state and action, but also further forecast the outcome of the current decision and the value of the next state. In the example above, a student often explicitly takes into consideration the first job she/he may have after finishing college, and the "value" of that first job. Since forward-looking is common in human decision making, we are interested in understanding whether such forward-looking decision making can help Actor; in particular, whether it is useful for Actor to forecast the next state and use the value of future states to improve the policy. To our great surprise, a relatively straightforward implementation of a forward-looking Actor, as an add-on to existing Actor algorithms, improves Actor's performance by a large margin. Our new Actor, named FOrward-looKing Actor, or FORK for short, mimics human decision making where we think multiple steps ahead. In particular, FORK includes a neural network that forecasts the next state given the current state and current action, called the system network; and a neural network that forecasts the reward given a (state, action) pair, called the reward network.
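To fix ideas, the two forecasting networks can be sketched as small PyTorch modules; the architecture below (two hidden layers of width 256) is an illustrative assumption, not a prescription from the paper:

import torch
import torch.nn as nn

class SystemNetwork(nn.Module):
    # Forecasts the next state from the current (state, action) pair.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class RewardNetwork(nn.Module):
    # Forecasts the scalar reward for a (state, action) pair.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))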
With the system network and the reward network, FORK can forecast the next state and consider the value of the next state when improving the policy. For example, consider the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), which updates the parameters of Actor as follows: φ ← φ + β∇φQψ(st, Aφ(st)), where st is the state at time t, φ are Actor's parameters, β is the learning rate, Qψ(s, a) is the Critic network, and Aφ(s) is the Actor network. With DDPG-FORK, the parameters can be updated as follows:
φ ← φ + β(∇φQψ(st, Aφ(st)) + ∇φRη(st, Aφ(st)) + γ∇φRη(s̃t+1, Aφ(s̃t+1)) + γ²∇φQψ(s̃t+2, Aφ(s̃t+2))), (1)
where Rη is the reward network, and s̃t+1 and s̃t+2 are the future states forecast by the system network Fθ. We will see that FORK can be easily incorporated into most deep Actor-Critic algorithms, by adding two additional neural networks (the system network and the reward network), and by adding extra terms to the loss function when training Actor, e.g. adding the term Rη(st, Aφ(st)) + γRη(s̃t+1, Aφ(s̃t+1)) + γ²Qψ(s̃t+2, Aφ(s̃t+2)) for each sampled state st to implement (1). We remark that Equation (1) is just one example of FORK; FORK can have different implementations (a detailed discussion can be found in Section 3). We further remark that learning the system model is not a new idea and has a long history in reinforcement learning, called model-based reinforcement learning (some state-of-the-art model-based reinforcement learning algorithms and the benchmark can be found in (Wang et al., 2019)). Model-based reinforcement learning uses the model in a sophisticated way, often based on deterministic or stochastic optimal control theory, to optimize the policy based on the model. FORK only uses the system network as a blackbox to forecast future states, and does not use it as a mathematical model for optimizing control actions. With this key distinction, any model-free Actor-Critic algorithm with FORK remains model-free. In our experiments, we added FORK to two state-of-the-art model-free algorithms, according to recent benchmark studies (Duan et al., 2016a; Wang et al., 2019): TD3 (Fujimoto et al., 2018) (for deterministic policies) and SAC (Haarnoja et al., 2018b) (for stochastic policies). The evaluations on six challenging environments with continuous state and action spaces show significant improvement when adding FORK. In particular, TD3-FORK performs the best among all the algorithms we tested. For Ant-v3, it improves the average cumulative reward by more than 50% over TD3, and achieves TD3's best performance using only 35% of the training samples. BipedalWalker-v3 is considered "solved" when the agent obtains an average cumulative reward of at least 300 (see https://github.com/openai/gym/blob/master/gym/envs/box2d/bipedal_walker.py). TD3-FORK needs only 0.23 million Actor training steps to solve the problem, half of the number needed by TD3. Furthermore, a variation of TD3-FORK solves BipedalWalkerHardcore, a well-known difficult environment, in as few as four hours using a single GPU.
1.1 RELATED WORK
The idea of using learned models in reinforcement learning is not new, and actually has a long history in reinforcement learning. At a high level, FORK shares a spirit similar to model-based reinforcement learning and rollout. However, in terms of implementation, FORK is very different and much simpler.
Rollout in general requires the Monte-Carlo method (Silver et al., 2017) to simulate a finite number of future states from the current state, and then combines that with value function approximations to decide the action to take at the current time. FORK does not require any high-fidelity simulation. The key distinction between FORK and model-based reinforcement learning is that model-based reinforcement learning uses the learned model in a sophisticated manner. For example, in SVG (Heess et al., 2015), the learned system model is integrated as a part of the calculation of the value gradient; in (Gu et al., 2016), a refitted local linear model and rollouts are used to derive a linear-Gaussian controller; and (Bansal et al., 2017) uses a learned dynamical model to compute the trajectory distribution of a given policy and consequently estimates the corresponding cost using a Bayesian optimization-based policy search. More model-based reinforcement learning algorithms and related benchmarking can be found in (Wang et al., 2019). FORK, on the other hand, only uses the system network to predict future states, and does not use the system model beyond that. Other related work that accelerates reinforcement learning algorithms includes: acceleration through exploration strategies (Gupta et al., 2018), optimizers (Duan et al., 2016b), and intrinsic reward (Zheng et al., 2018), just to name a few. These approaches are complementary to ours; FORK can be added to further accelerate learning.
2 BACKGROUND
Reinforcement learning algorithms aim at learning policies that maximize the cumulative reward by interacting with the environment. We consider a standard reinforcement learning setting defined by a Markov decision process (MDP) (S, A, p0, p, r, γ), where S is a set of states, A is the action space, p0(s) is the probability distribution of the initial state, p : S × S × A → [0, ∞) is the transition density function, which represents the distribution of the next state st+1 given current state st and action at, r : S × A → [rmin, rmax] is the bounded reward function on each transition, and γ ∈ (0, 1] is the discount factor. We consider a discrete-time system. At each time step t, given the current state st ∈ S, the agent selects an action at ∈ A based on a (deterministic or stochastic) policy π(at|st), which moves the environment to the next state st+1 and yields a reward rt = r(st, at) to the agent. We consider stationary policies in this paper, under which the action is taken based on st and is independent of other historical information. Starting from time 0, the return of a given policy π is the discounted cumulative reward Jπ(i) = Σ_{t=0}^{T} γ^t r(st, at), given s0 = i. Jπ(i) is also called the state-value function. Our goal is to learn a policy π* that maximizes this cumulative reward: π* ∈ arg max_π Jπ(i) for all i. We assume our policy is parameterized by a parameter φ, denoted by πφ, e.g. by the Actor network in Actor-Critic algorithms. In this case, our goal is to identify the optimal parameter φ* ∈ arg max_φ Jπφ(i). Instead of the state-value function, it is often convenient to work with the action-value function, the Q-function, which is defined as follows: Qπ(s, a) = E[r(s, a) + γJπ(s′)], where s′ is the next state given current state s and action a. The optimal policy is a policy that satisfies the following Bellman equation (Bellman, 1957): Qπ*(s, a) = E[r(s, a) + γ max_{a′∈A} Qπ*(s′, a′)].
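For concreteness, a minimal sketch of the sampled one-step target used when fitting an approximate Q-function (a generic TD target; the clipped double-Q variant used by TD3 appears in Algorithm 1 in the appendix):

import torch

@torch.no_grad()
def td_target(reward, next_state, done, target_actor, target_critic, gamma=0.99):
    # y = r + gamma * Q'(s', a'), with a' from the target policy;
    # `done` (0 or 1) masks out bootstrapping at terminal states.
    next_action = target_actor(next_state)
    return reward + gamma * (1.0 - done) * target_critic(next_state, next_action)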
When neural networks are used to approximate action-value functions, we denote the action-value function by Qψ(s, a), where ψ denotes the parameters of the neural network.
3 FORK — FORWARD-LOOKING ACTOR
This paper focuses on Actor-Critic algorithms, where Critic estimates the state or action value functions of the current policy, and Actor improves the policy based on the value functions. We propose a new type of Actor, FORK — more precisely, a new training algorithm that improves the policy by considering not only the action-value of the current state (or the states of the current mini-batch), but also future states and actions forecast using a learned system model and a learned reward model. This forward-looking Actor is illustrated in Figure 1. In FORK, we introduce two additional neural networks. The system network Fθ: this network is used to predict the next state of the environment, i.e., given current state st and action at, it predicts the next state s̃t+1 = Fθ(st, at). With experiences (st, at, st+1), training the system network is a supervised learning problem. The network can be trained using mini-batches from the replay buffer and the smooth-L1 loss L(θ) = ‖st+1 − Fθ(st, at)‖_{smooth L1}. The reward network Rη: this network predicts the reward given current state st and action at, i.e. r̃t = Rη(st, at). The network can be trained from experiences (st, at, rt) with the MSE loss L(η) = ‖rt − Rη(st, at)‖². FORK: with the system network and the reward network, the agent can forecast the next state, the state after that, and so on. Actor can then use the forecast to improve the policy. For example, we consider the following loss function:
L(φ) = E[−Qψ(st, Aφ(st)) − Rη(st, Aφ(st)) − γRη(s̃t+1, Aφ(s̃t+1)) − γ²Qψ(s̃t+2, Aφ(s̃t+2))]. (2)
In the loss function above, st comes from data samples (e.g. the replay buffer), and s̃t+1 and s̃t+2 are calculated from the system network as shown below:
s̃t+1 = Fθ(st, Aφ(st)) and s̃t+2 = Fθ(s̃t+1, Aφ(s̃t+1)). (3)
Note that when training Actor Aφ with loss function L(φ), all other parameters in L(φ) are regarded as constants except φ (see the PyTorch code in the supplemental materials). The action-value function Q, without function approximation, under the current policy Aφ satisfies Q(st, Aφ(st)) = E[r(st, Aφ(st)) + γr(st+1, Aφ(st+1)) + γ²Q(st+2, Aφ(st+2))], where r, st+1 and st+2 are the actual rewards and states under the current policy, not estimated values. Therefore, the loss function L(φ) can be viewed as the average of two estimators. Given action values from Critic and a mini-batch of size N, FORK updates its parameters as follows: φ ← φ − βt∇φL(φ), where βt is the learning rate and
∇φL(φ) = −(1/N) Σ_{i=1}^{N} (∇aQψ(si, a)|_{a=Aφ(si)} ∇φAφ(si) + ∇aRη(si, a)|_{a=Aφ(si)} ∇φAφ(si) + γ∇aRη(s̃′i, a)|_{a=Aφ(s̃′i)} ∇φAφ(s̃′i) + γ²∇aQψ(s̃″i, a)|_{a=Aφ(s̃″i)} ∇φAφ(s̃″i)),
where s̃′i and s̃″i are the next state and the state after that, estimated from the system network. We note that it is important to use the system network to generate future states as in Equation (3), because they mimic the states under the current policy. If we instead sampled a sequence of consecutive states from the replay buffer, the sequence would come from an old policy, which does not help the learning.
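A minimal PyTorch sketch of the loss in Equation (2) follows (illustrative; the exact implementation is in the supplemental code). Consistent with the remark above, the forecast states are detached so that everything except φ is treated as a constant:

import torch

def fork_actor_loss(s, actor, critic, system_net, reward_net, gamma=0.99):
    a0 = actor(s)
    # Forecast future states with the system network, as in Equation (3).
    # detach(): forecast states are treated as constants w.r.t. the Actor
    # parameters, matching the remark above Equation (3).
    s1 = system_net(s, a0).detach()
    a1 = actor(s1)
    s2 = system_net(s1, a1).detach()
    a2 = actor(s2)
    # Negative of the two-step look-ahead objective in Equation (2).
    return -(critic(s, a0) + reward_net(s, a0)
             + gamma * reward_net(s1, a1)
             + gamma ** 2 * critic(s2, a2)).mean()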
Figure 2 compares TD3-FORK, TD3, and TD3-MT, which samples a sequence of three consecutive states, on the BipedalWalker-v3 environment. We can clearly see that simply using consecutive states from past experiences does not help improve learning; in fact, it significantly hurts the learning. Modified Reward Model: We found from our experiments that the reward network can more accurately predict the reward rt when the next state st+1 is included as an input to the reward network (an example can be found in Appendix A.1). Therefore, we use a modified reward network Rη(st, at, st+1) in FORK. Adaptive Weight: Loss function L(φ) in our algorithm uses the system network and the reward network to boost learning. In our experiments, we found that the forecasting can significantly improve the performance, except at the end of learning. Since the system and reward networks are not perfect, the errors in prediction can introduce errors/noise. To overcome this issue, we found it helpful to use an adaptive weight w, so that FORK accelerates learning at the beginning but its weight decreases gradually as the agent gets close to the learning goal. A comparison between fixed weights and adaptive weights can be found in Appendix A.2. We use a simple adaptive weight w = clip(1 − r̄/r0, 0, 1) · w0, where r̄ is the moving average of the cumulative reward (per episode), r0 is a predefined goal, w0 is the initial weight, and clip(a, 0, 1) = a if 0 ≤ a ≤ 1, 0 if a < 0, and 1 if a > 1. The loss function with the adaptive weight becomes
L(φ) = E[−Qψ(st, Aφ(st)) − wRη(st, Aφ(st)) − wγRη(s̃t+1, Aφ(s̃t+1)) − wγ²Qψ(s̃t+2, Aφ(s̃t+2))]. (4)
Furthermore, we set a threshold and let w = 0 if the loss of the system network is larger than the threshold. This is to avoid using FORK when the system and reward networks are very noisy. We note that in our experiments, the thresholds were chosen such that w = 0 for around 20,000 steps at the beginning of each instance, which includes the first 10,000 random exploration steps.
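A one-line sketch of the adaptive weight, written to match the update in Algorithm 1 (line 31); the running average r̄ is assumed to be maintained elsewhere:

def adaptive_weight(avg_return, base_return, w0):
    # w = min(1 - max(0, r_bar / r0), 1) * w0, as in Algorithm 1 (line 31):
    # w starts at w0 and fades as the running return approaches the goal r0.
    return min(1.0 - max(0.0, avg_return / base_return), 1.0) * w0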
Different Implementations of FORK: It is easy to see that FORK can be implemented in different forms. For example, instead of looking two steps ahead, we can look one step ahead as follows:
L(φ) = E[−Qψ(st, Aφ(st)) − wRη(st, Aφ(st)) − wγQψ(s̃t+1, Aφ(s̃t+1))], (5)
or use only future action values:
L(φ) = E[−Qψ(st, Aφ(st)) − w(Qψ(s̃t+1, Aφ(s̃t+1)) + w′Qψ(s̃t+2, Aφ(s̃t+2)))]. (6)
We compared these two versions with FORK; the performance comparison can be found in Appendix B.3.
4 EXPERIMENTS
In this section, we evaluate FORK as an add-on to existing algorithms. We name an algorithm with FORK as algorithm-FORK, e.g. TD3-FORK or SAC-FORK. As an example, a detailed description of TD3-FORK can be found in Appendix A.3. We focused on two algorithms, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018b), because they were found to have the best performance among model-free reinforcement learning algorithms in recent benchmarking studies (Duan et al., 2016a; Wang et al., 2019). We compared the performance of TD3-FORK and SAC-FORK with TD3, SAC and DDPG (Lillicrap et al., 2015).
4.1 BOX2D AND MUJOCO ENVIRONMENTS
We selected six environments: BipedalWalker-v3 from Box2D (Catto, 2011), and Ant-v3, Hopper-v3, HalfCheetah-v3, Humanoid-v3 and Walker2d-v3 from MuJoCo (Todorov et al., 2012), as shown in Figure 3. All these environments have continuous state spaces and action spaces.
4.2 IMPLEMENTATION DETAILS
Terminology. Step (or time): one operation, e.g. training Actor with a mini-batch; episode: a single run of the environment from the beginning to the end, consisting of many steps; instance: the entire training run, consisting of multiple episodes. Hyperparameters. Because FORK is an add-on, for TD3 we used the authors' implementation (https://github.com/sfujim/TD3); for SAC, we used a PyTorch version (https://github.com/vitchyr/rlkit) recommended by the authors, without any change except adding FORK. The hyperparameters of both TD3 and SAC are summarized in Table 3 in Appendix A.4, and the hyperparameters related to FORK are summarized in Table 4 in the same appendix. We can see that TD3-FORK does not require much hyperparameter tuning. The system network and reward network used in the environments are the same except for Humanoid-v3, for which we use larger system and reward networks because the dimension of the system is higher than in the other environments. The base weight w0 is the same for all environments, the base rewards are the typical cumulative rewards under TD3 after a successful training, and the system thresholds are the typical estimation errors after about 20,000 steps. SAC-FORK requires slightly more hyperparameter tuning: the base weights were chosen to be smaller values, the base rewards are the typical cumulative rewards under SAC, and the system thresholds are the same as those under TD3-FORK. Initial Exploration. For each task and each algorithm, we use a random policy for exploration for the first 10,000 steps. Each step is one interaction with the environment. Duration of Experiments. For each environment and each algorithm, we ran five different instances with different random seeds. Since we focus on Actor performance, Actor was trained for 0.5 million steps in each instance. Since TD3 uses a delayed Actor with frequency 2 (i.e. Actor and Critic are trained at a 1:2 ratio), Critic was trained one million times under TD3 and TD3-FORK. For SAC, SAC-FORK and DDPG, Critic was trained 0.5 million times. The performance with the same amount of total training, including Critic training and Actor training, can be found in Appendix B.2, where for each algorithm Critic and Actor together were trained 1.5 million times.
4.3 RESULTS
Figure 4 shows the average cumulative rewards, where we evaluated the policies every 5,000 steps without exploration noise during the training process. Each evaluation was averaged over 10 episodes. We trained five instances of each algorithm, using the same set of random seeds across algorithms. The solid curves show the average cumulative rewards (per episode), and the shaded regions represent the standard deviations. The best average cumulative rewards (the definition can be found in Appendix B.1) are summarized in Table 1. We can see that TD3-FORK outperforms all other algorithms. For Ant-v3, TD3-FORK improves the best average cumulative reward by more than 50% (5699.37 for TD3-FORK versus 3652.11 for TD3). We also studied the improvement in terms of sample complexity. In Table 2, we summarized the number of Actor training steps required under TD3-FORK (SAC-FORK) to achieve the best average cumulative reward under TD3 (SAC). For example, for BipedalWalker-v3, TD3 achieved its best average cumulative reward with 0.4925 million steps of Actor training; TD3-FORK achieved the same value with only 0.225 million steps of Actor training, reducing the required samples by more than half. In summary, FORK improves the performance of both TD3 and SAC after being included as an add-on.
The improvement is more significant when adding FORK to TD3 than when adding it to SAC. FORK improves TD3 in all six environments, and improves SAC in three of the six environments. Furthermore, TD3-FORK performs the best in all six environments. More statistics about this set of experiments can be found in Appendix B.1. In Appendix B.2, we also present experimental results where Actor and Critic together have the same amount of training across all algorithms (i.e. under TD3 and TD3-FORK, Actor was trained 0.5 million times and Critic was trained 1 million times; under the other algorithms, both Actor and Critic were trained 0.75 million times). In this case, TD3-FORK performs the best in four of the six environments, and SAC-FORK performs the best in the remaining two.
4.4 BIPEDALWALKER-HARDCORE-V3
A variation of TD3-FORK can also solve a well-known difficult environment, BipedalWalkerHardcore-v3, in as few as four hours using a single GPU. To the best of our knowledge, a known algorithm needs to train for days on a 72-CPU AWS EC2 instance with 64 worker processes, taking the raw frames as input (https://github.com/dgriff777/a3c_continuous). You can view the performance on BipedalWalkerHardcore-v3 during and after training at https://youtu.be/0nYQpXtxh-Q. The implementation details can be found in Appendix C.
5 CONCLUSIONS
This paper proposes FORK, a forward-looking Actor, as an add-on to Actor-Critic algorithms. The evaluation on six environments demonstrated the significant performance improvements obtained by adding FORK to two state-of-the-art model-free reinforcement learning algorithms. A variation of TD3-FORK further solved BipedalWalkerHardcore in as few as four hours with a single GPU.
A ADDITIONAL DETAILS OF FORK
A.1 REVISED REWARD NETWORK
We found from our experiments that the reward network can more accurately predict the reward rt when the next state st+1 is included as an input to the reward network. Figure 5 shows the mean-square errors (MSE) of the reward network with (st, at) as the input versus with (st, at, st+1) as the input, for BipedalWalker-v3 during the first 10,000 steps. We can clearly see that the MSE is lower for the revised reward network.
A.2 ADAPTIVE WEIGHTS VERSUS FIXED WEIGHTS
We compared TD3-FORK with a fixed-weight variant, named TD3-FORK-F, where the weight is chosen to be 0.4. TD3-FORK performs the best in four out of the six environments. TD3-FORK-F has a worse performance than TD3 on Walker2d-v3. We therefore proposed and used the adaptive weight because of this observation. [Figure: average return over 1.0 million timesteps for TD3-FORK-F, TD3-FORK, and TD3.]
A.3 TD3-FORK
The detailed description of TD3-FORK can be found in Algorithm 1, and the code is also submitted as supplemental material.
A.4 HYPERPARAMETERS
Table 3 lists the hyperparameters used in DDPG, SAC, SAC-FORK and TD3-FORK. We kept the same hyperparameter values used in the SAC and TD3 code provided or recommended by the authors. We did not tune these parameters, because the goal is to show that FORK is a simple yet powerful add-on to existing Actor-Critic algorithms. Table 4 summarizes the environment-specific parameters — in particular, the base weight and base cumulative reward used in implementing the adaptive weight, and the threshold for adding FORK. The base cumulative rewards for TD3-FORK are the typical cumulative rewards under TD3 after training Actor for 0.5 million steps.
The base cumulative rewards for SAC-FORK are similarly chosen, but with more careful tuning. The thresholds are the typical loss values after training the system networks for about 20,000 steps, including the first 10,000 exploration steps. In the implementation, FORK is added to Actor training only after the system network can predict the next state reasonably well. We observed that TD3-FORK with our intuitive choices of hyperparameters worked well across different environments and required little tuning, while SAC-FORK required some careful tuning when choosing the base weights and the base cumulative rewards.
Algorithm 1 TD3-FORK
Initialize critic networks Qψ1, Qψ2, system network Fθ, reward network Rη, and actor network Aφ with random parameters ψ1, ψ2, θ, η, φ
1: Initialize target networks φ′ ← φ, ψ′1 ← ψ1, ψ′2 ← ψ2; initialize replay buffer B and soft update parameter τ; initialize base reward r0, base weight w0, threshold l̄, and moving average reward r̄ ← 0; initialize noise clip bound c and state bounds (omin, omax)
2: for episode e = 1, ..., M do
3:   Initialize observation state s0
4:   Initialize episode reward r = 0
5:   for t = 1, ..., T do
6:     Select action at according to the current policy and exploration noise: at ∼ Aφ(st) + εt, where εt ∼ N(0, σ)
7:     Execute action at and observe reward rt and new state st+1
8:     Store transition tuple (st, at, rt, st+1) in replay buffer B
9:     Sample a random minibatch of N transitions (si, ai, ri, si+1) from B
10:    ãi ← πφ′(si+1) + ε, ε ∼ clip(N(0, σ̃), −c, c)
11:    Set yi = ri + γ min_{j=1,2} Qψ′j(si+1, ãi)
12:    r ← r + rt
13:    Update the critic networks by minimizing the loss L(ψ) = (1/N) Σ_{j=1,2} Σ_i (yi − Qψj(si, ai))²
14:    Update the system network by minimizing the loss L(θ) = ‖si+1 − Fθ(si, ai)‖_{smooth L1}
15:    Update the reward network by minimizing the loss L(η) = (1/N) Σ_i (ri − Rη(si, ai, si+1))²
16:    if t mod d then
17:      Update φ by the sampled policy gradient:
18:      if L(θ) > l̄ then
19:        ∇φL(φ) = (1/N) Σ_i ∇aQψ1(si, a)|_{a=Aφ(si)} ∇φAφ(si)
20:      else
21:        s′i+1 = clip(Fθ(si, Aφ(si)), omin, omax), s′i+2 = clip(Fθ(s′i+1, Aφ(s′i+1)), omin, omax)
22:        ∇φL(φ) = (1/N) Σ_i (∇aQψ1(si, a)|_{a=Aφ(si)} ∇φAφ(si) + w∇aRη(si, a, s′i+1)|_{a=Aφ(si)} ∇φAφ(si)
23:          + wγ∇aRη(s′i+1, a, s′i+2)|_{a=Aφ(s′i+1)} ∇φAφ(s′i+1) + wγ²∇aQψ1(s′i+2, a)|_{a=Aφ(s′i+2)} ∇φAφ(s′i+2))
24:      end if
25:      Update the target networks:
26:      φ′ ← τφ + (1 − τ)φ′
27:      ψ′i ← τψi + (1 − τ)ψ′i
28:    end if
29:  end for
30:  Update r̄ ← ((e − 1)r̄ + r)/e
31:  Update the adaptive weight w ← min(1 − max(0, r̄/r0), 1) w0
32: end for
B ADDITIONAL EXPERIMENTAL RESULTS
B.1 BEST AVERAGE CUMULATIVE REWARD, STANDARD DEVIATION, AND BEST INSTANCE CUMULATIVE REWARD
Table 5 summarizes the best average cumulative rewards, the associated standard deviations, and the best instance cumulative rewards. They are defined as follows. Recall that each algorithm is trained for five instances, where each instance includes 0.5 million steps of Actor training. During the training process, we evaluated the algorithm every 5,000 steps, without the exploration noise. For each evaluation, we calculated the average cumulative rewards (without discount) over 10 episodes, where each episode has length 0–1,600 under BipedalWalker-v3, length 0–1,000 under Ant-v3, Walker2d-v3, Hopper-v3 and Humanoid-v3, and length exactly 1,000 under HalfCheetah-v3. Now let X^(l)_τ denote the average cumulative reward at the τ-th evaluation during the l-th instance.
Then the statistics are defined as:
Best Average Cumulative Reward (Best Average): max_τ (1/5) Σ_{l=1}^{5} X^(l)_τ
Standard Deviation (at the maximizing τ): √((1/5) Σ_{l=1}^{5} (X^(l)_τ − X̄_τ)²), where X̄_τ = (1/5) Σ_{l=1}^{5} X^(l)_τ
Best Instance Cumulative Reward (Best Instance): max_l max_τ X^(l)_τ
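A small NumPy sketch of these three statistics, assuming X is an array of shape (5 instances, number of evaluations) holding X^(l)_τ:

import numpy as np

def summary_statistics(X):
    # X[l, t]: average cumulative reward of instance l at evaluation t.
    mean_over_instances = X.mean(axis=0)      # average over the 5 instances
    best_tau = mean_over_instances.argmax()
    best_average = mean_over_instances[best_tau]
    std_at_best = X[:, best_tau].std()        # population std (1/5 factor)
    best_instance = X.max()                   # max over both l and tau
    return best_average, std_at_best, best_instance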
B.2 COMPARISON WITH THE SAME AMOUNT OF TOTAL TRAINING
In Section 4, the algorithms were compared assuming the same amount of Actor training, since our focus is on the performance of Actor. Since TD3 uses delayed Actor training, Critic of TD3 and TD3-FORK is trained twice as much as Critic of SAC and SAC-FORK when Actor is trained for the same number of steps, which gives an advantage to TD3 and TD3-FORK. To further compare the performance of TD3-FORK and SAC-FORK, we present results where, for each algorithm, Actor and Critic together were trained for 1.5 million steps. In particular, Actor was trained 0.5 million steps and Critic 1 million steps under TD3 and TD3-FORK; Actor and Critic were trained 0.75 million steps each under SAC and SAC-FORK. The results can be found in Figure 7. Table 6 summarizes the best average cumulative rewards, standard deviations, and the best instance cumulative rewards. We can see that in terms of the best average cumulative rewards, TD3-FORK performed the best in four out of the six environments, including BipedalWalker, Ant, Hopper and HalfCheetah; SAC-FORK performed the best in the remaining two — Humanoid and Walker2d.
B.3 DIFFERENT IMPLEMENTATIONS OF FORK
As we mentioned in Section 3, FORK can have different implementations. We considered two examples in Section 4, and compared their performance as add-ons to TD3. We call FORK with the loss function in Equation (5) FORK-S, standing for single-step FORK; FORK with the loss function in Equation (6) and w′ = 0.5 FORK-DQ, standing for Double-Q FORK; and FORK with the loss function in Equation (6) and w′ = 0 FORK-Q, standing for Q FORK. [Figure: average cumulative reward over 1M timesteps for TD3-FORK-S, TD3-FORK-Q, TD3-FORK-DQ, TD3-FORK, and TD3.] From Table 7 (columns: Environment, TD3-FORK, TD3, TD3-FORK-S, TD3-FORK-Q, TD3-FORK-DQ; the numerical entries were not recovered in this copy), we can see that in terms of the best average cumulative reward, TD3-FORK performs the best in four out of the six environments and TD3-FORK-S performs the best in the remaining two. This is the reason we selected the current form of FORK.
C BIPEDALWALKERHARDCORE
TD3-FORK-DQ can solve the difficult BipedalWalkerHardcore-v3 environment in as few as four hours. The hardcore version is much more difficult than BipedalWalker. For example, a known algorithm needs to train for days on a 72-CPU AWS EC2 instance with 64 worker processes, taking the raw frames as input (https://github.com/dgriff777/a3c_continuous). TD3-FORK-DQ, a variation of TD3-FORK, can solve the problem in as few as four hours using the default GPU setting provided by Google Colab (https://colab.research.google.com/notebooks/intro.ipynb) and with sensory data (not images). The performance on BipedalWalkerHardcore-v3 during and after training can be viewed at https://youtu.be/0nYQpXtxh-Q. The code has been submitted as supplementary material. To solve BipedalWalkerHardcore, we made several additional changes:
(i) We changed the -100 reward to -5.
(ii) We increased the other rewards by a factor of 5.
(iii) We implemented a replay buffer where failed episodes, in which the bipedal walker fell down at the end, and successful episodes are added to the replay buffer at a 5:1 ratio (a sketch of one possible implementation follows at the end of this section).
The changes to the rewards, (i) and (ii), were suggested in a blog post (https://mp.weixin.qq.com/s?__biz=MzA5MDMwMTIyNQ==&mid=2649294554&idx=1&sn=9f893801b8917575779430cae89829fb&scene=21#wechat_redirect). Using reward scaling to improve performance has also been reported in (Henderson et al., 2017). We made change (iii) because we found failed episodes to be more useful for learning than successful ones. The reason, we believe, is that when the bipedal walker already knows how to handle a terrain, there is no need to further train on the same type of terrain. When the training is near the end, most of the episodes are successful, so adding these successful episodes overwhelms the more useful episodes (the failed ones), which slows down the learning.
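A sketch of one way change (iii) could be implemented — a buffer that admits failed episodes unconditionally and successful episodes with low probability. The admission-probability mechanism and the 0.2 rate are one illustrative reading of the 5:1 rule, not the exact implementation from the supplementary code:

import random
from collections import deque

class BiasedReplayBuffer:
    # Failed episodes are always stored; successful episodes are admitted
    # with low probability, keeping failures over-represented.
    def __init__(self, capacity=1_000_000, success_keep_prob=0.2):
        self.buffer = deque(maxlen=capacity)
        self.success_keep_prob = success_keep_prob

    def add_episode(self, transitions, walker_fell):
        if walker_fell or random.random() < self.success_keep_prob:
            self.buffer.extend(transitions)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)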
1. What is the main contribution of the paper in the field of reinforcement learning?
2. What are the strengths and weaknesses of the proposed policy gradient algorithm?
3. How does the reviewer assess the novelty and significance of the proposed approach compared to prior works in model-based RL?
4. What are the limitations of the experimental results presented in the paper?
5. Does the reviewer have any concerns or questions regarding the motivation and rationale behind the proposed method?
Review
Review
The paper proposes to combine ideas from model-based RL into model-free off-policy policy gradient algorithms (like SAC and TD3). Specifically, the paper proposes to learn auxiliary models of environment rewards and dynamics and to use a two-step rollout from these models during the computation of the policy gradient. The paper presents results of this mechanism applied to standard SAC and TD3 implementations on a variety of continuous control environments, with favorable results.
Strengths:
-- The story is generally easy to follow.
-- I appreciate that the authors didn't just evaluate on the extremely common (and somewhat saturated) MuJoCo benchmarks, and presented additional results on Bipedal Walker.
-- As far as I can tell, the policy update proposed here is novel.
Weaknesses:
-- While the policy update appears novel, it is very similar to techniques in the model-based RL literature. Given this, it would greatly improve the paper if it had appropriate comparisons to similar model-based techniques, especially those which claim to combine model-free updates with model-based techniques; for example https://arxiv.org/abs/1906.08253
-- Moreover, experimentally it would be nice to see comparisons to state-of-the-art model-based RL methods. For example, https://arxiv.org/abs/1802.10592
-- In terms of the current experimental results, I found the conclusions favorable to the proposed technique, but not a very compelling demonstration. For example, in Table 1, almost all the environments show only a slight benefit for the proposed method. It appears the only significant benefit is on Ant. Similarly, in Table 2, we see that the results of SAC-FORK are sometimes much worse than SAC on its own.
-- In terms of motivation for the method, I was not entirely convinced why the proposed update is needed. The paper appeals to the idea of needing to reason about values in the future. But shouldn't the Q-value already encapsulate this? Moreover, the proposed update ends up only optimizing actions in the future, rather than somehow reasoning about the values at steps t+1, t+2 to decide the best action at step t.
ICLR
Title Distribution-Interpolation Trade off in Generative Models Abstract We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena arising while decoding linear interpolations between two random latent vectors – regions of latent space in close proximity to the origin of the space are oversampled, which restricts the usability of linear interpolations as a tool to analyse the latent space. We show that the distribution mismatch can be eliminated completely by a proper choice of the latent probability distribution or by using non-linear interpolations. We prove that there is a trade-off between the interpolation being linear and the latent distribution having even the most basic properties required for stable training, such as a finite mean. We use the multidimensional Cauchy distribution as an example of the prior distribution, and also provide a general method of creating non-linear interpolations that is easily applicable to a large family of commonly used latent distributions.
1 INTRODUCTION
Generative latent variable models have grown to be a very popular research topic, with Variational Auto-Encoders (VAEs) (Kingma & Welling, 2013) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) gaining a lot of interest in the last few years. VAEs use a stochastic encoder network to embed input data in a typically lower-dimensional space, using a conditional probability distribution p(z|x) over possible latent space codes z ∈ R^D. A stochastic decoder network is then used to reconstruct the original sample. GANs, on the other hand, use a generator network that creates data samples from noise z ∼ p(z), where p(z) is a fixed prior distribution, and jointly train a discriminator network to distinguish between real and generated data. Both of these model families require a probability distribution to be defined on the latent space. The most popular variants are the multidimensional normal distribution and the uniform distribution on the zero-centred hypercube. Given a trained model, studying the structure of the latent space is a common way to measure generator capabilities.
1.1 MOTIVATION BEHIND INTERPOLATIONS
There are various methods used to analyse the latent space. Locally, one can sample and decode points in a close neighbourhood of a given latent vector to investigate a small region in the space. On the other hand, global methods are designed to capture long-distance relationships between points in the space, e.g. latent arithmetic, latent direction analysis, and interpolations (see e.g. Mikolov et al. (2013); Kilcher et al. (2017); Radford et al. (2015); White (2016); Agustsson et al. (2017)). The main advantage of using interpolations is the interpretability that comes with dealing with one-dimensional curves instead of a high-dimensional Euclidean space. For example, if the model has managed to find a meaningful representation, one would expect the latent space to be organised in a way that reflects the internal structure of the training dataset. In that case, decoding an interpolation will show a gradual transformation of one endpoint into the other. Contrarily, if the model memorises the data, the latent space might consist of regions corresponding to particular training examples, divided by boundaries with unnatural, abrupt changes in generated data (Arvanitidis et al., 2017).
We need to note that this notion of "meaningful representation" is not enforced by the training objective. However, it does not contradict the objective either, making it necessary to use additional tools to evaluate whether the learned manifold is coherently structured and equipped with desirable qualities. What distinguishes interpolations from other low-dimensional methods is the shortest path property. In the absence of any additional knowledge about the latent space, it feels natural to use the Euclidean metric. In that case, the shortest path between two points is defined as a segment. This is probably the most popular variant, the linear interpolation, formally defined as fL(x1, x2, λ) = (1 − λ)x1 + λx2 for λ ∈ [0, 1], where x1, x2 are the endpoints. Other definitions of the shortest path might yield different interpolations; we will study some of them later on. While traversing the latent space along the shortest path between two points, a well-trained model should transform the samples in a sensible way. For example, if the modelled data has a natural hierarchy, we would expect the interpolation to reflect it, i.e. an image of a truck should not arise on a path between images of a cat and a dog. Also, if the data can be described with a set of features, then an interpolation should maintain any features shared by the endpoints along the path. For example, consider a dataset of images of human faces, with features such as wearing sunglasses, having a long beard, etc. Again, this is not enforced by the training objective. If one desires such a property, it is necessary to somehow include information about the trained manifold in the interpolation scheme. There has been a body of work on equipping the latent space with a stochastic Riemannian metric (Arvanitidis et al., 2017) that additionally depends on the generator function. The role of the shortest paths is fulfilled by the geodesics, and the metric is defined precisely to enforce some of the properties mentioned above. This approach is somewhat complementary to the one we are concerned with – instead of analysing the latent space using simple tools, we would need to find a more sophisticated metric that describes the latent space comprehensively, and then analyse the metric itself. If our goal was solely the quality of generated interpolation samples, the aforementioned approach would be preferable. However, in this work we are concerned with evaluating the properties directly connected with the model's objective. With that in mind, we criticise the broad use of linear interpolations in this particular context. In this work we shall theoretically prove that linear interpolations are an incorrect tool for the stated task, and propose a simple, suitable interpolation variant.
1.2 THE DISTRIBUTION MISMATCH
While considered useful, the linear interpolation used in conjunction with the most popular latent distributions results in a distribution mismatch (also defined in Agustsson et al. (2017); Kilcher et al. (2017)). That is, if we fix the λ coefficient and interpolate linearly between two endpoints sampled from the latent space distribution, the probability distribution of the resulting vectors will differ significantly from the latent distribution.
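A quick numerical illustration of this mismatch for the standard normal prior (a minimal sketch; D = 100, as in the experiments described in appendix A):

import numpy as np

rng = np.random.default_rng(0)
D, n = 100, 10_000
z1 = rng.standard_normal((n, D))
z2 = rng.standard_normal((n, D))
mid = 0.5 * (z1 + z2)  # linear interpolation midpoints (lambda = 0.5)

print(np.linalg.norm(z1, axis=1).mean())   # ~ sqrt(D) = 10
print(np.linalg.norm(mid, axis=1).mean())  # ~ sqrt(D / 2) ~ 7.07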
This can be partially explained by the well-known fact that in high dimensions the norms of vectors drawn from the latent distribution concentrate around a certain value. As a consequence, the midpoints of sampled pairs of latent vectors will have, on average, a significantly smaller norm. Thus, the linear interpolation oversamples regions in close proximity of the origin of the latent space. A thorough analysis of this phenomenon is conducted in section 2.1. Such behaviour raises questions about the applicability of the linear interpolation for studying the latent space. Indeed, changing the latent distribution after the model was trained may have unexpected consequences. In Kilcher et al. (2017), experiments conducted using a DCGAN model (Radford et al., 2015) on the CelebA dataset (Liu et al., 2015) showed flawed data generation near the latent space origin. Other works concerning the traversal of the latent space do not mention this effect, e.g. Agustsson et al. (2017). We recreated this experiment and concluded that it might be caused by stopping the training process too early (see figure 6 in Appendix C for a visualisation). This may explain the apparent disagreement in the literature. Nevertheless, with a midpoint decoding either to a median face or to a non-sensible sample, the interpolation is not informative – we would like to see a smooth change of features, not a transition through the same, homogeneous region. The solution is either to change the latent distribution so that the linear interpolation does not cause a distribution mismatch, or to redefine the shortest path property. A simple, well-known compromise is to use spherical interpolations (Shoemake, 1985; White, 2016). As the latent distribution is concentrated around a sphere, replacing segments with arcs causes a relatively small distribution mismatch (see section 3.2). Nonetheless, reducing the consequences of the distribution mismatch is still a popular research topic (Agustsson et al., 2017; Kilcher et al., 2017; Arvanitidis et al., 2017).
1.3 MAIN CONTRIBUTIONS
In section 2.1 we show that if the linear interpolation does not change the latent probability distribution, then the distribution must be trivial or "pathological" (with an undefined expected value). Then, in section 2.2, we give an example of such an invariant distribution, namely the Cauchy distribution, thus proving its existence. We also discuss the negative consequences of choosing a heavy-tailed probability distribution as the latent prior. In section 3 we relax the Euclidean shortest path property of interpolations, and investigate non-linear interpolations that do not cause the latent distribution mismatch. We describe a general framework for creating such interpolations, and give two concrete examples in sections 3.4 and 3.5. We find these interpolations appropriate for evaluating the model's objective-induced properties, in contrast to linear interpolations. The experiments conducted using the DCGAN model on the CelebA dataset are presented solely to illustrate the problem, not to study the DCGAN itself, theoretically or empirically.
2 LATENT DISTRIBUTIONS
In this section we tackle the problem of distribution mismatch by selecting a proper latent distribution. Let us assume that we want to train a generative model which has a D-dimensional latent space and a fixed latent probability distribution, defined by a random variable Z. We denote by X ∼ X the fact that the random variable X has distribution X.
Xn ≃ X represents the fact that the sequence of random variables {Xn}n∈N converges weakly to a random variable with distribution X as n tends to infinity. By Xn ≃ Xn we mean that limn→∞ supx∈R |CDF_{Xn}(x) − CDF_{Xn}(x)| = 0, where CDF_X denotes the cumulative distribution function of X. The index n will usually be omitted for readability. In other words, by X ≃ X we mean, informally, that X has a distribution similar to X.
2.1 LINEAR INTERPOLATION INVARIANCE PROPERTY
Property 2.1 (Linear Interpolation Invariance). If Z defines a distribution on the D-dimensional latent space, Z(1) and Z(2) are independent and distributed identically to Z, and for every λ ∈ [0, 1] the random variable fL(Z(1), Z(2), λ) := (1 − λ)Z(1) + λZ(2) is distributed identically to Z, then we say that Z has the linear interpolation invariance property, or that linear interpolation does not change the distribution of Z.
The most commonly used latent probability distributions Z are products of D independent random variables. That is, Z = (Z1, Z2, ..., ZD), where Z1, Z2, ..., ZD are independent marginals distributed identically to Z. If the norms of Z concentrate around a certain value, then the latent distribution resembles sampling from a zero-centred sphere, and the linear interpolation oversamples regions in the proximity of the origin of the latent space. As a consequence, Z does not have the linear interpolation invariance property. The following observation sheds light on this problem. Let N(µ, σ²) denote the normal distribution with mean µ and variance σ².
Observation 2.1. Let us assume that Z² has finite mean µ and finite variance σ². If µ > 0, then ‖Z‖ ≃ N(√(Dµ), σ²/(4µ)) as D → ∞. If µ = 0, then ‖Z‖ = 0 almost everywhere.
The proof of this and all further observations is presented in appendix B. For example, if Z ∼ N(0, 1), then Z is distributed according to the D-dimensional normal distribution with mean 0 and identity covariance matrix I. Z² has moments µ = 1, σ² = 2, thus ‖Z‖ ≃ N(√D, 1/2) (indeed, for Z ∼ N(0, 1), ‖Z‖ is distributed according to the chi distribution, equal to the square root of the chi-squared distribution). The second example is Z ∼ U(−1, 1), where U(a, b) is the uniform distribution
Therefore, all distributions having undefined expected value must be heavy-tailed. We will refer to this later on, as the heavy tails may have strong negative impact on the training procedure. There have been attempts to find Z, with finite mean, such that Z is at least similar to Z. Kilcher et al. (2017) managed to reduce the distribution mismatch by defining the latent distribution as V ∼ U(SD−1), r ∼ Γ(1 2 , θ), θ > 0, Z = √ rV, where U(SD−1) is the uniform distribution on the unit sphere, and Γ( 12 , θ) is the gamma distribution. We extend this idea by using a distribution that has no finite mean, namely the Cauchy distribution. 2.2 THE CAUCHY DISTRIBUTION The standard Cauchy distribution is denoted by C(0, 1), and its density function is defined as 1/ ( π(1 + x2) ) . The most important property of the Cauchy distribution is the fact that if C(1), . . . , C(n) are independent samples from the standard Cauchy distribution, and λ1, . . . , λn ∈ [0, 1] with λ1 + . . . + λn = 1, then λ1C(1) + . . . + λnC(n) is also distributed according to the standard Cauchy distribution. In case of n = 2 it means that the Cauchy distribution satisfies the distribution matching property. On the other hand, as a consequence of observation 2.2, the Cauchy distribution cannot have finite mean. In fact, all of its moments of order greater than or equal to one are undefined. See Siegrist (2017) for further details. There are two ways of using the Cauchy distribution in high dimensional spaces while retaining the distribution matching property. The multidimensional Cauchy distribution is defined as a product of independent standard Cauchy distributions. Then, the linear interpolation invariance property can be simply proved by applying the above formulas coordinate-wise. In the case of vectors drawn from the multidimensional Cauchy distribution we may expect that some of the coordinates will be sufficiently larger, by absolute value, than the others (Hansen et al., 2006), thus making the latent distribution similar to coordinate-wise sampling. In contrast, the multivariate Cauchy distribution comes with the isotropy property at the cost of the canonical directions becoming statistically dependent. There are multiple ways of defining it, and further analysis is out of the scope of this paper. We tested both variants as latent distributions with similar results. From now on, we shall concentrate on the non-isotropic Cauchy distribution. The Cauchy distribution is a member of the family of stable distributions, and has been previously used to model heavy-tailed data (Nolan, 2018). However, according to our best knowledge, the Cauchy distribution has never been used as the latent distribution in generative models. Figure 1 presents a decoded linear interpolations between random latent vectors using a DCGAN model trained on the CelebA dataset for the Cauchy distribution and the distribution from Kilcher et al. (2017). It should be noted that if D is large enough, the distribution of the norms of vectors sampled from the D-dimensional Cauchy distribution has a low density near zero – similarly to the normal and uniform distributions – but linear interpolations do not oversample this part of the latent space, due to the heavy-tailed nature of the Cauchy distribution. Comparison of the distributions of norms is given in Figure 2. The distribution-interpolation trade off states that if the probability distribution has the linear interpolation invariance property, then it must be trivial or heavy-tailed. 
In the case of the Cauchy distribution, we observed issues with generating images if the norm of the sampled latent vector was relatively large (the probability distribution of the norms is also heavy-tailed). Some of these faulty examples are presented in appendix C. This is consistent with the known fact that artificial networks perform poorly if their inputs are not normalised (see e.g. Glorot & Bengio (2010)). A probability distribution having the linear interpolation invariance property cannot be normalised using linear transformations. For example, the batch normalisation technique (Ioffe & Szegedy, 2015) would be highly ineffective, as the mean of a batch of samples is, in fact, a single sample from the distribution. On the other hand, using a non-linear normalisation (e.g. clipping the norm of the latent vectors in subsequent layers) is mostly equivalent to changing the latent probability distribution and making the interpolation non-linear. This idea is explored in the next section.
3 INTERPOLATIONS
In this section we review the most popular variants of interpolations, with an emphasis on the distribution mismatch analysis. We also present two new examples of interpolations stemming from a general scheme, which perform well with the popular latent priors. An interpolation on the latent space R^D is formally defined as a function f : R^D × R^D × [0, 1] ∋ (x1, x2, λ) ↦ x ∈ R^D. For brevity, we will write fx1,x2(λ) for f(x1, x2, λ).
Property 3.1 (Distribution Matching Property). If Z defines a distribution on the D-dimensional latent space, Z(1) and Z(2) are independent and distributed identically to Z, and for every λ ∈ [0, 1] the random variable fZ(1),Z(2)(λ) is distributed identically to Z, then we say that the interpolation f has the distribution matching property in conjunction with Z, or that the interpolation f does not change the distribution of Z.
3.1 LINEAR INTERPOLATION
The linear interpolation is defined as fL_{x1,x2}(λ) = (1 − λ)x1 + λx2. This interpolation does not satisfy the distribution matching property for the most commonly used probability distributions, as they have a finite mean. A notable exception is the Cauchy distribution. This was discussed in detail in the previous section.
3.2 SPHERICAL LINEAR INTERPOLATION
As in Shoemake (1985); White (2016), the spherical linear interpolation is defined as fSL_{x1,x2}(λ) = (sin[(1 − λ)Ω] / sin Ω) x1 + (sin[λΩ] / sin Ω) x2, where Ω is the angle between the vectors x1 and x2. Note that this interpolation is undefined for parallel endpoint vectors, and the definition cannot be extended without losing continuity. Also, if the vectors x1 and x2 have the same length R, then the interpolation corresponds to a geodesic on the sphere of radius R. In this regard, it might be said that the spherical linear interpolation defines the shortest path on the sphere. The most important fact is that this interpolation can have the distribution matching property.
Observation 3.1. If Z has the uniform distribution on a zero-centred sphere of radius R > 0, then fSL does not change the distribution of Z.
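A direct NumPy implementation of fSL (a minimal sketch; as noted above, it is undefined for parallel endpoints):

import numpy as np

def slerp(x1, x2, lam):
    # Angle between the endpoint vectors.
    cos_omega = np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1.0 - lam) * omega) * x1
            + np.sin(lam * omega) * x2) / np.sin(omega)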
3.3 NORMALISED INTERPOLATION
Introduced in Agustsson et al. (2017), the normalised interpolation is defined as fN_{x1,x2}(λ) = ((1 − λ)x1 + λx2) / √((1 − λ)² + λ²).
Observation 3.2. If Z ∼ N(0, I), then fN does not change the distribution of Z.
If the vectors x1 and x2 are orthogonal and have equal length, then the curve defined by this interpolation is equal to that of the spherical linear interpolation. On the other hand, the normalised interpolation behaves poorly if x1 is close to x2. In the extreme case of x1 = x2, the interpolation is not constant with respect to λ, which violates any sensible definition of the shortest path.
3.4 CAUCHY-LINEAR INTERPOLATION
Here we present a general way of designing interpolations that have the distribution matching property in conjunction with a given probability distribution Z. This method requires some additional assumptions about Z, but it works well with the most popular latent distributions. Let L be the D-dimensional latent space, Z define the probability distribution on the latent space, C be distributed according to the D-dimensional Cauchy distribution on L, K be a subset of L such that Z is concentrated on this set, and g : L → K be a bijection such that g(C) is distributed identically to Z on K. Then for x1, x2 ∈ K we define the Cauchy-linear interpolation as fCL_{x1,x2}(λ) = g((1 − λ)g⁻¹(x1) + λg⁻¹(x2)). In other words, for endpoints x1, x2 ∼ Z:
1. Transform x1 and x2 using g⁻¹. This step changes the latent distribution to the D-dimensional Cauchy distribution.
2. Linearly interpolate between the transformations to get xλ = (1 − λ)g⁻¹(x1) + λg⁻¹(x2) for all λ ∈ [0, 1]. The transformed latent distribution remains unchanged (originally referred to as "distribution matched").
3. Transform xλ back to the original space using g. We end up with the original latent distribution.
Observation 3.3. With the above assumptions about g, the Cauchy-linear interpolation does not change the distribution of Z.
Finding an appropriate function g might seem hard, but in practice it is usually fairly straightforward. For example, if Z is distributed identically to the product of D independent one-dimensional distributions Z, then we can define g⁻¹ as CDF⁻¹_C ∘ CDF_Z applied to every coordinate.
3.5 SPHERICAL CAUCHY-LINEAR INTERPOLATION
We might want the interpolation to have some other desired properties — for example, to behave exactly like the spherical linear interpolation whenever the endpoints have equal norms. For that purpose, we need to make additional assumptions. Let Z be isotropic, C be distributed according to the one-dimensional Cauchy distribution, and g : R → (0, +∞) be a bijection such that g(C) is distributed identically to ‖Z‖ on (0, +∞). Then we can modify the spherical linear interpolation formula to define what we call the spherical Cauchy-linear interpolation:
fSCL_{x1,x2}(λ) = ((sin[(1 − λ)Ω] / sin Ω) · x1/‖x1‖ + (sin[λΩ] / sin Ω) · x2/‖x2‖) · g((1 − λ)g⁻¹(‖x1‖) + λg⁻¹(‖x2‖)),
where Ω is the angle between the vectors x1 and x2. In other words:
1. Interpolate the directions of the latent vectors using the spherical linear interpolation.
2. Interpolate the norms using the Cauchy-linear interpolation.
Observation 3.4. With the above assumptions about g, the spherical Cauchy-linear interpolation does not change the distribution of Z if the distribution Z is isotropic.
The simplest candidate for the g function is obtained by matching CDFs, i.e. g⁻¹ = CDF⁻¹_C ∘ CDF_{‖Z‖}, but we usually need to know more about Z to check whether the assumptions hold. For example, let Z be a D-dimensional normal distribution with zero mean and identity covariance matrix. Then ‖Z‖ ∼ √(χ²_D) and CDF_{√(χ²_D)}(x) = CDF_{χ²_D}(x²) = (1/Γ(D/2)) γ(D/2, x²/2) for every x ≥ 0, where Γ denotes the gamma function and γ is the lower incomplete gamma function. Thus we set g⁻¹(x) = (CDF⁻¹_C ∘ CDF_{χ²_D})(x²), with g(x) = √((CDF⁻¹_{χ²_D} ∘ CDF_C)(x)).
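A sketch of the Cauchy-linear interpolation for a coordinate-wise prior, using the g⁻¹ = CDF⁻¹_C ∘ CDF_Z recipe above; here the marginals are assumed to be N(0, 1), implemented with SciPy:

import numpy as np
from scipy import stats

def cauchy_linear(x1, x2, lam):
    # g^{-1}: map each N(0, 1) coordinate to the standard Cauchy.
    # (Coordinates extremely far in the tail may overflow to inf in float64.)
    g_inv = lambda x: stats.cauchy.ppf(stats.norm.cdf(x))
    # g: map back from the Cauchy distribution to N(0, 1).
    g = lambda x: stats.norm.ppf(stats.cauchy.cdf(x))
    # Interpolate linearly in the Cauchy space, where the distribution is
    # invariant under convex combinations, then return to the latent space.
    return g((1.0 - lam) * g_inv(x1) + lam * g_inv(x2))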
Figure 3 shows a comparison of the Cauchy-linear and the spherical Cauchy-linear interpolations on a two-dimensional plane for pairs of vectors sampled from different probability distributions; it illustrates how these interpolations manage to keep the distributions unchanged. Figure 4 illustrates the distribution matching property of the Cauchy-linear interpolation. We also compare the data samples generated by the DCGAN model trained on the CelebA dataset; the results are shown in figure 5.

4 SUMMARY

We investigated the properties of multidimensional probability distributions in the context of generative models. We found that there is a certain trade-off: it is impossible to define a latent probability distribution with a finite mean and the linear interpolation invariance property. The $D$-dimensional Cauchy distribution serves as an example of a latent probability distribution that remains unchanged by linear interpolation, at the cost of poor model performance due to its heavy-tailed nature.

Instead of using the Cauchy distribution as the latent distribution, we propose to use it to define non-linear interpolations that have the distribution matching property. The assumption of the shortest path being a straight line must be relaxed, but our scheme is general enough to provide a way of incorporating other desirable properties.

We observe that there are three different goals when using interpolations to study a generative model. Firstly, to check whether the training objective was fulfilled, one must use an interpolation that does not cause the distribution mismatch. This is, in our opinion, a necessary step before performing any further evaluation of the trained model. Secondly, if one is interested in the convexity of the manifold, linear interpolations are a suitable method, provided the above analysis yields positive results. Finally, to perform a complete investigation of the learned manifold one can employ methods that incorporate some information about the trained model, e.g. the approach of Arvanitidis et al. (2017) mentioned in section 1.1.

We do not propose to completely abandon the use of linear interpolations, as the convexity of the learned manifold is still an interesting research topic. For instance, we have observed that generative models are capable of generating sensible images from seemingly out-of-distribution regions, e.g. the emergence of the median face mentioned in the introduction. In our opinion, this is a promising direction for future research.

A EXPERIMENTAL SETUP

All experiments were conducted using a DCGAN model (Radford et al., 2015), in which the generator network consisted of a linear layer with 8192 neurons, followed by four transposed convolution layers, each using 5 × 5 filters and strides of 2, with 256, 128, 64, and 3 filters respectively. The output layer used the tanh activation function; all previous layers used ReLU. The discriminator's architecture mirrored the generator's, with the single exception of using leaky ReLU instead of vanilla ReLU in all but the last layer. No batch normalisation was used in either network. The Adam optimiser was used with a learning rate of 2e-4 and momentum set to 0.5, with a batch size of 64 throughout all experiments. Unless explicitly stated otherwise, the latent space dimension was set to 100. For the CelebA dataset we resized the input images to 64 × 64.
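As a reading aid, this architecture description corresponds roughly to the following PyTorch sketch. It is our reconstruction: the paper does not specify the framework, and the 512 × 4 × 4 reshape of the 8192-unit linear layer is our assumption (8192 = 512 · 4 · 4):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN generator matching the description in appendix A
    (latent dim 100, no batch normalisation, 64x64 RGB output)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 8192)  # reshaped to 512 x 4 x 4

        def up(c_in, c_out):
            # With these settings each layer exactly doubles the resolution.
            return nn.ConvTranspose2d(c_in, c_out, kernel_size=5,
                                      stride=2, padding=2, output_padding=1)

        self.net = nn.Sequential(
            up(512, 256), nn.ReLU(),
            up(256, 128), nn.ReLU(),
            up(128, 64), nn.ReLU(),
            up(64, 3), nn.Tanh(),
        )

    def forward(self, z):
        h = torch.relu(self.fc(z)).view(-1, 512, 4, 4)
        return self.net(h)  # shape (batch, 3, 64, 64)
```

With kernel size 5, stride 2, padding 2 and output padding 1, the four transposed convolutions take the 4 × 4 feature map to the 64 × 64 output.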
B PROOFS

Observation 2.1. Let us assume that $Z^2$ has finite mean $\mu$ and finite variance $\sigma^2$. If $\mu > 0$, then $\|\mathbf{Z}\| \simeq \mathcal{N}\big(\sqrt{D\mu}, \frac{\sigma^2}{4\mu}\big)$ as $D \to \infty$. If $\mu = 0$, then $\|\mathbf{Z}\| = 0$ almost everywhere.

Proof. Recall that $Z, Z_1, \ldots, Z_D$ are independent and identically distributed. Therefore $Z^2, Z_1^2, \ldots, Z_D^2$ are also independent and identically distributed, $\mathbf{Z} = (Z_1, \ldots, Z_D)$, and $\|\mathbf{Z}\|^2 = Z_1^2 + \ldots + Z_D^2$.

Since $Z^2 \ge 0$, we have $\mu \ge 0$. If $\mu = 0$, then $Z^2 = 0$ almost everywhere, $Z = 0$ almost everywhere, $\mathbf{Z} = 0$ almost everywhere, and finally $\|\mathbf{Z}\| = 0$ almost everywhere. From now on we will assume that $\mu > 0$.

By the central limit theorem, $\sqrt{D}\big(\frac{Z_1^2 + \ldots + Z_D^2}{D} - \mu\big)$ converges in distribution to $\mathcal{N}(0, \sigma^2)$ as $D \to \infty$. The convergence of the cumulative distribution functions is uniform, because the limit is continuous everywhere:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \Big|\Pr\Big(\sqrt{D}\Big(\tfrac{Z_1^2 + \ldots + Z_D^2}{D} - \mu\Big) \le x\Big) - \mathrm{CDF}_{\mathcal{N}(0,\sigma^2)}(x)\Big| < \varepsilon.$$
Since $D > 0$,
$$\Pr\Big(\sqrt{D}\Big(\tfrac{Z_1^2 + \ldots + Z_D^2}{D} - \mu\Big) \le x\Big) = \Pr\big(Z_1^2 + \ldots + Z_D^2 \le D\mu + x\sqrt{D}\big) = \mathrm{CDF}_{\|\mathbf{Z}\|^2}\big(D\mu + x\sqrt{D}\big).$$
Additionally, $\mathrm{CDF}_{\mathcal{N}(0,\sigma^2)}(x) = \mathrm{CDF}_{\mathcal{N}(D\mu, D\sigma^2)}\big(D\mu + x\sqrt{D}\big)$, and now we have
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\|\mathbf{Z}\|^2}\big(D\mu + x\sqrt{D}\big) - \mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}\big(D\mu + x\sqrt{D}\big)\big| < \varepsilon.$$
Finally, the function $\mathbb{R} \ni x \mapsto D\mu + x\sqrt{D} \in \mathbb{R}$ is a bijection (again, because $D > 0$), so we may substitute $x$ for $D\mu + x\sqrt{D}$, and the innermost statement will hold for every $x \in \mathbb{R}$:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\|\mathbf{Z}\|^2}(x) - \mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}(x)\big| < \varepsilon. \quad (1)$$
Before taking the square root of the normal distribution we must deal with negative values. Let $\mathcal{N}^+(\nu, \tau)$ be defined by its cumulative distribution function:
$$\mathrm{CDF}_{\mathcal{N}^+(\nu,\tau)}(x) = \begin{cases} 0 & \text{if } x < 0, \\ \mathrm{CDF}_{\mathcal{N}(\nu,\tau)}(x) & \text{if } x \ge 0. \end{cases}$$
The idea is to take all negative values of $\mathcal{N}(\nu, \tau)$ and concentrate them at zero. Now we can modify (1):
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\|\mathbf{Z}\|^2}(x) - \mathrm{CDF}_{\mathcal{N}^+(D\mu,D\sigma^2)}(x)\big| < \varepsilon; \quad (2)$$
for $x \ge 0$ we simply use (1), and for $x < 0$ the inequality simplifies to $|0 - 0| < \varepsilon$.

Since $\|\mathbf{Z}\|^2$ and $\mathcal{N}^+(D\mu, D\sigma^2)$ are non-negative, we are allowed to take the square root of these random variables. The square root is a strictly increasing function, thus for $x \ge 0$ we have
$$\mathrm{CDF}_{\mathcal{N}^+(D\mu,D\sigma^2)}(x^2) = \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}(x) \quad \text{and} \quad \mathrm{CDF}_{\|\mathbf{Z}\|^2}(x^2) = \mathrm{CDF}_{\|\mathbf{Z}\|}(x),$$
and therefore we can approximate the variable $\|\mathbf{Z}\|$:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\|\mathbf{Z}\|}(x) - \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}(x)\big| < \varepsilon; \quad (3)$$
for $x \ge 0$ we substitute $x^2$ for $x$ in (2), and for $x < 0$ the inequality simplifies, again, to $|0 - 0| < \varepsilon$.

This paragraph is a summary of the second part of the proof. To calculate $\sqrt{\mathcal{N}^+(D\mu, D\sigma^2)}$ we observe that, informally, in the proximity of $D\mu$ the square root behaves approximately like scaling by the constant $(2\sqrt{D\mu})^{-1}$. Additionally, $\mathcal{N}(D\mu, D\sigma^2)$ has width proportional to $\sqrt{D}$, which is infinitesimally smaller than $D\mu$, so we expect the result to be $\sqrt{\mathcal{N}^+(D\mu, D\sigma^2)} \simeq \mathcal{N}\big(\sqrt{D\mu}, \frac{\sigma^2}{4\mu}\big)$.

Let us define
$$b_\varepsilon = \begin{cases} \mathrm{CDF}^{-1}_{\mathcal{N}(0,\sigma^2/(4\mu))}(1 - \varepsilon) & \text{if } \varepsilon \in (0, \tfrac{1}{2}), \\ 0 & \text{if } \varepsilon \ge \tfrac{1}{2}. \end{cases}$$
Here $b_\varepsilon$ is defined so that the probability of $x$ drawn from $\mathcal{N}\big(\sqrt{D\mu}, \frac{\sigma^2}{4\mu}\big)$ being at least $b_\varepsilon$ far from the mean is equal to $2\varepsilon$. Also, note that $b_\varepsilon$ does not depend on $D$. For now we will assume that $\sqrt{D\mu} - b_\varepsilon > 0$; this is always true for sufficiently large $D$, as $\mu > 0$:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}} : \sqrt{D\mu} - b_\varepsilon > 0. \quad (4)$$
Now let us assume that we have a fixed $\varepsilon > 0$. For $x \in [-b_\varepsilon, b_\varepsilon]$ we write the following inequalities
$$D\mu + 2x\sqrt{D\mu} \le \big(\sqrt{D\mu} + x\big)^2 \le D\mu + 2x\sqrt{D\mu} + b_\varepsilon^2,$$
which are equivalent to $0 \le x^2 \le b_\varepsilon^2$, and thus true. Every cumulative distribution function is weakly increasing, therefore
$$\mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}\big(D\mu + 2x\sqrt{D\mu}\big) \le \mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}\big(\big(\sqrt{D\mu} + x\big)^2\big) \le \mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}\big(D\mu + 2x\sqrt{D\mu} + b_\varepsilon^2\big).$$
Because we assumed that $\big(\sqrt{D\mu} + x\big)^2 > 0$ for $x \in [-b_\varepsilon, b_\varepsilon]$, we can replace $\mathcal{N}(D\mu, D\sigma^2)$ with $\mathcal{N}^+(D\mu, D\sigma^2)$ in the middle term:
$$\mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}\big(D\mu + 2x\sqrt{D\mu}\big) \le \mathrm{CDF}_{\mathcal{N}^+(D\mu,D\sigma^2)}\big(\big(\sqrt{D\mu} + x\big)^2\big) \le \mathrm{CDF}_{\mathcal{N}(D\mu,D\sigma^2)}\big(D\mu + 2x\sqrt{D\mu} + b_\varepsilon^2\big).$$
We transform the outer distributions using basic properties of the normal distribution, and take the square root of the middle distribution, obtaining
$$\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) \le \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big) \le \mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x + b_\varepsilon^2/(2\sqrt{D\mu})\big). \quad (5)$$
Since $b_\varepsilon^2/(2\sqrt{D\mu}) \to 0$ as $D \to \infty$ and $\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}$ is continuous, we have uniform convergence:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) - \mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x + b_\varepsilon^2/(2\sqrt{D\mu})\big)\big| < \varepsilon.$$
Using (5) we get
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in[-b_\varepsilon,b_\varepsilon]} : \Big[\sqrt{D\mu} - b_\varepsilon > 0 \implies \big|\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) - \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big)\big| < \varepsilon\Big]. \quad (6)$$
Now we will extend this result to all $x \in \mathbb{R}$. For $\varepsilon > 0$ we have
$$\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} - b_\varepsilon\big) \le \varepsilon, \quad (7)$$
$$\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + b_\varepsilon\big) \ge 1 - \varepsilon. \quad (8)$$
Substituting $-b_\varepsilon$ and $b_\varepsilon$ for $x$ in (6), and using (7) and (8) respectively, we obtain
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}} : \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} - b_\varepsilon\big) < 2\varepsilon, \quad (9)$$
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}} : \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + b_\varepsilon\big) > 1 - 2\varepsilon. \quad (10)$$
Cumulative distribution functions are increasing functions with values in $[0, 1]$, thus combining (7) and (9):
$$\forall_{\varepsilon>0}\,\forall_{x<-b_\varepsilon} : 0 \le \mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) \le \varepsilon,$$
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x<-b_\varepsilon} : 0 \le \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big) < 2\varepsilon,$$
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x<-b_\varepsilon} : \big|\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) - \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big)\big| < 2\varepsilon. \quad (11)$$
Analogously, using (8) and (10):
$$\forall_{\varepsilon>0}\,\forall_{x>b_\varepsilon} : 1 \ge \mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) \ge 1 - \varepsilon,$$
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x>b_\varepsilon} : 1 \ge \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big) > 1 - 2\varepsilon,$$
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x>b_\varepsilon} : \big|\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) - \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big)\big| < 2\varepsilon. \quad (12)$$
Thus,
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \Big[\sqrt{D\mu} - b_\varepsilon > 0 \implies \big|\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}\big(\sqrt{D\mu} + x\big) - \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}\big(\sqrt{D\mu} + x\big)\big| < 2\varepsilon\Big], \quad (13)$$
because for any $\varepsilon > 0$ we may define $D_\varepsilon := \max\{D_{\varepsilon,1}, D_{\varepsilon,2}, D_{\varepsilon,3}\}$, where $D_{\varepsilon,1}, D_{\varepsilon,2}, D_{\varepsilon,3}$ are taken from (6), (11) and (12). To simplify,
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}(x) - \mathrm{CDF}_{\sqrt{\mathcal{N}^+(D\mu,D\sigma^2)}}(x)\big| < 2\varepsilon, \quad (14)$$
because for any $\varepsilon > 0$ we may define $D_\varepsilon := \max\{D_{\varepsilon,1}, D_{\varepsilon,2}\}$, where $D_{\varepsilon,1}, D_{\varepsilon,2}$ are taken from (4) and (13), making the antecedent true. We also replaced $\sqrt{D\mu} + x$ with $x$, since now the statement holds for all $x \in \mathbb{R}$.

Finally, we combine (3) and (14) using the triangle inequality:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\|\mathbf{Z}\|}(x) - \mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}(x)\big| < 3\varepsilon, \quad (15)$$
because for any $\varepsilon > 0$ we may define $D_\varepsilon := \max\{D_{\varepsilon,1}, D_{\varepsilon,2}\}$, where $D_{\varepsilon,1}, D_{\varepsilon,2}$ are taken from (3) and (14). Since this holds for any positive $\varepsilon$, we can replace $3\varepsilon$ with $\varepsilon$:
$$\forall_{\varepsilon>0}\,\exists_{D_\varepsilon>0}\,\forall_{D>D_\varepsilon,\,D\in\mathbb{N}}\,\forall_{x\in\mathbb{R}} : \big|\mathrm{CDF}_{\|\mathbf{Z}\|}(x) - \mathrm{CDF}_{\mathcal{N}(\sqrt{D\mu},\sigma^2/(4\mu))}(x)\big| < \varepsilon,$$
because for any $\varepsilon > 0$ we may define $D_\varepsilon := D_{\varepsilon/3}$, where $D_{\varepsilon/3}$ is taken from (15), substituting $\varepsilon/3$ for $\varepsilon$. $\square$
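The approximation in Observation 2.1 is easy to check numerically; the following sketch (our own) verifies it for the standard normal case, where $\mu = 1$ and $\sigma^2 = 2$:

```python
import numpy as np

# For Z ~ N(0, 1): mu = E[Z^2] = 1, sigma^2 = Var[Z^2] = 2, so Observation 2.1
# predicts ||Z|| ~ N(sqrt(D), sigma^2 / (4 mu)) = N(sqrt(D), 1/2) for large D.
rng = np.random.default_rng(0)
D = 100
norms = np.linalg.norm(rng.standard_normal((100_000, D)), axis=1)
print(norms.mean(), np.sqrt(D))        # ~9.97 vs 10.0
print(norms.var(), 2.0 / (4.0 * 1.0))  # ~0.50 vs 0.5
```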
Observation 2.2. If $\mathbf{Z}$ has a finite mean, and $\bar{\mathbf{Z}}$ is distributed identically to $\mathbf{Z}$, then $\mathbf{Z}$ must be concentrated at a single point.

Proof. Let $\mathbf{Z}, \mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \mathbf{Z}^{(3)}, \ldots$ be an infinite sequence of independent and identically distributed random variables. Using induction on $n$ we can show that $\frac{1}{2^n}\big(\mathbf{Z}^{(1)} + \ldots + \mathbf{Z}^{(2^n)}\big)$ is distributed identically to $\mathbf{Z}$. Indeed, for $n = 1$ this is one of the theorem's assumptions. To prove the inductive step let us define
$$\mathbf{A} := \frac{1}{2^n}\big(\mathbf{Z}^{(1)} + \ldots + \mathbf{Z}^{(2^n)}\big), \qquad \mathbf{B} := \frac{1}{2^n}\big(\mathbf{Z}^{(2^n+1)} + \ldots + \mathbf{Z}^{(2^{n+1})}\big).$$
$\mathbf{A}$ and $\mathbf{B}$ are independent (they are defined as functions of independent variables) and, by the inductive hypothesis, distributed identically to $\mathbf{Z}$. Finally, it is sufficient to observe that
$$\frac{1}{2^{n+1}}\big(\mathbf{Z}^{(1)} + \ldots + \mathbf{Z}^{(2^{n+1})}\big) = \frac{\mathbf{A} + \mathbf{B}}{2}.$$
$\mathbf{Z}$ has a finite mean; let us denote it by $\mu$. Let also $\mathbb{N}^+$ be the set of strictly positive natural numbers. By the law of large numbers the sequence $\big\{\frac{1}{n}\big(\mathbf{Z}^{(1)} + \ldots + \mathbf{Z}^{(n)}\big)\big\}_{n\in\mathbb{N}^+}$ converges in probability to $\mu$. The same is true for any infinite subsequence, in particular for $\big\{\frac{1}{2^n}\big(\mathbf{Z}^{(1)} + \ldots + \mathbf{Z}^{(2^n)}\big)\big\}_{n\in\mathbb{N}^+}$, but we have shown that all elements of this subsequence are distributed identically to $\mathbf{Z}$; thus $\mathbf{Z}$ must be concentrated at $\mu$. $\square$

Observation 3.1. If $\mathbf{Z}$ has the uniform distribution on the zero-centred sphere of radius $R > 0$, then $f^{SL}$ does not change the distribution of $\mathbf{Z}$.

Proof. Let $\mathbf{Z}, \mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ be independent and identically distributed, and let $\lambda \in [0, 1]$ be a fixed real number. The random variable $f^{SL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)$ is defined almost everywhere (with the exception of parallel samples from $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$) and is also concentrated on the zero-centred sphere of radius $R$ (because if $\|x_1\| = \|x_2\|$, then $\|f^{SL}_{x_1,x_2}(\lambda)\| = \|x_1\| = \|x_2\|$).

Let $\mathrm{iso}$ be any linear isometry of the latent space. Since $\|\mathrm{iso}(x)\| = \|x\|$, $\mathrm{iso}$ is also an isometry of the zero-centred sphere of radius $R$. Additionally, we have
$$\mathrm{iso}\big(f^{SL}_{x_1,x_2}(\lambda)\big) = \mathrm{iso}\left(\frac{\sin[(1-\lambda)\Omega]}{\sin\Omega}x_1 + \frac{\sin[\lambda\Omega]}{\sin\Omega}x_2\right) = \frac{\sin[(1-\lambda)\Omega]}{\sin\Omega}\mathrm{iso}(x_1) + \frac{\sin[\lambda\Omega]}{\sin\Omega}\mathrm{iso}(x_2) = f^{SL}_{\mathrm{iso}(x_1),\mathrm{iso}(x_2)}(\lambda),$$
where the last equality holds because the isometry does not change the angle $\Omega$ between $x_1$ and $x_2$. Thus $\mathrm{iso}\big(f^{SL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)\big) = f^{SL}_{\mathrm{iso}(\mathbf{Z}^{(1)}),\mathrm{iso}(\mathbf{Z}^{(2)})}(\lambda)$, and this is distributed identically to $f^{SL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)$, because $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$, both uniform distributions, are invariant to $\mathrm{iso}$. In that case, $f^{SL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)$ is concentrated on the zero-centred sphere of radius $R$ and invariant to all linear isometries of the latent space. The only distribution having these properties is the uniform distribution on the sphere. $\square$

Observation 3.2. If $\mathbf{Z} \sim \mathcal{N}(0, \mathbf{I})$, then $f^N$ does not change the distribution of $\mathbf{Z}$.

Proof. Let $\mathbf{Z}, \mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ be independent and identically distributed, and let $\lambda \in [0, 1]$ be a fixed real number. The random variables $\mathbf{Z}^{(1)}$ and $\mathbf{Z}^{(2)}$ are both distributed according to $\mathcal{N}(0, \mathbf{I})$. Using the definition of $f^N$ and elementary properties of the normal distribution we conclude
$$f^N_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda) = \frac{(1-\lambda)\mathbf{Z}^{(1)} + \lambda\mathbf{Z}^{(2)}}{\sqrt{(1-\lambda)^2 + \lambda^2}} \sim \mathcal{N}\left(\frac{(1-\lambda)0 + \lambda 0}{\sqrt{(1-\lambda)^2 + \lambda^2}},\ \frac{(1-\lambda)^2\mathbf{I} + \lambda^2\mathbf{I}}{(1-\lambda)^2 + \lambda^2}\right) = \mathcal{N}(0, \mathbf{I}). \qquad \square$$

Observation 3.3. With the above assumptions about $g$, the Cauchy-linear interpolation does not change the distribution of $\mathbf{Z}$.

Proof. Let $\mathbf{Z}, \mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ be independent and identically distributed, and let $\lambda \in [0, 1]$ be a fixed real number. First observe that $g^{-1}(\mathbf{Z}^{(1)})$ and $g^{-1}(\mathbf{Z}^{(2)})$ are independent (because $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ are independent) and distributed identically to $\mathbf{C}$ (property of $g$). Likewise, $(1-\lambda)g^{-1}(\mathbf{Z}^{(1)}) + \lambda g^{-1}(\mathbf{Z}^{(2)}) \sim \mathbf{C}$ (property of the Cauchy distribution). Therefore, $g\big((1-\lambda)g^{-1}(\mathbf{Z}^{(1)}) + \lambda g^{-1}(\mathbf{Z}^{(2)})\big) \sim \mathbf{Z}$ (property of $g$). $\square$
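Observation 3.3 can also be checked empirically; a quick sketch of ours, for a standard normal prior, applies the Cauchy-linear midpoint coordinate-wise and runs a Kolmogorov-Smirnov test against $\mathcal{N}(0, 1)$:

```python
import numpy as np
from scipy.stats import cauchy, norm, kstest

# Midpoints of the Cauchy-linear interpolation between N(0, 1) coordinates
# should still be N(0, 1); a large KS p-value indicates no mismatch.
rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((2, 50_000))
mid = norm.ppf(cauchy.cdf(0.5 * (cauchy.ppf(norm.cdf(z1)) + cauchy.ppf(norm.cdf(z2)))))
print(kstest(mid, 'norm'))
```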
Observation 3.4. With the above assumptions about $g$, the spherical Cauchy-linear interpolation does not change the distribution of $\mathbf{Z}$ if the distribution of $\mathbf{Z}$ is isotropic.

Proof. Let $\mathbf{Z}, \mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ be independent and identically distributed, and let $\lambda \in [0, 1]$ be a fixed real number. The following statements are straightforward consequences of $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ being isotropic (and also independent):
1. the random variables $\frac{\mathbf{Z}^{(1)}}{\|\mathbf{Z}^{(1)}\|}$, $\frac{\mathbf{Z}^{(2)}}{\|\mathbf{Z}^{(2)}\|}$, $\|\mathbf{Z}^{(1)}\|$, $\|\mathbf{Z}^{(2)}\|$ are independent,
2. $\|\mathbf{Z}^{(1)}\|$ and $\|\mathbf{Z}^{(2)}\|$ are both distributed identically to $\|\mathbf{Z}\|$,
3. $\frac{\mathbf{Z}^{(1)}}{\|\mathbf{Z}^{(1)}\|}$ and $\frac{\mathbf{Z}^{(2)}}{\|\mathbf{Z}^{(2)}\|}$ are both distributed uniformly on the sphere of radius 1.
The next two statements are consequences of Observations 3.1 and 3.3 respectively:
4. the random variable $\frac{f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)}{\|f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)\|} = \frac{\sin[(1-\lambda)\Omega]}{\sin\Omega}\frac{\mathbf{Z}^{(1)}}{\|\mathbf{Z}^{(1)}\|} + \frac{\sin[\lambda\Omega]}{\sin\Omega}\frac{\mathbf{Z}^{(2)}}{\|\mathbf{Z}^{(2)}\|}$ is distributed uniformly on the unit sphere,
5. the random variable $\|f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)\| = g\big((1-\lambda)g^{-1}(\|\mathbf{Z}^{(1)}\|) + \lambda g^{-1}(\|\mathbf{Z}^{(2)}\|)\big)$ is distributed identically to $\|\mathbf{Z}\|$.
The random variables $\frac{f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)}{\|f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)\|}$ and $\|f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)\|$ are independent, because they are functions of independent random variables ($\Omega$ is a function of $\frac{\mathbf{Z}^{(1)}}{\|\mathbf{Z}^{(1)}\|}$ and $\frac{\mathbf{Z}^{(2)}}{\|\mathbf{Z}^{(2)}\|}$), and therefore $f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)$ is isotropic. Using statement 5 and the fact that two isotropic probability distributions are equal if and only if the distributions of their Euclidean norms are equal, we conclude that $f^{SCL}_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)}}(\lambda)$ is distributed identically to $\mathbf{Z}$. $\square$

C THE CAUCHY DISTRIBUTION – SAMPLES AND INTERPOLATIONS

D MORE CAUCHY-LINEAR AND SPHERICAL CAUCHY-LINEAR INTERPOLATIONS
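To tie the earlier sketches together, here is a hypothetical end-to-end usage example combining the `Generator` from the sketch in appendix A with `spherical_cauchy_linear` from section 3.5 (our own code; the weights are untrained and the run merely checks shapes):

```python
import numpy as np
import torch

# Decode a spherical Cauchy-linear path between two Gaussian latent vectors.
gen = Generator(latent_dim=100)
z1, z2 = np.random.default_rng(0).standard_normal((2, 100))
path = np.stack([spherical_cauchy_linear(z1, z2, lam)
                 for lam in np.linspace(0.0, 1.0, 8)])
with torch.no_grad():
    frames = gen(torch.as_tensor(path, dtype=torch.float32))
print(frames.shape)  # torch.Size([8, 3, 64, 64])
```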
1. What are the issues with evaluating implicit generative models using linear interpolations? 2. What are the contributions of the paper regarding the problems with linear interpolation-based evaluation? 3. How does the paper propose to address the problem of non-linearity in interpolations? 4. What is the significance of the observation that Cauchy distribution satisfies point (a) from (2)? 5. What are the strengths and weaknesses of the proposed non-linear ways to interpolate? 6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review The paper discusses linear interpolations in the latent space, which is one of the common ways used nowadays to evaluate the quality of implicit generative models. More precisely, what researchers often do in this field is to (a) take a trained model (which often comes with a "decoder" or a "generator", that is, a function mapping noise sampled from a prior distribution Pz defined over the latent space Z to the data space), (b) sample two independent points Z1 and Z2 from Pz, and (c) report images obtained by decoding linear interpolations between Z1 and Z2 in the latent space. Researchers often tend to judge the quality of the model based on these interpolations, concluding that the model performs poorly if the interpolations don't look realistic and vice versa. The authors of the paper argue that this procedure has drawbacks, because in typical modern use cases (Gaussian / uniform prior Pz) the aforementioned interpolations are no longer distributed according to Pz, and thus are likely to fall outside the domain where the decoder was actually trained. I would say the main contributions of the paper are: (1) The sole fact that the paper highlights the problems of linear interpolation based evaluation is already important. (2) Observation 2.2, stating that if (a) Pz has a finite mean and (b) the aforementioned linear interpolations are still distributed according to Pz, then Pz is a Dirac distribution (a point mass). (3) The authors notice that the Cauchy distribution satisfies point (b) from (2), but as a result does not have a mean. The authors present a set of experiments where a DCGAN generator is trained on the CelebA dataset with the Cauchy prior. The interpolations supposedly look nice, but sampling becomes problematic, because the heavy-tailed Cauchy often produces Z samples with excessively large norms, where the generator performs poorly. (4) The authors propose several non-linear ways to interpolate, which keep the prior distribution unchanged (Sections 3.4 and 3.5). In other words, instead of using a linear interpolation and a Pz compatible with it (which is necessarily heavy-tailed, as shown in Observation 2.2), the authors propose to use non-linear interpolations that work with nicer priors Pz, in particular ones with a finite mean. I think this topic is very interesting and important, given that there is still an unfortunate lack of well-behaved and widely accepted evaluation metrics in the field of unsupervised generative models. Unfortunately, I felt the exposition of the paper was rather confusing and, more importantly, I did not find a clear goal of the paper or any concrete conclusions. One possible conclusion could be that the generative modelling community should stop reporting linear interpolations. However, I feel the paper lacks convincing evidence (from what I could find, the authors base all the conclusions on one set of similar experiments performed with one generator architecture on one dataset) to be viewed as a significant contribution to the generative modelling field. On the other hand, the paper does not contain enough insights to constitute a significant theoretical contribution (I would expect Observation 2.2 to be already known in the probability field). Overall, I have to conclude that the paper is not ready to be published, but I am willing to give it a chance.
ICLR
Title
Distribution-Interpolation Trade off in Generative Models

Abstract
We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena arising while decoding linear interpolations between two random latent vectors: regions of the latent space in close proximity to the origin are oversampled, which restricts the usability of linear interpolations as a tool to analyse the latent space. We show that the distribution mismatch can be eliminated completely by a proper choice of the latent probability distribution or by using non-linear interpolations. We prove that there is a trade-off between the interpolation being linear and the latent distribution having even the most basic properties required for stable training, such as a finite mean. We use the multidimensional Cauchy distribution as an example of the prior distribution, and also provide a general method of creating non-linear interpolations that is easily applicable to a large family of commonly used latent distributions.

1 INTRODUCTION

Generative latent variable models have grown to be a very popular research topic, with Variational Auto-Encoders (VAEs) (Kingma & Welling, 2013) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) gaining a lot of interest in the last few years. VAEs use a stochastic encoder network to embed input data in a typically lower-dimensional space, using a conditional probability distribution $p(z|x)$ over possible latent space codes $z \in \mathbb{R}^D$. A stochastic decoder network is then used to reconstruct the original sample. GANs, on the other hand, use a generator network that creates data samples from noise $z \sim p(z)$, where $p(z)$ is a fixed prior distribution, and jointly train a discriminator network to distinguish between real and generated data. Both of these model families require a probability distribution to be defined on the latent space. The most popular variants are the multidimensional normal distribution and the uniform distribution on the zero-centred hypercube. Given a trained model, studying the structure of the latent space is a common way to measure generator capabilities.

1.1 MOTIVATION BEHIND INTERPOLATIONS

There are various methods used to analyse the latent space. Locally, one can sample and decode points in a close neighbourhood of a given latent vector to investigate a small region in the space. On the other hand, global methods are designed to capture long-distance relationships between points in the space, e.g. latent arithmetic, latent direction analysis, and interpolations (see e.g. Mikolov et al. (2013); Kilcher et al. (2017); Radford et al. (2015); White (2016); Agustsson et al. (2017)). The main advantage of using interpolations is the interpretability that comes with dealing with one-dimensional curves instead of the high-dimensional Euclidean space. For example, if the model has managed to find a meaningful representation, one would expect the latent space to be organised in a way that reflects the internal structure of the training dataset. In that case, decoding an interpolation will show a gradual transformation of one endpoint into the other. Conversely, if the model memorises the data, the latent space might consist of regions corresponding to particular training examples, divided by boundaries with unnatural, abrupt changes in generated data (Arvanitidis et al., 2017).
We need to note that this notion of "meaningful representation" is not enforced by the training objective. However, it does not contradict the objective either, which makes it necessary to use additional tools to evaluate whether the learned manifold is coherently structured and equipped with the desirable qualities. What distinguishes interpolations from other low-dimensional methods is the shortest path property. In the absence of any additional knowledge about the latent space, it feels natural to use the Euclidean metric, in which case the shortest path between two points is a segment. This yields what is probably the most popular variant, the linear interpolation, formally defined as $f^L(x_1, x_2, \lambda) = (1-\lambda)x_1 + \lambda x_2$ for $\lambda \in [0, 1]$, where $x_1, x_2$ are the endpoints. Other definitions of the shortest path might yield different interpolations; we will study some of them later on.

While traversing the latent space along the shortest path between two points, a well-trained model should transform the samples in a sensible way. For example, if the modelled data has a natural hierarchy, we would expect the interpolation to reflect it, i.e. an image of a truck should not arise on a path between images of a cat and a dog. Also, if the data can be described with a set of features, then an interpolation should maintain any features shared by the endpoints along the path. For example, consider a dataset of images of human faces, with features such as wearing sunglasses, having a long beard, etc. Again, this is not enforced by the training objective. If one desires such a property, it is necessary to somehow include information about the trained manifold in the interpolation scheme. There has been some work on equipping the latent space with a stochastic Riemannian metric (Arvanitidis et al., 2017) that additionally depends on the generator function. The role of the shortest paths is fulfilled by the geodesics, and the metric is defined precisely to enforce some of the properties mentioned above. This approach is somewhat complementary to the one we are concerned with: instead of analysing the latent space using simple tools, we would need to find a more sophisticated metric that describes the latent space comprehensively, and then analyse the metric itself. If our goal were solely the quality of generated interpolation samples, the aforementioned approach would be preferable. However, in this work we are concerned with evaluating the properties directly connected with the model's objective. With that in mind, we criticise the broad use of linear interpolations in this particular context. In this work we shall theoretically prove that linear interpolations are an incorrect tool for the stated task, and propose a simple, suitable interpolation variant.

1.2 THE DISTRIBUTION MISMATCH

While considered useful, the linear interpolation used in conjunction with the most popular latent distributions results in a distribution mismatch (also defined in Agustsson et al. (2017); Kilcher et al. (2017)). That is, if we fix the $\lambda$ coefficient and interpolate linearly between two endpoints sampled from the latent space distribution, the probability distribution of the resulting vectors will differ significantly from the latent distribution.
This can be partially explained by the well-known fact that in high dimensions the norms of vectors drawn from the latent distribution are concentrated around a certain value. As a consequence, the midpoints of sampled pairs of latent vectors will have, on average, a significantly smaller norm. Thus, the linear interpolation oversamples regions in close proximity to the origin of the latent space. A thorough analysis of this phenomenon will be conducted in section 2.1.

Such behaviour raises questions about the applicability of the linear interpolation to studying the latent space. Indeed, changing the latent distribution after the model has been trained may have unexpected consequences. In Kilcher et al. (2017), experiments conducted using a DCGAN model (Radford et al., 2015) on the CelebA dataset (Liu et al., 2015) showed flawed data generation near the latent space origin. Other works concerning the traversal of the latent space do not mention this effect, e.g. Agustsson et al. (2017). We recreated this experiment and concluded that it might be caused by stopping the training process too early (see appendix C, figure 6, for a visualisation). This may explain the apparent disagreement in the literature. Nevertheless, with the midpoint decoding either to a median face or to a non-sensible sample, the interpolation is not informative: we would like to see a smooth change of features, not a transition through the same homogeneous region. The solution is either to change the latent distribution so that the linear interpolation does not cause a distribution mismatch, or to redefine the shortest path property. A simple, well-known compromise is to use spherical interpolations (Shoemake, 1985; White, 2016). As the latent distribution is concentrated around a sphere, replacing segments with arcs causes a relatively small distribution mismatch (see section 3.2). Nonetheless, reducing the consequences of the distribution mismatch is still a popular research topic (Agustsson et al., 2017; Kilcher et al., 2017; Arvanitidis et al., 2017).

1.3 MAIN CONTRIBUTIONS

In section 2.1 we show that if the linear interpolation does not change the latent probability distribution, then that distribution must be trivial or "pathological" (i.e. with an undefined expected value). Then, in section 2.2, we give an example of such an invariant distribution, namely the Cauchy distribution, thus proving its existence. We also discuss the negative consequences of choosing a heavy-tailed probability distribution as the latent prior. In section 3 we relax the Euclidean shortest path property of interpolations and investigate non-linear interpolations that do not cause the latent distribution mismatch. We describe a general framework for creating such interpolations and give two concrete examples in sections 3.4 and 3.5. We find these interpolations appropriate for evaluating the properties induced by the model's objective, in contrast to linear interpolations. The experiments conducted using the DCGAN model on the CelebA dataset are presented solely to illustrate the problem, not to study the DCGAN itself, theoretically or empirically.

2 LATENT DISTRIBUTIONS

In this section we tackle the problem of distribution mismatch by selecting a proper latent distribution. Let us assume that we want to train a generative model which has a $D$-dimensional latent space and a fixed latent probability distribution, defined by a random variable $\mathbf{Z}$. We denote by $X \sim \mathcal{X}$ that the random variable $X$ has distribution $\mathcal{X}$.
$X_n \simeq \mathcal{X}$ represents the fact that the sequence of random variables $\{X_n\}_{n\in\mathbb{N}}$ converges weakly to a random variable with distribution $\mathcal{X}$ as $n$ tends to infinity. By $X_n \simeq \mathcal{X}_n$ we mean that $\lim_{n\to\infty}\sup_{x\in\mathbb{R}}\big|\mathrm{CDF}_{X_n}(x) - \mathrm{CDF}_{\mathcal{X}_n}(x)\big| = 0$, where $\mathrm{CDF}_X$ denotes the cumulative distribution function of $X$. The index $n$ will usually be omitted for readability. In other words, by $X \simeq \mathcal{X}$ we mean, informally, that $X$ has a distribution similar to $\mathcal{X}$.

2.1 LINEAR INTERPOLATION INVARIANCE PROPERTY

Property 2.1 (Linear Interpolation Invariance). If $\mathbf{Z}$ defines a distribution on the $D$-dimensional latent space, $\mathbf{Z}^{(1)}$ and $\mathbf{Z}^{(2)}$ are independent and distributed identically to $\mathbf{Z}$, and for every $\lambda \in [0,1]$ the random variable $f^L(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda) := (1-\lambda)\mathbf{Z}^{(1)} + \lambda\mathbf{Z}^{(2)}$ is distributed identically to $\mathbf{Z}$, then we will say that $\mathbf{Z}$ has the linear interpolation invariance property, or that linear interpolation does not change the distribution of $\mathbf{Z}$.

The most commonly used latent probability distributions $\mathbf{Z}$ are products of $D$ independent random variables; that is, $\mathbf{Z} = (Z_1, Z_2, \ldots, Z_D)$, where $Z_1, Z_2, \ldots, Z_D$ are independent marginals distributed identically to $Z$. If the norms of $\mathbf{Z}$ concentrate around a certain value, then the latent distribution resembles sampling from a zero-centred sphere, and the linear interpolation oversamples regions in the proximity of the origin of the latent space. As a consequence, $\mathbf{Z}$ does not have the linear interpolation invariance property. The following observation sheds light on this problem. Let $\mathcal{N}(\mu, \sigma^2)$ denote the normal distribution with mean $\mu$ and variance $\sigma^2$.

Observation 2.1. Let us assume that $Z^2$ has finite mean $\mu$ and finite variance $\sigma^2$. If $\mu > 0$, then $\|\mathbf{Z}\| \simeq \mathcal{N}\big(\sqrt{D\mu}, \frac{\sigma^2}{4\mu}\big)$ as $D \to \infty$. If $\mu = 0$, then $\|\mathbf{Z}\| = 0$ almost everywhere.

The proof of this and all further observations is presented in appendix B. For example, if $Z \sim \mathcal{N}(0, 1)$, then $\mathbf{Z}$ is distributed according to the $D$-dimensional normal distribution with mean 0 and identity covariance matrix $\mathbf{I}$. $Z^2$ has moments $\mu = 1$, $\sigma^2 = 2$, thus $\|\mathbf{Z}\| \simeq \mathcal{N}\big(\sqrt{D}, \frac{1}{2}\big)$. (In this case $\|\mathbf{Z}\|$ is distributed according to the chi distribution, i.e. the square root of the chi-squared distribution.) The second example is $Z \sim U(-1, 1)$, where $U(a, b)$ is the uniform distribution on the interval $[a, b]$, so that $\mathbf{Z}$ is distributed uniformly on the hypercube $[-1, 1]^D$. In that case, $Z^2$ has moments $\mu = \frac{1}{3}$, $\sigma^2 = \frac{4}{45}$, thus $\|\mathbf{Z}\| \simeq \mathcal{N}\big(\sqrt{\frac{D}{3}}, \frac{1}{15}\big)$.

It is worth noting that the variance of the approximated probability distribution of $\|\mathbf{Z}\|$, the thickness of the sphere, does not change as $D$ tends to infinity; only the radius of the sphere is affected. On the other hand, if the latent distribution is normalised (divided by the expected value of $\|\mathbf{Z}\|$), then the distribution concentrates around the unit sphere (not necessarily uniformly), and we observe the so-called soap bubble phenomenon (Ferenc, 2017).

One might think that the factorisation of the latent probability distribution is the main reason why the linear interpolation changes the distribution. Unfortunately, this is not the case. Let $\bar{\mathbf{Z}} := \frac{1}{2}(\mathbf{Z}^{(1)} + \mathbf{Z}^{(2)})$, where $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ are two independent samples from $\mathbf{Z}$; that is, $\bar{\mathbf{Z}}$ is the distribution of the middle points of a linear interpolation between two vectors drawn independently from $\mathbf{Z}$.

Observation 2.2. If $\mathbf{Z}$ has a finite mean, and $\bar{\mathbf{Z}}$ is distributed identically to $\mathbf{Z}$, then $\mathbf{Z}$ must be concentrated at a single point.
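The effect behind this observation is easy to see numerically; the following sketch (our own) shows both the soap bubble effect and the shrinkage of linear midpoints for a Gaussian latent:

```python
import numpy as np

# For Z ~ N(0, I) in D = 100 dimensions, norms concentrate near sqrt(D) = 10,
# while midpoints (Z1 + Z2) / 2 ~ N(0, I/2) concentrate near sqrt(D/2) ~ 7.07.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((100_000, 100))
z2 = rng.standard_normal((100_000, 100))
print(np.linalg.norm(z1, axis=1).mean())             # ~10.0
print(np.linalg.norm((z1 + z2) / 2, axis=1).mean())  # ~7.07
```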
If a probability distribution is not heavy-tailed, then its tails are bounded by those of the exponential distribution, which in turn means that it has a finite mean. Therefore, all distributions with an undefined expected value must be heavy-tailed. We will refer to this later on, as heavy tails may have a strongly negative impact on the training procedure. There have been attempts to find $\mathbf{Z}$, with a finite mean, such that $\bar{\mathbf{Z}}$ is at least similar to $\mathbf{Z}$. Kilcher et al. (2017) managed to reduce the distribution mismatch by defining the latent distribution as
$$V \sim U(S^{D-1}), \quad r \sim \Gamma\big(\tfrac{1}{2}, \theta\big), \quad \theta > 0, \quad \mathbf{Z} = \sqrt{r}\,V,$$
where $U(S^{D-1})$ is the uniform distribution on the unit sphere and $\Gamma(\frac{1}{2}, \theta)$ is the gamma distribution. We extend this idea by using a distribution that has no finite mean, namely the Cauchy distribution.

2.2 THE CAUCHY DISTRIBUTION

The standard Cauchy distribution is denoted by $C(0, 1)$, and its density function is $1/\big(\pi(1 + x^2)\big)$. The most important property of the Cauchy distribution is the following: if $C^{(1)}, \ldots, C^{(n)}$ are independent samples from the standard Cauchy distribution, and $\lambda_1, \ldots, \lambda_n \in [0, 1]$ with $\lambda_1 + \ldots + \lambda_n = 1$, then $\lambda_1 C^{(1)} + \ldots + \lambda_n C^{(n)}$ is also distributed according to the standard Cauchy distribution. In the case of $n = 2$ this means that the Cauchy distribution satisfies the distribution matching property. On the other hand, as a consequence of Observation 2.2, the Cauchy distribution cannot have a finite mean. In fact, all of its moments of order greater than or equal to one are undefined. See Siegrist (2017) for further details.

There are two ways of using the Cauchy distribution in high-dimensional spaces while retaining the distribution matching property. The multidimensional Cauchy distribution is defined as a product of independent standard Cauchy distributions; the linear interpolation invariance property can then be proved simply by applying the above formula coordinate-wise. For vectors drawn from the multidimensional Cauchy distribution we may expect that some of the coordinates will be sufficiently larger, in absolute value, than the others (Hansen et al., 2006), thus making the latent distribution similar to coordinate-wise sampling. In contrast, the multivariate Cauchy distribution comes with the isotropy property, at the cost of the canonical directions becoming statistically dependent. There are multiple ways of defining it, and further analysis is out of the scope of this paper. We tested both variants as latent distributions, with similar results. From now on, we shall concentrate on the non-isotropic Cauchy distribution.

The Cauchy distribution is a member of the family of stable distributions and has previously been used to model heavy-tailed data (Nolan, 2018). However, to the best of our knowledge, the Cauchy distribution has never been used as the latent distribution in generative models. Figure 1 presents decoded linear interpolations between random latent vectors using a DCGAN model trained on the CelebA dataset, for the Cauchy distribution and for the distribution from Kilcher et al. (2017). It should be noted that if $D$ is large enough, the distribution of the norms of vectors sampled from the $D$-dimensional Cauchy distribution has low density near zero – similarly to the normal and uniform distributions – but linear interpolations do not oversample this part of the latent space, due to the heavy-tailed nature of the Cauchy distribution. A comparison of the distributions of norms is given in figure 2.

The distribution-interpolation trade-off states that if the probability distribution has the linear interpolation invariance property, then it must be trivial or heavy-tailed.
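This invariance is straightforward to verify empirically; a short sketch of ours compares midpoints of Cauchy samples against the standard Cauchy CDF:

```python
import numpy as np
from scipy.stats import cauchy, kstest

# Midpoints of two independent standard Cauchy samples are again standard
# Cauchy (stability of the Cauchy family); a large KS p-value confirms it.
c1 = cauchy.rvs(size=100_000, random_state=0)
c2 = cauchy.rvs(size=100_000, random_state=1)
print(kstest(0.5 * (c1 + c2), 'cauchy'))
```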
1. What is the main contribution of the paper regarding implicit generative models? 2. What are the strengths and weaknesses of the paper's technical analysis? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. Do you have any questions or concerns about the paper's methodology, results, or conclusions? 5. Are there any minor comments or suggestions for improving the paper?
Review
Review The authors study the problem of when the linear interpolant between two random variables follows the same distribution. This is related to the prior distribution of an implicit generative model. In the paper, the authors show that the Cauchy distribution has such a property, which, however, due to its heavy tails, is not particularly useful. In addition, they propose a non-linear interpolation that naturally has this property. Technically, the paper is in my opinion solid. Also, the paper is reasonably written, but I think it needs improvements (see comments). Comments: #1) In my opinion the motivation is not very clear and should be improved. The paper mentions that the goal of shortest-path interpolation is to get smooth transformations. So, in principle, I am really skeptical when the linear interpolant is utilized as the shortest path. Even then, what is the actual benefit of having the property that the linear interpolants follow the same distribution as the prior? How is this related to smoother transformations? What I understand is that, if we interpolate between several random samples, we will get fewer samples near the origin, and additionally, these samples will follow the prior. But how does this induce smoothness in the overall transformation? I think this should be explained properly in the text, i.e. why it is interesting to solve the proposed problem. #2) From Observation 2.2 we should realize that the distribution matching property holds only if the distribution has an infinite (undefined) mean? I think that this is implicitly mentioned in Section 2.2, paragraph 1, but I believe that it should be explicitly stated. #3) Fig. 1 does not show anything interesting, and if it does, this is not explained. In Fig. 2 I think that interpolations between the same images should be provided, so as to have a direct comparison. Also, in Fig. 3 the norm of Z can be shown in order to make clear that the Cauchy distribution has the desired property. #4) Section 2.2, paragraph 6, first sentence. Here it is stated that the distribution "must be trivial or heavy-tailed". Does this refer only to the Cauchy distribution? Since earlier the condition was the infinite mean. How are these related? This needs clarification in the text. #5) In Figure 4, I believe that the norms of the interpolants should be presented as well, so as to show whether the desired property holds. Also in Figure 5, what should we see? What are the improvements when using the proposed non-linear interpolation? Minor comments: #1) Section 1.2, paragraph 2. For each trained model the latent space usually has a different structure, e.g. different untrained regions. So I believe that interpolation is not the proper way to compare different models. #2) Section 1.3, paragraph 1: in my opinion the term "pathological" should be explained precisely here, so that it is clear to the reader what to expect. #3) Section 2.2, paragraph 2. Does the coordinate-wise sampling imply that some Z_i are near zero and some others significantly larger? In general, I like the presented analysis. However, I do not fully understand the motivation. I think that choosing the shortest path guarantees smooth transformations. I do not see why the distribution matching property provides smoother transformations. To my understanding, this is simply a way to generate fewer samples near the origin, but this does not directly mean smoother transformations of the generated images. I believe that the motivation and the actual implications of the discussed property have to be explained better.
ICLR
Title Distribution-Interpolation Trade off in Generative Models

Abstract We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena arising while decoding linear interpolations between two random latent vectors – regions of latent space in close proximity to the origin of the space are oversampled, which restricts the usability of linear interpolations as a tool to analyse the latent space. We show that the distribution mismatch can be eliminated completely by a proper choice of the latent probability distribution or using non-linear interpolations. We prove that there is a trade off between the interpolation being linear, and the latent distribution having even the most basic properties required for stable training, such as finite mean. We use the multidimensional Cauchy distribution as an example of the prior distribution, and also provide a general method of creating non-linear interpolations, that is easily applicable to a large family of commonly used latent distributions.

1 INTRODUCTION

Generative latent variable models have grown to be a very popular research topic, with Variational Auto-Encoders (VAEs) (Kingma & Welling, 2013) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) gaining a lot of interest in the last few years. VAEs use a stochastic encoder network to embed input data in a typically lower dimensional space, using a conditional probability distribution p(z|x) over possible latent space codes z ∈ R^D. A stochastic decoder network is then used to reconstruct the original sample. GANs, on the other hand, use a generator network that creates data samples from noise z ∼ p(z), where p(z) is a fixed prior distribution, and train a discriminator network jointly to distinguish between real and generated data. Both of these model families require a probability distribution to be defined on the latent space. The most popular variants are the multidimensional normal distribution and the uniform distribution on the zero-centred hypercube. Given a trained model, studying the structure of the latent space is a common way to measure generator capabilities.

1.1 MOTIVATION BEHIND INTERPOLATIONS

There are various methods used to analyse the latent space. Locally, one can sample and decode points in a close neighbourhood of a given latent vector to investigate a small region in the space. On the other hand, global methods are designed to capture long-distance relationships between points in the space, e.g. latent arithmetics, latent directions analysis, and interpolations (see e.g. Mikolov et al. (2013); Kilcher et al. (2017); Radford et al. (2015); White (2016); Agustsson et al. (2017)). The main advantage of using interpolations is the interpretability that comes with dealing with one-dimensional curves, instead of the high-dimensional Euclidean space. For example, if the model has managed to find a meaningful representation, one would expect the latent space to be organised in a way that reflects the internal structure of the training dataset. In that case, decoding an interpolation will show a gradual transformation of one endpoint into the other. Contrarily, if the model memorises the data, the latent space might consist of regions corresponding to particular training examples, divided by boundaries with unnatural, abrupt changes in generated data (Arvanitidis et al., 2017).
∗These two authors contributed equally. This work was supported by National Science Centre, Poland (grant no. 2015/19/B/ST6/01819).

We need to note that this notion of "meaningful representation" is not enforced by the training objective. However, it does not contradict the objective, making it necessary to use additional tools to evaluate whether the learned manifold is coherently structured and equipped with desirable qualities. What distinguishes interpolations from other low-dimensional methods is the shortest path property. In the absence of any additional knowledge about the latent space, it feels natural to use the Euclidean metric. In that case, the shortest path between two points is defined as a segment. This gives the probably most popular variant, the linear interpolation, formally defined as f^L(x1, x2, λ) = (1−λ)x1 + λx2, for λ ∈ [0, 1], where x1, x2 are the endpoints. Other definitions of the shortest path might yield different interpolations; we will study some of them later on. While traversing the latent space along the shortest path between two points, a well-trained model should transform the samples in a sensible way. For example, if the modelled data has a natural hierarchy, we would expect the interpolation to reflect it, i.e. an image of a truck should not arise on a path between images of a cat and a dog. Also, if the data can be described with a set of features, then an interpolation should maintain any features shared by the endpoints along the path. For example, consider a dataset of images of human faces, with features such as wearing sunglasses, having a long beard, etc. Again, this is not enforced by the training objective. If one desires such a property, it is necessary to somehow include the information about the trained manifold in the interpolation scheme. There has been some work on equipping the latent space with a stochastic Riemannian metric (Arvanitidis et al., 2017) that additionally depends on the generator function. The role of the shortest paths is fulfilled by the geodesics, and the metric is defined precisely to enforce some of the properties mentioned above. This approach is somewhat complementary to the one we are concerned with – instead of analysing the latent space using simple tools, we would need to find a more sophisticated metric that describes the latent space comprehensively, and then analyse the metric itself. If our goal were solely the quality of generated interpolation samples, the aforementioned approach would be preferable. However, in this work we are concerned with evaluating the properties directly connected with the model's objective. With that in mind, we criticise the broad use of linear interpolations in this particular context. In this work we shall theoretically prove that linear interpolations are an incorrect tool for the stated task, and propose a simple, suitable interpolation variant.

1.2 THE DISTRIBUTION MISMATCH

While considered useful, the linear interpolation used in conjunction with the most popular latent distributions results in a distribution mismatch (also defined in Agustsson et al. (2017); Kilcher et al. (2017)). That is, if we fix the λ coefficient and interpolate linearly between two endpoints sampled from the latent space distribution, the probability distribution of the resulting vectors will differ significantly from the latent distribution.
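This mismatch is easy to reproduce numerically. The following minimal sketch (ours, not part of the paper; numpy is assumed) compares the norms of samples from a 100-dimensional standard Gaussian with the norms of their λ = 0.5 linear interpolants:

```python
# Minimal numerical illustration (ours): for a standard Gaussian prior in
# D dimensions, sample norms concentrate near sqrt(D), while the norms of
# linear-interpolation midpoints concentrate near sqrt(D / 2).
import numpy as np

rng = np.random.default_rng(0)
D, n = 100, 10_000

z1 = rng.standard_normal((n, D))
z2 = rng.standard_normal((n, D))
mid = 0.5 * (z1 + z2)  # the linear interpolant f^L(z1, z2, 0.5)

print(np.linalg.norm(z1, axis=1).mean())   # ~ sqrt(100) = 10.0
print(np.linalg.norm(mid, axis=1).mean())  # ~ sqrt(50)  ≈ 7.07
```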
This can be partially explained by the well-known fact that in high dimensions the norms of vectors drawn from the latent distribution are concentrated around a certain value. As a consequence, the midpoints of sampled pairs of latent vectors will have, on average, a significantly smaller norm. Thus, the linear interpolation oversamples regions in close proximity of the origin of the latent space. A thorough analysis of this phenomenon will be conducted in section 2.1. Such behaviour raises questions about the applicability of the linear interpolation to study the latent space. Indeed, changing the latent distribution after the model was trained may have unexpected consequences. In Kilcher et al. (2017), experiments conducted using a DCGAN model (Radford et al., 2015) on the celebA dataset (Liu et al., 2015) showed flawed data generation near the latent space origin. Other works concerning the traversal of latent space do not mention this effect, e.g. Agustsson et al. (2017). We recreated this experiment, and concluded that it might be caused by stopping the training process too early (see Appendix C figure 6 for a visualisation). This may explain the apparent disagreement in the literature. Nevertheless, with either a midpoint decoding to a median face, or a non-sensible sample, the interpolation is not informative – we would like to see a smooth change of features, and not a transition through the same, homogeneous region. The solution is either to change the latent distribution so that the linear interpolation will not cause a distribution mismatch, or to redefine the shortest path property. A simple, well-known compromise is to use spherical interpolations (Shoemake, 1985; White, 2016). As the latent distribution is concentrated around a sphere, replacing segments with arcs causes a relatively small distribution mismatch (see section 3.2). Nonetheless, reducing the consequences of the distribution mismatch is still a popular research topic (Agustsson et al., 2017; Kilcher et al., 2017; Arvanitidis et al., 2017).

1.3 MAIN CONTRIBUTIONS

In section 2.1 we show that if the linear interpolation does not change the latent probability distribution, then it must be trivial or "pathological" (with undefined expected value). Then, in section 2.2, we give an example of such an invariant distribution, namely the Cauchy distribution, thus proving its existence. We also discuss the negative consequences of choosing a heavy-tailed probability distribution as the latent prior. In section 3 we relax the Euclidean shortest path property of interpolations, and investigate non-linear interpolations that do not cause the latent distribution mismatch. We describe a general framework for creating such interpolations, and give two concrete examples in sections 3.4 and 3.5. We find these interpolations to be appropriate for evaluating the properties induced by the model's objective, in contrast to the linear interpolations. The experiments conducted using the DCGAN model on the CelebA dataset are presented solely to illustrate the problem, not to study the DCGAN itself, theoretically or empirically.

2 LATENT DISTRIBUTIONS

In this section we will tackle the problem of distribution mismatch by selecting a proper latent distribution. Let us assume that we want to train a generative model which has a D-dimensional latent space and a fixed latent probability distribution, defined by a random variable Z. We denote by X ∼ 𝒳 that the random variable X has distribution 𝒳.
X_n ⇝ 𝒳 represents the fact that the sequence of random variables {X_n}_{n∈N} converges weakly to a random variable with distribution 𝒳 as n tends to infinity. By X_n ≈ 𝒳_n we mean that lim_{n→∞} sup_{x∈R} |CDF_{X_n}(x) − CDF_{𝒳_n}(x)| = 0, where CDF_X denotes the cumulative distribution function of X. The index n will usually be omitted for readability. In other words, by X ≈ 𝒳 we mean, informally, that X has distribution similar to 𝒳.

2.1 LINEAR INTERPOLATION INVARIANCE PROPERTY

Property 2.1 (Linear Interpolation Invariance). If Z defines a distribution on the D-dimensional latent space, Z^(1) and Z^(2) are independent and distributed identically to Z, and for every λ ∈ [0, 1] the random variable f^L(Z^(1), Z^(2), λ) := (1−λ)Z^(1) + λZ^(2) is distributed identically to Z, then we will say that Z has the linear interpolation invariance property, or that linear interpolation does not change the distribution of Z.

The most commonly used latent probability distributions Z are products of D independent random variables. That is, Z = (Z_1, Z_2, ..., Z_D), where Z_1, Z_2, ..., Z_D are the independent marginals distributed identically to Z. If the norms of Z concentrate around a certain value, then the latent distribution resembles sampling from a zero-centred sphere and the linear interpolation oversamples regions in the proximity of the origin of the latent space. As a consequence, Z does not have the linear interpolation invariance property. The following observation will shed light upon this problem. Let N(µ, σ²) denote the normal distribution with mean µ and variance σ².

Observation 2.1. Let us assume that Z² has finite mean µ and finite variance σ². If µ > 0, then ‖Z‖ ≈ N(√(Dµ), σ²/(4µ)) as D → ∞. If µ = 0, then ‖Z‖ = 0 almost everywhere.

The proof of this and all further observations is presented in appendix B. For example, if Z ∼ N(0, 1), then Z is distributed according to the D-dimensional normal distribution with mean 0 and identity covariance matrix I. Z² has moments µ = 1, σ² = 2, thus ‖Z‖ ≈ N(√D, 1/2). (In this case ‖Z‖ is distributed according to the chi distribution, equal to the square root of the chi-squared distribution.) The second example is Z ∼ U(−1, 1), where U(a, b) is the uniform distribution on the interval [a, b], and Z is distributed uniformly on the hypercube [−1, 1]^D. In that case, Z² has moments µ = 1/3, σ² = 4/45, thus ‖Z‖ ≈ N(√(D/3), 1/15). It is worth noting that the variance of the approximated probability distribution of ‖Z‖, the thickness of the sphere, does not change as D tends to infinity – only the radius of the sphere is affected. On the other hand, if the latent distribution is normalised (divided by the expected value of ‖Z‖), then the distribution concentrates around the unit sphere (not necessarily uniformly), and we observe the so-called soap bubble phenomenon (Ferenc, 2017). One might think that the factorisation of the latent probability distribution is the main reason why the linear interpolation changes the distribution. Unfortunately, this is not the case. Let Z̄ := ½(Z^(1) + Z^(2)), where Z^(1), Z^(2) are two independent samples from Z. Therefore, Z̄ is the distribution of the middle points of a linear interpolation between two vectors drawn independently from Z.

Observation 2.2. If Z has a finite mean, and Z̄ is distributed identically to Z, then Z must be concentrated at a single point.

If a probability distribution is not heavy-tailed, then its tails are bounded by the exponential distribution, which in turn means that it has a finite mean.
Therefore, all distributions having undefined expected value must be heavy-tailed. We will refer to this later on, as the heavy tails may have a strong negative impact on the training procedure. There have been attempts to find Z with a finite mean such that Z̄ is at least similar to Z. Kilcher et al. (2017) managed to reduce the distribution mismatch by defining the latent distribution as

V ∼ U(S^(D−1)), r ∼ Γ(1/2, θ), θ > 0, Z = √r · V,

where U(S^(D−1)) is the uniform distribution on the unit sphere, and Γ(1/2, θ) is the gamma distribution. We extend this idea by using a distribution that has no finite mean, namely the Cauchy distribution.

2.2 THE CAUCHY DISTRIBUTION

The standard Cauchy distribution is denoted by C(0, 1), and its density function is defined as 1/(π(1 + x²)). The most important property of the Cauchy distribution is the fact that if C^(1), ..., C^(n) are independent samples from the standard Cauchy distribution, and λ_1, ..., λ_n ∈ [0, 1] with λ_1 + ... + λ_n = 1, then λ_1·C^(1) + ... + λ_n·C^(n) is also distributed according to the standard Cauchy distribution. In the case of n = 2 this means that the Cauchy distribution satisfies the distribution matching property. On the other hand, as a consequence of Observation 2.2, the Cauchy distribution cannot have a finite mean. In fact, all of its moments of order greater than or equal to one are undefined. See Siegrist (2017) for further details. There are two ways of using the Cauchy distribution in high-dimensional spaces while retaining the distribution matching property. The multidimensional Cauchy distribution is defined as a product of independent standard Cauchy distributions. Then, the linear interpolation invariance property can be simply proved by applying the above formulas coordinate-wise. In the case of vectors drawn from the multidimensional Cauchy distribution we may expect that some of the coordinates will be significantly larger, by absolute value, than the others (Hansen et al., 2006), thus making the latent distribution similar to coordinate-wise sampling. In contrast, the multivariate Cauchy distribution comes with the isotropy property at the cost of the canonical directions becoming statistically dependent. There are multiple ways of defining it, and further analysis is out of the scope of this paper. We tested both variants as latent distributions with similar results. From now on, we shall concentrate on the non-isotropic Cauchy distribution. The Cauchy distribution is a member of the family of stable distributions, and has been previously used to model heavy-tailed data (Nolan, 2018). However, to our best knowledge, the Cauchy distribution has never been used as the latent distribution in generative models. Figure 1 presents decoded linear interpolations between random latent vectors using a DCGAN model trained on the CelebA dataset for the Cauchy distribution and the distribution from Kilcher et al. (2017). It should be noted that if D is large enough, the distribution of the norms of vectors sampled from the D-dimensional Cauchy distribution has a low density near zero – similarly to the normal and uniform distributions – but linear interpolations do not oversample this part of the latent space, due to the heavy-tailed nature of the Cauchy distribution. A comparison of the distributions of norms is given in Figure 2. The distribution-interpolation trade-off states that if the probability distribution has the linear interpolation invariance property, then it must be trivial or heavy-tailed.
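As a quick empirical sanity check of the invariance property (our sketch, not part of the paper; numpy and scipy are assumed), one can compare a coordinate of the λ = 0.5 interpolant with the corresponding prior marginal using a two-sample Kolmogorov-Smirnov test:

```python
# Hedged sketch (ours): the Cauchy prior passes a distribution-matching test
# that the Gaussian prior fails, illustrating the invariance property.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
D, n, lam = 100, 20_000, 0.5

for name, sample in [("cauchy", rng.standard_cauchy),
                     ("normal", rng.standard_normal)]:
    z1, z2 = sample((n, D)), sample((n, D))
    interp = (1 - lam) * z1 + lam * z2
    # A large p-value means the interpolant marginal is indistinguishable
    # from the prior marginal; a vanishing p-value indicates a mismatch.
    p = stats.ks_2samp(interp[:, 0], z1[:, 0]).pvalue
    print(name, p)  # large for the Cauchy prior, ~0 for the Gaussian
```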
In the case of the Cauchy distribution we observed issues with generating images if the norm of the sampled latent vector was relatively large (the probability distribution of the norms is also heavy-tailed). Some of those faulty examples are presented in appendix C. This is consistent with the known fact that artificial neural networks perform poorly if their inputs are not normalised (see e.g. Glorot & Bengio (2010)). A probability distribution having the linear interpolation invariance property cannot be normalised using linear transformations. For example, the batch normalisation technique (Ioffe & Szegedy, 2015) would be highly ineffective, as the mean of a batch of samples is, in fact, a single sample from the distribution. On the other hand, using a non-linear normalisation (e.g., clipping the norm of the latent vectors in subsequent layers) is mostly equivalent to changing the latent probability distribution and making the interpolation non-linear. This idea will be explored in the next section.

3 INTERPOLATIONS

In this section we review the most popular variants of interpolations, with an emphasis on the distribution mismatch analysis. We also present two new examples of interpolations stemming from a general scheme, that perform well with the popular latent priors. An interpolation on the latent space R^D is formally defined as a function f : R^D × R^D × [0, 1] ∋ (x1, x2, λ) ↦ x ∈ R^D. For brevity, we will represent f(x1, x2, λ) by f_{x1,x2}(λ).

Property 3.1 (Distribution Matching Property). If Z defines a distribution on the D-dimensional latent space, Z^(1) and Z^(2) are independent and distributed identically to Z, and for every λ ∈ [0, 1] the random variable f_{Z^(1),Z^(2)}(λ) is distributed identically to Z, then we will say that the interpolation f has the distribution matching property in conjunction with Z (originally referred to as "distribution matched"), or that the interpolation f does not change the distribution of Z.

3.1 LINEAR INTERPOLATION

The linear interpolation is defined as f^L_{x1,x2}(λ) = (1−λ)x1 + λx2. This interpolation does not satisfy the distribution matching property for the most commonly used probability distributions, as they have a finite mean. A notable exception is the Cauchy distribution. This was discussed in detail in the previous section.

3.2 SPHERICAL LINEAR INTERPOLATION

As in Shoemake (1985); White (2016), the spherical linear interpolation is defined as

f^SL_{x1,x2}(λ) = (sin[(1−λ)Ω]/sin Ω) x1 + (sin[λΩ]/sin Ω) x2,

where Ω is the angle between vectors x1 and x2. Note that this interpolation is undefined for parallel endpoint vectors, and the definition cannot be extended without losing continuity. Also, if vectors x1 and x2 have the same length R, then the interpolation corresponds to a geodesic on the sphere of radius R. In this regard, it might be said that the spherical linear interpolation is defined as the shortest path on the sphere. The most important fact is that this interpolation can have the distribution matching property.

Observation 3.1. If Z has uniform distribution on the zero-centred sphere of radius R > 0, then f^SL does not change the distribution of Z.

3.3 NORMALISED INTERPOLATION

Introduced in Agustsson et al. (2017), the normalised interpolation is defined as

f^N_{x1,x2}(λ) = ((1−λ)x1 + λx2) / √((1−λ)² + λ²).

Observation 3.2. If Z ∼ N(0, I), then f^N does not change the distribution of Z.

If vectors x1 and x2 are orthogonal and have equal length, then the curve defined by this interpolation is equal to the one of the spherical linear interpolation.
On the other hand, the normalised interpolation behaves poorly if x1 is close to x2. In the extreme case of x1 = x2 the interpolation is not constant with respect to λ, which violates any sensible definition of the shortest path.

3.4 CAUCHY-LINEAR INTERPOLATION

Here we present a general way of designing interpolations that have the distribution matching property in conjunction with a given probability distribution Z. This method requires some additional assumptions about Z, but it works well with the most popular latent distributions. Let L be the D-dimensional latent space, Z define the probability distribution on the latent space, C be distributed according to the D-dimensional Cauchy distribution on L, K be a subset of L such that Z is concentrated on this set, and g : L → K be a bijection such that g(C) is distributed identically to Z on K. Then for x1, x2 ∈ K we define the Cauchy-linear interpolation as

f^CL_{x1,x2}(λ) = g((1−λ)g⁻¹(x1) + λg⁻¹(x2)).

In other words, for endpoints x1, x2 ∼ Z:

1. Transform x1 and x2 using g⁻¹. This step changes the latent distribution to the D-dimensional Cauchy distribution.
2. Linearly interpolate between the transformations to get x_λ = (1−λ)g⁻¹(x1) + λg⁻¹(x2) for all λ ∈ [0, 1]. The transformed latent distribution remains unchanged.
3. Transform x_λ back to the original space using g. We end up with the original latent distribution.

Observation 3.3. With the above assumptions about g, the Cauchy-linear interpolation does not change the distribution of Z.

Finding an appropriate function g might seem hard, but in practice it is usually fairly straightforward. For example, if Z is distributed identically to the product of D independent one-dimensional distributions Z, then we can define g⁻¹ as CDF⁻¹_C ∘ CDF_Z applied to every coordinate.

3.5 SPHERICAL CAUCHY-LINEAR INTERPOLATION

We might want the interpolation to have some other desired properties, for example, to behave exactly as the spherical linear interpolation whenever the endpoints have equal norm. For that purpose, we need to make additional assumptions. Let Z be isotropic, C be distributed according to the one-dimensional Cauchy distribution, and g : R → (0, +∞) be a bijection such that g(C) is distributed identically to ‖Z‖ on (0, +∞). Then we can modify the spherical linear interpolation formula to define what we call the spherical Cauchy-linear interpolation

f^SCL_{x1,x2}(λ) = ((sin[(1−λ)Ω]/sin Ω) x1/‖x1‖ + (sin[λΩ]/sin Ω) x2/‖x2‖) · g((1−λ)g⁻¹(‖x1‖) + λg⁻¹(‖x2‖)),

where Ω is the angle between vectors x1 and x2. In other words:

1. Interpolate the directions of the latent vectors using the spherical linear interpolation.
2. Interpolate the norms using the Cauchy-linear interpolation.

Observation 3.4. With the above assumptions about g, the spherical Cauchy-linear interpolation does not change the distribution of Z if the Z distribution is isotropic.

The simplest candidate is g⁻¹ = CDF⁻¹_C ∘ CDF_{‖Z‖}, but we usually need to know more about Z to check if the assumptions hold. For example, let Z be a D-dimensional normal distribution with zero mean and identity covariance matrix. Then ‖Z‖ ∼ √(χ²_D) and

CDF_{√(χ²_D)}(x) = CDF_{χ²_D}(x²) = (1/Γ(D/2)) γ(D/2, x²/2), for every x ≥ 0,

where Γ denotes the gamma function, and γ is the lower incomplete gamma function. Thus we set g⁻¹(x) = (CDF⁻¹_C ∘ CDF_{χ²_D})(x²), with g(x) = √((CDF⁻¹_{χ²_D} ∘ CDF_C)(x)).
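For the factorised Gaussian case from Sec. 3.4, the whole construction fits in a few lines. The sketch below (ours, not the authors' code; numpy and scipy are assumed) applies g⁻¹ = CDF⁻¹_C ∘ CDF_N coordinate-wise, interpolates linearly in Cauchy space, and maps back with g:

```python
# Hedged sketch (ours): Cauchy-linear interpolation for a factorised standard
# Gaussian prior Z ~ N(0, I), with g^-1 = CDF_C^-1 o CDF_N per coordinate.
import numpy as np
from scipy import stats

def cauchy_linear(x1, x2, lam):
    to_cauchy = lambda x: stats.cauchy.ppf(stats.norm.cdf(x))  # g^-1
    to_latent = lambda c: stats.norm.ppf(stats.cauchy.cdf(c))  # g
    return to_latent((1 - lam) * to_cauchy(x1) + lam * to_cauchy(x2))

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)
mid = cauchy_linear(x1, x2, 0.5)
print(np.linalg.norm(mid))              # stays near sqrt(100) = 10,
print(np.linalg.norm(0.5 * (x1 + x2)))  # unlike the linear midpoint (~7)
```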
Figure 3 shows a comparison of the Cauchy-linear and the spherical Cauchy-linear interpolations on a two-dimensional plane for pairs of vectors sampled from different probability distributions. It illustrates how these interpolations manage to keep the distributions unchanged. Figure 4 is an illustration of the distribution matching property for the Cauchy-linear interpolation. We also compare the data samples generated by the DCGAN model trained on the CelebA dataset; the results are shown in figure 5.

4 SUMMARY

We investigated the properties of multidimensional probability distributions in the context of generative models. We found out that there is a certain trade-off: it is impossible to define a latent probability distribution with a finite mean and the linear interpolation invariance property. The D-dimensional Cauchy distribution serves as an example of a latent probability distribution that remains unchanged by linear interpolation, at the cost of poor model performance, due to its heavy-tailed nature. Instead of using the Cauchy distribution as the latent distribution, we propose to use it to define non-linear interpolations that have the distribution matching property. The assumption of the shortest path being a straight line must be relaxed, but our scheme is general enough to provide a way of incorporating other desirable properties. We observe that there are three different goals when using interpolations for studying a generative model. Firstly, to check whether the training objective was fulfilled, one must use an interpolation that does not cause the distribution mismatch. This is, in our opinion, a necessary step before performing any further evaluation of the trained model. Secondly, if one is interested in the manifold convexity, linear interpolations are a suitable method provided the above analysis yields positive results. Finally, to perform a complete investigation of the learned manifold one can employ methods that incorporate some information about the trained model, e.g. the approach of Arvanitidis et al. (2017) mentioned in section 1.1. We do not propose to completely abandon the use of linear interpolations, as the convexity of the learned manifold is still an interesting research topic. For instance, we have observed that generative models are capable of generating sensible images from seemingly out-of-distribution regions, e.g. the emergence of the median face mentioned in the introduction. In our opinion, this is a promising direction for future research.

A EXPERIMENTAL SETUP

All experiments were conducted using a DCGAN model (Radford et al., 2015), in which the generator network consisted of a linear layer with 8192 neurons, followed by four convolution transposition layers, each using 5 × 5 filters and strides of 2, with the number of filters in order of layers: 256, 128, 64, 3. Except for the output layer, where the tanh activation function was used, all previous layers used ReLU. The discriminator's architecture mirrored the one from the generator, with a single exception of using leaky ReLU instead of vanilla ReLU for all except the last layer. No batch normalisation was used in either network. The Adam optimiser with a learning rate of 2e−4 and momentum set to 0.5 was used. A batch size of 64 was used throughout all experiments. If not explicitly stated otherwise, the latent space dimension was set to 100. For the CelebA dataset we resized the input images to 64 × 64.

B PROOFS

Observation 2.1. Let us assume that Z² has finite mean µ and finite variance σ².
If µ > 0, then ‖Z‖ ≈ N(√(Dµ), σ²/(4µ)) as D → ∞. If µ = 0, then ‖Z‖ = 0 almost everywhere.

Proof. Recall that Z, Z_1, ..., Z_D are independent and identically distributed. Therefore Z², Z_1², ..., Z_D² are also independent and identically distributed. Z = (Z_1, ..., Z_D) and ‖Z‖² = Z_1² + ... + Z_D². Z² ≥ 0, therefore µ ≥ 0. If µ = 0, then Z² = 0 almost everywhere, hence Z = 0 almost everywhere, hence Z = 0 almost everywhere, and finally ‖Z‖ = 0 almost everywhere. From now on we will assume that µ > 0. Using the central limit theorem we know that √D((Z_1² + ... + Z_D²)/D − µ) converges in distribution to N(0, σ²) as D → ∞. The convergence of cumulative distribution functions is uniform, because the limit is continuous everywhere:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |Pr(√D((Z_1² + ... + Z_D²)/D − µ) ≤ x) − CDF_{N(0,σ²)}(x)| < ε.

D > 0, thus

Pr(√D((Z_1² + ... + Z_D²)/D − µ) ≤ x) = Pr(Z_1² + ... + Z_D² ≤ Dµ + x√D) = CDF_{‖Z‖²}(Dµ + x√D).

Additionally, CDF_{N(0,σ²)}(x) = CDF_{N(Dµ,Dσ²)}(Dµ + x√D), and now we have

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{‖Z‖²}(Dµ + x√D) − CDF_{N(Dµ,Dσ²)}(Dµ + x√D)| < ε.

Finally, the function R ∋ x ↦ Dµ + x√D ∈ R is a bijection (again, because D > 0), so we may substitute Dµ + x√D with x and the innermost statement will hold for every x ∈ R:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{‖Z‖²}(x) − CDF_{N(Dµ,Dσ²)}(x)| < ε. (1)

Before taking the square root of the normal distribution we must deal with negative values. Let N⁺(ν, τ) be defined by its cumulative distribution function:

CDF_{N⁺(ν,τ)}(x) = 0 if x < 0, CDF_{N(ν,τ)}(x) if x ≥ 0.

The idea is to take all negative values of N(ν, τ) and concentrate them at zero. Now we can modify (1):

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{‖Z‖²}(x) − CDF_{N⁺(Dµ,Dσ²)}(x)| < ε, (2)

for x ≥ 0 we simply use (1), for x < 0 the inequality simplifies to |0 − 0| < ε. Since ‖Z‖² and N⁺(Dµ, Dσ²) are non-negative, we are allowed to take the square root of these random variables. The square root is a strictly increasing function, thus for x ≥ 0 we have

CDF_{N⁺(Dµ,Dσ²)}(x²) = CDF_{√(N⁺(Dµ,Dσ²))}(x) and CDF_{‖Z‖²}(x²) = CDF_{‖Z‖}(x),

therefore we can approximate the variable ‖Z‖:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{‖Z‖}(x) − CDF_{√(N⁺(Dµ,Dσ²))}(x)| < ε, (3)

for x ≥ 0 we substitute x² for x in (2), for x < 0 the inequality simplifies, again, to |0 − 0| < ε.

This paragraph is a summary of the second part of the proof. To calculate √(N⁺(Dµ, Dσ²)) we observe that, informally, in the proximity of Dµ the square root behaves approximately like scaling with the constant (2√(Dµ))⁻¹. Additionally, N(Dµ, Dσ²) has width proportional to √D, which is infinitesimally smaller than Dµ, so we expect the result to be

√(N⁺(Dµ, Dσ²)) ≈ N(√(Dµ), σ²/(4µ)).

Let us define

b_ε := CDF⁻¹_{N(0,σ²/(4µ))}(1 − ε) if ε ∈ (0, 1/2), and b_ε := 0 if ε ≥ 1/2.

Here b_ε is defined so that the probability of x drawn from N(√(Dµ), σ²/(4µ)) being at least b_ε far from the mean is equal to 2ε. Also, note that b_ε does not depend on D. For now we will assume that √(Dµ) − b_ε > 0 – this is always true for sufficiently large D, as µ > 0:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N : √(Dµ) − b_ε > 0. (4)

Now let us assume that we have a fixed ε > 0. For x ∈ [−b_ε, b_ε] we write the following inequalities

Dµ + 2x√(Dµ) ≤ (√(Dµ) + x)² ≤ Dµ + 2x√(Dµ) + b_ε²,

which are equivalent to 0 ≤ x² ≤ b_ε², thus true. Every cumulative distribution function is weakly increasing, therefore

CDF_{N(Dµ,Dσ²)}(Dµ + 2x√(Dµ)) ≤ CDF_{N(Dµ,Dσ²)}((√(Dµ) + x)²) ≤ CDF_{N(Dµ,Dσ²)}(Dµ + 2x√(Dµ) + b_ε²).

Because we assumed that (√(Dµ) + x)² > 0 for x ∈ [−b_ε, b_ε], we can replace N(Dµ, Dσ²) with N⁺(Dµ, Dσ²):

CDF_{N(Dµ,Dσ²)}(Dµ + 2x√(Dµ)) ≤ CDF_{N⁺(Dµ,Dσ²)}((√(Dµ) + x)²) ≤ CDF_{N(Dµ,Dσ²)}(Dµ + 2x√(Dµ) + b_ε²).
We transform the outer distributions using basic properties of the normal distribution. We also take the square root of the middle distribution and obtain

CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) ≤ CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x) ≤ CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x + b_ε²/(2√(Dµ))). (5)

b_ε²/(2√(Dµ)) → 0 as D → ∞ and CDF_{N(√(Dµ), σ²/(4µ))} is continuous, thus we have uniform convergence:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) − CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x + b_ε²/(2√(Dµ)))| < ε.

Using (5) we get

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈[−b_ε, b_ε] : [√(Dµ) − b_ε > 0 ⟹ |CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) − CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x)| < ε]. (6)

Now we will extend this result to all x ∈ R. For ε > 0 we have

CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) − b_ε) ≤ ε, (7)
CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + b_ε) ≥ 1 − ε. (8)

Substituting −b_ε and b_ε for x in (6), and using (7) and (8) respectively, we obtain

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N : CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) − b_ε) < 2ε, (9)
∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N : CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + b_ε) > 1 − 2ε. (10)

Cumulative distribution functions are increasing functions with values in [0, 1], thus combining (7) and (9):

∀ε>0 ∀x<−b_ε : 0 ≤ CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) ≤ ε,
∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x<−b_ε : 0 ≤ CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x) < 2ε,
∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x<−b_ε : |CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) − CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x)| < 2ε. (11)

Analogously, using (8) and (10):

∀ε>0 ∀x>b_ε : 1 ≥ CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) ≥ 1 − ε,
∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x>b_ε : 1 ≥ CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x) > 1 − 2ε,
∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x>b_ε : |CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) − CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x)| < 2ε. (12)

Thus,

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : [√(Dµ) − b_ε > 0 ⟹ |CDF_{N(√(Dµ), σ²/(4µ))}(√(Dµ) + x) − CDF_{√(N⁺(Dµ, Dσ²))}(√(Dµ) + x)| < 2ε], (13)

because for any ε > 0 we may define D_ε := max{D_ε^(1), D_ε^(2), D_ε^(3)}, where D_ε^(1), D_ε^(2), D_ε^(3) are taken from (6), (11) and (12). To simplify,

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{N(√(Dµ), σ²/(4µ))}(x) − CDF_{√(N⁺(Dµ, Dσ²))}(x)| < 2ε, (14)

because for any ε > 0 we may define D_ε := max{D_ε^(1), D_ε^(2)}, where D_ε^(1), D_ε^(2) are taken from (4) and (13), making the antecedent true. We also replaced √(Dµ) + x with x, since now the statement holds for all x ∈ R. Finally, we combine (3) and (14) using the triangle inequality:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{‖Z‖}(x) − CDF_{N(√(Dµ), σ²/(4µ))}(x)| < 3ε, (15)

because for any ε > 0 we may define D_ε := max{D_ε^(1), D_ε^(2)}, where D_ε^(1), D_ε^(2) are taken from (3) and (14). Since this is true for any positive ε, we may replace 3ε with ε:

∀ε>0 ∃D_ε>0 ∀D>D_ε, D∈N ∀x∈R : |CDF_{‖Z‖}(x) − CDF_{N(√(Dµ), σ²/(4µ))}(x)| < ε,

because for any ε > 0 we may define D_ε := D_{ε/3}^(1), where D_{ε/3}^(1) is taken from (15), substituting ε/3 for ε.

Observation 2.2. If Z has a finite mean, and Z̄ is distributed identically to Z, then Z must be concentrated at a single point.

Proof. Let Z, Z^(1), Z^(2), Z^(3), ... be an infinite sequence of independent and identically distributed random variables. Using induction on n we can show that (1/2^n)(Z^(1) + ... + Z^(2^n)) is distributed identically to Z. Indeed, for n = 1 this is one of the theorem's assumptions. To prove the inductive step let us define

A := (1/2^n)(Z^(1) + ... + Z^(2^n)), B := (1/2^n)(Z^(2^n + 1) + ... + Z^(2^(n+1))).

A and B are independent – they are defined as functions of independent variables – and, by the inductive hypothesis, distributed identically to Z. Finally, it is sufficient to observe that

(1/2^(n+1))(Z^(1) + ... + Z^(2^(n+1))) = (A + B)/2.

Z has a finite mean – let us denote it by µ. Let also N⁺ be the set of strictly positive natural numbers. By the law of large numbers the sequence {(1/n)(Z^(1) + ... + Z^(n))}_{n∈N⁺} converges in probability to µ. The same is true for any infinite subsequence, in particular for {(1/2^n)(Z^(1) + ... + Z^(2^n))}_{n∈N⁺}, but we have shown that all elements of this subsequence are distributed identically to Z, thus Z must be concentrated at µ.

Observation 3.1. If Z has uniform distribution on the zero-centred sphere of radius R > 0, then f^SL does not change the distribution of Z.

Proof. Let Z, Z^(1), Z^(2) be independent and identically distributed. Let λ ∈ [0, 1] be a fixed real number. The random variable f^SL_{Z^(1),Z^(2)}(λ) is defined almost everywhere (with the exception of parallel samples from Z^(1), Z^(2)) and is also concentrated on the zero-centred sphere of radius R (because if ‖x1‖ = ‖x2‖, then ‖f^SL_{x1,x2}(λ)‖ = ‖x1‖ = ‖x2‖). Let iso be any linear isometry of the latent space. ‖iso(x)‖ = ‖x‖, thus iso is also an isometry of the zero-centred sphere of radius R. Additionally, we have

iso(f^SL_{x1,x2}(λ)) = iso((sin[(1−λ)Ω]/sin Ω) x1 + (sin[λΩ]/sin Ω) x2) = (sin[(1−λ)Ω]/sin Ω) iso(x1) + (sin[λΩ]/sin Ω) iso(x2) = f^SL_{iso(x1),iso(x2)}(λ),

and the last equality holds because the isometry does not change the angle Ω between x1 and x2. Thus, iso(f^SL_{Z^(1),Z^(2)}(λ)) = f^SL_{iso(Z^(1)),iso(Z^(2))}(λ), and this is distributed identically to f^SL_{Z^(1),Z^(2)}(λ), because Z^(1), Z^(2), both uniform distributions, are invariant to iso. In that case, f^SL_{Z^(1),Z^(2)}(λ) is concentrated on the zero-centred sphere of radius R and invariant to all linear isometries of the latent space. The only distribution having these properties is the uniform distribution on the sphere.

Observation 3.2. If Z ∼ N(0, I), then f^N does not change the distribution of Z.

Proof. Let Z, Z^(1), Z^(2) be independent and identically distributed. Let λ ∈ [0, 1] be a fixed real number. The random variables Z^(1) and Z^(2) are both distributed according to N(0, I). Using the definition of f^N and elementary properties of the normal distribution we conclude

f^N_{Z^(1),Z^(2)}(λ) = ((1−λ)Z^(1) + λZ^(2))/√((1−λ)² + λ²) ∼ N(((1−λ)0 + λ0)/√((1−λ)² + λ²), ((1−λ)²I + λ²I)/((1−λ)² + λ²)) = N(0, I).

Observation 3.3. With the above assumptions about g, the Cauchy-linear interpolation does not change the distribution of Z.

Proof. Let Z, Z^(1), Z^(2) be independent and identically distributed. Let λ ∈ [0, 1] be a fixed real number. First observe that g⁻¹(Z^(1)) and g⁻¹(Z^(2)) are independent (because Z^(1), Z^(2) are independent) and distributed identically to C (property of g). Likewise, (1−λ)g⁻¹(Z^(1)) + λg⁻¹(Z^(2)) ∼ C (property of the Cauchy distribution). Therefore, g((1−λ)g⁻¹(Z^(1)) + λg⁻¹(Z^(2))) ∼ Z (property of g).

Observation 3.4. With the above assumptions about g, the spherical Cauchy-linear interpolation does not change the distribution of Z if the Z distribution is isotropic.

Proof. Let Z, Z^(1), Z^(2) be independent and identically distributed. Let λ ∈ [0, 1] be a fixed real number. The following statements are straightforward consequences of Z^(1), Z^(2) being isotropic (and also independent):

1. the random variables Z^(1)/‖Z^(1)‖, Z^(2)/‖Z^(2)‖, ‖Z^(1)‖, ‖Z^(2)‖ are independent,
2. ‖Z^(1)‖ and ‖Z^(2)‖ are both distributed identically to ‖Z‖,
3. Z^(1)/‖Z^(1)‖ and Z^(2)/‖Z^(2)‖ are both distributed uniformly on the sphere of radius 1.

The next two statements are consequences of Observations 3.1 and 3.3 respectively:

4. the random variable f^SCL_{Z^(1),Z^(2)}(λ)/‖f^SCL_{Z^(1),Z^(2)}(λ)‖ = (sin[(1−λ)Ω]/sin Ω) Z^(1)/‖Z^(1)‖ + (sin[λΩ]/sin Ω) Z^(2)/‖Z^(2)‖ is distributed uniformly on the unit sphere,
5. the random variable ‖f^SCL_{Z^(1),Z^(2)}(λ)‖ = g((1−λ)g⁻¹(‖Z^(1)‖) + λg⁻¹(‖Z^(2)‖)) is distributed identically to ‖Z‖.

f^SCL_{Z^(1),Z^(2)}(λ)/‖f^SCL_{Z^(1),Z^(2)}(λ)‖ and ‖f^SCL_{Z^(1),Z^(2)}(λ)‖ are independent, because they are functions of independent random variables (Ω is a function of Z^(1)/‖Z^(1)‖ and Z^(2)/‖Z^(2)‖), therefore f^SCL_{Z^(1),Z^(2)}(λ) is isotropic. Using statement 5 and the fact that two isotropic probability distributions are equal if and only if the distributions of their Euclidean norms are equal, we conclude that f^SCL_{Z^(1),Z^(2)}(λ) is distributed identically to Z.

C THE CAUCHY DISTRIBUTION – SAMPLES AND INTERPOLATIONS

D MORE CAUCHY-LINEAR AND SPHERICAL CAUCHY-LINEAR INTERPOLATIONS
1. What is the main contribution of the paper regarding latent variable models? 2. What are the strengths and weaknesses of the proposed approach to interpolation in the latent space? 3. Do you have any concerns or disagreements with the ideas presented in the paper, particularly regarding the assumptions made about the latent space? 4. How does the reviewer assess the empirical evaluation of the paper, and what conclusions can be drawn from it? 5. Are there any other relevant points or ideas discussed in the paper that were not mentioned in the review?
Review
Review == Paper overview == Given a latent variable model (deep generative model), the paper asks how we should interpolate in the latent space. The key idea is to derive a natural interpolant from the prior distribution p(z), where z is the latent variable. The idea is that the interpolation function you apply to a variable z should not change the distribution of z if the start and end points of the interpolation curve are identically distributed. Example: consider two unit-length points drawn from a standard Gaussian; then linear interpolation of these points will result in points of smaller norm and hence a different distribution. Different priors and corresponding interpolants are demonstrated and discussed. Empirical results are more of an illustrative nature. == Pros/cons == + The paper contributes a new and relevant point to an ongoing discussion on the geometry of the latent space. + The key point is well-articulated and relevant mathematical details are derived in detail along the way. - I have some concerns about the idea itself (see below); yet, while I disagree with some of the presented viewpoints, I don't think that diminishes the contribution. - The empirical evaluation hardly qualifies as such. A few image interpolations are shown, but it is unclear what conclusions can really be drawn from this. In the end, it remains unclear to me which approach to interpolation is better. == Concerns / debate == I have some concerns about the key idea of the paper (in essence, I find it overly simplistic), but I nonetheless find that the paper brings an interesting new idea to the table. 1) In section 1.1, the authors state "one would expect the latent space to be organized in a way that reflects the internal structure of the training dataset". My simple counter-question is: why? I know that this is common intuition, but I don't see anything in the cost functions of e.g. VAEs or GANs to make the statement true. Generative models, as far as I can see, only assume that the latent variables are somehow compressed versions of the data points; no assumptions on structure seem to be made. 2) Later in the same section, the authors state "In absence of any additional knowledge about the latent space, it feels natural to use the Euclidean metric". Same question: why? Again, I know that this is a common assumption, but, again, there is nothing in the models that seems to actually justify such an assumption. I agree that it would be nice to have a Euclidean latent space, but wanting one doesn't make it so. 3) In practice, we often see "holes" in the "cloud" of latent variables, that is, regions of the latent space where little data resides. I would argue that a good interpolant should not cross over a hole in the data manifold; none of the presented interpolants can satisfy this, as they only depend on the start and end points, but not on the actual distribution of the latent points. So if the data does not fit the prior or is not iid, then the proposed interpolants will most likely perform poorly. A recent arXiv paper discusses one way to deal with such holes: https://arxiv.org/abs/1806.04994
ICLR
Title Unsupervised Performance Predictor for Architecture Search

Abstract Performance predictors can directly predict the performance value of given neural architectures without training them, and are thus broadly studied to alleviate the prohibitive cost of Neural Architecture Search (NAS). However, existing performance predictors still require training a large number of architectures from scratch to get their performance labels as the training dataset, which is still computationally expensive. To solve this issue, we develop a performance predictor, called USPP, by applying the unsupervised domain adaptation technique, which can avoid costly dataset construction by using existing fully-trained architectures. Specifically, a progressive domain-invariant feature extraction method is proposed to assist in extracting domain-invariant features, given the great transferability challenge caused by the rich domain-specific features. Furthermore, a learnable representation (denoted as operation embedding) is designed to replace the fixed encoding of the operations to transfer more knowledge about operations to the target search space. In experiments, we train the predictor with the labeled architectures in NAS-Bench-101 and predict the architectures in the DARTS search space. Compared with other state-of-the-art NAS methods, the proposed USPP only costs 0.02 GPU days but finds an architecture with 97.86% accuracy on CIFAR-10 and 76.50% top-1 accuracy on ImageNet.

1 INTRODUCTION

Neural Architecture Search (NAS) (Elsken et al., 2019) aims to automatically design high-performance neural architectures and has been a popular research field of machine learning. In recent years, the architectures searched by NAS have outperformed manually-designed architectures in many fields (Howard et al., 2019; Real et al., 2019). However, NAS generally requires massive computation resources to estimate the performance of architectures obtained during the search process (Real et al., 2019; Zoph et al., 2018). In practice, this is unaffordable for most interested researchers. As a result, how to speed up the estimation of neural architectures has become a hot topic among the NAS community. The performance predictor (Wen et al., 2020) is a popular acceleration method for NAS. It can directly predict the performance of neural architectures without training them, thus greatly accelerating the NAS process. A large number of related works have been carried out because of its superiority in reducing the costs of NAS. For example, E2EPP (Sun et al., 2019) adopted a random forest (Breiman, 2001) as the regression model to effectively find promising architectures. ReNAS (Xu et al., 2021) used a simple LeNet-5 network (LeCun et al., 1998) as the regression model, and creatively employed a ranking-based loss function to train the predictor, thus improving the prediction ability of the performance predictor. Although existing performance predictors have gained great success in improving the efficiency of NAS, sufficient architectures need to be sampled from the target search space and fully trained to obtain their performance values as labels (Wen et al., 2020). The performance predictor is trained on these labeled architectures, and is then used to predict the performance of architectures. In order to ensure the prediction accuracy of the performance predictor, it is usually necessary to train at least hundreds of architectures as the dataset, which is a huge cost.
In recent years, many benchmark datasets such as NAS-Bench-101 (Ying et al., 2019), NAS-Bench-201 (Dong & Yang, 2020), and NAS-Bench-NLP (Klyuchnikov et al., 2020) have been released to promote research on NAS. There are a large number of architecture pairs (i.e., the architecture and its performance) in these datasets. As a result, we are motivated to utilize the rich architecture knowledge in these datasets to predict the architectures in the target search space (i.e., the search space in which the architectures need to be predicted). In this way, we can avoid training a large number of architectures in the target search space, thereby alleviating the expensive cost of building the dataset for performance predictors. However, the search spaces designed in the benchmark datasets are very different from real-world search spaces, so a performance predictor trained on the existing labeled architectures cannot be directly applied to the target search space. In this paper, we propose an UnSupervised domain adaptation-based Performance Predictor (USPP) using the domain adaptation technique. Different from the traditional performance predictors that need the training data and the predicted data to be in the same search space, USPP can leverage the labeled architectures in existing benchmark datasets (e.g., NAS-Bench-101 (Ying et al., 2019)) to build a powerful performance predictor for the target search space (e.g., the DARTS search space (Liu et al., 2018b)). As a result, USPP can avoid expensive data collection for the target search space. Specifically, the contributions can be summarized as follows:

• A progressive domain-invariant feature extraction method is proposed to reduce the transfer difficulty caused by the huge difference between source and target search spaces. The progressive method explicitly models the domain-specific features and gradually separates them from the domain-invariant features, thus assisting in the alignment of the source and target search spaces.

• A learnable representation for the operations in architectures, i.e., operation embedding, is designed to transfer more knowledge about operations to the target search space. Compared to the widely used fixed encoding method, the operation embedding can effectively capture the inner meaning and structural role of each operation in the source search space and apply them in the target search space to reduce the transfer difficulty.

• USPP only costs 0.02 GPU days to search for architectures in the DARTS search space because there is no need to annotate the architectures in the target search space. Furthermore, the architecture searched by USPP achieves 97.86% classification accuracy on CIFAR-10 and 76.50% classification accuracy on ImageNet, outperforming all the state-of-the-art methods compared.

2 RELATED WORK

2.1 NAS AND PERFORMANCE PREDICTORS

NAS can automatically design high-performance neural architectures and consists of a search space, a search strategy, and a performance estimation strategy (Elsken et al., 2019). Specifically, the search space defines the collection of candidate architectures. The search strategy corresponds to the employed optimization algorithms for the search, which can be mainly classified into evolutionary algorithms (Bäck et al., 1997), reinforcement learning (Kaelbling et al., 1996) and gradient descent (Liu et al., 2018b). The performance estimation strategy defines how to evaluate the architectures to obtain their performance.
During the search, NAS algorithms use the search strategy to search for architectures in the predefined search space and obtain the performance values of the searched architectures by the performance estimation strategy. No matter which search strategy is used, a lot of neural architectures need to be estimated. Because of the heavy cost of the traditional GPU-based estimation method, many acceleration methods have been proposed, such as the early stopping policy (Sun et al., 2018), proxy datasets (Sapra & Pimentel, 2020), and the weight-sharing method (Bender et al., 2018). However, the first two methods may lead to poor generalization and a low-fidelity approximation of the performance value, and the weight-sharing method may be unreliable in predicting the relative ranking among architectures (Li et al.; Yang et al.), which conflicts with the goal of finding the architecture with the highest ranking. Performance predictors are free from the aforementioned shortcomings and have received great attention in recent years. However, existing predictors have the constraint that the source search space and the target search space must be the same. The proposed USPP breaks this limitation and creatively uses the existing labeled architectures in the source search space to predict the architectures in the target search space, thus removing the reliance on potentially costly labels in the target search space.

2.2 DOMAIN ADAPTATION

Many machine learning methods rest on the major assumption that the training data and testing data are from the same distribution (Wilson & Cook, 2020). When this assumption does not hold, a network trained on the source domain will face a performance decline when tested on the target domain (Patel et al., 2015). Transfer learning (Weiss et al., 2016) can solve this problem by learning knowledge from the source domain and applying it to the target domain. Domain Adaptation (DA) (Wilson & Cook, 2020) is a subcategory of transfer learning, and its goal is to train a network that performs well on a different but related target domain using the labeled data in the source domain. The scenario in this paper is a problem of Unsupervised Domain Adaptation (UDA), where the labeled data are only available in the source domain. The mainstream method to address UDA is aligning the source and target domains by learning a domain-invariant feature representation. Specifically, features are said to be domain-invariant if the features extracted from data in both the source and target domains follow the same distribution. If a network performs well in the source domain using a domain-invariant feature representation, the network can generalize well to the target domain. Generally, the domain-invariant methods can be classified into two categories. The first category explicitly reduces the domain discrepancy to obtain domain-invariant features, using distribution discrepancy metrics such as Maximum Mean Discrepancy (MMD) (Gretton et al., 2006; 2012), correlation alignment (Sun et al., 2016), and contrastive domain discrepancy (Kang et al., 2019). Motivated by the Generative Adversarial Network (GAN) (Goodfellow et al., 2014), another category learns domain-invariant representations by adversarial training (Ganin et al., 2016), and this is also the category our method belongs to. These methods generally train a domain discriminator to distinguish the source domain from the target domain, and train a feature extractor to fool the discriminator so as to extract domain-invariant representations.
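In practice, this min-max game is often implemented with the gradient reversal layer of Ganin et al. (2016). The following sketch (ours, for illustration only; PyTorch is assumed) shows that construction:

```python
# Illustrative sketch (ours; PyTorch assumed) of the gradient reversal layer
# from Ganin et al. (2016). It is the identity on the forward pass and negates
# (and scales) gradients on the backward pass, so a discriminator head placed
# behind it trains normally while the feature extractor receives reversed
# gradients and is pushed towards domain-invariant features.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input: reversed for x, none for alpha.
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=1.0):
    return GradReverse.apply(x, alpha)
```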
Adversarial domain adaptation has achieved great success in many fields, and many works have been proposed. For example, SymNets (Zhang et al., 2019) designed symmetric classifiers to play the role of the domain discriminator. GVB (Cui et al., 2020b) constructed bridge layers on both the generator and the discriminator to reduce the overall transfer difficulty. However, most existing adversarial methods are designed for classification tasks in Computer Vision (CV) or Natural Language Processing (NLP), and cannot be directly exploited for performance prediction, which is a regression problem and deals with a completely different data type (i.e., graph data). In this paper, we apply adversarial methods to performance predictors and propose a progressive domain-invariant feature extraction method to reduce the transfer difficulty. As far as we know, this is the first work to alleviate the high cost of performance predictors by adversarial domain adaptation techniques.

3 APPROACH

3.1 FORMULATION

In the scenario of this paper, the architecture dataset in the source domain is denoted as $\mathcal{D}_S = \{(x_i^S, y_i^S)\}_{i=1}^{N_S}$ with $N_S$ labeled architectures, where $x_i^S$ and $y_i^S$ denote the $i$-th architecture and its performance value, respectively. The dataset in the target domain is denoted as $\mathcal{D}_T = \{x_i^T\}_{i=1}^{N_T}$ with $N_T$ unlabeled architectures. USPP aims to train a performance predictor on $\mathcal{D}_S$ and $\mathcal{D}_T$ that achieves promising results in predicting the performance of architectures in the target domain. The overall framework of the proposed USPP is shown in Fig. 1.

Specifically, to build the predictor, each neural architecture must first be represented in a form that can be fed into the predictor. Generally speaking, most neural architectures can be regarded as Directed Acyclic Graphs (DAGs). A node in the DAG corresponds to one specific operation (e.g., convolution 3×3), and an edge stands for the connection between operations. Consequently, each architecture can be represented by an adjacency matrix $A$ and an operation list $L_{op}$, where $A$ encodes the topological connections between nodes and $L_{op}$ records the operations of the nodes. Because the most commonly used encoding method for $L_{op}$ (i.e., the one-hot method) ignores the intrinsic relationship between different operations, we adopt a learnable operation embedding $E$ to map $L_{op}$ to a continuous space, which will be elaborated in Sec. 3.4. Then, we use a Graph Convolution Network (GCN), denoted $G$, as the first component of USPP to map every architecture into the feature space $\mathcal{Z}$. Specifically, the GCN is a powerful technique for learning useful features from graph data, which is a kind of non-Euclidean data, and has achieved success in processing neural architecture data. After the latent representation of an architecture is learned by $G$, a regressor $R$, the second component of USPP, is exploited to predict the performance of the architecture. To ensure the prediction accuracy in the target domain, the prediction accuracy in the source domain needs to be guaranteed first. This is achieved by minimizing the regression loss $\mathcal{L}_{reg}$ on $G$ and $R$ during training:

$$\mathcal{L}_{reg} = \frac{1}{N_S} \sum_{i=1}^{N_S} L\big(y_i^S, R(G(x_i^S))\big) \tag{1}$$

where $L$ is the L2 loss function used in USPP.
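To make the formulation concrete, the following is a minimal sketch of the encoding and the regression loss of Eq. (1). The single GCN layer, the mean pooling, the layer sizes, and the toy two-node DAG are all our own simplifying assumptions; the paper's regressor R is actually the three-part module described later in Sec. 3.3.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(A_hat @ H @ W), where A_hat is the
    row-normalized adjacency matrix with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        a = adj + torch.eye(adj.size(-1))        # add self-loops
        a = a / a.sum(dim=-1, keepdim=True)      # row-normalize
        return torch.relu(self.linear(a @ h))

class Predictor(nn.Module):
    def __init__(self, num_ops, emb_dim=3, hid_dim=32):
        super().__init__()
        self.op_embedding = nn.Embedding(num_ops, emb_dim)  # learnable E (Sec. 3.4)
        self.gcn = SimpleGCNLayer(emb_dim, hid_dim)         # feature extractor G
        self.regressor = nn.Linear(hid_dim, 1)  # placeholder for R; the paper's R
                                                # is the F/B/P module of Sec. 3.3

    def forward(self, adj, op_ids):
        h = self.op_embedding(op_ids)        # (num_nodes, emb_dim)
        z = self.gcn(adj, h).mean(dim=0)     # mean-pool node features
        return self.regressor(z)

# Regression loss of Eq. (1) on a single labeled source architecture.
model = Predictor(num_ops=6)
adj = torch.tensor([[0., 1.], [0., 0.]])     # toy 2-node DAG
ops = torch.tensor([0, 1])                   # toy operation ids
loss_reg = nn.functional.mse_loss(model(adj, ops).squeeze(), torch.tensor(0.93))
```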
3.2 ADVERSARIAL TRAINING

As mentioned above, we are motivated to use the existing labeled architectures in the benchmark datasets to build a performance predictor for the target search space. However, the gap between the architectures in different search spaces is so large that a performance predictor trained only with the labeled architectures in the source domain cannot be directly applied to predict the architectures in the target domain. Accordingly, we leverage adversarial training (Wilson & Cook, 2020) to learn domain-invariant representations, thus reducing the discrepancy between the source and target domains. In this way, the performance predictor can generalize well to the target domain.

To perform adversarial training, a discriminator $D$ is designed as the third component of USPP; it is a fully connected layer. Similar to the regressor $R$, $D$ is attached after the feature extractor $G$. Specifically, the discriminator is essentially a domain classifier that differentiates between features from the source domain and the target domain, and it only works during the training process of USPP. During training, the discriminator $D$ is optimized to correctly classify the domains, while the feature extractor $G$ is optimized to fool the discriminator so that it cannot distinguish the source domain from the target domain. Through the adversarial training, the feature extractor is encouraged to extract the common features shared by both domains (i.e., domain-invariant features). To adversarially train the discriminator and the feature extractor, the domain classification loss $\mathcal{L}_{cls}$ is designed as:

$$\mathcal{L}_{cls} = -\frac{1}{N_S} \sum_{i=1}^{N_S} \log\big(D(G(x_i^S))\big) - \frac{1}{N_T} \sum_{j=1}^{N_T} \log\big(1 - D(G(x_j^T))\big) \tag{2}$$

To make $D$ distinguish between the source domain and the target domain, we train $D$ to minimize the domain classification loss $\mathcal{L}_{cls}$. At the same time, we train $G$ to maximize $\mathcal{L}_{cls}$, thus making the feature distributions from the source and target domains as similar as possible to fool $D$.
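The min-max game over Eq. (2) is commonly implemented with a gradient reversal layer, which Appendix A notes USPP also adopts: the forward pass is the identity, and the backward pass negates (and scales) the gradient, so a single minimization step updates $D$ normally while updating $G$ adversarially. A minimal sketch follows, with illustrative names and with $D$ assumed to output logits:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam
    in the backward pass (Ganin & Lempitsky, 2015)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def domain_cls_loss(D, feat_src, feat_tgt, lam=1.0):
    # Eq. (2): binary cross-entropy with source labeled 1 and target labeled 0.
    # Reversing the gradient at the features means minimizing this loss trains
    # D to classify domains while G receives the opposite (maximizing) update.
    logit_s = D(GradReverse.apply(feat_src, lam))
    logit_t = D(GradReverse.apply(feat_tgt, lam))
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(logit_s, torch.ones_like(logit_s)) + bce(logit_t, torch.zeros_like(logit_t))
```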
3.3 PROGRESSIVE DOMAIN-INVARIANT FEATURE EXTRACTION

Although adversarial training can align the source and target domains to a certain extent, it is difficult in practice to reduce the divergence between the two domains to zero due to the rich domain-specific features in the respective domains. Specifically, domain-specific features refer to the features unique to each domain, and they greatly hinder the alignment of the data distributions from the source and target domains. To further reduce the negative influence of domain-specific features, we propose a simple but effective progressive domain-invariant feature extraction method, which works mainly by explicitly modeling the domain-specific features. In fact, there are some similar works that mitigate domain-specific features in this way. For example, DSN (Bousmalis et al., 2016) separately modeled the domain-specific features of each domain and the domain-invariant features shared by the source and target domains, and reconstructed the input data from the extracted domain-specific and domain-invariant features to improve generalizability. However, the modeled domain-invariant features also participated in the reconstruction, leaving them with many domain-specific properties. HDA (Cui et al., 2020a) proposed a heuristic framework that leverages the modeled domain-specific features as heuristics to gradually obtain domain-invariant features. However, the multiple sub-networks in the heuristic network make it difficult to optimize.

The proposed progressive feature extraction method can effectively model the domain-specific features and gradually separate them from the features of the architecture data processed in this paper. Following HDA, we hypothesize that the domain-specific features are easier to capture than the domain-invariant features, because the architectures in the same domain generally share common domain-specific characteristics. For example, a cell in NAS-Bench-101 has only one input, while a cell in the DARTS search space has two inputs. Furthermore, the operations in NAS-Bench-101 are completely different from those in the DARTS search space. Compared with the domain-invariant features, these domain-specific features are easier to extract.

The proposed method is embodied in the design of the regressor $R$. The regressor maps the feature extracted by $G$ to the predicted label, and consists of a fundament layer $F$, a bridge layer $B$, and a prediction layer $P$. Specifically, the fundament layer $F$ models the current feature representation $r_i$, and the bridge layer $B$ explicitly models the discrepancy between the current feature and the ideal domain-invariant feature, i.e., the domain-specific part $b_i$ of the current feature. By subtracting the output of the bridge layer $b_i$ from $r_i$, the domain-invariant feature $d_i$ can be obtained. This works because $b_i$ is easier to obtain than $d_i$ and can guide the construction of $d_i$. Finally, the obtained $d_i$ is fed to the prediction layer $P$ to get the final predicted label $\hat{y}$. The overall process can be formulated as:

$$R(f_i) = P\big(F(f_i) - B(f_i)\big) \tag{3}$$

where $f_i$ denotes the feature extracted by $G$. During training, we seek to reduce the influence of the modeled domain-specific feature $b_i$ to help the extraction of the domain-invariant representation. To achieve this goal, we design an extra loss $\mathcal{L}_{bri}$ as follows:

$$\mathcal{L}_{bri} = \frac{1}{N_S + N_T} \sum_{i=1}^{N_S + N_T} \sum_{j=1}^{L} |b_{i,j}| \tag{4}$$

where $L$ is the size of $b_i$. During the training of USPP, we gradually minimize $\mathcal{L}_{bri}$, thus mitigating the domain-specific representation. Therefore, at the beginning of training, the designed bridge layer assists in the progressive extraction of the domain-invariant features, while in the later stage of training, $b_i$ is close to zero and no longer has much impact on training.

In conclusion, USPP has three objectives in the training phase:

1. Minimizing $\mathcal{L}_{reg}$ on $G$ and $R$ to obtain high prediction accuracy in the source domain.
2. Minimizing $\mathcal{L}_{cls}$ on $D$ and maximizing $\mathcal{L}_{cls}$ on $G$ to perform adversarial training.
3. Minimizing $\mathcal{L}_{bri}$ on $R$ and $G$ to assist in the extraction of the domain-invariant features.

Let $\mathcal{L}_{dom} = -\mathcal{L}_{cls}$; the overall objective during training is:

$$\min_{G,R}\ \mathcal{L}_{reg} + \gamma \mathcal{L}_{dom} + \mu \mathcal{L}_{bri}; \qquad \max_{D}\ \mathcal{L}_{dom} \tag{5}$$

where $\gamma$ and $\mu$ are hyper-parameters.
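A rough sketch of how Eqs. (3) to (5) fit together is given below. The linear layers and their dimensions are our own assumptions (the paper only specifies the decomposition R(f) = P(F(f) - B(f))), and `dom_loss` stands for the gradient-reversed domain loss from the previous sketch:

```python
import torch
import torch.nn as nn

class BridgedRegressor(nn.Module):
    """Regressor R of Eq. (3): fundament layer F, bridge layer B,
    prediction layer P, combined as R(f) = P(F(f) - B(f))."""
    def __init__(self, feat_dim=32, hid_dim=32):
        super().__init__()
        self.F = nn.Linear(feat_dim, hid_dim)   # current representation r_i
        self.B = nn.Linear(feat_dim, hid_dim)   # domain-specific part b_i
        self.P = nn.Linear(hid_dim, 1)          # prediction head

    def forward(self, f):
        b = self.B(f)
        d = self.F(f) - b                       # domain-invariant feature d_i
        return self.P(d), b

# Combined objective of Eq. (5) for one mixed batch of extracted features.
def total_loss(reg, feats_src, y_src, feats_tgt, dom_loss, gamma=1.0, mu=1.0):
    pred_src, b_src = reg(feats_src)
    _, b_tgt = reg(feats_tgt)
    loss_reg = nn.functional.mse_loss(pred_src.squeeze(-1), y_src)   # Eq. (1)
    loss_bri = torch.cat([b_src, b_tgt]).abs().sum(dim=1).mean()     # Eq. (4)
    return loss_reg + gamma * dom_loss + mu * loss_bri
```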
3.4 OPERATION EMBEDDING

As mentioned above, a neural architecture can be represented by an adjacency matrix $A$ and an operation list $L_{op}$, where $A$ represents the connections between nodes and $L_{op}$ records the operation type of every node. Generally, the one-hot method is used to encode the operation list $L_{op}$ into a fixed matrix $M \in \mathbb{R}^{len(L_{op}) \times K}$ (Liu et al., 2021). Then, the fixed matrix $M$ and $A$ are fed into the GCN to obtain the feature representation of the given architecture. However, this process has a significant limitation: it ignores the computational relationship between different operations, because every operation is treated as independent in the one-hot method, which is obviously not in line with reality. For example, the role of a 1×1 convolution is likely more similar to that of a 3×3 convolution than to that of a max pooling operation.

Motivated by this observation, we propose a trainable operation embedding to replace the one-hot method. Specifically, we use the operation embedding $E$ to map each operation in $L_{op}$ to a continuous space $\mathbb{R}^{len(L_{op}) \times K}$, where $K$ is the dimension of the mapped vector and can be set before training. Then, the $K$-dimensional vectors obtained by $E$, along with the adjacency matrix, are mapped to the feature representation by the GCN. During the training of USPP, the operation embedding is regarded as a part of the GCN, and its weights are optimized together with the weights of the GCN. In this way, the operation embedding $E$ can discover the inner relationships between different operations during training.

Specifically, the operation embedding is analogous to word embedding in NLP, except that the mapped objects are operations in an architecture instead of words. Word embedding can encode the semantic and syntactic information of every word, where semantic information relates to the meaning of a word and syntactic information corresponds to its structural role (Li & Yang, 2018). Similarly, the inner meaning and structural information of each operation can be learned from the source domain and applied in the target domain through the operation embedding. As a result, the operation embedding helps USPP transfer more information about the operations from the source domain to the target domain, thus improving the prediction performance in the target domain.
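Once trained, the learned embedding can be inspected directly, e.g., by nearest neighbors in the embedding space, which is essentially what Fig. 3 visualizes. In the sketch below, the operation names follow the grouping of Appendix A, and the untrained weights would of course print arbitrary neighbors:

```python
import torch
import torch.nn as nn

# Inspecting a learned operation embedding: nearby vectors indicate
# operations that play similar computational roles. The operation names
# and the (here untrained) weights are purely illustrative.
ops = ["input", "output", "conv1x1", "conv3x3", "conv5x5", "pooling"]
embedding = nn.Embedding(num_embeddings=len(ops), embedding_dim=3)  # K = 3, as in Sec. 4.2

with torch.no_grad():
    vecs = embedding.weight                 # (6, 3) operation vectors
    dists = torch.cdist(vecs, vecs)         # pairwise Euclidean distances
    for i, name in enumerate(ops):
        row = dists[i].clone()
        row[i] = float("inf")               # ignore self-distance
        print(f"{name:8s} -> nearest: {ops[int(row.argmin())]}")
```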
3.5 THEORETICAL ANALYSIS

In this section, we analyze the expected target-domain error of the proposed method based on the theory of domain adaptation (Ben-David et al., 2010). Let $S$ and $T$ be the source and target distributions for a family of fundament layers $F$ in the hypothesis space $\mathcal{H}$, and let $\varepsilon_T(F)$ and $\varepsilon_S(F)$ be the expected errors of a hypothesis $F \in \mathcal{H}$ on the target and source domains, respectively. To bound the target error, we use the $\mathcal{H}\Delta\mathcal{H}$-distance to measure the distance between the distributions $S$ and $T$, defined as:

$$d_{\mathcal{H}\Delta\mathcal{H}}(S, T) = 2 \sup_{h_1, h_2 \in \mathcal{H}} \big|\, \mathbb{P}_{f \sim S}[h_1(f) \neq h_2(f)] - \mathbb{P}_{f \sim T}[h_1(f) \neq h_2(f)] \,\big| \tag{6}$$

By the theory in (Ben-David et al., 2010), the target error of a hypothesis $F$ satisfies:

$$\varepsilon_T(F) \leq \varepsilon_S(F) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(S, T) + \lambda \tag{7}$$

where $\lambda = \varepsilon_T(F^*) + \varepsilon_S(F^*)$ is the combined error of the ideal hypothesis $F^* = \arg\min_F (\varepsilon_S(F) + \varepsilon_T(F))$ on both domains. By the definition of the $\mathcal{H}\Delta\mathcal{H}$-distance, $d_{\mathcal{H}\Delta\mathcal{H}}(S, T) = 2 \sup_{h, h' \in \mathcal{H}} |\varepsilon_S(h, h') - \varepsilon_T(h, h')|$. Hence,

$$\varepsilon_T(F) \leq \varepsilon_S(F) + \sup |\varepsilon_S(F, F^*) - \varepsilon_T(F, F^*)| + \lambda \tag{8}$$

Because the ideal discrepancy $B^*$ between the current feature and the ideal domain-invariant feature may not be fully represented by the practical discrepancy $B$, we have $B \leq B^*$. Furthermore, $B^* = F - F^*$. Hence, there exists a positive correlation between $B$ and $F$:

$$B = k(F - F^*), \quad k \in (0, 1] \tag{9}$$

As mentioned in Subsec. 3.3, $F = B + P$. Furthermore, the goal of both $F$ and $P$ is to better predict the performance of a given architecture. As a result, the ideal hypotheses satisfy $F^* = P^*$. We can thus obtain the relationship between $F$ and $P$:

$$(1 - k)(F - F^*) = (P - P^*) \tag{10}$$

Equation (10) holds on both the source and target distributions. Hence, $\varepsilon_S(P, P^*) = (1 - k)\varepsilon_S(F, F^*)$ and $\varepsilon_T(P, P^*) = (1 - k)\varepsilon_T(F, F^*)$. Since both $P$ and $F$ can correctly predict the performance of architectures, the difference between them on the source domain is small; hence $\varepsilon_S(P) = \varepsilon_S(F)$. Combined with Equation (10), we obtain Equation (11):

$$\begin{aligned} \varepsilon_T(P) &\leq \varepsilon_S(P) + \sup |\varepsilon_S(P, P^*) - \varepsilon_T(P, P^*)| + \lambda \\ &\leq \varepsilon_S(F) + (1 - k) \sup |\varepsilon_S(F, F^*) - \varepsilon_T(F, F^*)| + \lambda \\ &\leq \varepsilon_S(F) + \frac{(1 - k)}{2} d_{\mathcal{H}\Delta\mathcal{H}}(S, T) + \lambda \end{aligned} \tag{11}$$

Equation (11) shows that the upper bound on the target-domain error of $P$ with the usage of $B$ is lower than that of $F$ without the usage of $B$. Therefore, the proposed method helps to reduce the upper bound of the target-domain error, which supports the effectiveness of our method from a theoretical point of view.

4 EXPERIMENTS

In this part, we choose NAS-Bench-101 as the source domain and the DARTS search space as the target domain to train USPP. We first report the results of neural architecture search in DARTS to verify the effectiveness of USPP, and then perform extensive ablation studies on the various components. Please note that we perform additional experiments (NAS-Bench-101→NAS-Bench-201, NAS-Bench-201→NAS-Bench-101, NAS-Bench-201→DARTS, NDS ResNet→NDS ResNeXt (Radosavovic et al., 2019), and NDS ResNeXt→NDS ResNet) in Appendix B. All experiments in this paper are performed on an NVIDIA RTX 2080 Ti GPU. The experimental settings are presented in Appendix A.

4.1 NEURAL ARCHITECTURE SEARCH ON DARTS

Training process. The regression loss $\mathcal{L}_{reg}$, the domain classification loss $\mathcal{L}_{cls}$, and the bridge loss $\mathcal{L}_{bri}$ are shown in Fig. 4.1(a). As expected, $\mathcal{L}_{reg}$, $\mathcal{L}_{cls}$, and $\mathcal{L}_{bri}$ decrease rapidly. Kendall's Tau (KTau) (Sen, 1968), which describes the correlation between the predicted values and the ground-truth values, is used to measure the prediction performance of USPP. The trend of the KTau value of USPP is shown in Fig. 4.1(b); the KTau of USPP rises very quickly and then converges.

CIFAR-10. The experimental results on CIFAR-10 are shown in Table 1. The proposed USPP only costs 0.02 GPU days, which is the least among all compared methods. Please note that we do not take the cost of building NAS-Bench-101 into account, because NAS-Bench-101 is a public dataset and was not constructed for our work. Besides, TENAS, the cheapest among the other methods, still takes 2.5 times as long as USPP. This is because USPP takes advantage of the fully trained architectures in NAS-Bench-101, thus eliminating the cost of training new architectures in the DARTS search space. Furthermore, the architecture searched by USPP achieves the highest test accuracy (97.86%) among all the compared state-of-the-art methods; its accuracy is 0.32% higher than that of the best architecture found by the other methods. This shows that our method can effectively transfer knowledge from the source search space to the target search space.

ImageNet. The comparison results on ImageNet are shown in Table 2. The search cost of USPP is the lowest; PC-DARTS, the cheapest of the other methods, costs five times as much as USPP. Besides, the architecture searched by USPP obtains the highest top-1 and top-5 accuracy even when trained without any tricks, and the result with the enhancement techniques largely surpasses the existing methods in both top-1 and top-5 accuracy. This indicates that USPP can transfer useful information from NAS-Bench-101 and learn the inner correspondence between architectures and performance in DARTS.
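A sketch of the evaluation and search loop implied here and in Appendix A follows (compute KTau between predictions and ground truth; sample candidates and keep the argmax of the predicted performance); `sample_architecture` is an assumed placeholder for a sampler of the target search space, and the numbers are toy values:

```python
import torch
from scipy.stats import kendalltau

# Rank correlation between predicted and ground-truth accuracies (toy values).
preds = [0.91, 0.93, 0.89, 0.95]
truth = [0.90, 0.94, 0.88, 0.96]
ktau, _pvalue = kendalltau(preds, truth)
print(f"KTau = {ktau:.3f}")

# Predictor-driven search: score sampled candidates, keep the argmax.
def search(predictor, sample_architecture, num_candidates=100_000):
    best_arch, best_score = None, float("-inf")
    with torch.no_grad():
        for _ in range(num_candidates):
            adj, ops = sample_architecture()      # assumed candidate sampler
            score = predictor(adj, ops).item()
            if score > best_score:
                best_arch, best_score = (adj, ops), score
    return best_arch, best_score
```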
4.2 ABLATION STUDY

In this part, we conduct ablation experiments on the progressive domain-invariant feature extraction method, the operation embedding, and different domain adaptation methods. We randomly sample 100 architectures from NAS-Bench-301 (Siems et al., 2020) to evaluate the proposed USPP.

The progressive domain-invariant feature extraction. To validate the effectiveness of the proposed progressive method, we perform an ablation experiment on its core component, the bridge layer. As shown in Table 3, the performance predictor with the bridge layer obtains a Kendall's Tau value 0.1434 higher than that without the bridge layer. This indicates that the progressive domain-invariant feature extraction plays a positive role in improving the performance of USPP.

Operation embedding. The results of the ablation study on the operation embedding are reported in Table 4. The result with the operation embedding is 0.1009 higher than that without it, which illustrates that the operation embedding improves the performance of the predictor. Furthermore, the length of the operation embedding is set to 3 in this experiment because of the small number of operations, and the 3D visualization is shown in Fig. 3. Surprisingly, convolution 1×1 and output are the most similar pair among all pairs of operations. Furthermore, just as we suspected, the distances between convolution operations are smaller than the distances between convolution and pooling operations.

Different domain adaptation methods. We adapt the distribution discrepancy metric-based method MMD (Long et al., 2015) and the adversarial training-based methods DANN (Ganin & Lempitsky, 2015), GVB (Cui et al., 2020b), and HDA (Cui et al., 2020a) to our setting. The hyperparameters of these methods are tuned, and their best results are reported in Table 7. Our method clearly outperforms all the peer competitors.

5 CONCLUSION

In this paper, we propose a performance predictor based on unsupervised domain adaptation to reduce the high cost of annotating architectures. Specifically, we design a progressive domain-invariant feature extraction method that reduces the difficulty of alignment by explicitly modeling the domain-specific features and separating them from the domain-invariant features. Moreover, we use an operation embedding to map the operation list of every architecture to a continuous space and learn the deeper meaning and structural information of each operation during training. USPP achieves test accuracies of 97.86% and 76.5% on CIFAR-10 and ImageNet, respectively. This paper does not consider sequence-based search spaces; we will pursue a more general domain adaptation framework in future work.

A EXPERIMENTAL SETTING

In this section, we report the experimental settings of the experiments in Section 4.

Data processing. To keep the DARTS search space and the NAS-Bench-101 search space uniform, we convert the architectures of the DARTS search space, which place operations on edges, into ones with operations on nodes, following the convention of (Liu et al., 2021). Then, since the types of operations in the two search spaces are very different, we group all operation types into six categories: input, output, convolution 1×1, convolution 3×3, convolution 5×5, and pooling. A hypothetical sketch of such a grouping is given below.
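The exact mapping is not spelled out in the paper, so the raw operation names below follow common NAS-Bench-101 and DARTS conventions and the mapping itself is only a plausible guess; note that operations such as `skip_connect` fall outside the listed categories, and their handling is unspecified.

```python
# Hypothetical grouping of raw operation names into the shared categories.
# The actual mapping used by USPP may differ; DARTS's skip_connect and
# none are omitted because their treatment is not described in the paper.
OP_GROUPS = {
    "input": "input",
    "output": "output",
    "conv1x1-bn-relu": "convolution 1x1",   # NAS-Bench-101
    "conv3x3-bn-relu": "convolution 3x3",   # NAS-Bench-101
    "maxpool3x3": "pooling",                # NAS-Bench-101
    "sep_conv_3x3": "convolution 3x3",      # DARTS
    "sep_conv_5x5": "convolution 5x5",      # DARTS
    "dil_conv_3x3": "convolution 3x3",      # DARTS
    "dil_conv_5x5": "convolution 5x5",      # DARTS
    "max_pool_3x3": "pooling",              # DARTS
    "avg_pool_3x3": "pooling",              # DARTS
}

def group_ops(op_list):
    """Map a list of raw operation names to the shared categories."""
    return [OP_GROUPS[op] for op in op_list]
```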
Setting of USPP. The Gradient Reversal Layer (GRL) is utilized to simplify the adversarial training. Specifically, the GRL flips the gradients from $D$ to $G$, so that we can train $G$ and $R$ by minimizing the sum of $\mathcal{L}_{reg}$, $\mathcal{L}_{dom}$, and $\mathcal{L}_{bri}$. We train USPP with a batch size of 1,000 for 100 epochs. We employ SGD with a weight decay of 0.0005, and the learning rate is initialized to 0.1. We set $\gamma = 1$ and $\mu = 1$ in the loss, which is verified to perform well in the experiments.

Search setting. Following the convention of performance predictors (Wen et al., 2020), we randomly sample 100,000 architectures from the DARTS search space and predict their performance with the trained USPP. Then, we select the architecture with the highest predicted performance value as the searched architecture.

Training setting of the searched architecture. For the training setting on CIFAR-10, we follow the convention of (Lu et al., 2021b). As for the training setting on ImageNet, we train the searched architecture under two types of training settings to compare with other methods fairly. In the first setting, we train the architecture for 350 epochs with a batch size of 512, using the SGD optimizer with a momentum of 0.9, an initial learning rate of 0.2, and a weight decay of 4×10−5. In the second setting, a cosine learning rate schedule is used on the basis of the first setting. In addition, we use some enhancement techniques to improve the classification accuracy on ImageNet. Specifically, label smoothing (Szegedy et al., 2016) and AutoAugment (Cubuk et al., 2019), which are effective regularization methods, are utilized during training. Furthermore, the Squeeze-and-Excitation (SE) module (Hu et al., 2018) is attached after each cell.

B SUPPLEMENTARY EXPERIMENTS

In this part, we perform additional experiments on cell-based and block-based search spaces, and compare our method with other domain adaptation methods. The details of the cell-based and block-based search spaces used are shown in Fig. 6 and Fig. 7, respectively. Specifically, NAS-Bench-101, NAS-Bench-201, and DARTS belong to the cell-based search space, while NDS ResNet and NDS ResNeXt belong to the block-based search space.

B.1 CELL-BASED SEARCH SPACE

Table 9: The KTau value of USPP on NAS-Bench-101 (NAS-Bench-201→NAS-Bench-101).
Methods                          KTau
MMD (Gretton et al., 2006)       0.306
DANN (Ganin & Lempitsky, 2015)   0.325
GVB (Cui et al., 2020b)          0.302
HDA (Cui et al., 2020a)          0.307
USPP (ours)                      0.349

Transferring from NAS-Bench-101 to NAS-Bench-201. The comparison of the KTau values on NAS-Bench-201 between USPP and the other domain adaptation methods is shown in Table 9. USPP obtains the highest KTau value among all peer competitors, which illustrates that the proposed method can effectively transfer knowledge from NAS-Bench-101 to NAS-Bench-201.

Transferring from NAS-Bench-201 to NAS-Bench-101. Table 8 shows the comparison results of USPP on NAS-Bench-101. USPP achieves the highest KTau value among all peer competitors, which verifies the effectiveness of USPP in transferring from NAS-Bench-201 to NAS-Bench-101.

Transferring from NAS-Bench-201 to DARTS. USPP is evaluated on NAS-Bench-301. The comparison results of transferring from NAS-Bench-201 to the DARTS search space are shown in Table 10. Although USPP surpasses all competitors, its KTau value is low. We infer that this is because NAS-Bench-201 is too small compared to DARTS (15.6k vs. $10^{18}$ architectures), and the domain discrepancy between NAS-Bench-201 and DARTS is too large: the information learned from NAS-Bench-201 is not enough to accurately predict the architectures in DARTS.
B.2 BLOCK-BASED SEARCH SPACE

Table 12: The KTau value of USPP on NDS ResNeXt (NDS ResNet→NDS ResNeXt).
Methods                          KTau
MMD (Gretton et al., 2006)       0.519
DANN (Ganin & Lempitsky, 2015)   0.513
GVB (Cui et al., 2020b)          0.485
HDA (Cui et al., 2020a)          0.391
USPP (ours)                      0.616

Transferring from NDS ResNet to NDS ResNeXt. We use the 25k publicly annotated architectures in the NDS ResNet search space and the unlabeled architectures in the NDS ResNeXt search space to train USPP. Please note that we do not use the proposed operation embedding method here, and we replace the GCN (i.e., the feature extractor) with an MLP. The comparison of the KTau values is shown in Table 11. The results show that the proposed USPP surpasses all competitors, which verifies the effectiveness of our approach on NDS ResNeXt, a block-based search space.

Transferring from NDS ResNeXt to NDS ResNet. Symmetric to the previous experiment (i.e., NDS ResNet→NDS ResNeXt), we train USPP on the 25k labeled architectures in the NDS ResNeXt search space and test it on the publicly annotated architectures in the NDS ResNet search space. The comparison results are shown in Table 12. USPP achieves the highest KTau value, which illustrates that the proposed method can effectively learn the correlation between architectures and performance and transfer it to the NDS ResNet search space.

C DETAILS OF THE TRAINING PROCESS

Quality of the domain-invariant features. We perform experiments on NAS-Bench-201→DARTS and NAS-Bench-101→NAS-Bench-201, and compute the mean KL-divergence and the mean cosine similarity between the extracted features. Specifically, the mean KL-divergence is 0.0086, and the mean cosine similarity is 0.5018. These results show that the domain-invariant features we learn are similar across the different transfer cases (i.e., NAS-Bench-201→DARTS and NAS-Bench-101→NAS-Bench-201), which further supports the quality of the learned domain-invariant features.

The regression error and the domain classification accuracy. The results are shown in Table 13. The regression error is low, which illustrates that the regressor learns the correspondence between architecture and performance in the source domain. Furthermore, the domain classification accuracy verifies that the discriminator cannot distinguish between the domains.
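A minimal sketch of the two measures above, comparing mean feature vectors from two transfer cases, is given below; softmax-normalizing the mean features for the KL term is our own simplifying assumption, since the paper does not state how the KL-divergence is computed:

```python
import torch
import torch.nn.functional as F

def feature_similarity(feat_a: torch.Tensor, feat_b: torch.Tensor):
    """Compare two batches of extracted features (n, d) via the cosine
    similarity of their means and the KL divergence between their
    softmax-normalized mean vectors (a simplifying assumption)."""
    mean_a, mean_b = feat_a.mean(dim=0), feat_b.mean(dim=0)
    cos = F.cosine_similarity(mean_a, mean_b, dim=0).item()
    log_p = F.log_softmax(mean_a, dim=0)   # log-probabilities (input to kl_div)
    q = F.softmax(mean_b, dim=0)           # probabilities (target of kl_div)
    kl = F.kl_div(log_p, q, reduction="sum").item()
    return cos, kl
```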
1. What is the focus and contribution of the paper regarding unsupervised domain adaptation for neural architecture search? 2. What are the strengths and weaknesses of the proposed approach, particularly in its technical soundness and novelty? 3. Do you have any concerns or questions regarding the experiments and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes an unsupervised domain adaptation (UDA) performance predictor called USPP, which aims to mitigate the architecture performance prediction gap between the source and target domains.

Strengths And Weaknesses
Pros:
- The paper is well written and easy to follow.
- The proposed method is technically sound.
- The idea of using UDA and adversarial training to address the domain gap issue in NAS is smart.

Cons:
- The name of the paper is a little bit confusing at first glance. I thought it was a method related to unsupervised learning, but it turns out to be related to Unsupervised Domain Adaptation.
- Missing recent baselines in the result table [1][2].
- The authors use GPU days as a metric to evaluate search speed. My question is: are the compared methods run on the same GPU? If not, they are not comparable.
- The authors only report one domain transfer result (NAS101 --> DARTS). I would expect to see more domain transfer cases, like NAS101 --> NAS102, NAS102 --> DARTS, etc.

[1] Mellor, J., Turner, J., Storkey, A., & Crowley, E. J. (2021, July). Neural architecture search without training. In International Conference on Machine Learning (pp. 7588-7598). PMLR.
[2] Chen, W., Gong, X., & Wang, Z. (2021). Neural architecture search on ImageNet in four GPU hours: A theoretically inspired perspective. arXiv preprint arXiv:2102.11535.

Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow, but the authors forgot to report what type of GPU they are using when reporting the search speed. The authors have provided code. The proposed method is somewhat novel to me.
In recent years, many benchmark datasets such as NAS-Bench-101 (Ying et al., 2019), NAS-Bench201 (Dong & Yang, 2020), NAS-Bench-NLP (Klyuchnikov et al., 2020) are released for promoting the research on NAS. There are a large number of architecture pairs (i.e., the architecture and its performance) in these datasets. As a result, we are motivated to utilize the rich architecture knowledge in these datasets to predict the architectures in the target search space (i.e., the search space in which the architecture needs to be predicted.). In this way, we can avoid training a large number of architectures in the target search space, thereby alleviating the expensive cost of building the dataset for performance predictors. However, the search space designed in the benchmark datasets is very different from the real-world search spaces. The performance predictor trained on existing labeled architectures cannot be applied to the target search space. In this paper, we proposed an UnSupervised domain adaptation-based Performance Predictor (USPP) with the usage of the domain adaptation technique. Different from the traditional performance predictors that need the training data and the predicted data in the same search space, USPP can leverage the labeled architectures in existing benchmark datasets (e.g., NAS-Bench-101 (Ying et al., 2019)) to build a powerful performance predictor for the target search space (e.g., the DARTS search space (Liu et al., 2018b)). As a result, USPP can avoid expensive data collection for the target search space. Specifically, the contributions can be summarized as follows: • A progressive domain-invariant feature extraction method is proposed to reduce the transfer difficulty caused by the huge difference between source and target search spaces. The progressive method explicitly models the domain-specific features and gradually separates them from the domain-invariant features, thus assisting in the alignment of the source and target search spaces. • A learnable representation for the operations in architectures, i.e., operation embedding, is designed to transfer more knowledge about operations to the target search space. Compared to the widely used fixed encoding method, the operation embedding can effectively capture the inner meaning and structural role of each operation in the source search space and applied them in the target search space to reduce transfer difficulty. • USPP only costs 0.02 GPU days to search for the architectures in the DARTS search space because there is no need to annotate the architectures in the target search space. Furthermore, the searched architecture by USPP achieves 97.86% classification accuracy on CIFAR-10 and 76.50% classification accuracy on ImageNet and outperforms all the stateof-the-art methods compared. 2 RELATED WORK 2.1 NAS AND PERFORMANCE PREDICTORS NAS can automatically design high-performance neural architecture and consists of search space, search strategy, and performance estimation strategy (Elsken et al., 2019). Specifically, the search space defines the collections of the candidate architectures. The search strategy corresponds to the employed optimization algorithms for the search, which can be mainly classified into evolutionary algorithms (Bäck et al., 1997), reinforcement learning (Kaelbling et al., 1996) and gradient descent (Liu et al., 2018b). The performance estimation strategy defines how to evaluate the architectures to obtain their performance. 
During the search, the NAS algorithms use the search strategy to search architecture in the predefined search space and obtain the performance value of the searched architectures by the performance estimation strategy. No matter what search strategy is used, a lot of neural architectures need to be estimated. Because of the heavy cost of the traditional GPU-based estimation method, many accelerated methods are proposed such as early stopping policy (Sun et al., 2018), proxy dataset (Sapra & Pimentel, 2020), weight-sharing method (Bender et al., 2018). However, the first two methods may lead to poor generalization and low-fidelity approximation of performance value, and the weight-sharing method may be unreliable in predicting the relative ranking among architectures (Li et al.; Yang et al.), which disobeys the goal of finding the architecture with the highest ranking. Performance predictor is free from the aforementioned shortcomings and has received great attention in recent years. However, existing predictors have the constraint that the source search space and the target search space must be the same. The proposed USPP breaks the limitation and creatively uses the existing labeled architectures in the source search space to predict the architectures in the target search space, thus removing the reliance on potentially costly labels in the target search space. 2.2 DOMAIN ADAPTATION In many machine learning methods, there is a major assumption that the training data and testing data are from the same distribution (Wilson & Cook, 2020). When it is not held, the network trained on the source domain will face a performance decline when testing on the target domain (Patel et al., 2015). Transfer learning (Weiss et al., 2016) can solve the problem by learning knowledge from the source domain and applying it to the target domain. Domain Adaptation (DA) (Wilson & Cook, 2020) is a subcategory of transfer learning, and its goal is to train a network that performs well on a different but related target domain by the labeled data in the source domain. The scenario in this paper is a problem of Unsupervised Domain Adaptation (UDA), where the labeled data are only available in the source domain. The mainstream method to address the UDA is aligning source and target domains by learning the domain-invariant feature representation. Specifically, the features are said to be domain-invariant if features extracted from data in both source and target domains follow the same distribution. If a network performs well in the source domain using domain-invariant feature representation, the network can be generalized well to the target domain. Generally, the domain-invariant methods can be classified into two categories. The first category explicitly reduces the domain discrepancy to obtain the domain-invariant by some distribution discrepancy metrics such as Maximum Mean Discrepancy (MMD) (Gretton et al., 2006; 2012), correlation alignment (Sun et al., 2016), and contrastive domain discrepancy (Kang et al., 2019). Motivated by the Generative Adversarial Network (GAN) (Goodfellow et al., 2014), another category learns domain-invariant representation by adversarial training (Ganin et al., 2016), and this is also what our method belongs to. The methods generally train a domain discriminator to distinguish the source domain from the target domain, and train a feature extractor to fool the discriminator to extract domain-invariant representations. 
Adversarial domain adaptation achieves great success in many fields, and a lot of works are proposed. For example, symnets (Zhang et al., 2019) designed symmetric classifiers to play the role of domain discriminator. GVB (Cui et al., 2020b) constructed bridge layers on both the generator and discriminator to reduce the overall transfer difficulty. However, most existing adversarial methods are performed on the classification task of Computer Vision (CV) or Neural Language Processing (NLP), and cannot be directly exploited in performance predictor which is a regression problem and deals with a completely different data type (i.e., graph data). In this paper, we apply the adversarial methods in performance predictors and propose a progressive domain-invariant feature extraction method to reduce the transferring difficulty. As far as we know, this is the first work to alleviate the high costs of performance predictors by the adversarial domain adaptation techniques. 3 APPROACH 3.1 FORMULATION In the scenarios of our paper, the architecture dataset in the source domain is denoted as DS ={( xSi ,y S i )}Ns i=1 with NS labeled architectures, where xSi and y S i denote the i th architecture and its performance value, respectively. The dataset in the target domain is denoted as DT = { xTi }NT i=1 with NT unlabeled architectures. USPP aims to train a performance predictor on DS and DT , and achieve promising results in the performance prediction of the architectures in target domain. The overall framework of the proposed USPP is shown in Fig 1. Specifically, to build the predictor, the neural architectures first should be represented into a form that can be fed into the predictor. Generally speaking, most neural architectures can be regarded as the Directed Acyclic Graph (DAG). The node in the DAG corresponds to one specific operation (e.g., convolution 3×3), and the edge stands for the connection between operations. Consequently, each architecture can be represented by the adjacency matrix A and the operation list Lop, where A stands for the topology connections between nodes and Lop represents the operations of nodes. Because the most commonly used encoding method for Lop (i.e., one-hot method) ignores the intrinsic relationship between different operations, we adopt a learnable operation embedding E to mapLop to a continuous space R to avoid the drawback, and the operation embedding will be elaborated in Sec. 3.4. Then, we use a Graph Convolution Network (GCN) as the first component of USPP to map every architecture into the feature space Z . In specific, GCN is a powerful technique to learn useful features from graph data which is a kind of non-Euclidean data, and has achieved success in processing neural architecture data. After learning the latent representation of the architectures by G, regressor R, as the second component of USPP, is exploited to predict the performance of the architectures. To ensure the prediction accuracy in target domain, the prediction results in source domain needs to be guaranteed first. It can be achieved by minimizing the regression loss Lreg on G and R when training. The regression loss can be calculated as: Lreg = 1 NS NS∑ i=1 L ( ySi , R ( G ( xSi ))) (1) where L is the L2 loss fuction in USPP. 3.2 ADVERSARIAL TRAINING As mentioned above, we are motivated to use the existing labeled architectures in the benchmark datasets to build a performance predictor for the target search space. 
However, the gap between the architectures in different search spaces is so large that the performance predictor trained with the labeled architectures in the source domain cannot be directly applied to predict the architectures in the target domain. Accordingly, we leverage the adversarial training (Wilson & Cook, 2020) to learn domain-invariant representations, thus reducing the discrepancy between source and target domains. In this way, the performance predictor can generalize well to the target domain. To perform adversarial training, a discriminatorD is designed as the third component of USPP, and it is a fully connected layer. Similar to the regressor R, D is attached after the feature extractor G. Specifically, the discriminator is essentially a domain classifier to differentiate between the features from the source domain and the target domain, and only works during the training process of USPP. During the training, the discriminator D is optimized to correctly classify the domains, while the feature extractorG is optimized to fool the discriminator so that the discriminator cannot distinguish the source domain from the target domain. Through the adversarial training, the feature extractor is encouraged to extract the common features shared by domains (i.e., domain-invariant features). To adversarially train the discriminator and the feature extractor, the domain classification loss Lcls is designed, which can be formulated as: Lcls = − 1 NS NS∑ i=1 log ( D ( G ( xSi ))) − 1 NT NT∑ j=1 log ( 1−D ( G ( xTj ))) (2) To make D distinguish between the source domain and target domain, we train D to minimize the domain classification loss Lcls. At the same time, we train G to maximize Lcls, thus making the feature distributions from source and target domains as similar as possible to fool D. 3.3 PROGRESSIVE DOMAIN-INVARIANT FEATURE EXTRACTION Although the adversarial training can align the source and target domains to a certain extent, it is difficult to reduce the divergence between source and target domains to zero in practice due to the existence of rich domain-specific features in respective domains. Specifically, domain-specific features refer to the features unique to each domain and greatly hinder the alignment of the data distributions from source and target domains. To further reduce the negative influence of domainspecific features, we propose a simple but effective progressive domain-invariant feature extraction method. The method reduces the influence of domain-specific features mainly by explicitly modeling them. In fact, there are some similar works to mitigate domain-specific features in this way. For example, DSN (Bousmalis et al., 2016) separately modeled the domain-specific features of each domain and the domain-invariant feature shared by source and target domains and reconstructed the input data by the extracted domain-specific and domain-invariant features to add generalizability. However, the modeled domain-invariant features also participated in the reconstruction, leading to them with a lot of domain-specific properties. HDA (Cui et al., 2020a) proposed a heuristic framework, and leveraged the modeled domain-specific features as heuristics to gradually gain domain-invariant features. However, the multiple sub-networks in the heuristic network made it difficult to optimize. 
The proposed progressive feature extraction method can effectively model the domain-specific features and gradually separate them from the features for the architecture data processed in this paper. To alleviate the domain-specific features more easily, we hypothesize that the domain-specific features are easier to be captured compared with the domain-invariant features according to HDA. This is because the architectures in the same domain generally share common domain-specific characteristics. For example, the cell in the NAS-Bench-101 only has one input while that in the DARTS search space has two inputs. Furthermore, the operations in NAS-Bench-101 are completely different from those in the DARTS search space. Compared with the domain-invariant features, these domain-specific features are easier to be extracted. The proposed method is embodied in the design of the regressor R. The regressor is used to map the extracted feature from G to the predicted label, and consists of fundament layer F , bridge layer B, and prediction layer P . Specifically, fundament layer F is used to model the current feature representation ri, and bridge layer B is utilized to explicitly model the discrepancy from the current feature to the ideal domain-invariant feature, i.e., the domain-specific part bi in the current feature. By subtracting the results of bridge layer bi from the results of ri, the domain-invariant features di can be obtained. This is because bi is easier to be obtained than di and can give guide to the constructing of di. Finally, the obtained di is fed to the prediction layer P to get the final prediction label ŷ. The overall process can be formulated as: R(fi) = P (F (fi)−B(fi)) (3) where fi denotes the extracted feature from G. During training, we seek to reduce the influence of the modeled domain-specific feature bi to help the extraction of domain-invariant representation. To achieve this goal, we design an extra loss Lbri as following: Lbri = 1 (NS +NT ) Ns+Nt∑ i=1 L∑ j=1 |bi,j | (4) where L is the size of bi. During the training of USPP, we aim to gradually minimize Lbri, thus mitigating domain-specific representation. Therefore, at the beginning of training, the designed bridge layer assists in the progressive extraction of the domain-invariant feature. In the later stage of training, bi is close to zero, which will not have too much impact on training. In conclusion, USPP has three objectives in the training phase: 1. Minimizing Lreg on G and R to get high prediction accuracy in the source domain. 2. Minimizing Lcls on D and maximizing Lcls on G to perform adversarial training. 3. Minimizing Lbri on R and G to assist in the extraction of the domain-invariant features. Let Ldom = −Lcls, the overall objectives during training is as following: min G,R Lreg + γLdom + µLbri; max D Ldom (5) where γ and µ are the hyper-parameters. 3.4 OPERATION EMBEDDING As is mentioned above, neural architectures can be represented by the adjacency matrix A and the operation list Lop, where A represents the connection relationship between different nodes, and Lop represents the operation type of every node. Generally, the one-hot method is used to encode the operation list Lop to a fixed matrix Mlen(Lop)×K (Liu et al., 2021). Then, the fixed matrix Mlen(Lop)×K and A are fed into GCN to obtain the feature representation of a given architecture. However, there is a large limitation that the process ignores the computational relationship between different operations. 
This is because every operation is thought to be independent in the one-hot method, but it is obviously not in line with reality. For example, the role of 1 × 1 convolution may be more similar to that of 3× 3 convolution compared with that of the max pooling operation. According to this motivation, the trainable operation embedding is proposed instead of the one-hot method. Specifically, we use operation embedding E to map each operation in Lop to a continuous space Rlen(Lop)×K . K is the dimension of the mapped vector and can be set during the training. Then, the K-dimensional vector obtained by E along with the adjacency matrix is mapped to the feature representation by GCN. During the training process of USPP, the operation embedding is regarded as a part of GCN, and the weight of the operation embedding is optimized with the weights of GCN. Through this, the operation embedding E can seek the inner relationship between different operations during the training. In specific, the operation embedding is similar to the word embedding in NLP, except that the mapped object words are replaced by the operation in the architecture. Word embedding can encode the semantic and syntactic information of every word, where semantic information is related to the meaning of words, and syntactic information corresponds to the structural roles of words (Li & Yang, 2018). Similarly, the inner meaning and structural information of each operation can be also learned from the source domain and applied in the target domain through the operation embedding. As a result, operation embedding helps USPP transfer more information about the operation from the source domain to the target domain, thus improving the prediction performance in the target domain. 3.5 THEORETICAL ANALYSIS In this section, we analyze the expected target-domain error for the proposed method based on the theories of domain adaptation (Ben-David et al., 2010). Let S and T be the source and target distributions for a family of fundament layer F in the hypothesis spaceH. εT (F ) and εS(F ) be the expected error of a hypothesis F ∈ H on target and source domains. To bound the target error, we use the H∆H-distance to measure the distance between distributions S and T , which is defined as the following: dH∆H(S, T ) = 2 sup h1,h2∈H | Pf∼S [h1(f) 6= h2(f)]− Pf∼T [h1(f) 6= h2(f)] | (6) We can get the target error of hypothesis F by the theory in (Ben-David et al., 2010): εT (F ) ≤ εS(F ) + 1 2 dH∆H(S, T ) + λ (7) where λ = εT (F ∗) + εS (F ∗) is the combined error of ideal hypothesis F ∗ = arg minF (εS(F ) + εT (F )) on both domains. By the definition of H∆H-distance, dH∆H (DS ,DT ) = 2 sup |εS (h, h′)− εT (h, h′)|. Hence, εT (F ) ≤ εS(F ) + sup |εS (F, F ∗)− εT (F, F ∗)|+ λ (8) Because the ideal discrepancy B∗ from the current feature to the ideal domain-invariant feature may be not fully represented by the practice discrepancy B, B ≤ B∗. Furthermore, B∗ = F − F ∗. Hence, there exists a positive correlation between B and F : B = k (F − F ∗) k ∈ (0, 1] (9) As mentioned in Subsec. 3.3, F = B+P . Furthermore, the goal of F and P is both to better obtain the performance of the given architecture. As a result, the ideal hypothesis F ∗ = P ∗. We could obtain the relationship between F and P : (1− k) (F − F ∗) = (P − P ∗) (10) Equation (10) is held on source and target distributions. Hence, εS (P, P ∗) = (1 − k)εS (F, F ∗) and εT (P, P ∗) = (1 − k)εT (F, F ∗). 
The P and F can both correctly predict the performance of architectures, the differences between them on the source domain are small. Hence, εS(P ) = εS(F ). Combined with Equation (10), we can obtain Equation (11): εT (P ) ≤ εS(P ) + sup |εS (P, P ∗)− εT (P, P ∗)|+ λ ≤ εS(F ) + (1− k) sup |εS (F, F ∗)− εT (F, F ∗)|+ λ ≤ εS(F ) + (1− k) 2 dH∆H(S, T ) + λ (11) It can be seen from Equation (11) that the upper bound of the target-domain error for P with the usage of B is lower than that for F without the usage of B. Therefore, the proposed method can help to reduce the upper bound of the target-domain error, which also proves the effectiveness of our proposed method from a theoretical point of view. 4 EXPERIMENTS In this part, we choose the NAS-Bench-101 as source domain and the DARTS search space as target domains to train USPP. We first report the results of neural architecture search in DARTS to verify the effectiveness of USPP. Then, we perform extensive ablation studies on various components. Please note that we perform additional experiments (NAS-Bench-101→NASBench-201, NAS-Bench-201→NAS-Bench-101, NAS-Bench-201→DARTS, NDS ResNet→NDS ResNeXt Radosavovic et al. (2019), and NDS ResNeXt→NDS ResNet) in Appendix B. All experiments in the paper are performed on NVIDIA GTX 2080Ti GPU. The experimental settings is presented in Appendix A. 4.1 NEURAL ARCHITECTURE SEARCH ON DARTS Training process. The regression loss Lreg , the domain classification loss Lcls, and the bridge loss Lbri are shown in Fig. 4.1(a). As expected, Lreg , Lcls, and Lbri reduce repaidly. Kendall’s Tau (KTau) (Sen, 1968), which can describe the correlation between the predicted value and the groundtruth values, is used to measure the prediction performance of USPP. The trend of the KTau value of USPP is shown in Fig. 4.1(b). It can be seen that the KTau of USPP goes up very quickly and converges. CIFAR-10. The experimental results on CIFAR-10 are shown in Table 1. It can be seen that the proposed USPP only costs 0.02 GPU days, which is the least among all methods compared. Please note that we do not take the costs for building the NAS-Bench-101 into consideration because NASBench-101 is a public dataset and not constructed for our work. Besides, TENAS with the least cost among other methods, takes 2.5 times as long as USPP. This is because USPP takes advantage of the fully-trained architectures in NAS-Bench-101, thus eliminating the cost of building new architectures in the DARTS search space. Furthermore, the searched architecture by USPP gets the highest test accuracy 97.86% compared with all the other state-of-the-art methods. Specifically, the accuracy of the architecture searched is still 0.32% higher than that of the best performance architecture searched by other methods. This shows that our method can effectively transfer knowledge from the source search space to the target search space. ImageNet. The comparison results on ImageNet are shown in Table 2. It can be seen from the results that the search cost of USPP is the lowest, and the PC-DARTS method with the least cost of the other methods is five times of the USPP. Besides, the searched architecture by USPP trained without any tricks can obtain the highest top-1 and top-5 accuracy, and the training result with positive technique largely surpasses existing methods in top-1 and top-5 accuracy. As a result, USPP can transfer useful information from NAS-Bench-101 and learn the inner correspondence between architectures and performance in DARTS. 
4.2 ABLATION STUDY In this part, we conduct ablation experiments on the progressive domain-invariant feature extraction method, operation embedding, and domain adaptation methods. We randomly sample 100 architectures from NAS-Bench-301 Siems et al. (2020) to evaluate the proposed USPP. The progressive domain-invariant feature extraction. To validate the effectiveness of the proposed progressive method, we have performed ablation experiments on the core components of the progressive domain-invariant feature extraction method, i.e., the bridge layer. It can be seen from Table 3 that the performance predictor with the bridge layer gets a higher Kendall’s Tau value and 0.1434 more than the result without the bridge. The result indicates that the progressive domaininvariant feature extraction plays a positive role in improving the performance of USPP. Operation embedding. The results of the ablation study on the operation embedding are reported in Table 4. It can be seen from the result with operation embedding is more 0.1009 than that without operation embedding, which illustrates that operation embedding improves the performance of the predictor. Furthermore, the length of operation embedding is set as 3 in the experiment because of the small number of the operations, and the 3D visual result is shown in Fig. 3. Surprisingly, convolution 1 × 1 and output are the most similar pair in all pairs of operations. Furthermore, just as we suspected, the distance between convolution operations is smaller than the distance between convolution and pooling operations. Different domain adaptation methods. We adapt the distribution discrepancy metric-based method MMD (Long et al., 2015) and the adversarial training-based methods DANN (Ganin & Lempitsky, 2015), GVB (Cui et al., 2020b), and HDA (Cui et al., 2020a). The hyperparameters of these methods are optimized, and the best results of them are reported in Table 7. It is obvious that our method outperforms all the peer competitors. 5 CONCLUSION In this paper, we propose an performance predictor with the unsupervised domain adaptation technique to reducing the high cost of annotating architectures. Specifically, we design a progressive domain-invariant feature extraction to reduce the difficulty of alignment by explicitly modeling the domain-specific and separating them of domain-invariant features. Moreover, we use operation embedding to map the operation list of every architecture to continuous space, and learn the deeper meaning and structural information of each operation during training. The USPP achieves the test accuracy of 97.86% and 76.5% on CIFAR-10 and ImageNet, respectively. Unfortunately, this paper does not consider sequence-based search spaces and we will propose a more general domain adaptation framework in the future. A EXPERIMENTAL SETTING In this section, we report the expermental settings of the experiments in Section 4. Data processing. In order to keep the DARTS search space and NAS-Bench-101 search space uniform, we convert the architecture of the DARTS search space with operations on edges to one with operations on nodes following the convention (Liu et al., 2021). Then, since the types of operations in the two search spaces are very different, we group all operation types into five categories: input, output, convolution 1× 1, convolution 3× 3, convolution 5× 5, and pooling. Setting of USPP. The Gradient Reversal Layer (GRL) is utilized to simplify the adversarial training. 
We train USPP with a batch size of 1,000 for 100 epochs. We employ SGD with a weight decay of 0.0005, and the learning rate is initialized to 0.1. We set γ = 1 and µ = 1 in the loss, which is verified to perform well in the experiments.

Search setting. Following the convention of performance predictors (Wen et al., 2020), we randomly sample 100,000 architectures from the DARTS search space and predict their performance with the trained USPP. Then, we select the architecture with the highest predicted performance value as the searched architecture.

Training setting of the searched architecture. For the training on CIFAR-10, we follow the convention of (Lu et al., 2021b). As for the training on ImageNet, we train the searched architecture under two types of settings to compare with other methods fairly. In the first setting, we train the architecture for 350 epochs with a batch size of 512. Besides, we use the SGD optimizer with a momentum of 0.9, an initial learning rate of 0.2, and a weight decay of 4×10^−5. In the second setting, the cosine learning rate strategy is used during training on the basis of the first setting. In addition, we use some enhancement techniques to improve the classification accuracy on ImageNet. Specifically, label smoothing (Szegedy et al., 2016) and autoaugment (Cubuk et al., 2019), which are effective regularization methods, are utilized in the training. Furthermore, the Squeeze-and-Excitation (SE) module (Hu et al., 2018) is attached after each cell.

B SUPPLEMENTARY EXPERIMENTS
In this part, we perform additional experiments on cell-based and block-based search spaces, and compare our method with other domain adaptation methods. The details of the cell-based and block-based search spaces used are shown in Fig. 6 and Fig. 7, respectively. Specifically, NAS-Bench-101, NAS-Bench-201, and DARTS belong to the cell-based search space, while NDS ResNet and NDS ResNeXt belong to the block-based search space.

B.1 CELL-BASED SEARCH SPACE

Table 9: The KTau value of USPP on NAS-Bench-101 (NAS-Bench-201→NAS-Bench-101).

Methods                          KTau
MMD (Gretton et al., 2006)       0.306
DANN (Ganin & Lempitsky, 2015)   0.325
GVB (Cui et al., 2020b)          0.302
HDA (Cui et al., 2020a)          0.307
USPP (ours)                      0.349

Transferring from NAS-Bench-101 to NAS-Bench-201. The comparisons of the KTau value on NAS-Bench-201 between USPP and other domain adaptation methods are shown in Table 9. It can be seen that USPP gets the highest KTau value among all peer competitors. This illustrates that the proposed method can effectively transfer knowledge from NAS-Bench-101 to NAS-Bench-201.

Transferring from NAS-Bench-201 to NAS-Bench-101. Table 8 shows the comparison results of USPP on NAS-Bench-101. It can be seen that USPP achieves the highest KTau value among all peer competitors, which verifies the effectiveness of USPP in transferring from NAS-Bench-201 to NAS-Bench-101.

Transferring from NAS-Bench-201 to DARTS. USPP is evaluated on NAS-Bench-301. The comparison results of transferring from NAS-Bench-201 to the DARTS search space are shown in Table 10. It can be observed that although USPP surpasses all competitors, the KTau value of USPP is low. We infer that this is because NAS-Bench-201 is too small compared to DARTS (15.6k vs. 10^18 architectures), and the domain discrepancy between NAS-Bench-201 and DARTS is too large.
The information learned from NAS-Bench-201 is not enough to accurately predict the architectures in DARTS.

B.2 BLOCK-BASED SEARCH SPACE

Table 12: The KTau value of USPP on NDS ResNeXt (NDS ResNet→NDS ResNeXt).

Methods                          KTau
MMD (Gretton et al., 2006)       0.519
DANN (Ganin & Lempitsky, 2015)   0.513
GVB (Cui et al., 2020b)          0.485
HDA (Cui et al., 2020a)          0.391
USPP (ours)                      0.616

Transferring from NDS ResNeXt to NDS ResNet. We apply the 25k publicly annotated architectures in the NDS ResNeXt search space and the unlabeled architectures in the NDS ResNet search space to train USPP. Please note that here we do not use the proposed operation embedding method, and we replace the GCN (i.e., the feature extractor) with an MLP. The comparison results of the KTau value are shown in Table 11. The results show that the proposed USPP surpasses all competitors. This verifies the effectiveness of our approach on NDS ResNet, which belongs to the block-based search space.

Transferring from NDS ResNet to NDS ResNeXt. Similar to the last experiment, we train USPP on the 25k labeled architectures in the NDS ResNet search space and test USPP on the publicly annotated architectures in the NDS ResNeXt search space. The comparison results are shown in Table 12. USPP achieves the highest KTau value. This illustrates that the proposed method can effectively learn the correlation between architecture and performance and transfer it to the NDS ResNeXt search space.

C DETAILS OF THE TRAINING PROCESS
Quality of the domain-invariant features. We have performed experiments on NAS-Bench-201→DARTS and NAS-Bench-101→NAS-Bench-201, and computed the mean KL-divergence and the mean cosine similarity between the learned features. Specifically, the mean KL-divergence is 0.0086, and the mean cosine similarity is 0.5018. These results show that the domain-invariant features we learn are similar in the different cases (i.e., NAS-Bench-201→DARTS and NAS-Bench-101→NAS-Bench-201), which further supports the quality of the domain-invariant features.

The regression error and the domain classification accuracy. The results are shown in Table 13. The regression error is low, which illustrates that the regressor learns the correspondence between architecture and performance in the source domain. Furthermore, the domain classification accuracy verifies that the discriminator cannot distinguish between the domains.
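To make the above computation concrete, here is a small sketch of the two feature-similarity metrics; since the text does not specify how feature vectors from the two transfer settings are paired, the element-wise pairing and the softmax normalization (used to turn feature vectors into distributions for the KL term) are our assumptions.

```python
# Sketch: mean KL-divergence and mean cosine similarity between two sets of
# learned domain-invariant features. feats_a / feats_b are (N, d) arrays,
# here filled with random placeholders instead of real extracted features.
import numpy as np
from scipy.special import softmax
from scipy.stats import entropy

def mean_kl(feats_a, feats_b):
    p = softmax(feats_a, axis=1)   # normalize each feature vector
    q = softmax(feats_b, axis=1)
    return float(np.mean([entropy(pi, qi) for pi, qi in zip(p, q)]))

def mean_cosine(feats_a, feats_b):
    num = np.sum(feats_a * feats_b, axis=1)
    den = (np.linalg.norm(feats_a, axis=1) *
           np.linalg.norm(feats_b, axis=1) + 1e-12)
    return float(np.mean(num / den))

feats_a = np.random.rand(100, 32)  # e.g., features from NAS-Bench-201 -> DARTS
feats_b = np.random.rand(100, 32)  # e.g., features from NAS-Bench-101 -> NAS-Bench-201
print(mean_kl(feats_a, feats_b), mean_cosine(feats_a, feats_b))
```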
1. What is the focus and contribution of the paper regarding unsupervised performance predictors for NAS?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and application scenarios?
3. Do you have any questions or concerns about the progressive domain-invariant feature extraction method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any typos or errors in the paper that need to be addressed?
Summary Of The Paper
This paper proposes an unsupervised performance predictor called USPP to reduce the training/search cost of NAS. To bridge the source and target search spaces, the authors develop a progressive domain-invariant feature extraction method to obtain domain-invariant features of architectures. Moreover, the paper provides sufficient ablation studies for each module/element of the proposed method. Nevertheless, both the novelty of the learnable operation embedding and the application scenario of the proposed method are very limited.

Strengths And Weaknesses
Strengths:
1. This paper develops a new NAS method that trains a NAS model and a regressor model in an adversarial scheme.
2. The proposed progressive domain-invariant feature extraction method is interesting.

Weaknesses:
1. There is a typo in the abstract: the accuracy on ImageNet should be 76.50% instead of 96.50%.
2. As highlighted in the paper, one of the contributions is the learnable operation embedding. However, it is not novel, because almost all Reinforcement Learning (RL) based NAS methods exploit a learnable embedding for each operation, such as ENAS. Please clarify further if there are other differences from what is used in ENAS.
3. The motivation for reducing the divergence between the source and target search spaces seems questionable. For example, in Figure 1, the source and target domains may share exactly the same or similar architectures. In other words, the discriminator would definitely fail in this case, since it cannot distinguish between two identical/similar architectures. Instead, what should be bridged/distinguished between the two spaces is the accuracy y instead of the architecture x.
4. The application scenario of the proposed method seems very limited. According to the paper, this method can only be used on the search space of DARTS. Given another search space, e.g., the MobileNet-based space used in OFA, is the proposed method still applicable?

Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow.
ICLR
1. What is the main contribution of the paper regarding surrogate models for predicting architecture accuracy?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to learn domain-invariant features?
3. Do you have any questions or suggestions regarding the evaluation methodology, such as using KL divergence or cosine similarity to measure feature distance or similarity?
4. Could you provide more information or clarification on the USPP DANN + bridge layer and the ablation study results?
5. Would it be possible to use operation embeddings for all baselines in Table 5, and why not use the NDS-Darts [1] benchmark for evaluating architectures?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper presents a surrogate model that predicts the accuracy of an architecture. The surrogate model has access to the architectures and their corresponding accuracies from the source search space, and is tasked with predicting the accuracies of architectures sampled from the target search space. It uses adversarial training for unsupervised domain adaptation. A graph convolutional network extracts latent features from the architectures and serves as the generator, while the discriminator tries to distinguish the domain of the architecture. A fundament layer F models the entire feature space, and a bridge layer B captures the domain-specific features; the difference F(x_i) − B(x_i) gives the domain-invariant features. The domain-invariant features are fed to a prediction layer that in turn predicts the accuracy. The overall loss function is a combination of the domain loss from the discriminator, the bridge layer loss, and the regression loss from the prediction layer. The input to the surrogate model is an adjacency matrix, and the model learns an embedding for the operations of the architecture.

Strengths And Weaknesses
Strengths:
1. The paper is able to learn a surrogate model on the target search space without any training data.

Questions:
1. Could you evaluate the quality of the domain-invariant features that are learned? For example, use NAS-Bench-201 as the source domain with DARTS as the target, and NAS-Bench-101 as the source domain with NAS-Bench-201 as the target, and compute the distance (KL-divergence) or similarity (cosine similarity) between the domain-invariant features learned in the two settings.
2. Could you also include a table for the regression error and the domain classification accuracy?
3. Is USPP DANN + bridge layer? If so, in the ablation study, when the bridge layer is removed, why is the Kendall tau different from DANN?
4. For Table 5, are you using operation embeddings for all the baselines?
5. For Table 5, rather than training the architectures for the DARTS search space, why not use the NDS-Darts [1] benchmark? Evaluate it on 100 architectures sampled from that search space. Similarly, for Table 2, if you could use NDS-Darts to find the best architecture and report the numbers for the other algorithms from the same benchmark, it would be a fair comparison. Please also include DARTS-PT [2] in Tables 1 and 2.

[1] On Network Design Spaces for Visual Recognition, Radosavovic et al.
[2] Rethinking Architecture Selection in Differentiable NAS, Wang et al.

Clarity, Quality, Novelty And Reproducibility
This paper is written very clearly, but the novelty of this paper is limited. The idea is heavily borrowed from the paper "Gradually Vanishing Bridge for Adversarial Domain Adaptation" by Cui et al., which uses a bridge layer for both the generator and the discriminator and uses the domain-invariant features to compute the classification loss and the adversarial loss.
ICLR
Title Optimal Control Via Neural Networks: A Convex Approach

∗Authors contributed equally.

Abstract Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks; however, from a model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore, many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture the temporal behavior of dynamical systems. Optimal controllers can then be achieved by solving a convex model predictive control problem. Experimental results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular, we show that on the MuJoCo locomotion tasks we achieve over 10% higher performance using 5× less time compared with a state-of-the-art model-based reinforcement learning method, and in the building HVAC control example our method achieves up to 20% energy reduction compared with classic linear models.

1 INTRODUCTION
Decisions on how to best operate and control complex physical systems such as the power grid, commercial and industrial buildings, transportation networks, and robotic systems are of critical societal importance. These systems are often challenging to control because they tend to have complicated and poorly understood dynamics, sometimes with legacy components built over a long period of time (Wolf, 2009). Therefore, detailed models for these systems may not be available or may be intractable to construct. For instance, since buildings account for 40% of the global energy consumption (Cheng et al., 2008), many approaches have been proposed to operate buildings more efficiently by controlling their heating, ventilation, and air conditioning (HVAC) systems (Zhang et al., 2017). Most of these methods, however, suffer from two drawbacks. On one hand, a detailed physics model of a building can accurately describe its behavior, but such a model can take years to develop. On the other hand, simple control algorithms have been developed using linear (RC circuit) models (Ma et al., 2012) to represent buildings, but the performance of these models may be poor since the building dynamics can be far from linear (Shaikh et al., 2014).

In this paper, we leverage the availability of data to strike a balance between the painstaking manual construction of physics-based models and the risk of not capturing rich and complex system dynamics through models that are too simplistic. In recent years, with the growing deployment of sensors in physical and robotic systems, large amounts of operational data have been collected, such as in smart buildings (Suryadevara et al., 2015), legged robotics (Meger et al., 2015) and manipulators (Deisenroth et al., 2011). Using these data, the system dynamics can be learned directly and then automatically updated at periodic intervals. One popular method is to parameterize these
complex system dynamics using deep neural networks to capture complex relationships (He et al., 2016; Vaswani et al., 2017), yet little research has investigated how to integrate deep learning models into the real-time closed-loop control of physical systems.

A key reason that deep neural networks have not been directly applied in control is that, even though they provide good performance in learning system behaviors, optimization on top of these networks is challenging (Kawaguchi, 2016). Neural networks, because of their structure, are generally not convex from input to output. Therefore, many control applications (e.g., where real-time decisions need to be made) choose to favor the computational tractability offered by linear models despite their poor fitting performance.

In this paper we tackle the tradeoff between modeling accuracy and control tractability by building on the input convex neural networks (ICNN) in (Amos et al., 2017) to both represent system dynamics and find optimal control policies. By making the neural network convex from input to output, we are able to obtain both good predictive accuracy and tractable optimization problems. The overall methodology is shown in Fig. 1. Our proposed method (shown in Fig. 1(b)) first utilizes an input convex network model to learn the system dynamics and then computes the best control decisions by solving a convex model predictive control (MPC) problem, which is tractable and has optimality guarantees. This is different from existing methods that use a model-free end-to-end controller directly mapping input to output (shown in Fig. 1(a)). Another major contribution of our work is that we explicitly prove that ICNN can represent all convex functions and system dynamics, and is exponentially more efficient than widely used convex piecewise linear approximations (Magnani & Boyd, 2009).

1.1 RELATED WORK
The work in (Amos et al., 2017) was an impetus for this paper. The key differences are that the goal in (Amos et al., 2017) is to show that ICNN can achieve classification performance similar to conventional neural networks and how the former can be used in inference and prediction problems. Our goal is to use these networks for optimization and closed-loop control, in the sense that we are more interested in the overall system performance than in the performance of the networks themselves. We also extend the class of networks to include RNNs to capture dynamical systems.

Control and decision-making have used deep learning mainly in model-free end-to-end controller settings (shown in Fig. 1(a)), such as sequential decision making in games (Mnih et al., 2013), robotic manipulation (Levine & Koltun, 2014; Levine et al., 2016), and control of cyber-physical systems (Wei et al., 2017; O'Neill et al., 2010). Much of this success, however, relies on a reinforcement learning setup in which the optimal state-action relationship can be learned from a large number of samples. Many physical systems do not fit into this process: sample collection is limited by real-time operation, and there are physical model constraints that are hard to represent efficiently. To address the above concerns faced by model-free reinforcement learning algorithms in physical system control, namely sample efficiency, safety, and incompatibility with model constraints, we consider a model-based control approach in this work. Model-based control algorithms often involve two stages: system identification and controller design.
For the system identification stage, the goal is to learn a fixed form of system model that minimizes some prediction error (Ljung, 1998). Most efficient model-based control algorithms have used a relatively simple function estimator for system dynamics identification (Nagabandi et al., 2018), such as linear models (Ma et al., 2012) and Gaussian processes (Meger et al., 2015; Deisenroth et al., 2011). These simplified models are sample-efficient to learn and can be nicely incorporated into the subsequent optimal control problems. However, such simple models may not have enough representation capacity for modeling large-scale or high-dimensional systems with nonlinear dynamics. Deep neural networks (DNNs) feature powerful representation capability, but the main challenge of using DNNs for system identification is that such models are typically highly nonlinear and nonconvex (Kawaguchi, 2016), which causes great difficulty for the subsequent decision making. A recent work (Nagabandi et al., 2018) is close in spirit to our proposed method. Similarly, the authors use a model-based approach for robotics control, where they first fit a neural network for the system dynamics and then use the fitted network in an MPC loop. However, since (Nagabandi et al., 2018) use a conventional NN for system identification, they cannot solve the MPC problem to global optimality. Our work shows how the proposed ICNN control algorithm achieves the benefits of both worlds: the optimization with respect to the inputs can be implemented using off-the-shelf deep learning optimizers, while at the same time we obtain good identification accuracy and tractable optimization problems.

2 CLOSED-LOOP CONTROL WITH INPUT CONVEX NEURAL NETWORKS
In this paper, we consider settings where a neural network is used in a closed-loop system. The fundamental goal is to optimize the system performance, which goes beyond the learning performance of the network on its own. In this section we describe how input convex neural networks (ICNN) can be extremely useful in these systems by considering two related problems. First, we show how ICNN perform in single-shot optimization problems. Then we extend the results to input convex recurrent neural networks (ICRNN), which allow us to both capture complex system dynamics and make time-series decisions.

2.1 SINGLE-SHOT PROBLEM
The following proposition states a simple sufficient condition for a neural network to be input convex:

Proposition 1. The feedforward neural network in Fig. 2(a) is convex from input to output given that all weights between layers W1:k and weights in the "passthrough" layers D2:k are non-negative, and all of the activation functions are convex and nondecreasing (e.g., ReLU).

The input convex neural network (ICNN) structure in Proposition 1 is motivated by the structure in (Amos et al., 2017) but modified to be more suitable for the control of dynamical systems. (Amos et al., 2017) only requires W2:k to be non-negative while placing no restrictions on the weights W1 and D2:k. Our construction achieves the exact representation by expanding the inputs to include both u (∈ Rd) and −u. Then any negative weight in W1 and D2:k of (Amos et al., 2017)'s ICNN structure is set to zero, and its negation (which is positive) is added as the weight for the corresponding −u.
The reason for our construction is to allow the network to be "rolled out in time" when we are dealing with dynamical systems and multiple networks need to be composed together. A simple example that demonstrates how the proposed ICNN can be used to fit a convex function comes from fitting $|u|$. This function is convex but not monotone: it is decreasing for $u < 0$ and increasing for $u > 0$. Let the activation function be $\text{ReLU}(\cdot) = \max(\cdot, 0)$. We can write $|u| = -u + 2\,\text{ReLU}(u)$ (Amos et al., 2017). However, this representation requires a negative weight, the $-1$ in front of $u$, which would be troublesome if we compose several networks together. In our proposed ICNN structure with all non-negative weights and negated input duplicates, we can write $|u| = v + 2\,\text{ReLU}(u)$, where we impose the constraint $v = -u$. Such doubling of the number of input variables may potentially make the network harder to train. Yet during control, having all of the weights non-negative maintains the convexity between inputs and outputs even when multiple steps are considered, as discussed in Section 2.2. The constraint $v = -u$ is linear and can easily be included in any convex optimization.

Proposition 1 follows directly from the composition of convex functions (Boyd & Vandenberghe, 2004). Although it allows for any nondecreasing convex activation function, in this paper we work with the popular ReLU activation function. Two notable additions in the ICNN compared with conventional feedforward neural networks are: 1) direct "passthrough" layers connecting inputs to hidden layers, alongside the conventional feedforward layers connecting hidden layers, for better representation power; and 2) the expanded inputs that include both $u$ and $-u$. The proposed ICNN structure is shown in Fig. 2(a). Note that such a construction guarantees that the network is convex and non-decreasing with respect to the expanded input $\hat{u} = [u; -u]$, while the output can realize functions that are either decreasing or non-decreasing in $u$.

Fundamentally, ICNNs allow us to use neural networks in decision-making processes by guaranteeing that the solution is unique and globally optimal. Since many complex input-output relationships can be learned through deep neural networks, it is natural to consider using the learned network in an optimization problem of the form

$$\min_{u} \; f(u; W) \qquad \text{(1a)}$$
$$\text{s.t.} \; u \in \mathcal{U}, \qquad \text{(1b)}$$

where $\mathcal{U}$ is a convex feasible space. If $f$ is an ICNN, optimizing over $u$ is a convex problem, which can be solved efficiently to global optimality. Note that we always duplicate the variables by introducing $v = -u$, but again this does not change the convexity of the problem. Of course, since the weights of the network are restricted to be nonnegative, the performance of the network (e.g., classification) may be worse. A common thread we observe in this paper is that trading off classification performance for tractability can be preferable.
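To make the construction concrete, below is a minimal PyTorch sketch of an ICNN satisfying Proposition 1. This is our own illustration under stated assumptions, not the paper's implementation: the layer sizes, the class name, and the `project` helper are ours. Convexity in the expanded input holds because all weights are clamped non-negative and ReLU is convex and nondecreasing.

```python
# A minimal ICNN sketch (illustrative; sizes and names are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim_u, hidden=64, n_layers=3):
        super().__init__()
        d = 2 * dim_u  # expanded input [u, -u]
        self.W = nn.ModuleList(
            [nn.Linear(d, hidden)]
            + [nn.Linear(hidden, hidden) for _ in range(n_layers - 2)]
            + [nn.Linear(hidden, 1)])
        # direct "passthrough" layers D_{2:k} from the expanded input
        self.D = nn.ModuleList(
            [nn.Linear(d, hidden, bias=False) for _ in range(n_layers - 2)]
            + [nn.Linear(d, 1, bias=False)])

    def project(self):
        # call after every optimizer step so Proposition 1 keeps holding
        with torch.no_grad():
            for lin in list(self.W) + list(self.D):
                lin.weight.clamp_(min=0.0)

    def forward(self, u):
        u_hat = torch.cat([u, -u], dim=-1)  # expand inputs to [u, -u]
        z = F.relu(self.W[0](u_hat))
        for W, D in zip(self.W[1:-1], self.D[:-1]):
            z = F.relu(W(z) + D(u_hat))
        return self.W[-1](z) + self.D[-1](u_hat)
```

After each training step one would call `model.project()`; clamping is the simplest projection keeping all weights non-negative, at the possible cost in fitting performance noted above.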
2.2 CLOSED-LOOP CONTROL AND RECURRENT NEURAL NETWORKS

In addition to the single-shot optimization problem in (1), we are interested in optimally controlling a dynamical system. To model the temporal dependency of the system dynamics, we propose to use recurrent neural networks (instead of feedforward neural networks). Recurrent networks carry an internal state of the system, which introduces coupling with previous inputs to the system. Fig. 2(b) shows the proposed input convex recurrent neural network (ICRNN) structure. This network maps from input $\hat{u}$ to output $y$ with memory unit $z$ according to

$$z_t = \sigma_1(U\hat{u}_t + Wz_{t-1} + D_2\hat{u}_{t-1}), \qquad \text{(2)}$$
$$y_t = \sigma_2(Vz_t + D_1z_{t-1} + D_3\hat{u}_t), \qquad \text{(3)}$$

where $\hat{u} = [u; -u]$, and $D_1, D_2, D_3$ are added direct "passthrough" layers for augmenting representation power. If we unroll the dynamics with respect to time, we have $y_t = f(\hat{u}_1, \hat{u}_2, \dots, \hat{u}_t; \theta)$, where $\theta = [U, V, W, D_1, D_2, D_3]$ are the network parameters and $\sigma_1, \sigma_2$ denote the nonlinear activation functions. The next proposition states a sufficient condition for the network to be input convex.

Proposition 2. The network shown in Fig. 2(b) is a convex function from inputs to output if all weights $U, V, W, D_1, D_2, D_3$ are non-negative, and all activation functions are convex and nondecreasing (e.g. ReLU).

The proof of this proposition again follows directly from the composition rule of convex functions. Similarly to the ICNN case, by expanding the input vector to include both $u$ and $-u$ and restricting all weights to be non-negative, the resulting ICRNN structure is a convex and non-decreasing mapping from inputs to output.
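The following is a minimal PyTorch sketch of one ICRNN step implementing Eqs. (2)-(3); it is our own illustration, with module and dimension names assumed. Keeping all six weight matrices non-negative, as in Proposition 2, is again done by clamping.

```python
# One ICRNN step, Eqs. (2)-(3) (our sketch; names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICRNNCell(nn.Module):
    def __init__(self, dim_u, dim_z, dim_y):
        super().__init__()
        d = 2 * dim_u  # expanded input [u, -u]
        self.U = nn.Linear(d, dim_z, bias=False)
        self.W = nn.Linear(dim_z, dim_z, bias=False)
        self.D2 = nn.Linear(d, dim_z, bias=False)
        self.V = nn.Linear(dim_z, dim_y, bias=False)
        self.D1 = nn.Linear(dim_z, dim_y, bias=False)
        self.D3 = nn.Linear(d, dim_y, bias=False)

    def project(self):
        # Proposition 2: all of U, W, D2, V, D1, D3 must stay non-negative
        with torch.no_grad():
            for lin in [self.U, self.W, self.D2, self.V, self.D1, self.D3]:
                lin.weight.clamp_(min=0.0)

    def forward(self, u_hat_t, u_hat_prev, z_prev):
        z_t = F.relu(self.U(u_hat_t) + self.W(z_prev) + self.D2(u_hat_prev))  # Eq. (2)
        y_t = F.relu(self.V(z_t) + self.D1(z_prev) + self.D3(u_hat_t))        # Eq. (3)
        return y_t, z_t
```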
The proposed ICRNN structure can be leveraged to represent system dynamics for closed-loop control. Consider a physical system with discrete-time dynamics. At time step $t$, define $s_t$ as the system states, $u_t$ as the control actions, and $y_t$ as the system output. For example, for the real-time control of a building system, $s_t$ includes the room temperature, humidity, etc.; $u_t$ denotes the building appliance scheduling, room temperature set-points, etc.; and the output $y_t$ is the building energy consumption. In addition, there may be exogenous variables that impact the output of the system; for example, the outside temperature will impact the energy consumption of the building. However, since the exogenous variables are not affected by any of the control actions we take, we suppress them in the formulation below. The time evolution of the system is described by

$$y_t = f(s_t, u_t), \qquad \text{(4a)}$$
$$s_{t+1} = g(s_t, u_t), \qquad \text{(4b)}$$

where (4b) describes the coupling between the current inputs and the future system states. Physical systems described by (4) may have significant inertia, in the sense that the outcome of any control action is delayed in time and there are significant couplings across time periods. Since we use ICRNNs to represent both the system dynamics $g(\cdot)$ and the output $f(\cdot)$, the control variable $u$ expands to $\hat{u}$. The optimal receding horizon control problem at time $t$ can be written as

$$\underset{u_t, u_{t+1}, \dots, u_{t+T}}{\text{minimize}} \quad C(\hat{x}, y) = \sum_{\tau=t}^{t+T} J(\hat{x}_\tau, y_\tau) \qquad \text{(5a)}$$
$$\text{subject to} \quad y_\tau = f(\hat{x}_{\tau-n_w}, \hat{x}_{\tau-n_w+1}, \dots, \hat{x}_\tau), \quad \forall \tau \in [t, t+T] \qquad \text{(5b)}$$
$$s_\tau = g(\hat{x}_{\tau-n_w}, \hat{x}_{\tau-n_w+1}, \dots, \hat{x}_{\tau-1}, \hat{u}_\tau), \quad \forall \tau \in [t, t+T] \qquad \text{(5c)}$$
$$\hat{x}_\tau = [s_\tau; \hat{u}_\tau], \quad \hat{u}_\tau = [u_\tau; v_\tau], \quad \forall \tau \in [t, t+T] \qquad \text{(5d)}$$
$$v_\tau = -u_\tau, \quad \forall \tau \in [t, t+T] \qquad \text{(5e)}$$
$$s_\tau \in \mathcal{S}_{feasible}, \quad \forall \tau \in [t, t+T] \qquad \text{(5f)}$$
$$u_\tau \in \mathcal{U}_{feasible}, \quad \forall \tau \in [t, t+T] \qquad \text{(5g)}$$

where a new variable $\hat{x}_\tau = [s_\tau; \hat{u}_\tau]$ is introduced for notational simplicity, which we call the system input. It is the collection of the system states $s_\tau$ and the duplicated control actions $u_\tau$ and $-u_\tau$, therefore ensuring that the mapping from $u_\tau$ to any future states and outputs remains convex. $J(\hat{x}_\tau, y_\tau)$ is the control cost incurred at time $\tau$, which is a function of both the system input $\hat{x}_\tau$ and the output $y_\tau$. The functions $f(\cdot)$ and $g(\cdot)$ in (5b)-(5c) are parameterized as ICRNNs, which represent the system dynamics from the sequence of inputs $(\hat{x}_{\tau-n_w}, \dots, \hat{x}_\tau)$ to the system output $y_\tau$, and the dynamics from control actions to system states, respectively. $n_w$ is the memory window length of the recurrent neural network.

The equations (5d) and (5e) duplicate the input variables $u$ and enforce the consistency condition between $u$ and its negation $v$. Lastly, (5f) and (5g) are the constraints on feasible system states and control actions, respectively. Note that, as a general formulation, we do not apply the duplication trick to the state variables, so the dynamics fitted by (5b) and (5c) are non-decreasing over the state space, which is not equivalent to the class of dynamics represented by linear systems. However, since we are not restricting the control space and we explicitly include multiple previous states in the system transition dynamics, the non-decreasing constraint over the state space should not restrict the representation capacity by much. In Section 3 we theoretically prove the representability of the proposed networks.

The optimization problem in (5) is a convex optimization with respect to (w.r.t.) the inputs $u = [u_t, \dots, u_{t+T}]$, provided the cost function $J(\hat{x}_\tau, y_\tau) = J(s_\tau, \hat{u}_\tau, y_\tau)$ is convex w.r.t. $\hat{u}_\tau$, and convex and nondecreasing w.r.t. $s_\tau$ and $y_\tau$. A problem is convex if and only if both the objective function and the constraints are convex. In the above problem, $J(s_\tau, \hat{u}_\tau, y_\tau)$ is convex and nondecreasing w.r.t. $s_\tau$ and $y_\tau$; $s_\tau$ and $y_\tau$ are parameterized as ICRNNs, i.e., (5b) and (5c), such that they are convex w.r.t. $\hat{u}_\tau$. Therefore, following the composition rule of convex functions, the objective function is convex w.r.t. the inputs $u = [u_t, \dots, u_{t+T}]$. Besides, all the equality constraints (5d) and (5e) are affine. Supposing both the state feasible set (5f) and the action feasible set (5g) are convex, the overall optimization is convex.

The convexity of the problem in (5) guarantees that it can be solved efficiently and optimally using gradient descent methods. Since both the objective function (5a) and the constraints (5b)-(5c) are parameterized as neural networks, their gradients can be calculated via back-propagation, with the modification that the cost is propagated to the input rather than to the weights of the network. For implementation, the gradients can be conveniently calculated using back-propagation in existing frameworks such as TensorFlow. Let $u^* = \{u_t^*, u_{t+1}^*, \dots, u_{t+T}^*\}$ be the optimal solution of the optimization problem at time $t$. Then the first element of $u^*$, that is $u_t^*$, is implemented in the real-time system control. The optimization problem is repeated at time $t+1$, based on the updated state prediction using $u_t^*$, yielding a model predictive control strategy.
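As a sketch of how (5) can be solved in practice, the snippet below runs projected gradient steps on the control sequence through frozen, trained networks. It is our own illustration: `dynamics` and `cost` are hypothetical stand-ins for the fitted ICRNN $g$ and the stage cost $J$ (for simplicity the output model $f$ is folded into `cost`), box constraints stand in for the feasible sets (5f)-(5g), and we use Adam rather than plain gradient descent for convenience.

```python
# Gradient-based solution of the convex MPC problem (illustrative sketch).
import torch

def solve_mpc(dynamics, cost, s0, T, dim_u, u_min, u_max, steps=200, lr=0.05):
    u = torch.zeros(T, dim_u, requires_grad=True)  # decision variables
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, total = s0, 0.0
        for t in range(T):
            u_hat = torch.cat([u[t], -u[t]])   # duplication v = -u, (5d)-(5e)
            s = dynamics(s, u_hat)             # state rollout, cf. (5c)
            total = total + cost(s, u_hat)     # stage cost J, cf. (5a)
        total.backward()                       # gradients w.r.t. inputs only
        opt.step()
        with torch.no_grad():
            u.clamp_(u_min, u_max)             # project onto the feasible box
    return u.detach()[0]  # receding horizon: apply only the first action
```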
3 EFFICIENCY AND REPRESENTATION POWER OF ICNN

Besides the computational tractability of input convex networks, as system identification models we are also interested in their predictive accuracy and capacity. This section provides a theoretical analysis of the representation ability and efficiency of input convex neural networks.

3.1 REPRESENTATION POWER OF INPUT CONVEX NEURAL NETWORKS

Definition 1. Given a function $f: \mathbb{R}^d \to \mathbb{R}$, we say that a function $\hat{f}$ approximates $f$ within $\varepsilon$ if $|f(x) - \hat{f}(x)| \le \varepsilon$ for all $x$ in the domain of $f$.

Theorem 1. [Representation power of ICNN] For any Lipschitz convex function over a compact domain and any $\varepsilon > 0$, there exists a neural network with nonnegative weights and ReLU activation functions that approximates it within $\varepsilon$.

Lemma 1. Given a continuous Lipschitz convex function $f: \mathbb{R}^d \to \mathbb{R}$ with compact domain and $\varepsilon > 0$, it can be approximated within $\varepsilon$ by a maximum of a finite number of affine functions. That is, there exists $\hat{f}(x) = \max_{i=1,\dots,N}\{\mu_i^T x + b_i\}$ such that $|f(x) - \hat{f}(x)| \le \varepsilon$ for all $x \in \text{dom} f$.

Sketch of proof for Theorem 1. Assuming Lemma 1 holds, the proof of Theorem 1 boils down to showing that a neural network with nonnegative weights and ReLU activation functions can exactly represent a maximum of affine functions. The proof is constructive. We first construct a neural network with ReLU activation functions and both positive and negative weights; then we show that the weights between different layers of the network can be restricted to be nonnegative by a simple duplication trick. Specifically, since the weights in the input layer and passthrough layers of the ICNN can be negative, we simply add a negation of each input variable (e.g., both $x$ and $-x$ are given as inputs) to the network. These variables need to satisfy a consistency constraint, since one is the negation of the other. Since this constraint is linear, it preserves the convexity of optimization problems. The details of the proof are given in Appendix B. This proof is similar in spirit to theorems in (Hanin, 2017; Arora et al., 2016). The key new results are a simpler construction than the one used in (Hanin, 2017) and the restriction to nonnegative weights between the layers.

Similar to Theorem 1, an analogous result about the representation power of ICRNNs can be shown for systems with convex dynamics: given a dynamical system whose rolled-out dynamics $y_t = f(x_1, \dots, x_t)$ is convex, there exists a recurrent neural network with nonnegative weights and ReLU activation functions that approximates it within $\varepsilon$. A broad range of systems can be captured by this model. For example, the linear quadratic (Gaussian) regulator problem can be described using an ICRNN if we identify $y$ as the cost of the regulator (Skogestad & Postlethwaite, 2007; Boyd et al., 1994).¹ An example of a nonlinear system is the control of electrochemical batteries: it can be shown from first principles that the degradation of these types of batteries is convex in their charge and discharge actions (Shi et al., 2018), and our framework offers a powerful data-driven way to control batteries found in electric vehicles, cell phones, and power systems.

¹It is important to note that $y$ is usually used as the system output of a linear system, but in our context we use it to refer to the quadratic cost with respect to the system states and the control input.

3.2 ICNN VS. CONVEX PIECEWISE LINEAR FITTING

In the proof of Theorem 1, we first approximate a convex function by a maximum of affine functions and then construct a neural network according to this maximum. A natural question is then: why learn a neural network and not directly the affine functions in the maximum? This approach was taken in (Magnani & Boyd, 2009), where a convex piecewise-linear function (a max of affine functions) is directly learned from data through a regression problem. A key reason that we propose to use an ICNN (or ICRNN) to fit a function, rather than directly finding a maximum of affine functions, is that the former is a much more efficient parameterization than the latter. As stated in Theorem 2, a maximum of $K$ affine functions can be represented by an ICNN with $K$ layers, where each layer only requires a single ReLU activation function. However, given a single-layer ICNN with $K$ ReLU activation functions, it may take a maximum of $2^K$ affine functions to represent it exactly. Therefore, in practice it is much easier to train a good ICNN than to find a good set of affine functions.
Theorem 2. [Efficiency of Representation]
1. Let $f_{ICNN}: \mathbb{R}^d \to \mathbb{R}$ be an input convex neural network with $K$ ReLU activation functions. Then $\Omega(2^K)$ affine functions are required to represent $f_{ICNN}$ using a max of affine functions.
2. Let $f_{CPL}: \mathbb{R}^d \to \mathbb{R}$ be a max of $K$ affine functions. Then $O(K)$ activation functions are sufficient to represent $f_{CPL}$ exactly with an ICNN.

The proof of this theorem is given in Appendix C.

4 EXPERIMENTS

In this section, we verify the effectiveness of ICNN and ICRNN by presenting experimental results on two decision-making problems: continuous control benchmarks on MuJoCo locomotion tasks (Todorov et al., 2012), and energy management of a reference large-scale commercial building (Crawley et al., 2001). The proposed method can be used as a flexible building block in decision-making problems: we use ICNNs to represent the system dynamics of the MuJoCo simulators, and we use ICRNNs in an end-to-end fashion to find optimal control inputs. Both examples demonstrate that the proposed method: 1) discovers the connection between controllable variables and the system dynamics or cost objectives; 2) is lightweight and sample-efficient; and 3) achieves generalizable and more stable control performance compared with previous model-based reinforcement learning and simplified linear control approaches.

4.1 MUJOCO LOCOMOTION TASKS

Experimental Setup. We consider four simulated robotic locomotion tasks: swimmer, half-cheetah, hopper, and ant, implemented in MuJoCo under the OpenAI rllab framework (Duan et al., 2016). We train and represent the locomotion state transition dynamics $s_{t+1} = g(s_t, u_t)$² using a 2-layer ICNN with ReLU activations, which can be integrated into the following finite-horizon control problem to find the optimal action sequence $u_t, \dots, u_{t+T}$ for a fixed look-ahead horizon $T$:

$$\underset{u_t, \dots, u_{t+T}}{\text{minimize}} \quad -\sum_{\tau=t}^{t+T} r(s_\tau, u_\tau) \qquad \text{(6a)}$$
$$\text{subject to} \quad s_{\tau+1} = g(s_\tau, u_\tau), \quad \forall \tau \in [t, t+T] \qquad \text{(6b)}$$
$$u_\tau \in \mathcal{U}_{feasible}, \quad \forall \tau \in [t, t+T] \qquad \text{(6c)}$$

where the objective (6a) is convex because $r(s_\tau, u_\tau)$ is a concave reward function of system states (such as velocity) and control actions (the detailed forms of $r(s_\tau, u_\tau)$ for the different locomotion tasks are listed in Appendix D). To achieve better model generalization on locomotion dynamics, we also followed (Nagabandi et al., 2018) and applied DAGGER (Ross et al., 2011) to iteratively collect labeled robotic rollouts and train the supervised dynamics model (6b) using on-policy locomotion samples; a schematic sketch of this loop is shown below. See Appendix D for further simulation hyperparameters and experimental details. For each aggregated iteration of collecting rollout data and training the ICNN model, we validate the controller performance on standalone validation rollouts by optimally solving (6).

²Note that for notational convenience, in this example and the following building example, we use $u$ to represent the expanded control vector including its negation. For a system state $s \in \mathbb{R}^d$ with $d > 1$, convexity means that each dimension of $s$ is convex w.r.t. the function inputs.
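The following is a schematic sketch (our own paraphrase, with hypothetical helper names `fit_dynamics` and `plan_actions` and a gym-style `env`) of the model-based loop just described: fit the dynamics on the aggregated data, plan with (6), execute the first action, and aggregate the new on-policy samples.

```python
# Schematic model-based control loop (our paraphrase; helpers are hypothetical).
import numpy as np

def model_based_control_loop(env, fit_dynamics, plan_actions,
                             n_iters=6, horizon=20, rollout_len=1000):
    # initial dataset from uniformly random actions, as in Appendix D.1
    data, s = [], env.reset()
    for _ in range(rollout_len):
        u = np.random.uniform(-1.0, 1.0, env.action_space.shape)
        s_next, reward, done, info = env.step(u)
        data.append((s, u, s_next))
        s = env.reset() if done else s_next
    for _ in range(n_iters):
        g = fit_dynamics(data)                  # supervised fit of s' = g(s, u)
        s = env.reset()
        for _ in range(rollout_len):
            u = plan_actions(g, s, horizon)[0]  # solve (6), apply first action
            s_next, reward, done, info = env.step(u)
            data.append((s, u, s_next))         # DAGGER-style aggregation
            s = env.reset() if done else s_next
    return g
```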
Baselines. We compare our system modeling and continuous control method with the state-of-the-art model-based RL algorithm of (Nagabandi et al., 2018), where the authors used a standard multi-layer perceptron (MLP) model to parameterize the system dynamics (6b). We refer to their method as the random-shooting algorithm, since they cannot solve (6) to optimality; instead, they use a pre-defined number of random control sequences (denoted as $K$) to query the trained MLP and pick the best sequence as the rollout policy. Such a method is able to find good control policies on the order of $10^4$ timesteps, which is much more sample-efficient than model-free RL methods (Duan et al., 2016; Mnih et al., 2015). To make a fair comparison with the baseline method, we keep the same setup for the number of rollouts and the initial random-action training. Our framework makes the neural network convex w.r.t. the input by adding passthrough links to the 2-layer model and keeping all the layer weights nonnegative. We evaluate the performance of both algorithms on three fixed random seeds for the four tasks. Similar to the fine-tuning steps in (Nagabandi et al., 2018), the control policies found by ICNN can also be plugged in as initialization policies for subsequent model-free reinforcement learning algorithms.

Continuous Control Performance. During training, we found that both the ICNN and the MLP are able to predict robotic states quite accurately based on (6b). This provides a good system dynamics model, which is beneficial for solving for control policies. The control performances are shown in Fig. 3, where we compare the average reward of the proposed method and the random-shooting method with $K = 100$ over 10 validation rollouts during each aggregated iteration (see Fig. 8 in Appendix D.4 for random-shooting performance with varying $K$). The policy found by ICNN outperforms the random-shooting method in all settings with varying horizon $T$ for all four locomotion tasks. Intuitively, ICNN should perform better when the action space is larger, since the random-shooting method cannot search through the action space efficiently with a fixed $K$. This is illustrated in the example of ant, where even as more training samples are aggregated and the MLP model represents more accurate dynamics, random shooting gets stuck and there is little improvement in the control performance. Moreover, since we skip the expensive process of calculating the rewards of each random-shooting trajectory and finding the best one, our method only implements the optimization step based on (6) and is much faster than random-shooting methods in most settings, especially when $K$ is large (see Table 2 in Appendix D.3 for wall-clock times). For instance, in the case of swimmer, our proposed method only uses 1/5 of the time compared to (Nagabandi et al., 2018). This also indicates that our method is much more sample-efficient than off-the-shelf model-free RL methods, using two orders of magnitude less training data to reach similar validation rewards (Duan et al., 2016; Mnih et al., 2015) (see Fig. 9 in Appendix D.4).

4.2 BUILDING ENERGY MANAGEMENT

Experimental Setup. We now move on to optimally controlling a dynamical system with significant inertia. We consider the real-time control problem of a building's HVAC (heating, ventilation, and air conditioning) system to reduce its energy consumption. Building energy management remains a hard problem in the control area. The exact system dynamics are unknown and hard to model due to the complex heat transfer dynamics, time-varying environments, and the scale of the system in terms of states and actions (Kouro et al., 2009). At time $t$, we assume the building's running profile $x_t := [s_t, u_t]$ is available, where $s_t$ denotes the building system states, including outside temperature, room temperature measurements, zone occupancies, etc.; $u_t$ denotes a collection of control actions such as room temperature set points and appliance schedules. The output is the electricity consumption $P_t$.
This is a model predictive control problem in the sense that we want to find the best control inputs that minimize the overall energy consumption of the building by looking ahead several time steps. To achieve this goal, we first learn an ICRNN model $f(\cdot)$ of the building dynamics, trained to minimize the error between $P_t$ and $f(x_{t-n_w}, \dots, x_t)$, where $n_w$ denotes the memory window of the recurrent neural network. Then we solve

$$\underset{u_t, \dots, u_{t+T}}{\text{minimize}} \quad \sum_{\tau=t}^{t+T} f(x_{\tau-n_w}, \dots, x_\tau) \qquad \text{(7a)}$$
$$\text{subject to} \quad s_\tau = g(x_{\tau-n_w}, \dots, x_{\tau-1}, u_\tau), \quad \forall \tau \in [t, t+T] \qquad \text{(7b)}$$
$$\underline{u}_\tau \le u_\tau \le \overline{u}_\tau, \quad \forall \tau \in [t, t+T] \qquad \text{(7c)}$$
$$\underline{s}_\tau \le s_\tau \le \overline{s}_\tau, \quad \forall \tau \in [t, t+T] \qquad \text{(7d)}$$

where the objective (7a) minimizes the total energy consumption over the next $T$ steps ($T$ is the model predictive control horizon), and (7b) models the building states, where $g(\cdot)$ is parameterized as an ICRNN. Note that the formulation (7) is also flexible with respect to different loss functions. For instance, in practice we can reuse the trained dynamics model (7b) and integrate electricity prices into the overall objective, so that we directly find real-time actions that minimize electricity bills (please refer to Appendix E for more results). The constraints on control actions $u_t$ and system states $s_t$ are given in (7c) and (7d); for instance, the temperature set points as well as the real measurements should not exceed user-defined comfort regions.

To test the performance of the proposed method, we set up a 12-story large office building, a reference EnergyPlus commercial building model from the US Department of Energy (DoE)³, with a total floor area of 498,584 square feet, divided into 16 separate zones. Using a whole year's weather profile, we simulate the building running through the year and record $(x_t, P_t)$ with a resolution of 10 minutes. We use 10 months of data to train the ICRNN and the subsequent 2 months of data for testing. We use 39 building system state variables $s_t$ (uncontrollable), along with 16 control variables $u_t$. The output is a single value of building energy consumption at each time step. We set the model predictive control horizon $T = 36$ (six hours). We employ an ICRNN with a recurrent layer of dimension 200 to fit the building input-output dynamics $f(\cdot)$. The model is trained to minimize the MSE between its predictions and the actual building energy consumption using stochastic gradient descent. We use the same network structure and training scheme to fit the state transition dynamics $g(\cdot)$.

³EnergyPlus is an open-source whole-building energy modeling software developed by the US DoE for standard building energy simulation.

Baseline. We set the model-based forecasting and optimization benchmark using a linear resistor-circuit (RC) model to represent the heat transfer in building systems, and solve for the optimal control actions via MPC (Ma et al., 2012). At each step, the MPC algorithm takes into account the forecasted states of the building based on the fitted RC model and implements the current step's control actions. We also compare the performance of the ICRNN against a conventionally trained RNN, in terms of both building dynamics fitting performance and control performance. To solve the MPC problem with conventional RNN models, we also use a gradient-based method with respect to the controls. However, since conventional RNN models are generally not convex from input to output, there is no guarantee of reaching a global optimum (or even a local one).

Results. In terms of fitting performance, the ICRNN provides a competitive result compared to the conventional RNN model.
The overall test root mean square error (RMSE) is 0.054 for the ICRNN and 0.051 for the conventional RNN, both of which are much smaller than the error made by the RC model (0.240). Fig. 4(a) shows the fitting performance on 5 working days of test data. This illustrates the good performance of the ICRNN in modeling building HVAC system dynamics. Then, using the learned ICRNN model of the building dynamics, we obtain the suggested room control actions $u_t^*$ by solving the optimal building control problem (7). As shown in Fig. 4(b), with the same constraints on the building temperature interval of [19°C, 24°C], the building energy consumption is reduced by 23.25% after implementing the new temperature set points calculated with the ICRNN. By contrast, since there is no guarantee of finding optimal control actions by optimizing over a conventional RNN's input, the control solutions given by the conventional RNN only reduce electricity consumption by 11.73%; the solutions given by the RC model only save 4.07%. More importantly, in Fig. 4(c) we compare the control actions output by our method against MPC with a conventional RNN in two randomly selected building zones, the building basement and the top-floor central area. It shows that our proposed approach is able to find a group of stable control actions for the building system, while the conventional RNN generates control set points with undesirable, drastic variations.

5 SUMMARY AND DISCUSSION

In this work we proposed a novel optimal control framework that uses deep neural networks engineered to be convex from input to output. This framework bridges machine learning and control by representing system dynamics using input convex (recurrent) neural networks. We show that many interesting data-driven control problems can be cast as convex optimization problems using the proposed network architecture. Experiments on both benchmark MuJoCo locomotion tasks and building energy management demonstrate our methodology's potential in a variety of control and optimization problems.

APPENDIX A. TOY EXAMPLE

Consider a synthetic example which contains two circles of noisy input data $u \in \mathbb{R}^2$, along with a discrete label $y \in \{0, 1\}$ indicating whether the input comes from the inner circle ($y = 0$) or the outer circle ($y = 1$). Suppose a decision maker is interested in finding the $u$ that maximizes the probability of $y$ being 0. This optimization problem can be solved by first learning a neural network classifier from $u$ to $y$, and then finding the point $u$ that minimizes the output of the neural network. More specifically, let $f_{NN}$ be a conventional neural network and $f_{ICNN}$ be an ICNN. Then the objective becomes minimizing $f_{NN}(u)$ or $f_{ICNN}(u)$. Figure 5 shows the decision boundaries for $f_{NN}$ and $f_{ICNN}$, respectively. These networks are composed of 2 hidden layers with 200 neurons in each layer, and are trained using the same random seed and the same number of samples (100) until the loss converges. The decision boundaries of the conventional network have many "zigzags", which makes solving (1) challenging, especially if $u$ is constrained. In contrast, the ICNN has convex level sets (by construction) as decision boundaries, which leads to a convex optimization problem.
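A minimal sketch of this toy experiment is given below; it is our reconstruction, not the authors' code. The `model` argument is assumed to be an input convex module exposing a `project()` clamp (such as the ICNN sketch in Section 2.1), and the hyperparameters are arbitrary.

```python
# Toy experiment of Appendix A (our reconstruction; hyperparameters arbitrary).
import torch
import torch.nn.functional as F

def toy_example(model, u_data, y_data, epochs=500):
    # `model`: an input convex module with a `project()` clamp (see Sec. 2.1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):                      # supervised classification
        opt.zero_grad()
        logits = model(u_data).squeeze(-1)
        loss = F.binary_cross_entropy_with_logits(logits, y_data)
        loss.backward()
        opt.step()
        model.project()                          # keep weights non-negative
    # convex minimization over the input u: a unique global optimum exists
    u = torch.zeros(1, 2, requires_grad=True)
    opt_u = torch.optim.Adam([u], lr=0.05)
    for _ in range(200):
        opt_u.zero_grad()
        model(u).sum().backward()
        opt_u.step()
    return u.detach()
```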
APPENDIX B. PROOF OF THEOREM 1

Proof. Lemma 1 follows from well-established facts in function analysis stating that piecewise linear functions are dense in the space of all continuous functions over compact sets (Royden & Fitzpatrick, 2010) and that convex piecewise linear functions are dense in the space of all convex continuous functions (Cox, 1971; Gavrilović, 1975). Using the fact that convex piecewise linear functions can be represented as a maximum of affine functions (Magnani & Boyd, 2009; Wang, 2004) gives the desired result in the lemma.

Lemma 1 shows that all continuous Lipschitz convex functions $f(x): \mathbb{R}^d \to \mathbb{R}$ over convex compact sets can be approximated using a maximum of affine functions. It then suffices to show that an ICNN can exactly represent a maximum of affine functions. To do this, we first construct a neural network with ReLU activation functions and both positive and negative weights that represents the maximum; then we show how to restrict all weights to be nonnegative. As a starting example, consider a maximum of two affine functions:

$$f_{CPL}(x) = \max\{a_1^T x + b_1, \; a_2^T x + b_2\}. \qquad \text{(8)}$$

To obtain the exact same function using a neural network, we first rewrite it as

$$f_{CPL}(x) = (a_2^T x + b_2) + \max\left((a_1 - a_2)^T x + (b_1 - b_2), \; 0\right). \qquad \text{(9)}$$

Now define a two-layer neural network with layers $z_1$ and $z_2$, as shown in Fig. 6:

$$z_1 = \sigma\left((a_1 - a_2)^T x + (b_1 - b_2)\right), \qquad \text{(10a)}$$
$$z_2 = z_1 + a_2^T x + b_2, \qquad \text{(10b)}$$

where $\sigma$ is the ReLU activation function and the second layer is linear. By construction, this neural network is the same function as $f_{CPL}$ given in (8). The above argument extends directly to a maximum of $K$ affine functions. Suppose

$$f_{CPL}(x) = \max\{a_1^T x + b_1, \dots, a_K^T x + b_K\}. \qquad \text{(11)}$$

Again the trick is to rewrite $f_{CPL}(x)$ as a nested maximum of affine functions. For notational convenience, let $L_i = a_i^T x + b_i$ and $L_i' = L_i - L_{i+1}$. Then

$$f_{CPL} = \max\{L_1, L_2, \dots, L_K\} = \max\{\max\{L_1, L_2, \dots, L_{K-1}\}, L_K\}$$
$$= L_K + \sigma\left(\max\{L_1, L_2, \dots, L_{K-1}\} - L_K\right)$$
$$= L_K + \sigma\left(L_{K-1} - L_K + \sigma\left(\max\{L_1, L_2, \dots, L_{K-2}\} - L_{K-1}\right)\right)$$
$$= \cdots = L_K + \sigma\left(L_{K-1}' + \sigma\left(L_{K-2}' + \sigma\left(\cdots \sigma\left(L_2' + \sigma(L_1 - L_2)\right) \cdots\right)\right)\right).$$

The last equation describes a $K$-layer neural network, where the layers are

$$z_1 = \sigma(L_1 - L_2) = \sigma\left((a_1 - a_2)^T x + (b_1 - b_2)\right),$$
$$z_i = \sigma(L_i' + z_{i-1}) = \sigma\left(z_{i-1} + (a_i - a_{i+1})^T x + (b_i - b_{i+1})\right), \quad i = 2, \dots, K-1,$$
$$z_K = z_{K-1} + L_K = z_{K-1} + a_K^T x + b_K.$$

Each layer of this neural network uses only a single activation function. Although the above neural network exactly represents a maximum of affine functions, it does not satisfy the conditions of Proposition 1, since the coefficients between layers could be negative. In particular, each layer involves an inner product of the form $(a_i - a_{i+1})^T x$, and the coefficients are not necessarily nonnegative. To overcome this, we simply expand the input to include $x$ and $-x$. Namely, define a new input $\hat{x} \in \mathbb{R}^{2d}$ as

$$\hat{x} = [x; -x]. \qquad \text{(12)}$$

Then any inner product of the form $h^T x$ can be written as

$$h^T x = \sum_{i=1}^{d} h_i x_i = \sum_{i: h_i \ge 0} h_i x_i + \sum_{i: h_i < 0} h_i x_i = \sum_{i: h_i \ge 0} h_i \hat{x}_i + \sum_{i: h_i < 0} (-h_i) \hat{x}_{i+d},$$

where all coefficients in the last sum are nonnegative. Therefore any inner product between a coefficient vector and the input $x$ can be written as an inner product between a nonnegative coefficient vector and the expanded input $\hat{x}$. Hence, without loss of generality, we can limit all of the weights between layers to be nonnegative, and thus the neural network to be input convex. Note that in optimization problems we need to enforce consistency in $\hat{x}$ by including (12) as a constraint. However, this is a linear equality constraint, which maintains the convexity of the optimization problem.
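As a quick sanity check of this construction (our own, illustrative), the following NumPy snippet evaluates the nested-max network above and verifies that it coincides with $\max_i\{a_i^T x + b_i\}$ at a random point.

```python
# Sanity check of the nested-max construction in Appendix B (illustrative).
import numpy as np

def max_affine_as_network(a, b, x):
    """a: (K, d), b: (K,), x: (d,). Evaluates the K-layer ReLU construction."""
    K = len(b)
    L = a @ x + b                          # L_i = a_i^T x + b_i
    z = max(L[0] - L[1], 0.0)              # z_1 = sigma(L_1 - L_2)
    for i in range(1, K - 1):
        z = max(z + L[i] - L[i + 1], 0.0)  # z_i = sigma(z_{i-1} + L_i')
    return z + L[K - 1]                    # z_K = z_{K-1} + L_K

rng = np.random.default_rng(0)
a, b, x = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=3)
assert np.isclose(max_affine_as_network(a, b, x), np.max(a @ x + b))
```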
APPENDIX C. PROOF OF THEOREM 2

Proof. The second statement of Theorem 2 follows directly from the construction in the proof of Theorem 1, which shows that a maximum of $K$ affine functions can be represented by a $K$-layer ICNN (with a single ReLU function in each layer). So it remains to show the first statement of Theorem 2. To show that a maximum of affine functions can require an exponential number of pieces to represent a function specified by an ICNN with $K$ activation functions, consider a network with one hidden layer of $K$ nodes where the weights of the direct "passthrough" layers are set to 0:

$$f_{ICNN}(x) = \sum_{i=1}^{K} w_{1i}\,\sigma(w_{0i}^T x + b_i), \qquad \text{(13)}$$

which contains $3K$ parameter blocks: $w_{0i}$, $w_{1i}$ and $b_i$, where $w_{0i} \in \mathbb{R}^d$ and $w_{1i}, b_i \in \mathbb{R}$. In order to represent the same function by a maximum of affine functions, we need to assess the value of every activation unit $\sigma(w_{0i}^T x + b_i)$: if $w_{0i}^T x + b_i \ge 0$, then $\sigma(w_{0i}^T x + b_i) = w_{0i}^T x + b_i$; otherwise $\sigma(w_{0i}^T x + b_i) = 0$. In total, we have $2^K$ potential sign patterns, each yielding a linear piece, including

$$L_1 = \Big(\sum_{i=1}^{K} w_{1i} w_{0i}\Big)^T x + \sum_{i=1}^{K} w_{1i} b_i, \quad \text{if all } w_{0i}^T x + b_i \ge 0,$$
$$L_2 = \Big(\sum_{i=2}^{K} w_{1i} w_{0i}\Big)^T x + \sum_{i=2}^{K} w_{1i} b_i, \quad \text{if } w_{01}^T x + b_1 < 0 \text{ and all other } w_{0i}^T x + b_i \ge 0,$$
$$L_3 = \Big(w_{11} w_{01} + \sum_{i=3}^{K} w_{1i} w_{0i}\Big)^T x + w_{11} b_1 + \sum_{i=3}^{K} w_{1i} b_i, \quad \text{if } w_{02}^T x + b_2 < 0 \text{ and all other } w_{0i}^T x + b_i \ge 0,$$
$$\cdots$$
$$L_{2^K} = 0, \quad \text{if all } w_{0i}^T x + b_i < 0.$$

So a maximum over the following $2^K$ pieces is required to represent this single-layer ICNN: $\max\{L_1, L_2, \dots, L_{2^K}\}$.

APPENDIX D. EXPERIMENTAL DETAILS ON MUJOCO TASKS

D.1 DATA COLLECTION

Rollout Samples. To train the neural network dynamics models (both ICNN and MLP), we first collect initial rollout data using fully random action sequences $u_t \sim \text{Uniform}[-1, 1]$ with a randomly chosen initial state. During the data collection process in the aggregated iterations, to improve model generalization and explore larger state spaces, we add Gaussian noise to the optimal control policies: $u_t = u_t + \mathcal{N}(0, 0.001)$.

Neural Networks Training. We represent the MuJoCo dynamics with 2-hidden-layer neural networks with hidden sizes 512-512. The passthrough links of the ICNN are of the same size as the corresponding layers. We train both models using the Adam optimizer with a learning rate of 0.001 and a mini-batch size of 512. Due to the differing complexity of the MuJoCo tasks, we vary the number of training epochs and summarize the training details in Table 1; a sketch of this training loop is given below.
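The following sketch of the supervised training loop is our own assumption about the procedure just described; `model` is assumed to map $[s, u]$ to the predicted next state and to expose a `project()` clamp like the ICNN sketch in Section 2.1, and the tensors are pre-collected rollout data.

```python
# Supervised dynamics fitting with non-negativity projection (our sketch).
import torch
import torch.nn.functional as F

def fit_dynamics(model, states, actions, next_states,
                 epochs=60, batch=512, lr=1e-3):
    # `model` maps [s, u] to the predicted next state and exposes `project()`
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    n = states.shape[0]
    for _ in range(epochs):
        idx = torch.randperm(n)
        for i in range(0, n, batch):
            j = idx[i:i + batch]
            x = torch.cat([states[j], actions[j]], dim=-1)
            loss = F.mse_loss(model(x), next_states[j])
            opt.zero_grad()
            loss.backward()
            opt.step()
            model.project()  # clamp weights >= 0 so convexity keeps holding
    return model
```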
D.2 ENVIRONMENT DETAILS

In all of the MuJoCo locomotion tasks, $s$ includes state variables such as robot positions and velocities along each axis, and $u$ includes the action efforts for the agent. We use the standard reward functions $r(s_t, u_t)$ for the moving tasks, which can also be promptly calculated in (6a) as the control objective. For ease of neural network training and action sampling, we normalize all actions and states to the range $[-1, 1]$. We use DAGGER (Ross et al., 2011) for 6 aggregated iterations in all cases; during each aggregated iteration, we use a split of 10% random rollouts collected as described in D.1 and 90% coming from past iterations' control policies (on-policy rollouts). Note that we use 10 random control sequences in our method to initialize the policy-finding approach and avoid the long computation time of taking gradients when finding the optimal $u_t$. Other environment parameters are described in Table 1.

D.3 WALL-CLOCK TIME

In Table 2, we show the average run time over the total of 6 aggregation iterations across 3 runs. Finding control policies via ICNN uses less than or equal training time compared to the random-shooting method with $K = 100$, while achieving better task rewards than $K = 1000$ for the different control horizons. All the experiments are run on a computer with an 8-core Intel i7-6700 CPU. Note that we do not use a GPU to accelerate the ICNN optimization step (6), which could further improve our method's efficiency.

D.4 DETAILS OF SIMULATION RESULTS

MuJoCo Dynamics Modeling. In Fig. 7, we compare the ICNN and standard MLP fitting performance on the MuJoCo dynamics modeling task (6b), which illustrates that both the MLP and the ICNN are able to find a data-driven dynamics model for the ant MuJoCo agent, which has the most complex dynamics among the locomotion tasks we consider. The multi-step prediction errors of the ICNN are comparable to those of the standard MLP used in (Nagabandi et al., 2018) for different rollout lengths.

More Simulation Results. In Fig. 8, we compare our control method with the random-shooting approach under varying settings of the shooting number $K$, which shows that our approach is more efficient in finding control policies. In Fig. 9, we compare our control method with the rllab implementation of trust region policy optimization (TRPO) (Schulman et al., 2015), an end-to-end deep reinforcement learning approach for MuJoCo locomotion tasks. More specifically, we compare the algorithms' performance when relatively few rollout samples are available. While our approach quickly learns the dynamics and then finds control actions via optimization steps, TRPO struggles to learn the actions directly from few provided rollouts. Similarly to the model-based and model-free (Mb-Mf) approach described in (Nagabandi et al., 2018), our control method could provide good initialization samples for model-free algorithms, which could greatly accelerate their training process.

APPENDIX E. DETAILS ON BUILDING ENERGY MANAGEMENT

E.1 MINIMIZING ELECTRICITY COSTS

To further demonstrate the potential of our proposed control framework in dealing with different real-world tasks, we modify the setting of the building control example in Section 4.2 to a more complicated case. Instead of directly minimizing the total energy consumption of the building, we aim to minimize the total energy cost of the building subject to a time-varying time-of-use electricity price $\lambda$. The optimization problem in (7) is re-written as

$$\underset{u_t, \dots, u_{t+T}}{\text{minimize}} \quad \sum_{\tau=0}^{T} \lambda_\tau \cdot f(x_{t+\tau-n_w}, \dots, x_{t+\tau}) \qquad \text{(14a)}$$
$$\text{subject to} \quad s_{t+\tau} = g(x_{t+\tau-n_w}, \dots, x_{t+\tau-1}, u_{t+\tau}), \quad \forall \tau \qquad \text{(14b)}$$
$$\underline{u}_{t+\tau} \le u_{t+\tau} \le \overline{u}_{t+\tau}, \quad \forall \tau \qquad \text{(14c)}$$
$$\underline{s}_{t+\tau} \le s_{t+\tau} \le \overline{s}_{t+\tau}, \quad \forall \tau \qquad \text{(14d)}$$

where the objective (14a) minimizes the total energy cost of the building over the next $T$ steps ($T$ is the model predictive control horizon) subject to the time-of-use electricity price $\lambda_\tau$, and (14b) models the building states, where $g(\cdot)$ is parameterized as an ICRNN. As in the previous building control case, constraints on both the control actions $u_t$ and the system states $s_t$ are given in (14c) and (14d); for instance, the temperature set points as well as the real measurements should not exceed user-defined comfort regions. In Fig. 10 we visualize our model's flexibility by using Seattle's time-of-use (TOU) price from Seattle City Light⁴ and minimizing one week's electricity bill. We can see that the ICRNN captures the long-term relationship between control variables and final costs: it raises the energy consumption slightly during off-peak prices but reduces the energy consumption during peak hours.

⁴http://www.seattle.gov/light/
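A sketch of this price-weighted variant is given below; it is ours, and `rollout_consumption` is a hypothetical helper returning the $T$-step consumption forecasts from the fitted ICRNN. Because the TOU prices $\lambda_\tau$ are non-negative, weighting the convex consumption forecasts by them preserves convexity of the objective.

```python
# Price-weighted MPC objective (14a) (our sketch; helper is hypothetical).
import torch

def solve_tou_mpc(f, rollout_consumption, prices, T, dim_u,
                  u_min, u_max, steps=200, lr=0.05):
    # rollout_consumption(f, u) -> (T,) predicted energy from the fitted ICRNN
    u = torch.zeros(T, dim_u, requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cost = (prices * rollout_consumption(f, u)).sum()  # lambda-weighted (14a)
        cost.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(u_min, u_max)  # comfort-region box constraints (14c)
    return u.detach()
```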
E.2 CONTROL CONSTRAINTS EFFECTS

In Fig. 11 we add one more comparison of the effect of control constraints on the final control performance when using the ICRNN. Interestingly, with different set-point constraints, the ICRNN finds similar solutions for off-peak electricity usage, which may correspond to necessary energy consumption such as lighting and ventilation. Moreover, when we set no constraints on the system, it cuts down more than 80% of the total energy during peak hours.
1. What is the main contribution of the paper, and how does it bridge the gap between discrete-time continuous-state/action optimal control and convex model classes?
2. What are the strengths and weaknesses of the proposed input-convex recurrent neural network architecture?
3. How can the authors address the concern regarding the non-convexity of Equation (5) and (6), and provide a stronger and more formal argument to show that these equations are convex optimization problems?
4. What methods are used to solve the control problems in Equation (5) and (6)?
5. How does the paper's empirical setup and tasks differ from those in Nagabandi et al., and why do the results seem to contradict each other?
6. What could be the reason behind the poor performance of f_NN in the red region of the data on the left side in Figure 5, and what are the sizes of the networks used?
Review
Review

This is a well-motivated paper that considers bridging the gap in discrete-time continuous-state/action optimal control by approximating the system dynamics with a convex model class. The convex model class has more representational power than linear model classes while likely being more tractable and stable than non-convex model classes. They show empirical results in Mujoco continuous-control environments and in an HVAC example. I think this setup is a promising direction but I have significant concerns with some of the details and claims in this work:

1. Proposition 2 is wrong and the proposed input-convex recurrent neural network architecture is not input-convex. To fix this, the D1 parameters should also be non-negative. To show why the proposition is wrong, consider the convexity of y2 with respect to x1, using g to denote the activation function:

z1 = g(U x1 + ...)
y2 = g(D1 z1 + ...)

Thus making y2 = g(D1 g(U x1 + ...) + ...). y2 is *not* necessarily convex with respect to x1 because D1 takes an unrestricted weighted sum of the convex functions g(U x1 + ...). With the ICRNN architecture as described in the paper not being input-convex, I do not know how to interpret the empirical findings in Section 4.2 that use this architecture.

2. I think a stronger and more formal argument should be used to show that Equation (5) is a convex optimization problem as claimed. It has arbitrary convex functions on the equality constraints that are composed with each other and then used in the objective. Even with parts of the objective being convex and non-decreasing as the text mentions, it's not clear that this is sufficient when combined with the composed functions in the constraints.

3. I have similar concerns with the convexity of Equation (6). Consider the convexity of x3 with respect to u1, where g is now an input-convex neural network (that is not recurrent):

x3 = g(g(x1, u1), u2)

This composes two convex functions that do *not* have non-decreasing properties and therefore introduces an equality constraint that is not necessarily even convex, almost certainly making the domain of this problem non-convex. I think a similar argument can be used to show why Equation (5) is not convex.

In addition to these significant concerns, I have a few other minor comments.

1. Figure 1 hides too much information. It would be useful to know, for example, that the ICNN portion at the bottom right is solving a control optimization problem with an ICNN as part of the constraints.
2. The theoretical results in Section 3 seem slightly out-of-place within the broader context of this paper but are perhaps of standalone interest. Due to my concerns above I did not go into the details in this portion.
3. I think more information should be added to the last paragraph of Section 1 as it's claimed that the representational power of ICNNs and "a nice mathematical property" help improve the computational time of the method, but it's not clear why this is and this connection is not made anywhere else in the paper.
4. What method are you using to solve the control problems in Eq (5) and (6)?
5. The empirical setup and tasks seem identical to [Nagabandi et al.]. Figure 3 directly compares to the K=100 case of their method. Why does Fig 6 of [Nagabandi et al.] have significantly higher rewards for their method, even in the K=5 case?
6. In Figure 5, f_NN seems surprisingly bad in the red region of the data on the left side. Is this because the model is not using many parameters? What are the sizes of the networks used?
ICLR
complex system dynamics using deep neural networks to capturing complex relationships (He et al., 2016; Vaswani et al., 2017), yet few research investigated how to integrate deep learning models into real-time closed-loop control of physical systems. A key reason that deep neural networks have not been directly applied in control is that even though they provide good performances in learning system behaviors, optimization on top of these networks is challenging (Kawaguchi, 2016). Neural networks, because of their structures, are generally not convex from input to output. Therefore, many control applications (e.g., where real-time decisions need to be made) choose to favor the computational tractability offered by linear models despite their poor fitting performances. In this paper we tackle the modeling accuracy and control tractability tradeoff by building on the input convex neural networks (ICNN) in (Amos et al., 2017) to both represent system dynamics and to find optimal control policies. By making the neural network convex from input to output, we are able to obtain both good predictive accuracies and tractable computational optimization problems. The overall methodology is shown in Fig. 1. Our proposed method (shown in Fig. 1 (b)) firstly utilizes an input convex network model to learn the system dynamics and then computes the best control decisions via solving a convex model predictive control (MPC) problem, which is tractable and has optimality guarantees. This is different from existing methods that uses model-free end-to-end controller which directly maps input to output (shown in Fig. 1 (a)). Another major contribution of our work is that we explicitly prove that ICNN can represent all convex functions and systems dynamics, and is exponentially more efficient than widely used convex piecewise linear approximations (Magnani & Boyd, 2009). 1.1 RELATED WORK The work in (Amos et al., 2017) was an impetus for this paper. The key differences are that the goal in (Amos et al., 2017) is to show that ICNN can achieve similar classification performances as conventional neural networks and how the former can be used in inference and prediction problems. Our goal is to use these networks for optimization and closed-loop control, and in a sense that we are more interested in the overall system performances and not directly the performance of the networks. We also extend the class of networks to include RNNs to capture dynamical systems. Control and decision-making have used deep learning mainly in model-free end-to-end controller settings (shown in Fig. 1 (a)), such as sequential decision making in game (Mnih et al., 2013), robotics manipulation (Levine & Koltun, 2014; Levine et al., 2016), and control of cyber-physical systems (Wei et al., 2017; O’Neill et al., 2010). However, much of the success relies heavily on a reinforcement learning setup where the optimal state-action relationship can be learned via a large number of samples. However, many physical systems do not fit into the reinforcement learning process, where both the sample collection is limited by real-time operations, and there are physical model constraints hard to represent efficiently. To address the above sample efficiency, safety and model constraints incompatibility concerns faced by model-free reinforcement learning algorithms in physical system control, we consider a model-based control approach in this work. Model-based control algorithms often involve two stages – system identification and controller design. 
For the system identification stage, the goal is to learn a fixed form of system model to minimize some prediction error (Ljung, 1998). Most efficient model-based control algorithms have used a relatively simple function estimator for the system dynamics identification (Nagabandi et al., 2018), such as linear model (Ma et al., 2012) and Gaussian processes (Meger et al., 2015; Deisenroth et al., 2011). These simplified models are sample-efficient to learn, and can be nicely incorporated in the sub-sequent optimal control problems. However, such simple models may not have enough representation capacity in modeling large-scale or high-dimension systems with nonlinear dynamics. Deep neural networks (DNNs) feature powerful representation capability, while the main challenge of using DNNs for system identification is that such models are typically highly non-linear and non-convex (Kawaguchi, 2016), which causes great difficulty for following decision making. A recent work from (Nagabandi et al., 2018) is close in spirit as our proposed method. Similarly, the authors use a model-based approach for robotics control, where they first fit a neural network for the system dynamics and then use the fitted network in an MPC loop. However, since (Nagabandi et al., 2018) use conventional NN for system identification, they cannot solve the MPC problem to global optimality. Our work shows how the proposed ICNN control algorithm achieves the benefits from both sides of the world. The optimization with respect to inputs can be implemented using off-the-shelf deep learning optimizers, while we are able to obtain good identification accuracies and tractable computational optimization problems by using proposed method at the same time. 2 CLOSED-LOOP CONTROL WITH INPUT CONVEX NEURAL NETWORKS In this paper, we consider the settings where a neural network is used in a closed-loop system. The fundamental goal is to optimize system performance which is beyond the learning performance of network on its own. In this section we describe how input convex neural networks (ICNN) can be extremely useful in these systems by considering two related problems. First, we show how ICNN perform in single-shot optimization problems. Then we extend the results to an input convex recurrent neural networks (ICRNN), which allows us to both capture systems’ complex dynamics and make time-series decisions. 2.1 SINGLE-SHOT PROBLEM The following proposition states a simple sufficient condition for a neural network to be input convex: Proposition 1. The feedforward neural network in Fig. 2(a) is convex from input to output given that all weights between layers W1:k and weights in the “passthrough” layers D2:k are non-negative, and all of the activation functions are convex and nondecreasing (e.g. ReLU). The structure of the input convex neural network (ICNN) structure in Proposition 1 is motivated by the structure in (Amos et al., 2017) but modified to be more suitable to control of dynamical systems. In (Amos et al., 2017) it only requires W2:k to be non-negative while having no restrictions on weights W1 and D2:k. Our construction achieves the exact representation by expanding the inputs to include both u (∈ Rd) and −u. Then any negative weights in W1 and D2:k in (Amos et al., 2017)’s ICNN structure is set to zero and its negation (which is positive) is added as the weight for corresponding −u. 
The reason for our construction is to allow the network to be “rolled out in time” when we are dealing with dynamical systems and multiple networks need to be composed together. An simple example that demonstrates how the proposed ICNN can be used to fit a convex function comes form fitting the |u| function. This function is convex and both decreasing and increasing. Let the activation function be ReLU(·) = max(·,0). We can write |u| = −u+ 2ReLU(u) (Amos et al., 2017). However, in this representation, we need a negative weight, the −1 in front of u, and this would be troublesome if we compose several networks together. In our proposed ICNN structure with all positive weights and input negation duplicates, we can write |u| = v+ 2ReLU(u), where we impose a constraint v = −u. Such doubline on the number of input variables may potentially make the network harder to train. Yet during control, having all of the weights positive maintains the convexity between inputs and outputs even if multiple steps are considered which will be discussed in Section 2.2. The constraint v =−u is linear and can be easily included in any convex optimization. This proposition follows directly from composition of convex functions (Boyd & Vandenberghe, 2004). Although it allows for any increasing convex activation functions, in this paper we work with the popular ReLU activation function. Two notable additions in ICNN compared with conventional feedforward neural networks are: 1) Addition of the direct “passthrough” layers connecting inputs to hidden layers and conventional feedforward layers connecting hidden layers for better representation power. 2) the expanded inputs that include both u and −u. The proposed ICNN structure is shown in Fig. 2(a). Note that such construction guarantees that the network is convex and non-decreasing with respect to the expanded inputs û = [ u −u ] , while the output can achieve either decreasing or non-decreasing functions over u. Fundamentally, ICNN allows us to use neural networks in decision making processes by guaranteeing the solution is unique and globally optimal. Since many complex input and output relationships can be learned through deep neural networks, it is natural to consider using the learned network in an optimization problem in the form of min u f (u;W) (1a) s.t. u ∈U , (1b) where U is a convex feasible space. Then if f is an ICNN, optimizing over u is a convex problem, which can be solved efficiently to global optimality. Note that we will always duplicate the variables by introducing v = −u, but again this does not change the convexity of the problem. Of course, since the weights of the network are restricted to be nonnegative, the performance of the network (e.g., classification) may be worse. A common thread we observe in this paper is that trading off classification performance with tractability can be preferable. 2.2 CLOSED-LOOP CONTROL AND RECURRENT NEURAL NETWORKS In addition to the single-shot optimization problem in (1), we are interested in optimally controlling a dynamical system. To model the temporal dependency of the system dynamics, we propose to use recurrent neural networks (instead of feed-forward neural networks). Recurrent networks carry an internal state of the system, which introduces coupling with previous inputs to the system. Fig. 2(b) shows the proposed input convex recurrent neural networks (ICRNN) structure. This network maps from input û to output y with memory unit z according to the following Eq. 
2.2 CLOSED-LOOP CONTROL AND RECURRENT NEURAL NETWORKS

In addition to the single-shot optimization problem in (1), we are interested in optimally controlling a dynamical system. To model the temporal dependency of the system dynamics, we propose to use recurrent neural networks (instead of feed-forward neural networks). Recurrent networks carry an internal state of the system, which introduces coupling with previous inputs to the system. Fig. 2(b) shows the proposed input convex recurrent neural network (ICRNN) structure. This network maps from input û to output y with memory unit z according to Eqs. (2)-(3):

z_t = σ_1(U û_t + W z_{t−1} + D_2 û_{t−1}),    (2)
y_t = σ_2(V z_t + D_1 z_{t−1} + D_3 û_t),    (3)

where û = [u; −u], and D_1, D_2, D_3 are added direct "passthrough" layers for augmenting representation power. If we unroll the dynamics with respect to time, we have y_t = f(û_1, û_2, ..., û_t; θ), where θ = [U, V, W, D_1, D_2, D_3] are the network parameters, and σ_1, σ_2 denote the nonlinear activation functions. The next proposition states a sufficient condition for the network to be input convex.

Proposition 2. The network shown in Fig. 2(b) is a convex function from inputs to output if all weights U, V, W, D_1, D_2, D_3 are non-negative, and all activation functions are convex and nondecreasing (e.g. ReLU).

The proof of this proposition again follows directly from the composition rule of convex functions. Similarly to the ICNN case, by expanding the input vector to include both u and −u and restricting all weights to be non-negative, the resulting ICRNN structure is a convex and non-decreasing mapping from inputs to output.
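A minimal sketch of the update in Eqs. (2)-(3), written by us for illustration under the assumptions that both σ_1 and σ_2 are ReLU and that bias terms are omitted, can be expressed as a custom Keras layer:

```python
import tensorflow as tf

class ICRNNCell(tf.keras.layers.Layer):
    """Sketch of the ICRNN update (2)-(3); weight names follow the paper."""
    def __init__(self, state_dim, out_dim):
        super().__init__()
        nonneg = tf.keras.constraints.NonNeg()
        dense = lambda n: tf.keras.layers.Dense(
            n, use_bias=False, kernel_constraint=nonneg)
        self.U, self.W, self.D2 = dense(state_dim), dense(state_dim), dense(state_dim)
        self.V, self.D1, self.D3 = dense(out_dim), dense(out_dim), dense(out_dim)

    def call(self, u_hat_t, u_hat_prev, z_prev):
        z_t = tf.nn.relu(self.U(u_hat_t) + self.W(z_prev) + self.D2(u_hat_prev))  # Eq. (2)
        y_t = tf.nn.relu(self.V(z_t) + self.D1(z_prev) + self.D3(u_hat_t))        # Eq. (3)
        return y_t, z_t
```

Unrolling this cell over time composes non-negative-weight, convex, nondecreasing maps, so y_t remains convex in the whole input sequence (û_1, ..., û_t), which is exactly what Proposition 2 requires.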
The proposed ICRNN structure can be leveraged to represent system dynamics for closed-loop control. Consider a physical system with discrete-time dynamics; at time step t, let us define s_t as the system states, u_t as the control actions, and y_t as the system output. For example, for the real-time control of a building system, s_t includes the room temperature, humidity, etc.; u_t denotes the building appliance scheduling, room temperature set-points, etc.; and the output y_t is the building energy consumption. In addition, there may be exogenous variables that impact the output of the system; for example, the outside temperature will impact the energy consumption of the building. However, since the exogenous variables are not impacted by any of the control actions we take, we suppress them in the formulation below. The time evolution of the system is described by

y_t = f(s_t, u_t),    (4a)
s_{t+1} = g(s_t, u_t),    (4b)

where (4b) describes the coupling between the current inputs and the future system states. Physical systems described by (4) may have significant inertia in the sense that the outcome of any control action is delayed in time and there are significant couplings across time periods. Since we use ICRNNs to represent both the system dynamics g(·) and the output f(·), the control variable u expands to û. The optimal receding horizon control problem at time t can be written as

minimize_{u_t, u_{t+1}, ..., u_{t+T}}   C(x̂, y) = Σ_{τ=t}^{t+T} J(x̂_τ, y_τ)    (5a)
subject to   y_τ = f(x̂_{τ−nw}, x̂_{τ−nw+1}, ..., x̂_τ), ∀τ ∈ [t, t+T]    (5b)
             s_τ = g(x̂_{τ−nw}, x̂_{τ−nw+1}, ..., x̂_{τ−1}, û_τ), ∀τ ∈ [t, t+T]    (5c)
             x̂_τ = [s_τ; û_τ],  û_τ = [u_τ; v_τ], ∀τ ∈ [t, t+T]    (5d)
             v_τ = −u_τ, ∀τ ∈ [t, t+T]    (5e)
             s_τ ∈ S_feasible, ∀τ ∈ [t, t+T]    (5f)
             u_τ ∈ U_feasible, ∀τ ∈ [t, t+T]    (5g)

where the new variable x̂_τ = [s_τ; û_τ] is introduced for notational simplicity and is called the system inputs. It is the collection of the system states s_τ and the duplicated control actions u_τ and −u_τ, therefore ensuring that the mapping from u_τ to any future states and outputs remains convex. J(x̂_τ, y_τ) is the control cost incurred at time τ, which is a function of both the system inputs x̂_τ and the output y_τ. The functions f(·) and g(·) in Eqs. (5b)-(5c) are parameterized as ICRNNs, which represent the dynamics from the sequence of inputs (x̂_{τ−nw}, x̂_{τ−nw+1}, ..., x̂_τ) to the system output y_τ, and the dynamics from control actions to system states, respectively. n_w is the memory window length of the recurrent neural network. The equations (5d) and (5e) duplicate the input variables u and enforce the consistency condition between u and its negation v. Lastly, (5f) and (5g) are the constraints on feasible system states and control actions, respectively. Note that in this general formulation we do not apply the duplication trick to the state variables, so the dynamics fitted by (5b) and (5c) are non-decreasing over the state space; this class is not equivalent to the class of dynamics representable by linear systems. However, since we do not restrict the control space, and we explicitly include multiple previous states in the system transition dynamics, the non-decreasing constraint over the state space should not significantly restrict the representation capacity. In Section 3 we theoretically prove the representability of the proposed networks.

The optimization problem in (5) is convex with respect to (w.r.t.) the inputs u = [u_t, ..., u_{t+T}], provided the cost function J(x̂_τ, y_τ) = J(s_τ, û_τ, y_τ) is convex w.r.t. û_τ, and convex and nondecreasing w.r.t. s_τ and y_τ. A problem is convex if and only if both the objective function and the constraints are convex. In the above problem, J(s_τ, û_τ, y_τ) is convex and nondecreasing w.r.t. s_τ and y_τ; s_τ and y_τ are parameterized as ICRNNs, i.e., (5b) and (5c), so that they are convex w.r.t. û_τ. Therefore, following the composition rule of convex functions, the objective function is convex w.r.t. the inputs u = [u_t, ..., u_{t+T}]. Besides, all the equality constraints (5d) and (5e) are affine. Provided both the state feasible set (5f) and the action feasible set (5g) are convex, the overall optimization is convex.

The convexity of the problem in (5) guarantees that it can be solved efficiently and optimally using gradient descent methods. Since both the objective function (5a) and the constraints (5b)-(5c) are parameterized as neural networks, their gradients can be calculated via back-propagation, with the modification that the cost is propagated to the inputs rather than to the weights of the network. For implementation, the gradients can be conveniently calculated via existing frameworks such as TensorFlow. Let u* = {u*_t, u*_{t+1}, ..., u*_{t+T}} be the optimal solution of the optimization problem at time t. Then the first element of u*, that is u*_t, is implemented in the real-time system control. The optimization problem is repeated at time t+1, based on the updated state prediction using u*_t, yielding a model predictive control strategy.

3 EFFICIENCY AND REPRESENTATION POWER OF ICNN

Besides the computational tractability of input convex networks, as a system identification model we are also interested in their predictive accuracy and capacity. This section provides theoretical analysis of the representation ability and efficiency of input convex neural networks.

3.1 REPRESENTATION POWER OF INPUT CONVEX NEURAL NETWORK

Definition 1. Given a function f : R^d → R, we say that the function f̂ approximates f within ε if |f(x) − f̂(x)| ≤ ε for all x in the domain of f.

Theorem 1. [Representation power of ICNN] For any Lipschitz convex function over a compact domain and any ε > 0, there exists a neural network with nonnegative weights and ReLU activation functions that approximates it within ε.

Lemma 1. Given a continuous Lipschitz convex function f : R^d → R with compact domain and ε > 0, it can be approximated within ε by the maximum of a finite number of affine functions. That is, there exists f̂(x) = max_{i=1,...,N} {µ_i^T x + b_i} such that |f(x) − f̂(x)| ≤ ε for all x ∈ dom f.

Sketch of proof for Theorem 1.
Supposing Lemma 1 is true, the proof of Theorem 1 boils down to showing that a neural network with nonnegative weights and ReLU activation functions can exactly represent a maximum of affine functions. The proof is constructive. We first construct a neural network with ReLU activation functions and both positive and negative weights; then we show that the weights between different layers of the network can be restricted to be nonnegative by a simple duplication trick. Specifically, since the weights in the input layer and passthrough layers of the ICNN can be negative, we simply add a negation of each input variable (e.g. both x and −x are given as inputs) to the network. These variables need to satisfy a consistency constraint, since one is the negation of the other. Since this constraint is linear, it preserves the convexity of optimization problems. The details of the proofs are given in Appendix B. This proof is similar in spirit to theorems in (Hanin, 2017; Arora et al., 2016). The key new result is a simpler construction than the one used in (Hanin, 2017) and the restriction to nonnegative weights between the layers.

Similar to Theorem 1, an analogous result about the representation power of ICRNN can be shown for systems with convex dynamics. Given a dynamical system whose rolled-out dynamics y_t = f(x_1, ..., x_t) are convex, there exists a recurrent neural network with nonnegative weights and ReLU activation functions that approximates them within ε. A broad range of systems can be captured by this model. For example, the linear quadratic (Gaussian) regulator problem can be described using an ICRNN if we identify y as the cost of the regulator (Skogestad & Postlethwaite, 2007; Boyd et al., 1994).1 An example of a nonlinear system is the control of electrochemical batteries. It can be shown from first principles that the degradation of these types of batteries is convex in their charge and discharge actions (Shi et al., 2018), and our framework offers a powerful data-driven way to control batteries found in electric vehicles, cell phones, and power systems.

1 It is important to note that y usually denotes the system output of a linear system, but in our context we use it to refer to the quadratic cost with respect to the system states and the control input.

3.2 ICNN VS. CONVEX PIECEWISE LINEAR FITTING

In the proof of Theorem 1, we first approximate a convex function by a maximum of affine functions and then construct a neural network according to this maximum. A natural question is then why learn a neural network and not directly the affine functions in the maximum? This approach was taken in (Magnani & Boyd, 2009), where a convex piecewise-linear function (a max of affine functions) is directly learned from data through a regression problem. A key reason that we propose to use an ICNN (or ICRNN) to fit a function, rather than directly finding a maximum of affine functions, is that the former is a much more efficient parameterization than the latter. As stated in Theorem 2, a maximum of K affine functions can be represented by an ICNN with K layers, where each layer only requires a single ReLU activation function. However, given a single-layer ICNN with K ReLU activation functions, it may take a maximum of 2^K affine functions to represent it exactly. Therefore, in practice it is much easier to train a good ICNN than to find a good set of affine functions.

Theorem 2. [Efficiency of Representation] 1.
Let f_ICNN : R^d → R be an input convex neural network with K ReLU activation functions. Then Ω(2^K) affine functions are required to represent f_ICNN using a max of affine functions. 2. Let f_CPL : R^d → R be a max of K affine functions. Then O(K) activation functions are sufficient to represent f_CPL exactly with an ICNN.

The proof of this theorem is given in Appendix C.

4 EXPERIMENTS

In this section, we verify the effectiveness of ICNN and ICRNN by presenting experimental results on two decision-making problems: continuous control benchmarks on MuJoCo locomotion tasks (Todorov et al., 2012) and energy management of a reference large-scale commercial building (Crawley et al., 2001), respectively. The proposed method can be used as a flexible building block in decision making problems, where we use an ICNN to represent the system dynamics of the MuJoCo simulators, and we use an ICRNN in an end-to-end fashion to find the optimal control inputs. Both examples demonstrate that the proposed method: 1) discovers the connection between controllable variables and the system dynamics or cost objectives; 2) is lightweight and sample-efficient; 3) achieves generalizable and more stable control performance compared with previous model-based reinforcement learning and simplified linear control approaches.

4.1 MUJOCO LOCOMOTION TASKS

Experimental Setup We consider four simulated robotic locomotion tasks: swimmer, half-cheetah, hopper, and ant, implemented in MuJoCo under the OpenAI rllab framework (Duan et al., 2016). We train and represent the locomotion state transition dynamics s_{t+1} = g(s_t, u_t)2 using a 2-layer ICNN with ReLU activations, which can be integrated into the following finite-horizon control problem to find the optimal action sequence u_t, ..., u_{t+T} for a fixed look-ahead horizon T:

minimize_{u_t, ..., u_{t+T}}   −Σ_{τ=t}^{t+T} r(s_τ, u_τ)    (6a)
subject to   s_{τ+1} = g(s_τ, u_τ), ∀τ ∈ [t, t+T]    (6b)
             u_τ ∈ U_feasible, ∀τ ∈ [t, t+T]    (6c)

where the objective (6a) is convex because r(s_τ, u_τ) is a concave reward function of system states (such as velocity) and control actions (the detailed forms of r(s_τ, u_τ) for the different locomotion tasks are listed in Appendix D). To achieve better model generalization on locomotion dynamics, we also followed (Nagabandi et al., 2018) and applied DAGGER (Ross et al., 2011) to iteratively collect labeled robotic rollouts and train the supervised dynamics model (6b) using on-policy locomotion samples. See Appendix D for further simulation hyperparameters and experimental details. For each aggregated iteration of collecting rollout data and training the ICNN model, we validate the controller performance on standalone validation rollouts by optimally solving (6).

2 Note that for notational convenience, in this example and the following building example, we use u to represent the expanded control vector including its negation. For system state s ∈ R^d with d > 1, convexity means that each dimension of s is convex w.r.t. the function inputs.

Baselines We compare our system modeling and continuous control method with the state-of-the-art model-based RL algorithm of (Nagabandi et al., 2018), where the authors use a standard multi-layer perceptron (MLP) model to parameterize the system dynamics (6b). We refer to their method as the random-shooting algorithm, since they cannot solve (6) to optimality and instead use a pre-defined number of random-shooting control sequences (denoted K) to query the trained MLP and find the best sequence as the rollout policy.
Such a method is able to find good control policies on the order of 10^4 timesteps, which is much more sample-efficient than model-free RL methods (Duan et al., 2016; Mnih et al., 2015). To make a fair comparison with the baseline method, we keep the same setup for the number of rollouts and the initial random-action training. Our framework makes the neural network convex w.r.t. the input by adding passthrough links to the 2-layer model and keeping all the layer weights nonnegative. We evaluate the performance of both algorithms on three fixed, randomly selected random seeds for the four tasks. Similar to the fine-tuning steps in (Nagabandi et al., 2018), control policies found by ICNN can also be plugged in as initialization policies for subsequent model-free reinforcement learning algorithms.

Continuous Control Performance During training, we found that both the ICNN and the MLP are able to predict robotic states quite accurately based on (6b). This provides a good system dynamics model, which is beneficial for solving for control policies. The control performances are shown in Fig. 3, where we compare the average reward of the proposed method and the random-shooting method with K = 100 over 10 validation rollouts during each aggregated iteration (see Fig. 8 in Appendix D.4 for random-shooting performance with varying K). The policy found by ICNN outperforms the random-shooting method in all settings with varying horizon T for all four locomotion tasks. Intuitively, ICNN should perform better when the action space is larger, since the random-shooting method cannot search through the action space efficiently with a fixed K. This is illustrated in the example of ant, where, even as more training samples are aggregated and the MLP model represents more accurate dynamics, random shooting gets stuck and there is little improvement reflected in the control performance. Moreover, since we skip the expensive process of calculating rewards for each random-shooting trajectory and finding the best one, our method only implements the ICNN optimization step based on (6) and is much faster than random-shooting methods in most settings, especially when K is large (see Table 2 in Appendix D.3 for wall-clock times). For instance, in the case of swimmer, our proposed method only uses 1/5 of the time compared to (Nagabandi et al., 2018). This also indicates that our method is much more sample-efficient than off-the-shelf model-free RL methods, as we use two orders of magnitude less training data to reach similar validation rewards (Duan et al., 2016; Mnih et al., 2015) (see Fig. 9 in Appendix D.4).

4.2 BUILDING ENERGY MANAGEMENT

Experimental Setup We now move on to optimally controlling a dynamical system with significant inertia. We consider the real-time control problem of a building's HVAC (heating, ventilation, and air conditioning) system to reduce its energy consumption. Building energy management remains a hard problem in the control field. The exact system dynamics are unknown and hard to model due to the complex heat-transfer dynamics, time-varying environments, and the scale of the system in terms of states and actions (Kouro et al., 2009). At time t, we assume the building's running profile x_t := [s_t, u_t] is available, where s_t denotes the building system states, including the outside temperature, room temperature measurements, zone occupancies, etc.; u_t denotes a collection of control actions such as room temperature set points and appliance schedules. The output is the electricity consumption P_t.
This is a model predictive control problem in the sense that we want to find the best control inputs that minimize the overall energy consumption of the building by looking ahead several time steps. To achieve this goal, we first learn an ICRNN model f(·) of the building dynamics, which is trained to minimize the error between P_t and f(x_{t−nw}, ..., x_t), where n_w denotes the memory window of the recurrent neural network. Then we solve

minimize_{u_t, ..., u_{t+T}}   Σ_{τ=t}^{t+T} f(x_{τ−nw}, ..., x_τ)    (7a)
subject to   s_τ = g(x_{τ−nw}, ..., x_{τ−1}, u_τ), ∀τ ∈ [t, t+T]    (7b)
             u̲_τ ≤ u_τ ≤ ū_τ, ∀τ ∈ [t, t+T]    (7c)
             s̲_τ ≤ s_τ ≤ s̄_τ, ∀τ ∈ [t, t+T]    (7d)

where the objective (7a) minimizes the total energy consumption over the future T steps (T is the model predictive control horizon), and (7b) models the building states, in which g(·) is parameterized as an ICRNN. Note that the formulation (7) is also flexible with respect to different objective functions. For instance, in practice we can reuse the trained dynamics model (7b) and integrate electricity prices into the overall objective, so that we directly compute real-time actions that minimize electricity bills (please refer to Appendix E for more results). The constraints on the control actions u_t and system states s_t are given in (7c) and (7d); for instance, the temperature set points as well as the real measurements should not exceed user-defined comfort regions.

To test the performance of the proposed method, we set up a 12-story large office building, a reference EnergyPlus commercial building model from the US Department of Energy (DoE),3 with a total floor area of 498,584 square feet divided into 16 separate zones. Using a whole year's weather profile, we simulate the building running through the year and record (x_t, P_t) with a resolution of 10 minutes. We use 10 months' data to train the ICRNN and the subsequent 2 months' data for testing. We use 39 building system state variables s_t (uncontrollable), along with 16 control variables u_t. The output is a single value of building energy consumption at each time step. We set the model predictive control horizon T = 36 (six hours). We employ an ICRNN with a recurrent layer of dimension 200 to fit the building input-output dynamics f(·). The model is trained to minimize the MSE between its predictions and the actual building energy consumption using stochastic gradient descent. We use the same network structure and training scheme to fit the state transition dynamics g(·).

Baseline We set up the model-based forecasting and optimization benchmark using a linear resistor-circuit (RC) model to represent the heat transfer in the building, and solve for the optimal control actions via MPC (Ma et al., 2012). At each step, the MPC algorithm takes into account the forecasted states of the building based on the fitted RC model and implements the current step's control actions. We also compare the performance of the ICRNN against a conventionally trained RNN in terms of both building dynamics fitting performance and control performance. To solve the MPC problem with conventional RNN models, we also use a gradient-based method with respect to the controls. However, since conventional RNN models are generally not convex from input to output, there is no guarantee of reaching a global optimum (or even a local one).

Results In terms of fitting performance, the ICRNN provides a competitive result compared to the conventional RNN model.
The overall test root mean square error (RMSE) is 0.054 for the ICRNN and 0.051 for the conventional RNN, both of which are much smaller than the error made by the RC model (0.240). Fig. 4(a) shows the fitting performance on 5 working days of the test data. This illustrates the good performance of the ICRNN in modeling building HVAC system dynamics. Then, using the learned ICRNN model of the building dynamics, we obtain the suggested room control actions u*_t by solving the optimal building control problem (7). As shown in Fig. 4(b), with the same constraints on the building temperature interval of [19°C, 24°C], the building energy consumption is reduced by 23.25% after implementing the new temperature set points calculated with the ICRNN. On the contrary, since there is no guarantee of finding optimal control actions by optimizing over a conventional RNN's input, the control solutions given by the conventional RNN only reduce electricity consumption by 11.73%. Solutions given by the RC model save only 4.07%. More importantly, in Fig. 4(c) we compare the control actions output by our method against MPC with a conventional RNN in two randomly selected building zones, the building basement and the top-floor central area. It shows that our proposed approach is able to find a group of stable control actions for the building system control, while the conventional RNN generates control set points with undesirable, drastic variations.

3 EnergyPlus is an open-source whole-building energy modeling software developed by the US DoE for standard building energy simulation.

5 SUMMARY AND DISCUSSION

In this work we proposed a novel optimal control framework that uses deep neural networks engineered to be convex from the input to the output. This framework bridges machine learning and control by representing system dynamics using input convex (recurrent) neural networks. We show that many interesting data-driven control problems can be cast as convex optimization problems using the proposed network architecture. Experiments on both benchmark MuJoCo locomotion tasks and building energy management demonstrate our methodology's potential in a variety of control and optimization problems.

APPENDIX A. TOY EXAMPLE

Consider a synthetic example which contains two circles of noisy input data u ∈ R^2, along with a discrete data label y ∈ {0, 1} indicating whether the input comes from the inner loop (y = 0) or the outer loop (y = 1). Suppose a decision maker is interested in finding the u that maximizes the probability of y being 0. This optimization problem can be solved by first learning a neural network classifier from u to y, and then finding the point u which minimizes the output of the neural network. More specifically, let f_NN be a conventional neural network and f_ICNN be an ICNN. Then the objective becomes minimizing f_NN(u) or f_ICNN(u). Figure 5 shows the decision boundaries for f_NN and f_ICNN, respectively. These networks are composed of 2 hidden layers, with 200 neurons in each layer, and are trained using the same random seed and the same number of samples (100) until loss convergence. The decision boundaries of the conventional network have many "zigzags", which makes solving (1) challenging, especially if u is constrained. In contrast, the ICNN has convex level sets (by construction) as decision boundaries, which leads to a convex optimization problem.

APPENDIX B. PROOF OF THEOREM 1

Proof.
Lemma 1 follows from well-established facts in functional analysis stating that piecewise linear functions are dense in the space of all continuous functions over compact sets (Royden & Fitzpatrick, 2010) and that convex piecewise linear functions are dense in the space of all convex continuous functions (Cox, 1971; Gavrilović, 1975). Using the fact that convex piecewise linear functions can be represented as a maximum of affine functions (Magnani & Boyd, 2009; Wang, 2004) gives the desired result in the lemma.

Lemma 1 shows that all continuous Lipschitz convex functions f(x) : R^d → R over convex compact sets can be approximated using a maximum of affine functions. It then suffices to show that an ICNN can exactly represent a maximum of affine functions. To do this, we first construct a neural network with ReLU activation functions and both positive and negative weights that represents a maximum of affine functions. Then we show how to restrict all weights to be nonnegative. As a starting example, consider a maximum of two affine functions

f_CPL(x) = max{a_1^T x + b_1, a_2^T x + b_2}.    (8)

To obtain the exact same function using a neural network, we first rewrite it as

f_CPL(x) = (a_2^T x + b_2) + max((a_1 − a_2)^T x + (b_1 − b_2), 0).    (9)

Now define a two-layer neural network with layers z_1 and z_2 as shown in Fig. 6:

z_1 = σ((a_1 − a_2)^T x + (b_1 − b_2)),    (10a)
z_2 = z_1 + a_2^T x + b_2,    (10b)

where σ is the ReLU activation function and the second layer is linear. By construction, this neural network is the same function as f_CPL given in (8). The above argument extends directly to a maximum of K affine functions. Suppose

f_CPL(x) = max{a_1^T x + b_1, ..., a_K^T x + b_K}.    (11)

Again the trick is to rewrite f_CPL(x) as a nested maximum of affine functions. For notational convenience, let L_i = a_i^T x + b_i and L'_i = L_i − L_{i+1}. Then

f_CPL = max{L_1, L_2, ..., L_K}
      = max{max{L_1, L_2, ..., L_{K−1}}, L_K}
      = L_K + σ(max{L_1, L_2, ..., L_{K−1}} − L_K)
      = L_K + σ(L_{K−1} − L_K + σ(max{L_1, L_2, ..., L_{K−2}} − L_{K−1}))
      = ...
      = L_K + σ(L'_{K−1} + σ(L'_{K−2} + σ(... σ(L'_2 + σ(L_1 − L_2)) ...))).

The last equation describes a K-layer neural network, where the layers are:

z_1 = σ(L_1 − L_2) = σ((a_1 − a_2)^T x + (b_1 − b_2)),
z_2 = σ(L'_2 + z_1) = σ(z_1 + (a_2 − a_3)^T x + (b_2 − b_3)),
...
z_i = σ(L'_i + z_{i−1}) = σ(z_{i−1} + (a_i − a_{i+1})^T x + (b_i − b_{i+1})),
...
z_K = z_{K−1} + L_K = z_{K−1} + a_K^T x + b_K.

Each layer of this neural network uses only a single activation function.
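Before the weights are made nonnegative via the input-duplication step below, the nested construction can be checked numerically. The following NumPy snippet is our own sanity check, not part of the paper: it verifies that the K-layer network with a single ReLU per layer reproduces max{L_1, ..., L_K} exactly.

```python
import numpy as np

def max_affine(x, A, b):
    """f_CPL(x) = max_i {a_i^T x + b_i}."""
    return np.max(A @ x + b)

def nested_relu(x, A, b):
    """K-layer network from the proof: z_1 = relu(L_1 - L_2),
    z_i = relu(z_{i-1} + L_i - L_{i+1}), z_K = z_{K-1} + L_K."""
    relu = lambda t: np.maximum(t, 0.0)
    L = A @ x + b                      # the affine pieces L_i
    z = relu(L[0] - L[1])
    for i in range(1, len(b) - 1):
        z = relu(z + L[i] - L[i + 1])
    return z + L[-1]

rng = np.random.default_rng(0)
A, b = rng.normal(size=(6, 3)), rng.normal(size=6)   # K = 6 pieces in R^3
for _ in range(100):
    x = rng.normal(size=3)
    assert np.isclose(max_affine(x, A, b), nested_relu(x, A, b))
```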
Although the above neural network exactly represents a maximum of affine functions, it is not yet an ICNN, since the coefficients between layers could be negative. In particular, each layer involves an inner product of the form (a_i − a_{i+1})^T x, and these coefficients are not necessarily nonnegative. To overcome this, we simply expand the input to include x and −x. Namely, define a new input x̂ ∈ R^{2d} as

x̂ = [x; −x].    (12)

Then any inner product of the form h^T x can be written as

h^T x = Σ_{i: h_i ≥ 0} h_i x_i + Σ_{i: h_i < 0} (−h_i)(−x_i) = Σ_{i: h_i ≥ 0} h_i x̂_i + Σ_{i: h_i < 0} (−h_i) x̂_{i+d},

where all coefficients in the sum are nonnegative. Therefore any inner product between a coefficient vector and the input x can be written as an inner product between a nonnegative coefficient vector and the expanded input x̂. Thus, without loss of generality, we can limit all of the weights between layers to be nonnegative, making the neural network input convex. Note that in optimization problems we need to enforce consistency in x̂ by including (12) as a constraint. This, however, is a linear equality constraint, which maintains the convexity of the optimization problem.

APPENDIX C. PROOF OF THEOREM 2

Proof. The second statement of Theorem 2 follows directly from the construction in the proof of Theorem 1, which shows that a maximum of K affine functions can be represented by a K-layer ICNN (with a single ReLU function in each layer). So it remains to show the first statement of Theorem 2. To show that a maximum of affine functions can require an exponential number of pieces to represent a function specified by an ICNN with K activation functions, consider a network with one hidden layer of K nodes whose direct "passthrough" weights are set to 0:

f_ICNN(x) = Σ_{i=1}^{K} w_{1i} σ(w_{0i}^T x + b_i),    (13)

which contains 3K parameters: w_{0i}, w_{1i}, and b_i, where w_{0i} ∈ R^d and w_{1i}, b_i ∈ R. In order to represent the same function by a maximum of affine functions, we need to assess the value of every activation unit σ(w_{0i}^T x + b_i): if w_{0i}^T x + b_i ≥ 0, then σ(w_{0i}^T x + b_i) = w_{0i}^T x + b_i; otherwise σ(w_{0i}^T x + b_i) = 0. In total, we have 2^K potential combinations of piecewise-linear functions, including

L_1 = (Σ_{i=1}^{K} w_{1i} w_{0i})^T x + Σ_{i=1}^{K} w_{1i} b_i,   if all w_{0i}^T x + b_i ≥ 0,
L_2 = (Σ_{i=2}^{K} w_{1i} w_{0i})^T x + Σ_{i=2}^{K} w_{1i} b_i,   if w_{01}^T x + b_1 < 0 and all other w_{0i}^T x + b_i ≥ 0,
L_3 = (w_{11} w_{01} + Σ_{i=3}^{K} w_{1i} w_{0i})^T x + w_{11} b_1 + Σ_{i=3}^{K} w_{1i} b_i,   if w_{02}^T x + b_2 < 0 and all other w_{0i}^T x + b_i ≥ 0,
...,
L_{2^K} = 0,   if all w_{0i}^T x + b_i < 0.

So the maximum over 2^K pieces, max{L_1, L_2, ..., L_{2^K}}, is required to represent the single-layer ICNN.

APPENDIX D. EXPERIMENTAL DETAILS ON MUJOCO TASKS

D.1 DATA COLLECTION

Rollout Samples To train the neural network dynamics models (both ICNN and MLP), we first collect initial rollout data using fully random action sequences u_t ∼ Uniform[−1, 1] with randomly chosen initial states. During the data collection process in the aggregated iterations, to improve model generalization and explore larger state spaces, we add Gaussian noise to the optimal control policies: u_t = u_t + N(0, 0.001).

Neural Network Training We represent the MuJoCo dynamics with 2-hidden-layer neural networks with hidden sizes 512-512. The passthrough links of the ICNN are of the same size as the corresponding layers. We train both models using the Adam optimizer with a learning rate of 0.001 and a mini-batch size of 512. Due to the differing complexity of the MuJoCo tasks, we vary the number of training epochs; the training details are summarized in Table 1.

D.2 ENVIRONMENT DETAILS

In all of the MuJoCo locomotion tasks, s includes state variables such as robot positions and velocities along each axis; u includes the action efforts of the agent. We use standard reward functions r(s_t, u_t) for the locomotion tasks, which can also be promptly calculated in (6a) as the control objective. For ease of neural network training and action sampling, we normalize all actions and states to the range [−1, 1]. We use DAGGER (Ross et al., 2011) for 6 aggregated iterations in all cases; during each aggregated iteration, we use a split of 10% random rollouts collected as described above, with the other 90% coming from past iterations' control policies (on-policy rollouts). Note that we use 10 random control sequences in our method to initialize the policy search and avoid long computation times when taking gradients to find the optimal u_t. Other environment parameters are described in Table 1.

D.3 WALL-CLOCK TIME

In Table 2, we show the average run time for the total of 6 aggregation iterations over 3 runs.
Finding control policies via ICNN uses less than or equal training time compared to the random-shooting method with K = 100, while achieving better task rewards than K = 1000 for the different control horizons. All experiments were run on a computer with an 8-core Intel i7-6700 CPU. Note that we do not use a GPU to accelerate the ICNN optimization step (6), which could further improve our method's efficiency.

D.4 DETAILS OF SIMULATION RESULTS

MuJoCo Dynamics Modeling In Fig. 7, we compare the fitting performance of the ICNN and a normal MLP on the MuJoCo dynamics modeling task (6b). Both the MLP and the ICNN are able to find a data-driven dynamics model for the ant MuJoCo agent, which has the most complex dynamics among the locomotion tasks we consider. The multi-step prediction errors of the ICNN are comparable to those of the normal MLP used in (Nagabandi et al., 2018) for different numbers of rollout steps.

More Simulation Results In Fig. 8, we compare our control method with the random-shooting approach under varying settings of the shooting number K, which shows that our approach is more efficient at finding control policies. In Fig. 9, we compare our control method with the rllab implementation of trust region policy optimization (TRPO) (Schulman et al., 2015), an end-to-end deep reinforcement learning approach, on MuJoCo locomotion tasks. More specifically, we compare the algorithms' performance when relatively few rollout samples are available. While our approach quickly learns the dynamics and then finds control actions via optimization, TRPO struggles to learn actions directly from so few rollouts. Similar to the model-based and model-free (Mb-Mf) approach described in (Nagabandi et al., 2018), our control method can provide good initialization samples for model-free algorithms, which can greatly accelerate their training process.

APPENDIX E. DETAILS ON BUILDING ENERGY MANAGEMENT

E.1 MINIMIZING ELECTRICITY COSTS

To further demonstrate the potential of our proposed control framework for different real-world tasks, we modify the setting of the building control example in Section 4.2 to a more complicated case. Instead of directly minimizing the total energy consumption of the building, we aim to minimize the total energy cost of the building subject to a varying time-of-use electricity price λ. The optimization problem in (7) is then rewritten as

minimize_{u_t, ..., u_{t+T}}   Σ_{τ=0}^{T} λ_τ · f(x_{t+τ−nw}, ..., x_{t+τ})    (14a)
subject to   s_{t+τ} = g(x_{t+τ−nw}, ..., x_{t+τ−1}, u_{t+τ}), ∀τ    (14b)
             u̲_{t+τ} ≤ u_{t+τ} ≤ ū_{t+τ}, ∀τ    (14c)
             s̲_{t+τ} ≤ s_{t+τ} ≤ s̄_{t+τ}, ∀τ    (14d)

where the objective (14a) minimizes the total energy cost of the building over the future T steps (T is the model predictive control horizon) subject to the time-of-use electricity price λ_τ, and (14b) models the building states, in which g(·) is parameterized as an ICRNN. As in the previous building control case, the constraints on control actions u_t and system states s_t are given in (14c) and (14d); for instance, the temperature set points as well as the real measurements should not exceed user-defined comfort regions. In Fig. 10 we visualize the flexibility of our model by using Seattle's time-of-use (TOU) price from Seattle City Light4 and minimizing one week's electricity bill. We can see that the ICRNN captures the long-term relationships between the control variables and the final costs: it raises energy consumption slightly during off-peak prices, but reduces energy consumption during peak hours.

4 http://www.seattle.gov/light/
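To make the price-weighted objective (14a) concrete, a minimal sketch is given below. This is our illustration, not the paper's code: `f_icrnn` stands for the trained output model, `roll_window` is a hypothetical helper that appends the new system input [s; û] to the history window, and `prices` is the length-T TOU price vector λ.

```python
import tensorflow as tf

def tou_cost(f_icrnn, window, u_vars, prices):
    """Price-weighted objective (14a): sum_tau lambda_tau * f(x_{tau-nw}, ..., x_tau).
    window: history of system inputs x; u_vars: list of T control tf.Variables."""
    cost = 0.0
    for lam, u in zip(prices, u_vars):
        u_hat = tf.concat([u, -u], axis=-1)       # duplication trick v = -u
        window = roll_window(window, u_hat)       # assumed helper: shift in [s; u_hat]
        cost += lam * tf.reduce_sum(f_icrnn(window))
    return cost
```

Since λ_τ ≥ 0, the weighted sum preserves convexity in the controls, so the same projected-gradient loop used for (7) applies unchanged.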
E.2 CONTROL CONSTRAINTS EFFECTS

In Fig. 11 we add one more comparison, of the effect of control constraints on the final control performance when using the ICRNN. Interestingly, with different set-point constraints, the ICRNN finds similar solutions for off-peak electricity usage, which may correspond to necessary energy consumption, such as lighting and ventilation. Moreover, when we set no constraints on the system, it cuts down more than 80% of the total energy during peak hours.
1. What is the main contribution of the paper regarding the use of input convex neural networks (ICNN)? 2. How does the proposed method bridge the gap between neural networks and model predictive control (MPC)? 3. What are the advantages of using ICNN for learning system dynamics, particularly in terms of robustness and sample efficiency? 4. Are there any minor suggestions or improvements that could be made to the paper?
Review
Review This paper proposes to use input convex neural networks (ICNN) to capture a complex relationship between control inputs and system dynamics, and then use the trained ICNN to form a model predictive control (MPC) problem for control tasks. The paper is well-written and bridges the gap between neural networks and MPC. The main contribution of this paper is to use ICNN for learning system dynamics. ICNN is a neural network that only contains non-negative weights. Thanks to this constraint, ICNN is convex with respect to its input; therefore an MPC problem with an ICNN model and additional convex constraints on control inputs is a convex optimization problem. While it is not easy to solve such a convex problem, it has a global optimum, and a gradient descent algorithm will eventually reach such a point. It should also be noted that a convex problem is robust with respect to the initial starting point, and with respect to the ICNN model itself as well. The latter is quite important, since training ICNN (or NN) is a non-convex optimization, so the parameters in the trained ICNN (or NN) model can vary depending on the initial random weights, learning rates, etc. Since a convex MPC has some robustness (or margin) against errors or deviations in the system dynamics, while a non-convex MPC does not, using ICNN can also stabilize the control inputs in MPC. Overall, I believe that using ICNN to form a convex MPC is a sample-efficient, non-intrusive way of constructing a controller under unknown dynamics. Below are some minor suggestions to improve this paper. -- Page 18, there is a Fig.??. Please fix this. -- In the experiments, could you compare the result with a conventional end-to-end RL approach? I know this is not the main point of this paper, but it could make the paper more compelling.
For the system identification stage, the goal is to learn a fixed form of system model to minimize some prediction error (Ljung, 1998). Most efficient model-based control algorithms have used a relatively simple function estimator for the system dynamics identification (Nagabandi et al., 2018), such as linear model (Ma et al., 2012) and Gaussian processes (Meger et al., 2015; Deisenroth et al., 2011). These simplified models are sample-efficient to learn, and can be nicely incorporated in the sub-sequent optimal control problems. However, such simple models may not have enough representation capacity in modeling large-scale or high-dimension systems with nonlinear dynamics. Deep neural networks (DNNs) feature powerful representation capability, while the main challenge of using DNNs for system identification is that such models are typically highly non-linear and non-convex (Kawaguchi, 2016), which causes great difficulty for following decision making. A recent work from (Nagabandi et al., 2018) is close in spirit as our proposed method. Similarly, the authors use a model-based approach for robotics control, where they first fit a neural network for the system dynamics and then use the fitted network in an MPC loop. However, since (Nagabandi et al., 2018) use conventional NN for system identification, they cannot solve the MPC problem to global optimality. Our work shows how the proposed ICNN control algorithm achieves the benefits from both sides of the world. The optimization with respect to inputs can be implemented using off-the-shelf deep learning optimizers, while we are able to obtain good identification accuracies and tractable computational optimization problems by using proposed method at the same time. 2 CLOSED-LOOP CONTROL WITH INPUT CONVEX NEURAL NETWORKS In this paper, we consider the settings where a neural network is used in a closed-loop system. The fundamental goal is to optimize system performance which is beyond the learning performance of network on its own. In this section we describe how input convex neural networks (ICNN) can be extremely useful in these systems by considering two related problems. First, we show how ICNN perform in single-shot optimization problems. Then we extend the results to an input convex recurrent neural networks (ICRNN), which allows us to both capture systems’ complex dynamics and make time-series decisions. 2.1 SINGLE-SHOT PROBLEM The following proposition states a simple sufficient condition for a neural network to be input convex: Proposition 1. The feedforward neural network in Fig. 2(a) is convex from input to output given that all weights between layers W1:k and weights in the “passthrough” layers D2:k are non-negative, and all of the activation functions are convex and nondecreasing (e.g. ReLU). The structure of the input convex neural network (ICNN) structure in Proposition 1 is motivated by the structure in (Amos et al., 2017) but modified to be more suitable to control of dynamical systems. In (Amos et al., 2017) it only requires W2:k to be non-negative while having no restrictions on weights W1 and D2:k. Our construction achieves the exact representation by expanding the inputs to include both u (∈ Rd) and −u. Then any negative weights in W1 and D2:k in (Amos et al., 2017)’s ICNN structure is set to zero and its negation (which is positive) is added as the weight for corresponding −u. 
The reason for our construction is to allow the network to be “rolled out in time” when we are dealing with dynamical systems and multiple networks need to be composed together. An simple example that demonstrates how the proposed ICNN can be used to fit a convex function comes form fitting the |u| function. This function is convex and both decreasing and increasing. Let the activation function be ReLU(·) = max(·,0). We can write |u| = −u+ 2ReLU(u) (Amos et al., 2017). However, in this representation, we need a negative weight, the −1 in front of u, and this would be troublesome if we compose several networks together. In our proposed ICNN structure with all positive weights and input negation duplicates, we can write |u| = v+ 2ReLU(u), where we impose a constraint v = −u. Such doubline on the number of input variables may potentially make the network harder to train. Yet during control, having all of the weights positive maintains the convexity between inputs and outputs even if multiple steps are considered which will be discussed in Section 2.2. The constraint v =−u is linear and can be easily included in any convex optimization. This proposition follows directly from composition of convex functions (Boyd & Vandenberghe, 2004). Although it allows for any increasing convex activation functions, in this paper we work with the popular ReLU activation function. Two notable additions in ICNN compared with conventional feedforward neural networks are: 1) Addition of the direct “passthrough” layers connecting inputs to hidden layers and conventional feedforward layers connecting hidden layers for better representation power. 2) the expanded inputs that include both u and −u. The proposed ICNN structure is shown in Fig. 2(a). Note that such construction guarantees that the network is convex and non-decreasing with respect to the expanded inputs û = [ u −u ] , while the output can achieve either decreasing or non-decreasing functions over u. Fundamentally, ICNN allows us to use neural networks in decision making processes by guaranteeing the solution is unique and globally optimal. Since many complex input and output relationships can be learned through deep neural networks, it is natural to consider using the learned network in an optimization problem in the form of min u f (u;W) (1a) s.t. u ∈U , (1b) where U is a convex feasible space. Then if f is an ICNN, optimizing over u is a convex problem, which can be solved efficiently to global optimality. Note that we will always duplicate the variables by introducing v = −u, but again this does not change the convexity of the problem. Of course, since the weights of the network are restricted to be nonnegative, the performance of the network (e.g., classification) may be worse. A common thread we observe in this paper is that trading off classification performance with tractability can be preferable. 2.2 CLOSED-LOOP CONTROL AND RECURRENT NEURAL NETWORKS In addition to the single-shot optimization problem in (1), we are interested in optimally controlling a dynamical system. To model the temporal dependency of the system dynamics, we propose to use recurrent neural networks (instead of feed-forward neural networks). Recurrent networks carry an internal state of the system, which introduces coupling with previous inputs to the system. Fig. 2(b) shows the proposed input convex recurrent neural networks (ICRNN) structure. This network maps from input û to output y with memory unit z according to the following Eq. 
(2), zt = σ1(Uût +Wzt−1 +D2ût−1) , (2) yt = σ2(Vzt +D1zt−1 +D3ût) , (3) where û = [ u −u ] , and D1,D2,D3 are added direct “passthrough” layers for augmenting representation power. If we unroll the dynamics with respect to time, we have yt = f (û1, û2, ..., ût ;θ) where θ = [U,V,W,D1,D2,D3] are network parameters, and σ1,σ2 denote the nonlinear activation functions. The next proposition states a sufficient condition for the network to be input convex. Proposition 2. The network shown in Fig. 2(b) is a convex function from inputs to output if all weights U,V,W,D1,D2,D3 are non-negative, and all activation functions are convex and nondecreasing (e.g. ReLU). The proof of this proposition again follows directly from the composition rule of convex functions. Similarly to the ICNN case, by expanding the inputs vector to include both u and −u and restricting all weights to be non-negative, the resulted ICRNN structure is a convex and non-decreasing mapping from inputs to output. The proposed ICRNN structure can be leveraged to represent system dynamics for close-loop control. Consider a physical system with discrete-time dynamics, at time step t, let’s define st as the system states, ut as the control actions, and yt as the system output. For example, for the real-time control of a building system, st includes the room temperature, humidity, etc; ut denotes the building appliance scheduling, room temperature set-points, etc; and output yt is the building energy consumption. In addition, there maybe exogenous variables that impact the output of the system, for example, outside temperature will impact the energy consumption of the building. However, since the exogenous variables are not impacted by any of the control actions we take, we suppress them in the formulation below. The time evolution of a system is described by yt = f (st ,ut) , (4a) st+1 = g(st ,ut) (4b) where (4b) describes the coupling between the current inputs to the future system states. Physical systems described by (4) may have significant inertia in the sense that the outcome of any control actions is delayed in time and there are significant couplings across time periods. Since we use ICRNNs to represent both the system dynamics g(·) and the output f (·), the control variable u expands as û. The optimal receding horizon control problem at time t can be written as, minimize ut ,ut+1,...,ut+T C(x̂,y) = t+T ∑ τ=t J(x̂τ ,yτ) (5a) subject to yτ = f(x̂τ−nw , x̂τ−nw+1, ..., x̂τ),∀τ ∈ [t, t +T ] (5b) sτ = g(x̂τ−nw , x̂τ−nw+1, ..., x̂τ−1, ûτ) ,∀τ ∈ [t, t +T ] (5c) x̂τ = [ sτ ûτ ] , ûτ = [ uτ vτ ] ,∀τ ∈ [t, t +T ] (5d) vτ =−uτ ,∀τ ∈ [t, t +T ] (5e) sτ ∈S f easible,∀τ ∈ [t, t +T ] (5f) uτ ∈U f easible,∀τ ∈ [t, t +T ] (5g) where a new variable x̂ = [st , ût ] is introduced for notational simplicity, which called system inputs. It is the collection of system states st and duplicated control actions ut and −ut , therefore ensuring the mapping from ut to any future states and outputs remains convex. J(x̂τ ,yτ) is the control system cost incurs at time τ , that is a function of both the system inputs x̂τ and output yτ . The functions f (·) and g(·) in Eq. (5b)-(5c) are parameterized as ICRNNs, which represent the system dynamics from sequence of inputs (x̂τ−nw , x̂τ−nw+1, ..., x̂τ) to the system output yτ , and the dynamics from control actions to system states, respectively. nw is the memory window length of the recurrent neural network. 
The equations (5d) and (5e) duplicate the input variables u and enforce the consistency condition between u and its negation v. Lastly, (5f) and (5g) are the constraints on feasible system states and control actions respectively. Note that as a general formulation, we do not include the duplication tricks on state variables, so the dynamics fitted by (5b) and (5c) are non-decreasing over state space, which are not equivalent to those dynamics represented by linear systems. However, since we are not restricting the control space, and we have explicitly included multiple previous states in the system transition dynamics, so the non-decreasing constraint over state space should not restrict the representation capacity by much. In Section.3 we theoretically prove the representability of proposed networks. Optimization problem in (5) is a convex optimization with respect to (w.r.t.) inputs u = [ut , ...,ut+T ], provided the cost function J(x̂τ ,yτ) = J(sτ , ûτ ,yτ) is convex w.r.t. ûτ , and convex, nondecreasing w.r.t. sτ and yτ . A problem is convex if and only if both the objective function and constraints are convex. In the above problem, J(sτ , ûτ ,yτ) is convex and nondecreasing w.r.t. sτ and yτ ; sτ and yτ are parameterized as ICRNNs, i.e., (5a) and (5b), such that they are convex w.r.t. ûτ . Therefore following the composition rule of convex functions, the objective function is convex w.r.t. inputs u = [ut , ...,ut+T ]. Besides, all the equality constraints (5d) and (5e) are affine. Suppose both the state feasibile set (5f) and action feasibile set (5g) are convex, the overall optimization is convex. The convexity of the problem in (5) guarantees that it can be solved efficiently and optimally using gradient descend method. Since both the objective function (5a) and the constraints (5b)-(5c) are parameterized as neural networks, and their gradients can be calculated via back-propagation with the modification where cost is propagated to the input rather than the weights of the network. For implementation, the gradients can be convinently calculated via existing modules such as Tensorflow viaback-propagation. Let u∗ = {u∗t ,u∗t+1, ...,u∗t+T} be the optimal solution of the optimization problem at time t. Then the first element of u∗ is implemented to the real-time system control, that is u∗t . The optimization problem is repeated at time t +1, based on the updated state prediction using u∗t , yielding a model predictive control strategy. 3 EFFICIENCY AND REPRESENTATION POWER OF ICNN Besides the computational traceability of the input convex networks, as an system identification model, we are also interested its predictive accuracies and capacity. This section provides theoretical analysis on the representation ability and efficiency of input convex neural networks. 3.1 REPRESENTATION POWER OF INPUT CONVEX NEURAL NETWORK Definition 1. Given a function f : Rd → R, we say that the function f̂ approximate f within ε if | f (x)− f̂ (x)| ≤ ε for all x in the domain of f . Theorem 1. [Representation power of ICNN] For any Lipschitz convex function over a compact domain, there exists a neural network with nonnegative weights and ReLU activation functions that approximates it within ε . Lemma 1. Given a continuous Lipschitz convex function f : Rd → R with compact domain and ε > 0, it can be approximated within ε by maximum of a finite number of affine functions. That is, there exists f̂ (x) = maxi=1,...,N{µiT x+bi} such that | f (x)− f̂ (x)| ≤ ε for all x ∈ dom f . Sketch of proof for Theorem 1. 
Supposing Lemma 1 is true, the proof of Theorem 1 boils down to showing that a neural network with non-negative weights and ReLU activation functions can exactly represent a maximum of affine functions. The proof is constructive. We first construct a neural network with ReLU activation functions and both positive and negative weights; then we show that the weights between different layers of the network can be restricted to be non-negative by a simple duplication trick. Specifically, since the weights in the input layer and the passthrough layers of the ICNN can be negative, we simply add a negation of each input variable (e.g., both $x$ and $-x$ are given as inputs) to the network. These variables need to satisfy a consistency constraint, since one is the negation of the other. Since this constraint is linear, it preserves the convexity of optimization problems. The details of the proofs are given in Appendix B. This proof is similar in spirit to theorems in (Hanin, 2017; Arora et al., 2016). The key new result is a simpler construction than the one used in (Hanin, 2017) and the restriction to non-negative weights between the layers.

Similarly to Theorem 1, an analogous result about the representation power of the ICRNN can be shown for systems with convex dynamics. Given a dynamical system whose rolled-out dynamics $y_t = f(x_1, \ldots, x_t)$ is convex, there exists a recurrent neural network with non-negative weights and ReLU activation functions that approximates it within $\varepsilon$. A broad range of systems can be captured by this model. For example, the linear quadratic (Gaussian) regulator problem can be described using an ICRNN if we identify $y$ as the cost of the regulator (Skogestad & Postlethwaite, 2007; Boyd et al., 1994). (Note that $y$ usually denotes the output of a linear system, but in our context we use it to refer to the quadratic cost with respect to the system states and the control input.) An example of a nonlinear system is the control of electrochemical batteries. It can be shown from first principles that the degradation of these types of batteries is convex in their charge and discharge actions (Shi et al., 2018), and our framework offers a powerful data-driven way to control batteries found in electric vehicles, cell phones, and power systems.

3.2 ICNN VS. CONVEX PIECEWISE LINEAR FITTING

In the proof of Theorem 1, we first approximate a convex function by a maximum of affine functions and then construct a neural network according to this maximum. A natural question is then why we learn a neural network rather than directly learning the affine functions in the maximum. This approach was taken in (Magnani & Boyd, 2009), where a convex piecewise-linear function (a max of affine functions) is directly learned from data through a regression problem. A key reason that we propose to use an ICNN (or ICRNN) to fit a function, rather than directly finding a maximum of affine functions, is that the former is a much more efficient parameterization than the latter. As stated in Theorem 2, a maximum of $K$ affine functions can be represented by an ICNN with $K$ layers, where each layer only requires a single ReLU activation function. However, given a single-layer ICNN with $K$ ReLU activation functions, it may take a maximum of $2^K$ affine functions to represent it exactly. Therefore, in practice, it is much easier to train a good ICNN than to find a good set of affine functions.

Theorem 2. [Efficiency of Representation]
1. Let $f_{ICNN}: \mathbb{R}^d \to \mathbb{R}$ be an input convex neural network with $K$ ReLU activation functions. Then $\Omega(2^K)$ affine functions are required to represent $f_{ICNN}$ using a max of affine functions.
2. Let $f_{CPL}: \mathbb{R}^d \to \mathbb{R}$ be a max of $K$ affine functions. Then $O(K)$ activation functions are sufficient to represent $f_{CPL}$ exactly with an ICNN.

The proof of this theorem is given in Appendix C.

4 EXPERIMENTS

In this section, we verify the effectiveness of the ICNN and ICRNN by presenting experimental results on two decision-making problems: continuous control benchmarks on MuJoCo locomotion tasks (Todorov et al., 2012) and energy management of a reference large-scale commercial building (Crawley et al., 2001), respectively. The proposed method can be used as a flexible building block in decision-making problems: we use ICNNs to represent the system dynamics of the MuJoCo simulators, and we use ICRNNs in an end-to-end fashion to find the optimal control inputs. Both examples demonstrate that the proposed method: 1) discovers the connection between controllable variables and the system dynamics or cost objectives; 2) is lightweight and sample-efficient; 3) achieves generalizable and more stable control performance compared with previous model-based reinforcement learning and simplified linear control approaches.

4.1 MUJOCO LOCOMOTION TASKS

Experimental Setup. We consider four simulated robotic locomotion tasks: swimmer, half-cheetah, hopper, and ant, implemented in MuJoCo under the OpenAI rllab framework (Duan et al., 2016). We train and represent the locomotion state transition dynamics $s_{t+1} = g(s_t, u_t)$ using a 2-layer ICNN with ReLU activations, which is integrated into the following finite-horizon control problem to find the optimal action sequence $u_t, \ldots, u_{t+T}$ for a fixed look-ahead horizon $T$:

$$\begin{aligned}
\underset{u_t, \ldots, u_{t+T}}{\text{minimize}} \quad & -\sum_{\tau=t}^{t+T} r(s_\tau, u_\tau) & (6a)\\
\text{subject to} \quad & s_{\tau+1} = g(s_\tau, u_\tau), \;\forall \tau \in [t, t+T] & (6b)\\
& u_\tau \in \mathcal{U}_{feasible}, \;\forall \tau \in [t, t+T] & (6c)
\end{aligned}$$

(For notational convenience, in this example and the building example that follows, we use $u$ to represent the expanded control vector including its negation. For a system state $s \in \mathbb{R}^d$ with $d > 1$, convexity means that each dimension of $s$ is convex w.r.t. the function inputs.) The objective (6a) is convex because $r(s_\tau, u_\tau)$ is a concave reward function of system states, such as velocity, and of control actions (the detailed forms of $r(s_\tau, u_\tau)$ for the different locomotion tasks are listed in Appendix D). To achieve better model generalization on locomotion dynamics, we also follow (Nagabandi et al., 2018) and apply DAGGER (Ross et al., 2011) to iteratively collect labeled robotic rollouts and train the supervised dynamics model (6b) using on-policy locomotion samples. See Appendix D for further simulation hyperparameters and experimental details. For each aggregation iteration of collecting rollout data and training the ICNN model, we validate the controller performance on standalone validation rollouts by optimally solving (6).

Baselines. We compare our system modeling and continuous control method with the state-of-the-art model-based RL algorithm of (Nagabandi et al., 2018), where the authors used a standard multi-layer perceptron (MLP) to parameterize the system dynamics (6b). We refer to their method as the random-shooting algorithm, since it cannot solve (6) to optimality; instead, it uses a pre-defined number of random-shooting control sequences (denoted as $K$) to query the trained MLP and selects the best sequence as the rollout policy.
Such a method is able to find good control policies on the order of $10^4$ timesteps, which is much more sample-efficient than model-free RL methods (Duan et al., 2016; Mnih et al., 2015). To make a fair comparison with the baseline method, we keep the same setup for the number of rollouts and the initial random-action training. Our framework makes the neural network convex w.r.t. the input by adding passthrough links to the 2-layer model and keeping all the layer weights non-negative. We evaluate the performance of both algorithms with three fixed random seeds on the four tasks. Similar to the fine-tuning steps in (Nagabandi et al., 2018), control policies found by the ICNN can also be plugged in as initialization policies for subsequent model-free reinforcement learning algorithms.

Continuous Control Performance. During training, we found that both the ICNN and the MLP are able to predict robotic states quite accurately based on (6b). This provides a good system dynamics model, which is beneficial for solving for control policies. The control performance is shown in Fig. 3, where we compare the average reward of the proposed method and the random-shooting method with $K = 100$ over 10 validation rollouts during each aggregation iteration (see Fig. 8 in Appendix D.4 for random-shooting performance with varying $K$). The policy found by the ICNN outperforms the random-shooting method in all settings with varying horizon $T$ for all four locomotion tasks. Intuitively, the ICNN should perform better when the action space is larger, since the random-shooting method cannot search through the action space efficiently with a fixed $K$. This is illustrated by the example of ant, where even as more training samples are aggregated and the MLP represents increasingly accurate dynamics, random shooting gets stuck and shows little improvement in control performance. Moreover, since we skip the expensive process of calculating the reward of each random-shooting trajectory and selecting the best one, our method only implements the ICNN inference step based on (6) and is much faster than random-shooting methods in most settings, especially when $K$ is large (see Table 2 for wall-clock time in Appendix D.3). For instance, in the case of Swimmer, our proposed method uses only 1/5 of the time of (Nagabandi et al., 2018). This also indicates that our method is much more sample-efficient than off-the-shelf model-free RL methods, using two orders of magnitude less training data to reach similar validation rewards (Duan et al., 2016; Mnih et al., 2015) (see Fig. 9 in Appendix D.4).

4.2 BUILDING ENERGY MANAGEMENT

Experimental Setup. We now move on to optimally controlling a dynamical system with significant inertia. We consider the real-time control of a building's HVAC (heating, ventilation, and air conditioning) system to reduce its energy consumption. Building energy management remains a hard problem in the controls area: the exact system dynamics are unknown and hard to model due to the complex heat transfer dynamics, time-varying environments, and the scale of the system in terms of states and actions (Kouro et al., 2009). At time $t$, we assume the building's running profile $x_t := [s_t, u_t]$ is available, where $s_t$ denotes the building system states, including the outside temperature, room temperature measurements, zone occupancies, etc.; $u_t$ denotes a collection of control actions such as room temperature set points and appliance schedules. The output is the electricity consumption $P_t$.
This is a model predictive control problem in the sense that we want to find the best control inputs that minimize the overall energy consumption of the building by looking ahead several time steps. To achieve this goal, we first learn an ICRNN model $f(\cdot)$ of the building dynamics, trained to minimize the error between $P_t$ and $f(x_{t-n_w}, \ldots, x_t)$, where $n_w$ denotes the memory window of the recurrent neural network. Then we solve:

$$\begin{aligned}
\underset{u_t, \ldots, u_{t+T}}{\text{minimize}} \quad & \sum_{\tau=t}^{t+T} f(x_{\tau-n_w}, \ldots, x_\tau) & (7a)\\
\text{subject to} \quad & s_\tau = g(x_{\tau-n_w}, \ldots, x_{\tau-1}, u_\tau), \;\forall \tau \in [t, t+T] & (7b)\\
& \underline{u}_\tau \le u_\tau \le \overline{u}_\tau, \;\forall \tau \in [t, t+T] & (7c)\\
& \underline{s}_\tau \le s_\tau \le \overline{s}_\tau, \;\forall \tau \in [t, t+T] & (7d)
\end{aligned}$$

where the objective (7a) minimizes the total energy consumption over the next $T$ steps ($T$ is the model predictive control horizon), and (7b) models the building states, where $g(\cdot)$ is parameterized as an ICRNN. Note that the formulation (7) is also flexible with respect to different loss functions. For instance, in practice, we could reuse the trained dynamics model (7b) and integrate electricity prices into the overall objective, so that we directly learn real-time actions that minimize electricity bills (see Appendix E for more results). The constraints on the control actions $u_t$ and the system states $s_t$ are given in (7c) and (7d); for instance, the temperature set points as well as the real measurements should not exceed user-defined comfort regions. To test the performance of the proposed method, we set up a 12-story large office building, a reference EnergyPlus commercial building model from the US Department of Energy (DoE) (EnergyPlus is an open-source whole-building energy modeling software developed by the US DoE for standard building energy simulation), with a total floor area of 498,584 square feet divided into 16 separate zones. Using a whole year's weather profile, we simulate the building running through the year and record $(x_t, P_t)$ at a resolution of 10 minutes. We use 10 months' data to train the ICRNN and the subsequent 2 months' data for testing. We use 39 building system state variables $s_t$ (uncontrollable), along with 16 control variables $u_t$. The output is a single value of building energy consumption at each time step. We set the model predictive control horizon $T = 36$ (six hours). We employ an ICRNN with a recurrent layer of dimension 200 to fit the building input-output dynamics $f(\cdot)$. The model is trained to minimize the MSE between its predictions and the actual building energy consumption using stochastic gradient descent. We use the same network structure and training scheme to fit the state transition dynamics $g(\cdot)$.

Baseline. We set the model-based forecasting and optimization benchmark using a linear resistor-capacitor (RC) circuit model to represent the heat transfer in the building, and solve for the optimal control actions via MPC (Ma et al., 2012). At each step, the MPC algorithm takes into account the forecasted states of the building based on the fitted RC model and implements the current-step control actions. We also compare the performance of the ICRNN against a conventionally trained RNN in terms of both building dynamics fitting and control performance. To solve the MPC problem with conventional RNN models, we also use gradient-based methods with respect to the controls. However, since conventional RNN models are generally not convex from input to output, there is no guarantee of reaching a global optimum (or even a local one).

Results. In terms of fitting performance, the ICRNN provides a competitive result compared to the conventional RNN model.
The overall test root mean square error (RMSE) is 0.054 for the ICRNN and 0.051 for the conventional RNN, both of which are much smaller than the error of the RC model (0.240). Fig. 4(a) shows the fitting performance on 5 working days in the test data. This illustrates the good performance of the ICRNN in modeling building HVAC system dynamics. Then, using the learned ICRNN model of the building dynamics, we obtain the suggested room control actions $u^*_t$ by solving the optimal building control problem (7). As shown in Fig. 4(b), with the same constraint that the building temperature stay within [19°C, 24°C], the building energy consumption is reduced by 23.25% after implementing the new temperature set points calculated with the ICRNN. By contrast, since there is no guarantee of finding optimal control actions when optimizing over a conventional RNN's inputs, the control solutions given by the conventional RNN reduce electricity use by only 11.73%, and the solutions given by the RC model save only 4.07%. More importantly, in Fig. 4(c) we compare the control actions output by our method against MPC with a conventional RNN in two randomly selected building zones, the building basement and the top-floor central area. Our proposed approach finds a group of stable control actions for the building system, whereas the conventional RNN generates control set points with undesirable, drastic variations.

5 SUMMARY AND DISCUSSION

In this work we proposed a novel optimal control framework that uses deep neural networks engineered to be convex from the input to the output. This framework bridges machine learning and control by representing system dynamics using input convex (recurrent) neural networks. We show that many interesting data-driven control problems can be cast as convex optimization problems using the proposed network architecture. Experiments on both benchmark MuJoCo locomotion tasks and building energy management demonstrate our methodology's potential in a variety of control and optimization problems.

APPENDIX A. TOY EXAMPLE

Consider a synthetic example containing two circles of noisy input data $u \in \mathbb{R}^2$, along with a discrete label $y \in \{0, 1\}$ indicating whether the input comes from the inner circle ($y = 0$) or the outer circle ($y = 1$). Suppose a decision maker is interested in finding the $u$ that maximizes the probability of $y$ being 0. This optimization problem can be solved by first learning a neural network classifier from $u$ to $y$, and then finding the $u$ that minimizes the output of the neural network. More specifically, let $f_{NN}$ be a conventional neural network and $f_{ICNN}$ be an ICNN. The objective then becomes minimizing $f_{NN}(u)$ or $f_{ICNN}(u)$. Figure 5 shows the decision boundaries for $f_{NN}$ and $f_{ICNN}$, respectively. These networks are composed of 2 hidden layers with 200 neurons in each layer, and are trained using the same random seed and the same number of samples (100) until loss convergence. The decision boundaries of the conventional network have many "zigzags", which makes solving (1) challenging, especially if $u$ is constrained. In contrast, the ICNN has convex level sets (by construction) as decision boundaries, which leads to a convex optimization problem.

APPENDIX B. PROOF OF THEOREM 1

Proof.
Lemma 1 follows from well-established facts in function analysis stating that piecewise linear functions are dense in the space of all continuous functions over compact sets (Royden & Fitzpatrick, 2010) and that convex piecewise linear functions are dense in the space of all convex continuous functions (Cox, 1971; Gavrilović, 1975). Using the fact that convex piecewise linear functions can be represented as a maximum of affine functions (Magnani & Boyd, 2009; Wang, 2004) gives the desired result in the lemma.

Lemma 1 shows that all continuous Lipschitz convex functions $f(x): \mathbb{R}^d \to \mathbb{R}$ over convex compact sets can be approximated using a maximum of affine functions. It then suffices to show that an ICNN can exactly represent a maximum of affine functions. To do this, we first construct a neural network with ReLU activation functions and both positive and negative weights that can represent a maximum of affine functions. Then we show how to restrict all weights to be non-negative. As a starting example, consider a maximum of two affine functions

$$f_{CPL}(x) = \max\{a_1^T x + b_1,\; a_2^T x + b_2\}. \qquad (8)$$

To obtain the exact same function using a neural network, we first rewrite it as

$$f_{CPL}(x) = (a_2^T x + b_2) + \max\left((a_1 - a_2)^T x + (b_1 - b_2),\; 0\right). \qquad (9)$$

Now define a two-layer neural network with layers $z_1$ and $z_2$ as shown in Fig. 6:

$$z_1 = \sigma\left((a_1 - a_2)^T x + (b_1 - b_2)\right), \qquad (10a)$$
$$z_2 = z_1 + a_2^T x + b_2, \qquad (10b)$$

where $\sigma$ is the ReLU activation function and the second layer is linear. By construction, this neural network is the same function as $f_{CPL}$ given in (8). The above argument extends directly to a maximum of $K$ affine functions. Suppose

$$f_{CPL}(x) = \max\{a_1^T x + b_1, \ldots, a_K^T x + b_K\}. \qquad (11)$$

Again, the trick is to rewrite $f_{CPL}(x)$ as a nested maximum of affine functions. For notational convenience, let $L_i = a_i^T x + b_i$ and $L'_i = L_i - L_{i+1}$. Then

$$\begin{aligned}
f_{CPL} &= \max\{L_1, L_2, \ldots, L_K\}\\
&= \max\{\max\{L_1, L_2, \ldots, L_{K-1}\}, L_K\}\\
&= L_K + \sigma\left(\max\{L_1, L_2, \ldots, L_{K-1}\} - L_K\right)\\
&= L_K + \sigma\left(L_{K-1} - L_K + \sigma\left(\max\{L_1, L_2, \ldots, L_{K-2}\} - L_{K-1}\right)\right)\\
&= \cdots\\
&= L_K + \sigma\left(L'_{K-1} + \sigma\left(L'_{K-2} + \sigma\left(\cdots \sigma\left(L'_2 + \sigma(L_1 - L_2)\right)\cdots\right)\right)\right).
\end{aligned}$$

The last equation describes a $K$-layer neural network, whose layers are

$$\begin{aligned}
z_1 &= \sigma(L_1 - L_2) = \sigma\left((a_1 - a_2)^T x + (b_1 - b_2)\right),\\
z_2 &= \sigma(L'_2 + z_1) = \sigma\left(z_1 + (a_2 - a_3)^T x + (b_2 - b_3)\right),\\
&\;\;\vdots\\
z_i &= \sigma(L'_i + z_{i-1}) = \sigma\left(z_{i-1} + (a_i - a_{i+1})^T x + (b_i - b_{i+1})\right),\\
&\;\;\vdots\\
z_K &= z_{K-1} + L_K = z_{K-1} + a_K^T x + b_K.
\end{aligned}$$

Each layer of this neural network uses only a single activation function. Although the above neural network exactly represents a maximum of affine functions, it is not input convex, since the coefficients between layers can be negative: each layer involves an inner product of the form $(a_i - a_{i+1})^T x$, whose coefficients are not necessarily non-negative. To overcome this, we simply expand the input to include both $x$ and $-x$. Namely, define a new input $\hat{x} \in \mathbb{R}^{2d}$ as

$$\hat{x} = \begin{bmatrix} x \\ -x \end{bmatrix}. \qquad (12)$$

Then any inner product of the form $h^T x$ can be written as

$$h^T x = \sum_{i=1}^{d} h_i x_i = \sum_{i: h_i \ge 0} h_i x_i + \sum_{i: h_i < 0} (-h_i)(-x_i) = \sum_{i: h_i \ge 0} h_i \hat{x}_i + \sum_{i: h_i < 0} (-h_i)\hat{x}_{i+d},$$

where all coefficients in the final sum are non-negative. Therefore, any inner product between a coefficient vector and the input $x$ can be written as an inner product between a non-negative coefficient vector and the expanded input $\hat{x}$. Hence, without loss of generality, we can restrict all the weights between layers to be non-negative, making the neural network input convex. Note that in optimization problems we need to enforce consistency in $\hat{x}$ by including (12) as a constraint. However, this is a linear equality constraint, which maintains the convexity of the optimization problem. The nested construction above can also be checked numerically, as in the sketch below.
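As a quick numerical sanity check of this construction (our own illustration, not part of the paper), the following sketch builds the nested-ReLU network for a max of $K$ affine functions and compares it against a direct evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 6
A = rng.standard_normal((K, d))   # rows are the a_i
b = rng.standard_normal(K)        # the b_i

def f_cpl(x):
    return np.max(A @ x + b)      # max of K affine functions, Eq. (11)

def f_relu_net(x):
    # z_1 = relu(L_1 - L_2); z_i = relu(z_{i-1} + L_i - L_{i+1}); z_K = z_{K-1} + L_K
    z = max((A[0] - A[1]) @ x + (b[0] - b[1]), 0.0)
    for i in range(1, K - 1):
        z = max(z + (A[i] - A[i + 1]) @ x + (b[i] - b[i + 1]), 0.0)
    # The layer weights (a_i - a_{i+1}) may be negative; feeding the expanded
    # input x_hat = [x, -x] as in (12) makes all weights non-negative
    # without changing the function.
    return z + A[K - 1] @ x + b[K - 1]

for _ in range(1000):
    x = rng.standard_normal(d)
    assert np.isclose(f_cpl(x), f_relu_net(x))
```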
APPENDIX C. PROOF OF THEOREM 2

Proof. The second statement of Theorem 2 follows directly from the construction in the proof of Theorem 1, which shows that a maximum of $K$ affine functions can be represented by a $K$-layer ICNN (with a single ReLU function in each layer). So it remains to show the first statement of Theorem 2. To show that a maximum of affine functions can require an exponential number of pieces to represent a function specified by an ICNN with $K$ activation functions, consider a network with one hidden layer of $K$ nodes where the weights of the direct "passthrough" layers are set to 0:

$$f_{ICNN}(x) = \sum_{i=1}^{K} w_{1i}\,\sigma(w_{0i}^T x + b_i). \qquad (13)$$

It contains 3K parameter groups: $w_{0i}$, $w_{1i}$, and $b_i$, where $w_{0i} \in \mathbb{R}^d$ and $w_{1i}, b_i \in \mathbb{R}$. In order to represent the same function by a maximum of affine functions, we need to assess the value of every activation unit $\sigma(w_{0i}^T x + b_i)$: if $w_{0i}^T x + b_i \ge 0$, then $\sigma(w_{0i}^T x + b_i) = w_{0i}^T x + b_i$; otherwise $\sigma(w_{0i}^T x + b_i) = 0$. In total, we have $2^K$ potential combinations of piecewise-linear functions, including

$$L_1 = \Big(\sum_{i=1}^{K} w_{1i}w_{0i}\Big)^T x + \sum_{i=1}^{K} w_{1i}b_i, \quad \text{if all } w_{0i}^T x + b_i \ge 0,$$
$$L_2 = \Big(\sum_{i=2}^{K} w_{1i}w_{0i}\Big)^T x + \sum_{i=2}^{K} w_{1i}b_i, \quad \text{if } w_{01}^T x + b_1 < 0 \text{ and all other } w_{0i}^T x + b_i \ge 0,$$
$$L_3 = \Big(w_{11}w_{01} + \sum_{i=3}^{K} w_{1i}w_{0i}\Big)^T x + w_{11}b_1 + \sum_{i=3}^{K} w_{1i}b_i, \quad \text{if } w_{02}^T x + b_2 < 0 \text{ and all other } w_{0i}^T x + b_i \ge 0,$$
$$\cdots$$
$$L_{2^K} = 0, \quad \text{if all } w_{0i}^T x + b_i < 0.$$

So the following maximum over $2^K$ pieces is required to represent the single-layer ICNN: $\max\{L_1, L_2, \ldots, L_{2^K}\}$.

APPENDIX D. EXPERIMENTAL DETAILS ON MUJOCO TASKS

D.1 DATA COLLECTION

Rollout Samples. To train the neural network dynamics models (both ICNN and MLP), we first collect initial rollout data using fully random action sequences $u_t \sim \text{Uniform}[-1, 1]$ with a randomly chosen initial state. During data collection in the aggregation iterations, to improve model generalization and explore larger state spaces, we add Gaussian noise to the optimal control policies: $u_t = u_t + \mathcal{N}(0, 0.001)$.

Neural Network Training. We represent the MuJoCo dynamics with 2-hidden-layer neural networks with hidden sizes 512-512. The passthrough links of the ICNN have the same size as the corresponding layers. We train both models using the Adam optimizer with a learning rate of 0.001 and a mini-batch size of 512. Due to the different complexities of the MuJoCo tasks, we vary the number of training epochs and summarize the training details in Table 1.

D.2 ENVIRONMENT DETAILS

In all of the MuJoCo locomotion tasks, $s$ includes state variables such as robot positions and velocities along each axis; $u$ includes the agent's action efforts. We use the standard reward functions $r(s_t, u_t)$ for the moving tasks, which can also be promptly calculated in (6a) as the control objective. For ease of neural network training and action sampling, we normalize all actions and states to the range $[-1, 1]$. We use DAGGER (Ross et al., 2011) for 6 aggregation iterations in all cases; during each aggregation iteration, we use a split of 10% random rollouts collected as described above, with the other 90% coming from past iterations' control policies (on-policy rollouts). Note that we use 10 random control sequences in our method to initialize the policy search and avoid long computation times when taking gradients to find the optimal $u_t$. Other environment parameters are described in Table 1.

D.3 WALL-CLOCK TIME

In Table 2, we show the average run time for the total of 6 aggregation iterations over 3 runs.
Finding control policies via the ICNN uses less than or equal training time compared to the random-shooting method with $K = 100$, while achieving better task rewards than $K = 1000$ for the different control horizons. All experiments are run on a computer with an 8-core Intel i7-6700 CPU. Note that we do not use a GPU to accelerate the ICNN optimization step (6), which could further improve our method's efficiency.

D.4 DETAILS OF SIMULATION RESULTS

MuJoCo Dynamics Modeling. In Fig. 7, we compare the ICNN and standard MLP fitting performance on the MuJoCo dynamics modeling task (6b). Both the MLP and the ICNN are able to find a data-driven dynamics model for the ant MuJoCo agent, which has the most complex dynamics among the locomotion tasks we considered. The multi-step prediction errors of the ICNN are comparable to those of the standard MLP used in (Nagabandi et al., 2018) across different rollout lengths.

More Simulation Results. In Fig. 8, we compare our control method with the random-shooting approach under varying shooting numbers $K$, which shows that our approach is more efficient at finding control policies. In Fig. 9, we compare our control method with the rllab implementation of trust region policy optimization (TRPO) (Schulman et al., 2015), an end-to-end deep reinforcement learning approach, on the MuJoCo locomotion tasks. More specifically, we compare the algorithms' performance when relatively few rollout samples are available. While our approach quickly learns the dynamics and then finds control actions via optimization steps, TRPO struggles to learn actions directly with so few rollouts. Similar to the model-based and model-free (Mb-Mf) approach described in (Nagabandi et al., 2018), our control method could provide good initialization samples for model-free algorithms, greatly accelerating their training process.

APPENDIX E. DETAILS ON BUILDING ENERGY MANAGEMENT

E.1 MINIMIZING ELECTRICITY COSTS

To further demonstrate the potential of our proposed control framework on different real-world tasks, we modify the setting of the building control example in Section 4.2 to a more complicated case. Instead of directly minimizing the total energy consumption of the building, we aim to minimize the building's total energy cost subject to a time-varying time-of-use electricity price $\lambda$. The optimization problem in (7) is rewritten as

$$\begin{aligned}
\underset{u_t, \ldots, u_{t+T}}{\text{minimize}} \quad & \sum_{\tau=0}^{T} \lambda_\tau \cdot f(x_{t+\tau-n_w}, \ldots, x_{t+\tau}) & (14a)\\
\text{subject to} \quad & s_{t+\tau} = g(x_{t+\tau-n_w}, \ldots, x_{t+\tau-1}, u_{t+\tau}), \;\forall \tau & (14b)\\
& \underline{u}_{t+\tau} \le u_{t+\tau} \le \overline{u}_{t+\tau}, \;\forall \tau & (14c)\\
& \underline{s}_{t+\tau} \le s_{t+\tau} \le \overline{s}_{t+\tau}, \;\forall \tau & (14d)
\end{aligned}$$

where the objective (14a) minimizes the building's total energy cost over the next $T$ steps ($T$ is the model predictive control horizon) subject to the time-of-use electricity price $\lambda_\tau$, and (14b) models the building states, where $g(\cdot)$ is parameterized as an ICRNN. As in the previous building control case, the constraints on the control actions $u_t$ and the system states $s_t$ are given in (14c) and (14d); for instance, the temperature set points as well as the real measurements should not exceed user-defined comfort regions. In Fig. 10, we illustrate the model's flexibility by using Seattle's Time-of-Use (TOU) price from Seattle City Light (http://www.seattle.gov/light/) and minimizing one week's electricity bill. We can see that the ICRNN captures the long-term relationship between the control variables and the final cost: it slightly raises energy consumption during off-peak prices, but reduces energy consumption during peak hours.
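To illustrate how a cost-minimization problem of the form (14) can be solved once the model is input convex, here is a minimal sketch (ours; the model `f` is a stand-in and all names are hypothetical) of projected gradient descent over the control sequence, with the box constraints (14c) handled by clamping. The state constraints (14d) are omitted here; in practice they could be handled with, e.g., penalty terms.

```python
import torch

def optimize_controls(f, u_init, prices, u_min, u_max, steps=200, lr=1e-2):
    """f: frozen input convex model mapping a control sequence (T, dim) to
    per-step predicted consumption (T,); prices: (T,) time-of-use tariff."""
    u = u_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        cost = (prices * f(u)).sum()    # objective (14a): price-weighted energy
        opt.zero_grad()
        cost.backward()                 # gradient w.r.t. the inputs, not the weights
        opt.step()
        with torch.no_grad():
            u.clamp_(u_min, u_max)      # projection onto the box constraint (14c)
    return u.detach()

# Hypothetical usage with a stand-in convex model:
T_horizon, dim = 36, 16
f = lambda u: (u ** 2).sum(dim=-1)      # placeholder convex map -> shape (T,)
u_star = optimize_controls(f, torch.zeros(T_horizon, dim),
                           torch.ones(T_horizon), -1.0, 1.0)
```

Because the objective is convex in $u$ when $f$ is input convex and the prices are non-negative, this simple first-order scheme converges to the global optimum of the constrained problem.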
E.2 EFFECTS OF CONTROL CONSTRAINTS

In Fig. 11 we add one more comparison of the effects of control constraints on the final control performance using the ICRNN. Interestingly, with different set-point constraints, the ICRNN finds similar solutions for off-peak electricity usage, which may correspond to necessary energy consumption such as lighting and ventilation. Moreover, when we set no constraints on the system, it cuts down more than 80% of the total energy during peak hours.
1. What is the main contribution of the paper regarding neural networks and their application in dynamic control problems? 2. What are the strengths of the proposed approach compared to non-convex models? 3. Are there any concerns or suggestions regarding the theoretical analysis, specifically regarding Lemma 1 and Theorem 1? 4. How does the reviewer assess the experimental results and their presentation in the paper? 5. Are there any questions or suggestions regarding the training process and optimization of the proposed neural networks? 6. What are some minor typos or formatting issues in the review?
Review
Review The paper proposes neural networks that are convex in the input data for control problems. These types of networks, constructed based on either MLPs or RNNs, are shown to have representation power similar to their non-convex versions, and thus are potentially able to better capture the dynamics behind complex systems compared with linear models. On the other hand, convexity in the inputs brings much convenience to the later optimization part, because there are no worries about global/local minima or escaping saddle points. In other words, convex-but-nonlinear provides not only enough search space, but also fast and tractable optimization. The compromise here is the size of memory, since 1) more weights and biases are needed to connect inputs and hidden layers in such nets and 2) we also need to store the negative parts for a portion of the weights. Even though the idea of convex networks was not new, this work is novel in extending to input convex RNNs and applying them to dynamic control problems. As the main theoretical contribution, Theorem 2 shows that to have the same representation power, input convex nets use a polynomial number of activation functions, compared with an exponential number when using a set of affine functions. Experiments also show such effectiveness. The paper is clearly and nicely written. These are the reasons I suggest accept. Questions and suggestions: 1) For Lemma 1 and Theorem 1, I wonder whether similar results can be established for non-convex functions. Intuitively, it seems that as long as we assume Lipschitz continuity, we can always approximate a function by a maximum of many affine functions, whether it is convex or not. Is this right, or is something missing? 2) In the main paper, all experiments were aimed at showing that ICNN and ICRNN have good accuracy, but not that they are easier to optimize due to convexity. In the abstract, it is mentioned "... using 5X less time", but I can only see this through the appendix. A suggestion is to at least describe some results comparing training time in the main paper. 3) In Appendix A, it seems the NN is not trained very well, as shown in the left figure. Is this because the number of parameters of the NN is restricted to be the same as in the ICNN? Does training on both spend the same resources, i.e., number of epochs? Such descriptions are necessary here. 4) In Table 2 in the appendix, why does the running time of ICNN increase by an order of magnitude for large H in the Ant case? Typos: Page 1 "simple control algorithms HAS ..." Page 7 paragraph "Baselines": "Such (a) method". In the last line of Table 2, 979.73 should be bold instead of 5577. There is a ?? in appendix D.4.
ICLR
Title Feature-map-level Online Adversarial Knowledge Distillation Abstract Feature maps contain rich information about image intensity and spatial correlation. However, previous online knowledge distillation methods only utilize the class probabilities. Thus in this paper, we propose an online knowledge distillation method that transfers not only the knowledge of the class probabilities but also that of the feature map using the adversarial training framework. We train multiple networks simultaneously by employing discriminators to distinguish the feature map distributions of different networks. Each network has its corresponding discriminator which discriminates the feature map from its own as fake while classifying that of the other network as real. By training a network to fool the corresponding discriminator, it can learn the other network's feature map distribution. Discriminators and networks are trained concurrently in a minimax two-player game. Also, we propose a novel cyclic learning scheme for training more than two networks together. We have applied our method to various network architectures on the classification task and discovered a significant improvement of performance, especially in the case of training a pair of a small network and a large one.

1 INTRODUCTION

With the advent of AlexNet (Krizhevsky et al., 2012), deep convolutional neural networks have achieved remarkable success in a variety of computer vision tasks. However, the high performance of deep neural networks is often gained by increasing the depth or the width of a network. Deep and wide networks require a large amount of computation as well as memory storage, which is not suitable for resource-limited environments such as mobile or embedded systems. To overcome this issue, much research has been conducted to develop smaller but accurate neural networks. Some of the well-known methods in this line of research are parameter quantization or binarization (Rastegari et al., 2016), pruning (Li et al., 2016) and knowledge distillation (KD) (Hinton et al., 2015). KD has been an active area of research as a solution to improve the performance of a light-weight network by transferring the knowledge of a large pre-trained network (or an ensemble of small networks) as a teacher network.
KD sets the teacher network's class probabilities as a target for a small student network to mimic. By aligning the student's predictions to those of the teacher, the student can improve its performance. Recently, some studies have shown that rather than using a pre-trained teacher, simultaneously training networks to learn from each other in a peer-teaching manner is also possible. This approach is called online distillation. Deep mutual learning (DML) (Zhang et al., 2018) and on-the-fly native ensemble (ONE) (Lan et al., 2018) are representative online distillation methods that show appealing results on image classification tasks. The conventional distillation method requires pre-training a powerful teacher network and performs a one-way transfer to a relatively small and untrained student network. In online mutual distillation, on the other hand, there is no fixed teacher-student role: all the student networks learn simultaneously by teaching each other from the start of training. Each network trains with the conventional cross-entropy loss from the ground-truth labels along with a mimicry loss to learn from its peers. Networks trained in such an online distillation manner achieve results superior not only to networks trained with the cross-entropy loss alone but also to those trained in the conventional offline distillation manner from a pre-trained teacher network.

However, the aforementioned online distillation methods make use of only the logit information. While the logit contains the probabilistic information over classes, the feature map, the output of a convolution layer, has more meaningful and abundant feature information on image intensity and spatial correlation. In offline distillation, which utilizes a pre-trained model as a teacher network, many methods such as FitNet (Romero et al., 2014), attention transfer (AT) (Zagoruyko & Komodakis, 2016a) and factor transfer (FT) (Kim et al., 2018) make use of this intermediate feature representation as a target for the student network to learn, but in online distillation, to the best of our knowledge, no feature map-based knowledge distillation method has been proposed. This is due to several challenges. Unlike offline methods, which have a clear target to mimic, there is no static target to follow in an online method. At every training iteration, the feature maps of the co-trained network change; thus, in online feature map-level distillation, the problem becomes how to properly mimic a moving target. While each node of the logit is confined to represent its assigned class probability, which does not change drastically over iterations, at the feature map level much more flexibility comes into play, which makes the problem more challenging. Therefore, direct aligning methods such as the L1 or L2 distance are not suitable for online mutual feature map distillation, because they update the network parameters to generate a feature map that mimics the current output feature map of the other network. In other words, the direct alignment method only tries to minimize the distance between the two feature map points (one for each network), and hence ignores the distributional difference between the two feature maps (Fig. 1(a)). To alleviate this problem, in this paper we propose a novel online distillation method that transfers the knowledge of feature maps adversarially, as well as a cyclic learning framework for training more than two networks simultaneously.
Unlike the direct aligning method, our adversarial distillation method enables a network to learn the overall feature map distribution of the co-trained network (Fig. 1(b)). Since the discriminator is trained to distinguish the difference between the networks' feature map distributions (containing the history of feature maps for different input images) at every training iteration, by fooling the discriminator the network learns the co-trained network's changing feature map distribution. Exchanging the knowledge of feature map distributions helps the networks converge to a better feature map manifold that generalizes better and yields more accurate results. Our method consists of two major losses: 1) a logit-based loss and 2) a feature map-based loss. The logit-based loss is defined by two different loss terms, the conventional cross-entropy (CE) loss and the mutual distillation loss using the Kullback-Leibler divergence (KLD). Our newly proposed feature map-based loss distills the feature map indirectly via discriminators. We use the feature map from the last convolution layer, since deeper convolution layers generate more meaningful features with a higher level of abstraction (Kim et al., 2018). The adversarial training scheme of generative adversarial networks (GAN) (Goodfellow et al., 2014) is utilized to transfer the knowledge at the feature map level. The contributions of this paper can be summarized as follows: 1) we propose an online knowledge distillation method that utilizes not only the logit but also the feature map from the convolution layer; 2) our method transfers the knowledge of feature maps not by directly aligning them using a distance loss but by learning their distributions using adversarial training via discriminators; 3) we propose a novel cyclic learning scheme for training more than two networks simultaneously.

2 RELATED WORK

The idea of model compression by transferring the knowledge of a high-performing model to a smaller model was originally proposed by Buciluǎ et al. (2006). In recent years, this research area was reinvigorated by the work on knowledge distillation (KD) of Hinton et al. (2015). The main contribution of KD is to use the softened logit of a pre-trained teacher network, which has higher entropy, as extra supervision to train a student network. KD trains a compact student network to learn not only from the conventional CE loss subject to the labeled data but also from the final outputs of the teacher network. While KD only utilizes the logit, methods such as FitNet (Romero et al., 2014), AT (Zagoruyko & Komodakis, 2016a), FT (Kim et al., 2018) and KTAN (Liu et al., 2018) use the intermediate feature representation to transfer the knowledge of a teacher network.

Online Knowledge Distillation: Conventional offline methods require training a teacher model in advance, while online methods do not require any pre-trained model. Instead, the networks teach each other mutually by sharing their knowledge throughout the training process. Some examples of recent online methods are DML (Zhang et al., 2018) and ONE (Lan et al., 2018), which demonstrate promising results. DML simply applies KD losses mutually, treating each other as teachers, and achieves results that are even better than the offline KD method. The drawback of DML is that it lacks an appropriate teacher role, and hence provides only limited information to each network. ONE pointed out this defect of DML.
Rather than mutually distilling between the networks, ONE generates a gated ensemble logit of the training networks and uses it as a target for each network to align with. ONE tries to create a powerful teacher logit that can provide more generalized information. The flaw of ONE is that it cannot train different network architectures at the same time, due to its architecture of sharing the low-level layers for the gating module. The common limitation of existing online methods is that they depend only on the logit and do not make any use of the feature map information. Considering that the KD loss term is only applicable to classification tasks, transferring knowledge at the feature map level can broaden the applicability to other tasks. Therefore, we propose a distillation method that utilizes not only the logit but also the feature map via adversarial training; moreover, our method can be applied in cases where the co-trained networks have different architectures.

Generative Adversarial Network (GAN): GAN (Goodfellow et al., 2014) is a generative model framework proposed with an adversarial training scheme, using a generator network G and a discriminator network D. G learns to generate the real data distribution while D is trained to distinguish the real samples of the dataset from the fake results generated by G. The goal of G is to trick D into mistaking the fake results for real samples. Though it was initially proposed for generative models, the adversarial training scheme is not limited to data generation. Adversarial training has been adapted to various tasks such as image translation (Isola et al., 2017; Zhu et al., 2017), captioning (Dai et al., 2017), semi-supervised learning (Miyato et al., 2016; Springenberg, 2015), reinforcement learning (Pfau & Vinyals, 2016), and many others. In this paper, we utilize GAN's adversarial training strategy to transfer knowledge at the feature map level in an online manner. The networks learn the other networks' feature map distributions by trying to deceive the discriminators, while the discriminators are trained to distinguish the different distributions of each network.

3 PROPOSED METHOD

In this section, we describe the overall process of our proposed Online Adversarial Feature map Distillation (AFD). As can be seen in Figure 2, when training two different networks, Θ1 and Θ2, in an online manner, we employ two discriminators, D1 and D2. We train D1 such that the feature map of Θ2 is regarded as real and that of Θ1 is classified as fake, and vice versa for discriminator D2. Then, each of the networks Θ1 and Θ2 is trained to fool its corresponding discriminator, so that it can generate a feature map that mimics the other network's feature map. Through this adversarial training, each network learns the feature map distribution of the other network. By exploiting the logit-based distillation loss and the feature map-based adversarial loss together, we observe a significant improvement of performance for various pairs of network architectures, especially when training small and large networks together. We also introduce a cyclic learning scheme for training more than two networks simultaneously. It reduces the number of required discriminators from $2 \cdot \binom{K}{2} = K(K-1)$ (when employing discriminators bidirectionally between every pair of networks) to $K$, where $K$ is the number of participating networks.
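As a small illustration of this reduction (our own sketch; the variable names are hypothetical), the bidirectional scheme needs one discriminator per ordered pair of networks, whereas the cyclic scheme described in Section 3.3 needs only one per network:

```python
K = 4  # number of co-trained networks
bidirectional_pairs = [(i, j) for i in range(K) for j in range(K) if i != j]
cyclic_pairs = [(k, (k + 1) % K) for k in range(K)]  # network k distills to k+1 (mod K)
print(len(bidirectional_pairs), len(cyclic_pairs))   # K*(K-1) = 12 vs. K = 4
```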
This cyclic learning framework not only requires less computation than the bidirectional way but also achieves better results compared to other online training schemes for multiple networks. First, we explain the conventional mutual knowledge distillation method conducted among the networks at the logit-level. Then we introduce our novel online feature map distillation method using the adversarial training scheme in addition to the cyclic learning framework for training more than two networks at the same time. 3.1 LOGIT-BASED MUTUAL KNOWLEDGE DISTILLATION We use two loss terms for logit-based learning, one is the conventional cross-entropy(CE) loss and the other is mutual distillation loss between networks based on Kullback Leibler(KL) divergence. We formulate our proposed method assuming training two networks. Training scheme for more than two networks will be explained in Sec 3.3. Below is the overall logit-based loss for two networks: L1logit = Lce(y, σ(z1)) + T 2 × Lkl(σ(z2/T ), σ(z1/T )) (1) L2logit = Lce(y, σ(z2)) + T 2 × Lkl(σ(z1/T ), σ(z2/T )). (2) Here, σ(·) refers to softmax function and z ∈ RC is the logit produced from a network for Cclass classification problem. The temperature term T is used to control the level of smoothness in probabilities. As the temperature term T goes up, it creates a more softened probability distribution. We use T = 3 for every experiment. Lce is the CE loss between the ground truth label y and the softmax output σ(z) that is commonly used in image classification. Lkl is the KL loss between the softened logit of each network. We multiply the KL loss term with T 2 because the gradients produced by the soft targets are scaled by 1/T 2. While the CE loss is between the correct labels and the outputs of the model, the KL loss is the KL distance between the outputs of two training networks. The KL loss provides an extra information from the peer network so that the network can improve its generalization performance. The difference with DML is that while DML updates asynchronously which means that it updates one network first and then the other network, our AFD updates the networks synchronously, not alternatingly. The CE loss trains the networks to predict the correct truth label while the mutual distillation loss tries to match the outputs of the peer-networks, enabling the networks to share the knowledge at logit-level. 3.2 FEATURE MAP-BASED LEARNING VIA ADVERSARIAL TRAINING Our AFD uses adversarial training to transfer knowledge at feature map-level. We formulate our adversarial feature map distillation for two networks which will be extended for more networks later. We divide a network into two parts, one is the feature extractor part that generates a feature map and the other is the classifier part that transforms the feature map into a logit. Each network also has a corresponding discriminator which distinguishes different feature map distributions. The architecture of the discriminator is simply a series of Conv-Batch Normalization-Leaky ReLU-Conv-Sigmoid. It takes a feature map of the last layer and it reduces the spatial size and the number of channel of the feature map as it goes through the convolution operation so that it can produce a single scalar value. Then we apply the sigmoid function of the value to normalize it between 0 and 1. We utilize the feature extractor part to enable feature map-level distillation. 
For convenience of mathematical notation, we name the feature extractor part $G_k$ and its discriminator $D_k$, where $k$ indicates the network index. As depicted in Figure 2, each network has to fool its discriminator to mimic the peer network's feature map, while the discriminator has to discriminate from which network a feature map originates. Following LSGAN (Mao et al., 2017), our overall adversarial losses for the discriminator and the feature extractor can be written as:

$$L_{D_1} = [1 - D_1(G_2(x))]^2 + [D_1(G_1(x))]^2, \qquad (3)$$
$$L_{G_1} = [1 - D_1(G_1(x))]^2. \qquad (4)$$

The feature extractors $G_1$ and $G_2$ take an input $x$ and generate feature maps. The discriminator $D_1$ takes a feature map and yields a scalar between 0 (fake) and 1 (real). It is trained to output 1 if the feature map comes from the co-trained network (in this case, $G_2$) or 0 if the feature map is produced by the network it belongs to ($G_1$ in this case). The goal of $D_1$ is to minimize the discriminator loss term $L_{D_1}$ by correctly distinguishing the two feature map distributions, while $G_1$'s goal is to minimize the loss term $L_{G_1}$ by fooling $D_1$ into mistakenly classifying $G_1$'s feature map as real and yielding 1. Each training network's objective is to minimize $L_{G_k}$ so as to mimic the peer network's feature map distribution. This adversarial scheme works in exactly the same way with the roles of the two networks exchanged. In cases where the two networks' feature map outputs have different channel sizes, for example a pair like (WRN-16-2, WRN-16-4) (Zagoruyko & Komodakis, 2016b), we use a transfer layer composed of a convolution layer, batch normalization and a ReLU, which converts the number of channels to that of the peer network. With the transfer layer $T_k$, the above loss terms become $L_{D_1} = [1 - D_1(T_2(G_2(x)))]^2 + [D_1(T_1(G_1(x)))]^2$ and $L_{G_1} = [1 - D_1(T_1(G_1(x)))]^2$.

Optimization: Combining the logit-based loss and the adversarial feature map-based loss, the overall losses for networks $\Theta_1$ and $\Theta_2$ are as follows:

$$L_{\Theta_1} = L^1_{logit} + L_{G_1}, \quad L_{\Theta_2} = L^2_{logit} + L_{G_2}. \qquad (5)$$

However, the logit-based loss term $L^k_{logit}$ and the feature map-based loss term $L_{G_k}$ are not optimized by the same optimizer. In fact, they are optimized alternately within the same mini-batch. At every mini-batch iteration, we feed an image through each model, which produces a logit and a feature map. Then we calculate the two loss terms and optimize the networks with respect to each loss separately, meaning that we update the parameters by the logit-based loss once and then update them again by the feature map-based loss. The reason we optimize each loss term separately is that they use different learning rates: the adversarial loss requires a much lower learning rate, so if we used the same optimizer with the same learning rate, the networks would not be optimized properly. Note that we do not run inference once per loss term; inference is conducted only once, and only the optimization is conducted twice, once for each loss term.
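A condensed sketch of one such training iteration for the pair $\Theta_1 = (G_1, C_1)$, $\Theta_2 = (G_2, C_2)$ is given below (our own PyTorch code; the function and optimizer names are hypothetical). It combines the logit losses (1)-(2) with the LSGAN-style losses (3)-(4). One deviation for the sake of a self-contained runnable example: the paper performs a single forward pass per iteration, whereas this sketch recomputes the features for the adversarial update after the logit-based parameter update.

```python
import torch.nn.functional as F

T = 3.0  # softmax temperature, as in Sec. 3.1

def kd_kl(teacher_logits, student_logits):
    # T^2 * KL(softmax(teacher/T) || softmax(student/T)), as in Eqs. (1)-(2).
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * T * T

def train_step(x, y, G1, C1, G2, C2, D1, D2, opt_logit, opt_feat, opt_disc):
    # opt_logit: SGD over both networks; opt_feat: Adam over G1, G2;
    # opt_disc: Adam over D1, D2.
    f1, f2 = G1(x), G2(x)
    z1, z2 = C1(f1), C2(f2)
    # Logit-based update: CE + mutual KL; peer logits are detached so each
    # network treats the other's output as a fixed target (our choice).
    loss_logit = (F.cross_entropy(z1, y) + kd_kl(z2.detach(), z1) +
                  F.cross_entropy(z2, y) + kd_kl(z1.detach(), z2))
    opt_logit.zero_grad(); loss_logit.backward(); opt_logit.step()

    # Discriminator update, Eq. (3): the peer's feature map is "real".
    f1d, f2d = f1.detach(), f2.detach()
    loss_disc = (((1 - D1(f2d)) ** 2).mean() + (D1(f1d) ** 2).mean() +
                 ((1 - D2(f1d)) ** 2).mean() + (D2(f2d) ** 2).mean())
    opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()

    # Feature-extractor update, Eq. (4): fool the own discriminator.
    g1, g2 = G1(x), G2(x)   # recomputed here; the paper reuses one forward pass
    loss_feat = ((1 - D1(g1)) ** 2).mean() + ((1 - D2(g2)) ** 2).mean()
    opt_feat.zero_grad(); loss_feat.backward(); opt_feat.step()
    # Gradients that loss_feat leaves in D1/D2 are cleared by
    # opt_disc.zero_grad() at the next iteration.
```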
3.3 CYCLIC LEARNING FRAMEWORK

Our method proposes a novel cyclic peer-learning scheme for training more than two networks simultaneously. As can be seen in Figure 3, each network transfers its knowledge to its next peer network in a one-way cyclic manner. If we train $K$ networks together, each network distills its knowledge to the next one, except the last network, which transfers its knowledge to the first, creating a cyclic knowledge-transfer flow: $1 \to 2, 2 \to 3, \cdots, (K-1) \to K, K \to 1$. The main benefit of this cyclic learning framework is that it avoids employing too many discriminators: if we applied our adversarial loss to every pair of networks, it would demand twice the number of all possible pairs of the $K$ networks, which would cost a lot of computation. In Sec. 4.5, we also empirically show that our cyclic training scheme is better than other online methods' training schemes for multiple networks.

4 EXPERIMENT

In this section, to show the adequacy of our method, we first present a comparison experiment with the direct alignment (distance) method and an ablation study to analyze our method. Then we compare our approach with existing online knowledge distillation methods under different settings. First, we demonstrate results using the same sub-network architectures in Sec. 4.3. Then, we apply our method to sub-networks with different architectures in Sec. 4.4. In Sec. 4.5, we also show the results of training more than two networks, to demonstrate that our method generalizes well even when the number of networks increases. In most of the experiments, we use the CIFAR-100 (Krizhevsky et al.) dataset. It consists of 50K training images and 10K test images over 100 classes, and thus has 600 images per class. All reported results on CIFAR-100 are averages over 5 runs. Since our method uses two loss terms, a logit-based loss and a feature map-based loss, we use different learning details for each. For the overall learning schedule, we follow that of ONE (Lan et al., 2018) for a fair comparison, namely 300 epochs of training. For the logit-based loss, the learning rate starts at 0.1 and is multiplied by 0.1 at epochs 150 and 225. We optimize the logit-based loss using SGD with a mini-batch size of 128, momentum 0.9 and weight decay of 1e-4. These learning details for the logit-based loss are applied equally to the other compared online distillation methods. For the feature map-based loss, the learning rate starts at 2e-5 for both the discriminators and the feature extractors and is decayed by 0.1 at epochs 75 and 150. The feature map-based loss is optimized by Adam (Kingma & Ba, 2014) with the same mini-batch size and a weight decay of 1e-1. In the tables, '2 Net Avg' and 'Ens' denote the average accuracy of the two sub-networks and the ensemble accuracy, respectively. The average ensemble is used for AFD, DML and KD, while ONE uses a gated ensemble of sub-networks according to its methodology.

4.1 COMPARISON WITH DIRECT FEATURE MAP ALIGNMENT METHODS

Since our goal is to distill feature map information in a way that suits mutual online distillation, we briefly compare our method with the conventional direct alignment method in Table 1. We train two networks together; in one setting we use the same architecture (ResNet-32 (He et al., 2016)), and in the other we use different types (WRN-16-2, WRN-28-2 (Zagoruyko & Komodakis, 2016b)). For L1, each network is trained not only to follow the ground-truth label via the CE loss, but also to mimic the other network's feature map using the L1 distance loss. For L1 + KD, the KD (Hinton et al., 2015) loss is applied mutually along with the L1 loss between the feature maps. We also compare our results with an offline method: L1 + KD (offline) employs a pre-trained network as a teacher and distills its feature map knowledge to an untrained student network via the L1 loss as well as the KD loss at the logit level. ResNet-32 and WRN-28-2, with 69.79% and 73.62% accuracy, are used as the teacher networks in the two settings, respectively.
The results clearly show that learning the distributions of feature maps with the adversarial loss performs better than the direct alignment method in both mutual online distillation and offline distillation. We observe that using the L1 distance loss actually hinders the networks from learning good features in the online environment: the accuracy of ResNet-32 drops by more than 2% compared to its vanilla accuracy (69.38%), and the accuracy of WRN-16-2 is also lower than that of its vanilla network (71.07%). Even when combined with the KD loss (L1 + KD), the direct alignment method performs poorly compared to ours in both the online and offline settings. Though the distance loss is used in many conventional offline methods, it suffers in the online environment. For different architecture types, our method also outperforms the direct alignment method. This indicates that for online feature map distillation, transferring feature map information with a direct alignment method such as the L1 distance is worse than indirect distillation of the feature map distribution via the adversarial loss.

4.2 ABLATION STUDY

Table 2 shows the ablation study of our proposed method. We conduct experiments using both the same and different sub-network architectures. We run three experiments with different training settings for each model case: the full model, the model without mutual knowledge distillation at the logit level, and the model without adversarial feature map distillation. When trained without the adversarial feature map distillation, the accuracy decreases in all three model cases: the accuracies of ResNet-32 and WRN-16-2 drop by 0.65% and 0.52%, respectively, and those of the (WRN-16-2, WRN-28-2) pair decline by 0.89% and 0.44% compared to the full model. The ensemble results are also lower than those of the full models. When only the adversarial feature map distillation is applied, the accuracy increases by 0.71% and 0.87% compared to the vanilla versions of ResNet-32 and WRN-16-2, respectively. Especially in the case of different sub-network architectures, the accuracy of WRN-16-2 increases by almost 1%. Based on these experiments, we confirm that adversarial feature map distillation is effective at improving performance in the online environment.

4.3 SAME ARCHITECTURE

We compare our method with DML and ONE for training two sub-networks with the same architecture. The vanilla network refers to the original network trained without any distillation method. As shown in Table 3, for both the ResNet and WRN series, DML, ONE and AFD all improve the networks' accuracy compared to the vanilla networks. However, AFD shows the highest performance improvement in both sub-network and ensemble accuracy among the compared distillation methods. Especially for ResNet-20, ResNet-32 and WRN-16-2, our method significantly improves the accuracy by more than 4% compared to the vanilla versions, while the other distillation methods improve by around 3% on average, except for DML's ResNet-32.

4.4 DIFFERENT ARCHITECTURE

In this section, we compare our method with DML and KD using different network architectures. We set Net2 as the higher-capacity network. For KD, we use the ensemble of the two sub-networks as a teacher to mimic at every iteration; the difference from the original KD (Hinton et al., 2015) is that it is an online learning method, not an offline one. We did not include ONE because it cannot be applied when the sub-networks have different model types, due to its architecture of sharing the low-level layers.
In Table 4, we observe that our method yields larger performance improvements than the other methods for both Net1 and Net2, except for a couple of cases. An interesting result is that when AFD is applied, the performance of Net1 (the smaller network) improves significantly compared to the other online distillation methods. This is because AFD transfers the higher-capacity network’s meaningful knowledge (its feature map distribution) to the lower-capacity one better than other online methods do. Compared with KD and DML, AFD’s Net1 accuracy is higher by 1.66% and 1.15%, and its ensemble accuracy is better by 0.61% and 0.67% on average, respectively. For the (WRN-16-2, WRN-28-4) pair, Net1 (0.70M parameters) is more than 8 times smaller than Net2 (5.87M). Despite this large size difference, our method improves both networks’ accuracy; in particular, our Net1 performance exceeds KD and DML by 1.72% and 1.28%, respectively. The performance of KD and DML seems to decline as the difference between the two model sizes grows. This experiment shows that our method also works properly for different sub-network architectures, even when the two networks differ greatly in model size; with our method, the smaller network benefits considerably from the larger one.

4.5 EXPANSION TO 3 NETWORKS

To show our method’s expandability to more than two networks, we conduct an experiment training 3 networks. As proposed in Sec 3.3, our method uses the cyclic learning framework rather than employing the adversarial loss between every pair of networks, in order to reduce computation and memory. DML calculates the mutual knowledge distillation loss between every pair of networks and uses the average of the losses. ONE generates a gated ensemble of the sub-networks and transfers the knowledge of the ensemble logit to each network. As can be seen in Table 5, AFD outperforms the compared online distillation methods in both 3-Net average and ensemble accuracy for every model type. Comparing the results of Table 5 to those of Table 3, the overall tendency of performance gains over DML and ONE is maintained.

4.6 IMAGENET EXPERIMENT

We evaluate our method on ImageNet to show that it is also applicable to a large-scale image dataset. We use ImageNet LSVRC 2015 (Russakovsky et al., 2015), which has 1.2M training images and 50K validation images over 1,000 classes. We compare our method with DML using the pre-trained networks ResNet-18 and ResNet-34 as a pair; the results are after 30 epochs of training. As shown in Table 6, our method improves the networks more than DML does.

5 CONCLUSION

We proposed an online knowledge distillation method that transfers knowledge not only at the logit level but also at the feature map level using the adversarial training scheme. Unlike existing online distillation methods, our method utilizes the feature map information and shows that knowledge transfer at the feature map level is possible even in an online environment. Through extensive experiments, we demonstrated the adequacy of distribution learning via adversarial training for online feature map distillation and achieved better performance than existing online methods. We also introduced a novel cyclic learning framework for training multiple networks concurrently and demonstrated its efficacy by comparison with existing approaches.
We also confirmed that our method is broadly applicable to various architecture types, from a very small network (ResNet-20) to a large one (WRN-28-4). We hope this work encourages further advances and study in the area of knowledge distillation.
1. What is the focus of the paper, and what are the proposed methods?
2. What are the strengths of the proposed approach, particularly in terms of its performance in ablation studies and comparisons with other works?
3. What are the weaknesses of the paper, especially regarding theoretical analysis and experiment details?
4. How can the authors improve the paper by providing more thorough analyses and experimental results?
Review
Review In this paper, the authors study the online knowledge distillation problem and propose a method called AFD (Online Adversarial Feature map Distillation), which aims to transfer the knowledge of intermediate feature maps (proposed here for the first time) using adversarial training. A cyclic learning scheme is then proposed to train more than two networks simultaneously and efficiently. The ablation study on CIFAR-100 shows that the adversarial training in AFD improves accuracy significantly, while direct methods such as the L1 distance perform worse. The comparison experiments with several online distillation methods also show the effectiveness of the proposed method. Some comments or suggestions:
(i) The theoretical analysis is lacking. For example, some formal analysis or proofs could be added to illustrate why adversarial feature map distillation is more advantageous than direct feature map alignment.
(ii) Details of the experiments, such as parameter configurations, are missing, which makes the results hard to reproduce.
(iii) Tables 1 and 2 could be combined.
1. What is the focus and contribution of the paper on online knowledge distillation?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and efficiency compared to prior works?
3. Do you have any concerns or questions about the methodology, such as the choice of feature maps, computational expense, and optimization difficulties?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any comparisons or discussions missing in the paper that would enhance its validity and impact?
Review
Review A new online knowledge distillation method is investigated that utilizes feature map information alongside the logits via GANs. Instead of direct feature map alignment, the algorithm tries to transfer the distribution of the feature maps. There is no teacher per se; the big and small nets are trained via an adversarial game in which two discriminators try to distinguish the feature map distributions of the two nets. The idea is understandable, but some issues remain:
1- Training GANs is by itself an expensive task and the optimization is difficult, so how computationally expensive is this online KD compared to the offline one?
2- It is not clear from the paper which layers' feature maps are being used. If multiple layers are considered, how did you choose which ones are better? Also, the smaller model has a different structure; how did you choose to pair feature maps between the big model and the small model?
3- What would be the performance difference compared to offline knowledge distillation? For example, in Table 1 can you please add a column with offline KD?
4- Mutual training brings some generalization; when you compare the results in Table 3 with the vanilla model, I wonder whether you made sure the gain is not only due to better generalization.
5- In the cyclic learning framework, it is not well motivated why one would want to train multiple networks that mimic each other's behavior; the complexity also increases in that case, which makes me wonder whether it would be better to do it offline.
ICLR
Title Feature-map-level Online Adversarial Knowledge Distillation Abstract Feature maps contain rich information about image intensity and spatial correlation. However, previous online knowledge distillation methods only utilize the class probabilities. Thus in this paper, we propose an online knowledge distillation method that transfers not only the knowledge of the class probabilities but also that of the feature map using the adversarial training framework. We train multiple networks simultaneously by employing discriminators to distinguish the feature map distributions of different networks. Each network has its corresponding discriminator which discriminates the feature map from its own as fake while classifying that of the other network as real. By training a network to fool the corresponding discriminator, it can learn the other network’s feature map distribution. Discriminators and networks are trained concurrently in a minimax twoplayer game. Also, we propose a novel cyclic learning scheme for training more than two networks together. We have applied our method to various network architectures on the classification task and discovered a significant improvement of performance especially in the case of training a pair of a small network and a large one. N/A Feature maps contain rich information about image intensity and spatial correlation. However, previous online knowledge distillation methods only utilize the class probabilities. Thus in this paper, we propose an online knowledge distillation method that transfers not only the knowledge of the class probabilities but also that of the feature map using the adversarial training framework. We train multiple networks simultaneously by employing discriminators to distinguish the feature map distributions of different networks. Each network has its corresponding discriminator which discriminates the feature map from its own as fake while classifying that of the other network as real. By training a network to fool the corresponding discriminator, it can learn the other network’s feature map distribution. Discriminators and networks are trained concurrently in a minimax twoplayer game. Also, we propose a novel cyclic learning scheme for training more than two networks together. We have applied our method to various network architectures on the classification task and discovered a significant improvement of performance especially in the case of training a pair of a small network and a large one. 1 INTRODUCTION With the advent of Alexnet (Krizhevsky et al., 2012), deep convolution neural networks have achieved remarkable success in a variety of computer vision tasks. However, high-performance of deep neural network is often gained by increasing the depth or the width of a network. Deep and wide networks cost a large number of computation as well as memory storage which is not suitable for a resource-limited environment such as mobile or embedded systems. To overcome this issue, many researches have been conducted to develop smaller but accurate neural networks. Some of the well-known methods in this line of research are parameter quantization or binarization (Rastegari et al., 2016), pruning (Li et al., 2016) and knowledge distillation (KD) (Hinton et al., 2015). KD has been an active area of research as a solution to improve the performance of a light-weight network by transferring the knowledge of a large pre-trained network (or an ensemble of small networks) as a teacher network. 
KD sets the teacher network’s class probabilities as a target which a small student network tries to mimic. By aligning the student’s predictions to those of the teacher, the student can improve its performance. Recently, some studies have shown that rather than using a pretrained teacher, simultaneously training networks to learn from each other in a peer-teaching manner is also possible. This approach is called online distillation. Deep mutual learning (DML) (Zhang et al., 2018) and on-the-fly native ensemble (ONE) (Lan et al., 2018) are the representative online distillation methods that show appealing results in the image classification tasks. Conventional distillation method requires pre-training a powerful teacher network and performs an one-way transfer to a relatively small and untrained student network. On the other hand, in online mutual distillation, there is no specific teacher-student role. All the student networks learn simultaneously by teaching each other from the start of training. It trains with the conventional cross-entropy loss from the ground truth label along with the mimicry loss to learn from its peers. Networks trained in such an online distillation way achieve results superior not only to the networks trained with the cross-entropy loss alone but also to those trained in conventional offline distillation manner from a pre-trained teacher network. However, aforementioned online distillation methods make use of only the logit information. While the logit contains the probabilistic information over classes, the feature map, the output of convolution layer, has more meaningful and abundant feature information on image intensity and spatial correlation. In offline distillation which utilizes a pre-trained model as a teacher network, many methods such as FitNet (Romero et al., 2014), attention transfer (AT) (Zagoruyko & Komodakis, 2016a) and factor transfer (FT) (Kim et al., 2018) make use of this intermediate feature representation as a target to learn for the student network, but in online distillation, to the best of our knowledge, no feature map-based knowledge distillation method has been proposed. This is due to some challenges. Unlike the offline methods that have a clear target to mimic, there is no static target to follow in an online method. At every training iteration, the feature maps of the co-trained network change, thus in online feature map-level distillation, the problem turns into mimicking the moving target properly. While each node of the logit is confined to represent its assigned class probability which does not change drastically over iterations, at the feature map-level, much more flexibility comes into play, which makes the problem more challenging. Therefore, the direct aligning method such as using L1 or L2 distance is not suitable for online mutual feature map distillation because it updates the network parameters to generate a feature map that tries to mimic the current output feature map of the other network. In other words, the direct alignment method only tries to minimize the distance between the two feature map points (one for each network), hence it ignores the distributional difference between the two feature maps (Fig. 1(a)). To alleviate this problem, in this paper, we propose a novel online distillation method that transfers the knowledge of feature maps adversarially as well as a cyclic learning framework for training more than two networks simultaneously. 
Unlike the direct aligning method, our adversarial distillation method enables a network to learn the overall feature map distribution of the co-trained network (Fig. 1(b)). Since the discriminator is trained to distinguish the difference between the networks’ feature map distributions (containing the history of feature maps for different input images) at every training iteration, by fooling the discriminator, the network learns the co-trained network’s changing feature map distribution. Exchanging the knowledge of feature map distribution facilitates the networks to converge to a better feature map manifold that generalizes better and yields more accurate results. Our method consists of two major losses: 1) logit-based loss and 2) feature map-based loss. Logitbased loss is defined by two different loss terms which are conventional cross-entropy (CE) loss and the mutual distillation loss using the Kullback-Leibler divergence (KLD). Our newly proposed feature map-based loss is to distill the feature map indirectly via discriminators. We use the feature map from the last convolution layer since deeper convolution layer generates more meaningful features with a high-level abstraction (Kim et al., 2018). The adversarial training scheme of generative adversarial networks (GAN) (Goodfellow et al., 2014) is utilized to transfer the knowledge at feature map-level. The contributions of this paper can be summarized as follows: 1) we propose an online knowledge distillation method that utilizes not only the logit but also the feature map from the convolution layer. 2) Our method transfers the knowledge of feature maps not by directly aligning them using the distance loss but by learning their distributions using the adversarial training via discriminators. 3) We propose a novel cyclic learning scheme for training more than two networks simultaneously. 2 RELATED WORK The idea of model compression by transferring the knowledge of a high performing model to a smaller model was originally proposed by Buciluǎ et al. (2006). Then in recent years, this research area got invigorated due to the work of knowledge distillation (KD) by Hinton et al. (2015). The main contribution of KD is to use the softened logit of pre-trained teacher network that has higher entropy as an extra supervision to train a student network. KD trains a compact student network to learn not only by the conventional CE loss subjected to the labeled data but also by the final outputs of the teacher network. While KD only utilizes the logit, method such as FitNet (Romero et al., 2014), AT (Zagoruyko & Komodakis, 2016a), FT (Kim et al., 2018) and KTAN (Liu et al., 2018) use the intermediate feature representation to transfer the knowledge of a teacher network. Online Knowledge Distillation: Conventional offline methods require training a teacher model in advance while online methods do not require any pre-trained model. Instead, the networks teach each other mutually by sharing their knowledge throughout the training process. Some examples of recent online methods are DML (Zhang et al., 2018) and ONE (Lan et al., 2018) which demonstrate promising results. DML simply applies KD losses mutually, treating each other as teachers, and it achieves results that is even better than the offline KD method. The drawback of DML is that it lacks an appropriate teacher role, hence provides only limited information to each network. ONE pointed out this defect of DML. 
Rather than mutually distilling between the networks, ONE generates a gated ensemble logit of the training networks and uses it as a target to align for each network. ONE tries to create a powerful teacher logit that can provide more generalized information. The flaw of ONE is that it can not train different network architectures at the same time due to its architecture of sharing the low-level layers for the gating module. The common limitation of existing online methods is that they are dependent only on the logit and do not make any use of the feature map information. Considering that KD loss term is only applicable to the classification task, transferring knowledge at feature map-level can enlarge the applicability to other tasks. Therefore, our method proposes a distillation method that utilizes not only the logit but also the feature map via adversarial training, moreover, our method can be applied in case where the co-trained networks have different architectures. Generative Adversarial Network (GAN): GAN (Goodfellow et al., 2014) is a generative model framework that is proposed with an adversarial training scheme, using a generator network G and a discriminator network D. G learns to generate the real data distribution while D is trained to distinguish the real samples of the dataset from the fake results generated by G. The goal of G is to trickD to make a mistake of determining the fake results as the real samples. Though it was initially proposed for generative models, its adversarial training scheme is not limited to data generation. Adversarial training has been adapted to various tasks such as image translation (Isola et al., 2017; Zhu et al., 2017), captioning (Dai et al., 2017), semi-supervised learning (Miyato et al., 2016; Springenberg, 2015), reinforcement learning (Pfau & Vinyals, 2016), and many others. In this paper, we utilize GAN’s adversarial training strategy to transfer the knowledge at feature map-level in an online manner. The networks learn the other networks’ feature map distributions by trying to deceive the discriminators while the discriminators are trained to distinguish the different distributions of each network. 3 PROPOSED METHOD In this section, we describe the overall process of our proposed Online Adversarial Feature map Distillation (AFD). As can be seen in Figure 2, when training two different networks, Θ1 and Θ2, in an online manner, we employ two discriminators, D1 and D2. We train D1 such that the feature map of Θ2 is regarded as a real and that of Θ1 is classified as a fake and do vice versa for discriminator D2. Then, each network Θ1 and Θ2 are trained to fool its corresponding discriminator so that it can generate a feature map that mimics the other network’s feature map. Throughout this adversarial training, each network learns the feature map distribution of the other network. By exploiting both logit-based distillation loss and feature map-based adversarial loss together, we could observe a significant improvement of performance in various pairs of network architectures especially when training small and large networks together. Also we introduce a cyclic learning scheme for training more than two networks simultaneously. It reduces the number of required discriminators from 2×2 CK(when employing discriminators bidirectionally between every network pairs.) to K where K is the number of networks participating. 
This cyclic learning framework not only requires less computation than the bidirectional way but also achieves better results compared to other online training schemes for multiple networks. First, we explain the conventional mutual knowledge distillation method conducted among the networks at the logit-level. Then we introduce our novel online feature map distillation method using the adversarial training scheme in addition to the cyclic learning framework for training more than two networks at the same time. 3.1 LOGIT-BASED MUTUAL KNOWLEDGE DISTILLATION We use two loss terms for logit-based learning, one is the conventional cross-entropy(CE) loss and the other is mutual distillation loss between networks based on Kullback Leibler(KL) divergence. We formulate our proposed method assuming training two networks. Training scheme for more than two networks will be explained in Sec 3.3. Below is the overall logit-based loss for two networks: L1logit = Lce(y, σ(z1)) + T 2 × Lkl(σ(z2/T ), σ(z1/T )) (1) L2logit = Lce(y, σ(z2)) + T 2 × Lkl(σ(z1/T ), σ(z2/T )). (2) Here, σ(·) refers to softmax function and z ∈ RC is the logit produced from a network for Cclass classification problem. The temperature term T is used to control the level of smoothness in probabilities. As the temperature term T goes up, it creates a more softened probability distribution. We use T = 3 for every experiment. Lce is the CE loss between the ground truth label y and the softmax output σ(z) that is commonly used in image classification. Lkl is the KL loss between the softened logit of each network. We multiply the KL loss term with T 2 because the gradients produced by the soft targets are scaled by 1/T 2. While the CE loss is between the correct labels and the outputs of the model, the KL loss is the KL distance between the outputs of two training networks. The KL loss provides an extra information from the peer network so that the network can improve its generalization performance. The difference with DML is that while DML updates asynchronously which means that it updates one network first and then the other network, our AFD updates the networks synchronously, not alternatingly. The CE loss trains the networks to predict the correct truth label while the mutual distillation loss tries to match the outputs of the peer-networks, enabling the networks to share the knowledge at logit-level. 3.2 FEATURE MAP-BASED LEARNING VIA ADVERSARIAL TRAINING Our AFD uses adversarial training to transfer knowledge at feature map-level. We formulate our adversarial feature map distillation for two networks which will be extended for more networks later. We divide a network into two parts, one is the feature extractor part that generates a feature map and the other is the classifier part that transforms the feature map into a logit. Each network also has a corresponding discriminator which distinguishes different feature map distributions. The architecture of the discriminator is simply a series of Conv-Batch Normalization-Leaky ReLU-Conv-Sigmoid. It takes a feature map of the last layer and it reduces the spatial size and the number of channel of the feature map as it goes through the convolution operation so that it can produce a single scalar value. Then we apply the sigmoid function of the value to normalize it between 0 and 1. We utilize the feature extractor part to enable feature map-level distillation. 
For the convenience of mathematical notation, we name the feature extractor part as Gk and its discriminator as Dk, k indicates the network number. As depicted in Figure 2, each network has to fool its discriminator to mimic the peer network’s feature map and the discriminator has to discriminate from which network the feature map is originated. Following LSGAN (Mao et al., 2017), our overall adversarial loss for discriminator and the feature extractor can be written as below: LD1 = [1−D1(G2(x))]2 + [D1(G1(x))]2 (3) LG1 = [1−D1(G1(x))]2. (4) The feature extractors G1 and G2 take input x and generate feature maps. The discriminator D1 takes a feature map and yields a scalar between 0 (fake) and 1 (real). It is trained to output 1 if the feature map came from the co-trained network (in this case, G2) or 0 if the feature map is produced from the network it belongs to (G1 in this case). The goal ofD1 is to minimize the discriminator loss term LD1 by correctly distinguishing the two different feature map distributions while G1’s goal is to minimize the loss term LG1 by fooling D1 to make mistake of determining G1’s feature map as real and yield 1. Each training network’s object is to minimize LGk to mimic the peer network’s feature map distribution. This adversarial scheme works exactly the same by changing the role of two networks. In case where the two networks’ feature map outputs have different channel sizes, for example a pair like (WRN-16-2, WRN-16-4) (Zagoruyko & Komodakis, 2016b), we use a transfer layer that is composed of a convolution layer, a batch normalization and a ReLU which converts the number of channels to that of peer network. The above loss terms change as LD1 = [1−D1(T2(G2(x)))]2 + [D1(T1(G1(x)))] 2 and LG1 = [1−D1(T1(G1(x)))]2 when using the transfer layer Tk. Optimization: Combining both logit-based loss and the adversarial feature map-based loss, the overall loss for each network Θ1 and Θ2 are as follows: LΘ1 = L1logit + LG1 , LΘ2 = L2logit + LG2 (5) However, the logit-based loss termLklogit and the feature map-based loss termLGk are not optimized by the same optimizer. In fact, they are optimized alternatingly in a same mini-batch. At every minibatch iteration, we infer an image into a model and it computes a logit and a feature map. Then we calculate the two loss terms and optimize the networks based on the two losses separately, meaning that we update the parameters by the logit-based loss once and then update again by the feature map-based loss. The reason we optimize separately for each loss term is because they use different learning rates. The adversarial loss requires much slower learning rate thus if we use the same optimizer with the same learning rate, the networks would not be optimized. Note that we do not infer for each loss term, inference is conducted only once, only the optimization is conducted twice, one for each loss term. 3.3 CYCLIC LEARNING FRAMEWORK Our method proposes a novel cyclic peer-learning scheme for training more than two networks simultaneously. As can be seen in Figure 3, each network transfers its knowledge to its next peer network in an one-way cyclic manner. If we train K number of networks together, each network distills its knowledge to its next network except the last network transfers its knowledge to the first network, creating a cyclic knowledge transfer flow as 1 → 2, 2 → 3, · · · , (K − 1) → K,K → 1. The main contribution of using this cyclic learning framework is to avoid employing too many number of discriminators. 
If we applied our adversarial loss to every pair of networks, it would demand two discriminators for every possible pair of the K networks, i.e., K(K − 1) discriminators, which would cost a lot of computation. Also, in Sec 4.5, we empirically show that our cyclic training scheme is better than the training schemes of other online methods for multiple networks. 4 EXPERIMENT In this section, to show the adequacy of our method, we first present a comparison with direct feature map alignment methods and an ablation study to analyze our method. Then we compare our approach with existing online knowledge distillation methods under different settings. First, we demonstrate results using the same sub-network architectures in Sec 4.3. Then, we apply our method to sub-networks with different architectures in Sec 4.4. In Sec 4.5, we also show the results of training more than two networks to demonstrate that our method generalizes well even when the number of networks increases. In most of the experiments, we use the CIFAR-100 (Krizhevsky et al.) dataset. It consists of 50K training images and 10K test images over 100 classes; accordingly, it has 600 images per class. All reported results on CIFAR-100 are averages of 5 runs. Since our method uses two loss terms, a logit-based loss and a feature map-based loss, we use different learning details for each loss term. For the overall learning schedule, we follow the schedule of ONE (Lan et al., 2018), which is 300 epochs of training, to conduct a fair comparison. For the logit-based loss, the learning rate starts at 0.1 and is multiplied by 0.1 at epochs 150 and 225. We optimize the logit-based loss using SGD with mini-batch size 128, momentum 0.9 and weight decay 1e-4. These learning details for the logit-based loss are equally applied to the other compared online distillation methods. For the feature map-based loss, the learning rate starts at 2e-5 for both discriminators and feature extractors and is decayed by 0.1 at epochs 75 and 150. The feature map-based loss is optimized by Adam (Kingma & Ba, 2014) with the same mini-batch size and weight decay of 1e-1. In tables, '2 Net Avg' and 'Ens' represent the average accuracy of the two sub-networks and the ensemble accuracy, respectively. The average ensemble is used for AFD, DML and KD, while ONE uses a gated ensemble of sub-networks according to its methodology. 4.1 COMPARISON WITH DIRECT FEATURE MAP ALIGNMENT METHODS Since our goal is to distill feature map information in a way that suits mutual online distillation, we briefly compare our method with a conventional direct alignment method in Table 1. We train two networks together; in one setting, we use the same architecture (ResNet-32 (He et al., 2016)), and in the other, we use different types (WRN-16-2, WRN-28-2 (Zagoruyko & Komodakis, 2016b)). For L1, each network is trained not only to follow the ground-truth label via the CE loss, but also to mimic the other network's feature map using the L1 distance loss. For L1 + KD, the KD (Hinton et al., 2015) loss is applied mutually along with the L1 loss between the feature maps. We also compare our results with an offline method: L1 + KD (offline) employs a pre-trained network as a teacher and distills its feature map knowledge to an untrained student network via the L1 loss, as well as the KD loss at the logit level. ResNet-32 and WRN-28-2, which show 69.79% and 73.62% accuracy, are used as the teacher networks in the two settings, respectively.
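For contrast with the adversarial scheme, the L1 direct-alignment baseline of Table 1 can be sketched as below; whether the peer's map is detached, and the mutual-KD helper name, are hypothetical choices on our part.

```python
import torch.nn.functional as F

def l1_align_loss(f_own, f_peer):
    """Direct alignment baseline: pull this network's feature map
    toward the peer's map elementwise (peer treated as a fixed target)."""
    return F.l1_loss(f_own, f_peer.detach())

# "L1" row:      loss = F.cross_entropy(z, y) + l1_align_loss(f1, f2)
# "L1 + KD" row: additionally add the mutual KD (KL) term at the logit level
```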
The results clearly show that learning the distributions of feature maps with an adversarial loss performs better than the direct alignment method in both mutual online distillation and offline distillation. We observe that using the L1 distance loss actually prevents the networks from learning good features in the online environment. The accuracy of ResNet-32 dropped by more than 2% compared to its vanilla version (69.38%), and the accuracy of WRN-16-2 is also lower than that of its vanilla network (71.07%). Even when combined with the KD loss (L1 + KD), the direct alignment method shows poor performance compared to ours in both the online and offline settings. Though the distance loss is used in many conventional offline methods, it suffers in the online environment. In the case of different architecture types, our method also outperforms the direct alignment method. This indicates that for online feature map distillation, transferring feature map information with a direct alignment method such as the L1 distance is worse than indirect distillation that learns the feature map distribution via an adversarial loss. 4.2 ABLATION STUDY Table 2 shows the ablation study of our proposed method. We conduct experiments using the same and different sub-network architectures, running three experiments with different training settings for each model case. The three settings are: the full model, without mutual knowledge distillation at the logit level, and without adversarial feature map distillation. When trained without the adversarial feature map distillation, the accuracy decreases in all three model cases. The accuracies of ResNet-32 and WRN-16-2 dropped by 0.65% and 0.52%, respectively, and those of the (WRN-16-2, WRN-28-2) pair declined by 0.89% and 0.44% compared to the full model. The ensemble results are also lower than those of the full models. When only the adversarial feature map distillation is applied, the accuracy increased by 0.71% and 0.87% compared to the vanilla versions of ResNet-32 and WRN-16-2, respectively. Especially in the case of different sub-network architectures, the accuracy of WRN-16-2 increased by almost 1%. Based on these experiments, we confirm that adversarial feature map distillation is effective at improving performance in the online environment. 4.3 SAME ARCHITECTURE We compare our method with DML and ONE for training two sub-networks with the same architecture. The vanilla network refers to the original network trained without any distillation method. As shown in Table 3, in both the ResNet and WRN series, DML, ONE and AFD all improve the networks' accuracy compared to the vanilla networks. However, AFD shows the largest improvement in both sub-network and ensemble accuracy among the compared distillation methods. Especially for ResNet-20, ResNet-32 and WRN-16-2, our method significantly improves the accuracy by more than 4% compared to the vanilla version, while the other distillation methods improve it by around 3% on average, except for the ResNet-32 of DML. 4.4 DIFFERENT ARCHITECTURE In this section, we compare our method with DML and KD using different network architectures. We set Net2 as the higher-capacity network. For KD, we use the ensemble of the two sub-networks as a teacher to mimic at every iteration. The difference from the original KD (Hinton et al., 2015) is that it is an online learning method, not an offline one. We did not include ONE because ONE cannot be applied when the sub-networks have different model types, due to its architecture of sharing the low-level layers.
In Table 4, we observe that our method shows a better performance improvement than the other methods for both Net1 and Net2, except in a couple of cases. An interesting result is that when AFD is applied, the performance of Net1 (the smaller network) is improved significantly compared to other online distillation methods. This is because AFD can transfer the higher-capacity network's meaningful knowledge (the feature map distribution) to the lower-capacity one better than other online methods. Compared with KD and DML, AFD's Net1 accuracy is higher by 1.66% and 1.15%, and the ensemble accuracy is better by 0.61% and 0.67% on average, respectively. For the (WRN-16-2, WRN-28-4) pair, Net1's parameter size (0.70M) is more than 8 times smaller than Net2's (5.87M). Despite the large size difference, our method improves both networks' accuracy; in particular, our Net1 performance is better than KD and DML by 1.72% and 1.28%, respectively. The performance of KD and DML seems to decline as the difference between the two model sizes gets larger. Throughout this experiment, we have shown that our method also works properly for different sub-network architectures, even when the two networks differ greatly in model size. Using our method, the smaller network benefits considerably from the larger network. 4.5 EXPANSION TO 3 NETWORKS To show our method's expandability to training more than two networks, we conduct an experiment training 3 networks in this section. As proposed in Sec 3.3, our method uses a cyclic learning framework rather than employing an adversarial loss between every network pair, in order to reduce the amount of computation and memory. DML calculates the mutual knowledge distillation loss between every network pair and uses the average of the losses. ONE generates a gated ensemble of the sub-networks and transfers the knowledge of the ensemble logit to each network. As can be seen in Table 5, AFD outperforms the compared online distillation methods in both 3-Net average and ensemble accuracy for every model type. Comparing the results of Table 5 to those of Table 3, the overall tendency of performance gains over DML and ONE is maintained. 4.6 IMAGENET EXPERIMENT We evaluate our method on the ImageNet dataset to show that it is also applicable to a large-scale image dataset. We use ImageNet LSVRC 2015 (Russakovsky et al., 2015), which has 1.2M training images and 50K validation images over 1,000 classes. We compare our method with DML using two pre-trained networks, ResNet-18 and ResNet-34, as a pair. The results are after 30 epochs of training. As shown in Table 6, our method improves the networks more than DML does. 5 CONCLUSION We proposed an online knowledge distillation method that transfers knowledge not only at the logit level but also at the feature map level using an adversarial training scheme. Unlike existing online distillation methods, our method utilizes feature map information and shows that knowledge transfer at the feature map level is possible even in an online environment. Through extensive experiments, we demonstrated the adequacy of adopting distribution learning via adversarial training for online feature map distillation, achieving better performance than existing online methods. We also introduced a novel cyclic learning framework for training multiple networks concurrently and demonstrated its efficacy by comparing it with existing approaches.
We also confirmed that our method is broadly applicable to various architecture types, from a very small network (ResNet-20) to a large one (WRN-28-4). We hope that our work encourages further research and advances in the area of knowledge distillation.
1. What is the focus and contribution of the paper regarding deep mutual learning? 2. What are the strengths and weaknesses of the proposed method in terms of design and performance enhancement? 3. Do you have any concerns or suggestions regarding the manuscript's clarity and novelty? 4. How does the reviewer assess the significance and practicality of the target task and the proposed approach? 5. Are there any questions or issues with the experimental results and comparisons?
Review
Review = Summary This paper presents a new deep mutual learning (i.e., online peer-teaching) method based on Knowledge Distillation (KD) at the feature map level. The target task is similar to the original KD in the sense that a network is taught by another network as well as by ground-truth labels, but different from KD in the sense that the networks are not a (frozen) teacher and a student but teach each other in an online manner. Most approaches in this relatively new line of research rely on logit-based KD for transferring knowledge between networks, and the paper demonstrates that an additional feature map-level KD can further improve performance. = Decision The current decision is borderline in my mind, but officially weak accept. Although the proposed method is simple and consists of known ideas, it is designed convincingly and enhances performance practically. Also, I believe the target task itself is worth introducing as a next direction of KD. However, the submission is weak in terms of novelty, and the manuscript should be polished carefully. = Comments 1) Weak clarity - The main motivation and advantages of deep mutual learning are not well introduced in Section 1. Although the two papers (i.e., ONE and DML) are cited here, it would be much better to explicitly describe the main idea and motivation of peer-teaching, the difference between this task and the original KD, and the achievements of the previous work. Without these, readers, including me, may get confused about why online KD is required and why there is no clear teacher-student relationship between networks. - In a similar context, the motivation for introducing more than two student networks should be given. - The architecture of the discriminator does not seem to be described, even in the appendix. - The meaning of the various arrow types in Figure 1 is not clearly described. 2) Insufficient experiments It would be better to report the performance of vanilla and (offline) KD in Table 1 to show more clearly that the feature map alignment is useful and that online KD is better than its offline counterpart. 3) Limited novelty and performance improvement - The main idea was already introduced in previous work on this task, and feature map-level KD has been studied widely for various applications, though their combination is somewhat new. - The performance improvement from the proposed feature map-level KD seems marginal, as shown in Table 2. - The performance gap between DML and the proposed model also seems marginal.
ICLR
Title FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN Abstract How can we efficiently compress Convolutional Neural Networks (CNN) while retaining their accuracy on classification tasks? A promising direction is based on depthwise separable convolution which replaces a standard convolution with a depthwise convolution and a pointwise convolution. However, previous works based on depthwise separable convolution are limited since 1) they are mostly heuristic approaches without a precise understanding of their relations to standard convolution, and 2) their accuracies do not match that of the standard convolution. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON is derived by interpreting existing convolution methods based on depthwise separable convolution using EHP, our proposed mathematical formulation to approximate the standard convolution kernel. Such interpretation leads to developing a generalized version rank-k FALCON which further improves the accuracy while sacrificing a bit of compression and computation reduction rates. In addition, we propose FALCON-branch by fitting FALCON into the previous state-of-the-art convolution unit ShuffleUnitV2 which gives even better accuracy. Experiments show that FALCON and FALCON-branch outperform 1) existing methods based on depthwise separable convolution and 2) standard CNN models by up to 8× compression and 8× computation reduction while ensuring similar accuracy. We also demonstrate that rank-k FALCON provides even better accuracy than standard convolution in many cases, while using a smaller number of parameters and floating-point operations. 1 INTRODUCTION How can we efficiently reduce size and energy consumption of Convolutional Neural Networks (CNN) while maintaining their accuracy on classification tasks? Nowadays, CNN is widely used in various areas including computer vision (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2017)), natural language processing (Yin et al. (2016)), recommendation system (Kim et al. (2016a)), etc. In addition, model compression has become an important technique due to an increase in the model capacity and the number of parameters in CNN. One recent and promising direction for compressing CNN is depthwise separable convolution (Sifre (2014)) which replaces standard convolution with depthwise and pointwise convolutions. The depthwise convolution applies a separate 2D convolution kernel for each input channel, and the pointwise convolution changes the channel size using 1×1 convolution (details in Section 2.1). Several recent methods (Howard et al. (2017); Sandler et al. (2018); Zhang et al. (2017)) based on depthwise separable convolution show reasonable performances in terms of compression and computation reduction. However, existing approaches based on depthwise separable convolution have several crucial limitations. First, they are heuristic methods, and their relation to the standard convolution is not clearly identified. Second, due to their heuristic nature, generalizing the methods is difficult. Third, although they give reasonable compression and computation reduction, their accuracy is not sufficient compared to that of standard-convolution-based models. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON overcomes the limitations of the previous methods based on the depthwise separable convolution using the following two main ideas. 
First, we precisely define the relationship between the standard convolution and the depthwise separable convolution using EHP (Extended Hadamard Product), our proposed mathematical formulation that correlates the standard convolution kernel with the depthwise convolution kernel and the pointwise convolution kernel. We then design FALCON by fine-tuning and reordering the results of EHP to improve the accuracy of the convolution operations. Second, based on this precise definition, we generalize FALCON to design rank-k FALCON, which further improves accuracy while sacrificing a bit of the compression and computation reduction rates. We also propose FALCON-branch by fitting FALCON into the state-of-the-art convolution unit ShuffleUnitV2, which gives even higher accuracy. As a result, FALCON and FALCON-branch provide superior accuracy compared to other methods based on depthwise separable convolution, with similar compression and computation reduction rates, and rank-k FALCON further improves accuracy, outperforming even the original convolution in many cases. Our contributions are summarized as follows: • Generalization. We analyze and generalize depthwise separable convolution with our proposed EHP (Extended Hadamard Product) operation. This generalization enables a precise understanding of the relationship between depthwise separable convolution and standard convolution. Furthermore, with fine-tuning operations, it leads to our proposed method FALCON. • Algorithm. We propose FALCON, a CNN compression method based on depthwise separable convolution. FALCON is carefully designed to compress CNN with little accuracy loss. We also propose rank-k FALCON to further improve the accuracy with a small sacrifice in compression and computation reduction rates. FALCON can be easily integrated into other architectures, and we propose FALCON-branch, which combines FALCON with a branch architecture for better performance. We theoretically analyze the compression and computation reduction rates of FALCON and other competitors. • Experiments. We perform extensive experiments and show that FALCON 1) outperforms other state-of-the-art methods based on depthwise separable convolution for compressing CNN, and 2) provides up to 8× compression and computation reduction compared to the standard convolution while giving similar accuracy. Furthermore, we show that rank-k FALCON provides even better accuracy than the standard convolution in many cases while using a smaller number of parameters and floating-point operations. The rest of this paper is organized as follows. Section 2 explains preliminaries. Section 3 describes our proposed method FALCON. Section 4 presents experimental results. After discussing related work in Section 5, we conclude in Section 6. 2 PRELIMINARY We describe preliminaries on depthwise separable convolution and the methods based on it. The symbols used in this paper are described in Table 4 of the Appendix. 2.1 DEPTHWISE SEPARABLE CONVOLUTION Depthwise Separable Convolution (DSConv) consists of two sub-layers: a depthwise convolution and a pointwise convolution. The architecture of each convolution layer in DSConv is illustrated in Figure 5(a). The depthwise convolution (DWConv) kernel consists of several D×D 2-dimensional filters. The number of 2-dimensional filters is the same as the number of input feature maps. Each filter is applied to the corresponding input feature map and produces an output feature map.
Pointwise convolution (PWConv), known as 1×1 convolution, is a standard convolution with kernel size 1. DSConv is defined as follows:

O'_{h',w',m} = Σ_{i=1}^{D} Σ_{j=1}^{D} D_{i,j,m} · I_{h_i,w_j,m}   (1)

O_{h',w',n} = Σ_{m=1}^{M} P_{m,n} · O'_{h',w',m}   (2)

where D_{i,j,m} and P_{m,n} are the depthwise convolution kernel and the pointwise convolution kernel, respectively, and O' ∈ R^{H'×W'×M} denotes the intermediate feature maps. DSConv performs DWConv on the input feature maps I_{h_i,w_j,m} using equation 1, generating the intermediate feature maps O'_{h',w',m}. Then, DSConv performs PWConv on O'_{h',w',m} using equation 2, generating the output feature maps O_{h',w',n}. 2.2 METHODS BASED ON DEPTHWISE SEPARABLE CONVOLUTION Several CNN methods based on Depthwise Separable Convolution (DSConv) have been proposed recently. DSConv was first introduced by Sifre (2014). Chollet (2016) built the Xception module using DSConv in a few layers. Howard et al. (2017) built Mobilenet with all convolution layers replaced by DSConv. Sandler et al. (2018) built MobilenetV2 with an inverted bottleneck block, denoted as MobileConvV2 in this paper. Zhang et al. (2017) built a CNN model with the Shufflenet unit, denoted as ShuffleUnit in this paper. Ma et al. (2018) improved Shufflenet by designing the ShufflenetV2 unit, denoted as ShuffleUnitV2 in this paper. The architectures and detailed descriptions are in Figure 5 and Appendix D. 3 PROPOSED METHOD We describe FALCON, our proposed method for compressing CNN. We first define Extended Hadamard Product (EHP), a key mathematical formulation to generalize depthwise separable convolution, in Section 3.1. We interpret the depthwise separable convolution used in Mobilenet using EHP in Section 3.2. We propose FALCON in Section 3.3 and explain why FALCON can replace standard convolution. Then, we propose rank-k FALCON, which extends the basic FALCON, in Section 3.4. We show that FALCON can be easily integrated into a branch architecture to compress it with little sacrifice of accuracy in Section 3.5. Finally, we theoretically analyze the performance of FALCON in Appendix C. 3.1 EXTENDED HADAMARD PRODUCT (EHP) We define Extended Hadamard Product (EHP), a generalized elementwise product for two operands of different shapes, to generalize the formulation of the relation between standard convolution and depthwise separable convolution. Before generalizing the formulation, we give an example of formulating the relation between standard convolution and depthwise separable convolution. Suppose we have a 4-order standard convolution kernel K ∈ R^{I×J×M×N}, a 3-order depthwise convolution kernel D ∈ R^{I×J×M}, and a pointwise convolution kernel P ∈ R^{M×N}. Let K_{i,j,m,n} be the (i, j, m, n)-th element of K, D_{i,j,m} be the (i, j, m)-th element of D, and P_{m,n} be the (m, n)-th element of P. Then, it can be shown that applying depthwise convolution with D and pointwise convolution with P is equivalent to applying the standard convolution kernel K where K_{i,j,m,n} = D_{i,j,m} · P_{m,n} (see Section 3.2 for a detailed proof). To formally express this relation, we define Extended Hadamard Product (EHP) as follows. Definition 1 (Extended Hadamard Product). Given a p-order tensor D ∈ R^{I_1×···×I_{p−1}×M} and a q-order tensor P ∈ R^{M×J_1×···×J_{q−1}}, the Extended Hadamard Product D ⊙_E P of D and P is defined to be the tensor K ∈ R^{I_1×···×I_{p−1}×M×J_1×···×J_{q−1}}, where the last axis of D and the first axis of P are the common axis, such that K_{i_1,...,i_{p−1},m,j_1,...,j_{q−1}} = D_{i_1,...,i_{p−1},m} · P_{m,j_1,...,j_{q−1}} for all elements of K.
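As a concrete reference for equations (1)-(2), the following PyTorch sketch builds DSConv from a grouped convolution (depthwise) followed by a 1×1 convolution (pointwise); layer arguments beyond those stated in the text (stride, padding, bias) are our assumptions.

```python
import torch.nn as nn

class DSConv(nn.Module):
    """Depthwise separable convolution: Eq. (1) then Eq. (2)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # Eq. (1): one DxD filter per input channel (groups == in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=padding, groups=in_ch, bias=False)
        # Eq. (2): 1x1 convolution mixing the M channels into N outputs.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```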
Contrary to the Hadamard Product, which is defined only if the shapes of the two operands are the same, the Extended Hadamard Product (EHP) deals with tensors of different shapes. Now, we define a special case of EHP for a third-order tensor and a matrix. Definition 2 (Extended Hadamard Product for a third-order tensor and a matrix). Given a third-order tensor D ∈ R^{I×J×M} and a matrix P ∈ R^{M×N}, the Extended Hadamard Product D ⊙_E P is defined to be the tensor K ∈ R^{I×J×M×N}, where the third axis of the tensor D and the first axis of the matrix P are the common axis, such that K_{i,j,m,n} = D_{i,j,m} · P_{m,n} for all elements of K. We will see that the depthwise separable convolution in Mobilenet can be easily expressed with EHP in Section 3.2; we also propose a new architecture FALCON based on EHP in Section 3.3. EHP is also a core operation that helps us understand other convolution architectures including MobilenetV2 and Shufflenet (see Appendix E). 3.2 DEPTHWISE SEPARABLE CONVOLUTION AND EHP In this section, we discuss how to represent the convolution layer of Mobilenet as the Extended Hadamard Product (EHP) described in Section 3.1. We interpret depthwise separable convolution, which is the convolution of Mobilenet, as an application of EHP. This interpretation leads to the design of a better convolution architecture, FALCON, in Section 3.3. We represent the relationship between the standard convolution kernel K ∈ R^{D×D×M×N} and depthwise separable convolution, which consists of the depthwise convolution kernel D ∈ R^{D×D×M} and the pointwise convolution kernel P ∈ R^{M×N}, using one EHP operation. Figure 2(a) illustrates the relationship between standard convolution and the depthwise separable convolution used in Mobilenet. We show that applying depthwise separable convolution with D and P is equivalent to applying standard convolution with a kernel K constructed from D and P. Theorem 1. Applying depthwise separable convolution with depthwise convolution kernel D ∈ R^{D×D×M} and pointwise convolution kernel P ∈ R^{M×N} is equivalent to applying standard convolution with kernel K = D ⊙_E P. Proof. See Appendix B.1. 3.3 FAST AND LIGHTWEIGHT CONVOLUTION (FALCON) We propose FALCON (FAst and Lightweight CONvolution), a novel lightweight convolution that replaces standard convolution. FALCON is an efficient method with fewer parameters and computations than the standard convolution requires. In addition, FALCON has better accuracy than its competitors while having similar compression and computation reduction rates. The main ideas of FALCON are 1) to carefully align depthwise and pointwise convolutions, and 2) to initialize kernels using the convolution kernels of the trained standard model. We observe that a typical convolution has more output channels than input channels. In such a setting, performing depthwise convolution after pointwise convolution allows the depthwise convolution to extract more features from a richer feature space; on the other hand, performing pointwise convolution after depthwise convolution, as in Mobilenet, only combines features extracted from a limited feature space. Based on this observation, FALCON first applies pointwise convolution to generate an intermediate tensor O' ∈ R^{H×W×N} and then applies depthwise convolution. We represent the relationship between the standard convolution kernel K ∈ R^{D×D×M×N} and FALCON by applying an EHP operation on the pointwise convolution kernel P ∈ R^{N×M} and the depthwise convolution kernel D ∈ R^{D×D×N} in Figure 2(b).
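A small NumPy-style check of Definition 2 and Theorem 1 can be written with einsum; the tensor sizes below are arbitrary illustrative choices, and the check works at the level of a single D×D receptive-field patch.

```python
import numpy as np

I, J, M, N = 3, 3, 4, 8
D = np.random.randn(I, J, M)      # depthwise kernel
P = np.random.randn(M, N)         # pointwise kernel

# Definition 2: K[i,j,m,n] = D[i,j,m] * P[m,n]
K = np.einsum('ijm,mn->ijmn', D, P)

# Theorem 1: contracting K over (i, j, m) against an input patch X equals
# depthwise-then-pointwise applied to the same patch.
X = np.random.randn(I, J, M)      # one receptive-field patch
standard = np.einsum('ijm,ijmn->n', X, K)
separable = np.einsum('ijm,ijm->m', X, D) @ P
assert np.allclose(standard, separable)
```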
In FALCON, the kernel K is represented by the EHP of D and P as follows:

K = TT_{(1,2,4,3)}(D ⊙_E P)  s.t.  K_{i,j,m,n} = P_{n,m} · D_{i,j,n},

where TT_{(1,2,4,3)} indicates the tensor transpose operation that permutes the third and the fourth dimensions of a tensor. Note that the common axis is the output channel axis of the standard convolution, unlike the EHP for depthwise separable convolution, where the common axis is the input channel axis of the standard convolution. As in Section 3.2, we show that applying FALCON is equivalent to applying standard convolution with a specially constructed kernel. Theorem 2. FALCON, which applies pointwise convolution with kernel P ∈ R^{N×M} and then depthwise convolution with kernel D ∈ R^{D×D×N}, is equivalent to applying standard convolution with kernel K = TT_{(1,2,4,3)}(D ⊙_E P). Proof. See Appendix B.2. Based on this equivalence, we initialize the pointwise convolution kernel P and the depthwise convolution kernel D of FALCON by fitting them to the convolution kernel of the trained standard model; i.e., D, P = argmin_{D',P'} ||K − TT_{(1,2,4,3)}(D' ⊙_E P')||_F. After the pointwise convolution and the depthwise convolution, we add batch normalization and a ReLU activation function, as shown in Figure 1(b). We note that FALCON significantly reduces the numbers of parameters and FLOPs compared to standard convolution, which we discuss in Appendix C. 3.4 RANK-k FALCON We propose rank-k FALCON, an extended version of FALCON that improves accuracy while sacrificing a bit of the compression and computation reduction rates. The main idea is to perform k independent FALCON operations and sum up the results. Then, we apply batch normalization (BN) and a ReLU activation function to the summed result. Since each FALCON operation requires independent parameters for its pointwise and depthwise convolutions, the number of parameters increases and thus the compression and computation reduction rates decrease; however, it improves accuracy by enlarging the model capacity. We formally define rank-k FALCON with EHP as follows. Definition 3 (Rank-k FALCON with Extended Hadamard Product). Rank-k FALCON expresses the standard convolution kernel K ∈ R^{D×D×M×N} as the EHP of depthwise convolution kernels D^{(l)} ∈ R^{D×D×N} and pointwise convolution kernels P^{(l)} ∈ R^{N×M} for l = 1, 2, ..., k, such that

K = Σ_{l=1}^{k} TT_{(1,2,4,3)}(D^{(l)} ⊙_E P^{(l)})  s.t.  K_{i,j,m,n} = Σ_{l=1}^{k} P^{(l)}_{n,m} · D^{(l)}_{i,j,n}.

Figure 2(c) illustrates the relation between standard convolution and rank-k FALCON. For each l = 1, 2, ..., k, we construct the tensor K^{(l)} using the EHP of the depthwise convolution kernel D^{(l)} and the pointwise convolution kernel P^{(l)}. Then, we construct the standard kernel K by the elementwise sum of the tensors K^{(l)} for all l. 3.5 FALCON-BRANCH FALCON can be easily integrated into a CNN architecture called standard convolution with a branch (StConv-branch), which consists of two branches: a standard convolution on the left branch and a residual connection on the right branch (see Figure 1(c)). Ma et al. (2018) improved the performance of CNN by applying depthwise and pointwise convolutions on the left branch of StConv-branch. Since FALCON replaces standard convolution, we observe that StConv-branch can be easily compressed by applying FALCON on its left branch. StConv-branch first splits an input in half along the depth dimension. A standard convolution operation is applied to one half, and no operation to the other half. The two halves are concatenated along the depth dimension, and the output is produced by shuffling the channels of the concatenated tensor.
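The following PyTorch sketch shows the FALCON ordering (pointwise first, then depthwise) and a rank-k variant that sums k such branches before BN+ReLU; the exact placement of BN/ReLU follows the StConv setting in the paper, so the tail shown here is one plausible configuration rather than the definitive one.

```python
import torch.nn as nn

class FALCON(nn.Module):
    """Pointwise (M -> N) followed by depthwise (one DxD filter per output channel)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.depthwise = nn.Conv2d(out_ch, out_ch, kernel_size, stride=stride,
                                   padding=padding, groups=out_ch, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.relu(self.bn(self.depthwise(self.pointwise(x))))

class RankKFALCON(nn.Module):
    """Rank-k FALCON: sum of k independent pointwise+depthwise branches, then BN+ReLU."""
    def __init__(self, in_ch, out_ch, k=2):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, bias=False),
                nn.Conv2d(out_ch, out_ch, 3, padding=1, groups=out_ch, bias=False),
            ) for _ in range(k)])
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.relu(self.bn(sum(b(x) for b in self.branches)))
```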
FALCON-branch (see Figure 1(d)) is constructed by replacing the standard convolution branch (the left branch) of StConv-branch with FALCON. The advantages of FALCON-branch are that 1) the branch architecture improves efficiency, since convolutions are applied to only half of the input feature maps, and 2) FALCON further compresses the left branch effectively. FALCON-branch is initialized by fitting FALCON to the standard convolution kernel of the left branch of StConv-branch; a sketch of the unit follows. 4 EXPERIMENTS We validate the performance of FALCON through extensive experiments. We aim to answer the following questions: • Q1. Accuracy vs. Compression (Section 4.3). What are the accuracy and compression tradeoffs of FALCON, FALCON-branch, and competitors? Which method gives the best accuracy for a given compression rate? • Q2. Accuracy vs. Computation (Section 4.4). What are the accuracy and computation tradeoffs of FALCON, FALCON-branch, and competitors? Which method gives the best accuracy for a given amount of computation? • Q3. Rank-k FALCON (Section 4.5). How do the accuracy, the number of parameters, and the number of FLOPs change as the rank k increases in FALCON? 4.1 EXPERIMENTAL SETUP Datasets. We perform the image classification task on four well-known datasets: CIFAR10, CIFAR100, SVHN, and ImageNet. Detailed information on these datasets is given in Table 1. Models. For the CIFAR10, CIFAR100, and SVHN datasets, we choose VGG19 and ResNet34 to evaluate performance. We shrink the sizes of both models, since these three datasets are smaller than ImageNet. In VGG19, we reduce the number of fully connected layers and the number of features in the fully connected layers: the three large fully connected layers (4096-4096-1000) in VGG19 are replaced with two small fully connected layers (512-10 or 512-100). In ResNet34, we remove the first 7 × 7 convolution layer and the max-pooling layer, since the input size (32 × 32) of these datasets is smaller than the input size (224 × 224) of ImageNet. In both models, we replace all standard convolution layers (except for the first convolution layer) with those of FALCON or the other competitors in order to compress and accelerate the model. For ImageNet, we choose VGG16 BN (VGG16 with batch normalization after every convolution layer) and ResNet18. We use the pretrained models from the Pytorch model zoo as the baseline models with standard convolution, and replace the standard convolution with the other types of convolutions. Competitors. We compare FALCON and FALCON-branch with four convolution units consisting of depthwise convolution and pointwise convolution: DSConv, MobileConvV2, ShuffleUnit, and ShuffleUnitV2 (see Figure 5, Section 2.1, and Appendix D for more details). To evaluate the effectiveness of fitting the depthwise and pointwise convolution kernels to the standard convolution kernel, we build EHP-in, which is DSConv where the kernels D and P are fitted to the pretrained standard convolution kernel K; i.e., D, P = argmin_{D',P'} ||K − D' ⊙_E P'||_F. Implementation. We construct all models using the Pytorch framework. All models are trained and tested on a GeForce GTX 1080 Ti GPU. 4.2 FITTING CONVOLUTION UNITS INTO MODELS We evaluate the performance of FALCON against DSConv, MobileConvV2, ShuffleUnit, and ShuffleUnitV2. We take each standard convolution layer (StConv) as a unit, and replace StConv with the units of FALCON or the other competitors. We evaluate the classification accuracy, the number of parameters in the model, and the number of FLOPs needed for forwarding one image.
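As referenced above, here is a sketch of the FALCON-branch unit of Section 3.5 / Figure 1(d); the equal-channel, stride-1 assumption and the group count in the shuffle are illustrative choices of ours.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Interleave channels across groups, as in ShufflenetV2-style units."""
    b, c, h, w = x.size()
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class FALCONBranch(nn.Module):
    """Split the input in half, apply FALCON (pointwise then depthwise) to the
    left half, identity to the right half, then concatenate and channel-shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.falcon = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=1, bias=False),              # pointwise
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # depthwise
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        left, right = x.chunk(2, dim=1)
        return channel_shuffle(torch.cat([self.falcon(left), right], dim=1))
```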
We only explain how to apply FALCON in this section; the details of how to fit the other convolution units into the models are described in Appendix F. FALCON. When replacing StConv with FALCON, we use the same setting as that of StConv: if there are BN and ReLU after StConv, we add BN and ReLU at the end of FALCON; if there is only ReLU after StConv, we add only ReLU at the end of FALCON. This is because FALCON is initialized by approximating the StConv kernel using EHP, and using the same BN and ReLU setting as StConv makes it easier for FALCON to approximate StConv. We initialize the pointwise convolution kernel and the depthwise convolution kernel of FALCON by approximating the pretrained standard convolution kernel using EHP. The approximation process is as follows: 1) we first initialize the pointwise convolution kernel and the depthwise convolution kernel randomly, and 2) the two kernels are updated using gradient descent such that the mean squared error between their EHP product and the standard convolution kernel is minimized. Rank-k FALCON uses the same initialization method. 4.3 ACCURACY VS. COMPRESSION We evaluate the accuracy and the compression rate of FALCON and its competitors. Table 2 shows the results on the four image datasets. Note that FALCON or FALCON-branch provides the highest accuracy in 7 out of 8 cases while using a similar or smaller number of parameters than the competitors. Specifically, FALCON and FALCON-branch achieve up to 8× compression rates with less than 1% accuracy drop compared to the standard convolution (StConv). Figure 3 shows the tradeoff between accuracy and the number of parameters. Note that FALCON and FALCON-branch show the best tradeoff (closest to the "best" point) between accuracy and compression rate, giving the highest accuracy at similar compression rates. Ablation study. We perform an ablation study on two components: 1) the order of depthwise and pointwise convolutions, and 2) the initialization. We observe that with a similar number of parameters, 1) FALCON and FALCON without initialization always give better accuracy than EHP-in and DSConv, respectively, and 2) EHP-in always gives better accuracy than DSConv. Furthermore, FALCON gives better accuracy than FALCON without initialization in 6 out of 8 cases. These observations support our claims in Section 3.3 that 1) EHP-out (FALCON) is more efficient than EHP-in, and 2) fitting and initializing the kernels using EHP improves accuracy. Additionally, we observe that the overall performance is more sensitive to the ordering than to the initialization. 4.4 ACCURACY VS. COMPUTATION We evaluate the accuracy and the amount of computation of FALCON and its competitors. We use the number of multiply-add floating point operations (FLOPs) needed for forwarding one image through a model as the metric of computation. Table 2 also shows the accuracies and the numbers of FLOPs of the methods on the four image datasets. Note that FALCON or FALCON-branch provides the highest accuracy in 7 out of 8 cases while using similar FLOPs as the competitors. Compared to StConv, FALCON and FALCON-branch achieve up to 8× FLOPs reduction across different models on different datasets. Figure 4 shows the tradeoff between accuracy and the number of FLOPs. Note that FALCON and FALCON-branch show the best tradeoff (closest to the "best" point) between accuracy and computation, giving the highest accuracy with a similar number of FLOPs.
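The initialization described above can be sketched as a small optimization loop; the optimizer choice, learning rate, and step count here are illustrative assumptions, not the paper's reported settings.

```python
import torch

def fit_falcon_kernels(K, steps=2000, lr=1e-2):
    """Fit D (DxDxN) and P (NxM) so that TT_(1,2,4,3)(D ⊙_E P) approximates K (DxDxMxN)."""
    d_sz, _, M, N = K.shape
    D = torch.randn(d_sz, d_sz, N, requires_grad=True)
    P = torch.randn(N, M, requires_grad=True)
    opt = torch.optim.Adam([D, P], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # K_hat[i,j,m,n] = P[n,m] * D[i,j,n], i.e., the FALCON kernel construction
        K_hat = torch.einsum('ijn,nm->ijmn', D, P)
        loss = ((K_hat - K) ** 2).mean()  # MSE between EHP product and StConv kernel
        loss.backward()
        opt.step()
    return D.detach(), P.detach()
```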
4.5 RANK-K FALCON We evaluate the performance of rank-k FALCON by increasing the rank k and monitoring the changes in the numbers of parameters and FLOPs. In Table 3, we observe three trends as the rank k increases: 1) the accuracy becomes higher than that of rank-1 FALCON, 2) the number of parameters increases, and 3) the number of floating point operations (FLOPs) increases. Although the rank k that gives the best tradeoff between accuracy and compression/computation reduction varies, rank-k FALCON improves the accuracy of FALCON in all cases. Especially, we note that rank-k FALCON often gives even higher accuracy than the standard convolution, while using a smaller number of parameters and FLOPs. For example, rank-3 FALCON applied to VGG19 on the CIFAR100 dataset shows 1.31 percentage points higher accuracy than the standard convolution, with a 2.8× smaller number of parameters and a 2.8× smaller number of FLOPs. Thus, rank-k FALCON is a versatile method to further improve the accuracy of FALCON while sacrificing a bit of compression and computation. 5 RELATED WORK Over the past several years, many studies have focused on compressing and accelerating DNNs to reduce model size, running time, and energy consumption, as it is believed that DNNs are over-parameterized. Weight-sharing (Han et al. (2016); Ullrich et al. (2017); Chen et al. (2015); Choi et al. (2017); Agustsson et al. (2017)) is a common compression method that stores only the assignments and centroids of weights; while using the model, weights are loaded according to the assignments and centroids. Pruning (Han et al. (2014); Li et al. (2016)) aims at removing useless weights or setting them to zero. Although weight-sharing and pruning can significantly reduce the model size, they are not efficient in reducing the amount of computation. Quantizing (Courbariaux et al. (2015; 2016); Hou et al. (2017); Zhu et al. (2017)) the model into binary or ternary weights reduces model size and computation simultaneously: replacing arithmetic operations with bit-wise operations remarkably accelerates the model. Layer-wise approaches are also employed to efficiently compress models. A typical example of such approaches is low-rank approximation (Lebedev et al. (2015); Kim et al. (2016b); Novikov et al. (2015)); it treats the weights as a tensor and uses general tensor approximation methods to compress the tensor. To reduce computation, the approximation method should be chosen carefully, since some approximation methods may increase the computation of the model. Compressing existing models has limitations, since those models were originally designed to be deep and large to give high accuracy. A recent trend is to design brand new architectures that are small and efficient. Mobilenet (Howard et al. (2017)), MobilenetV2 (Sandler et al. (2018)), Shufflenet (Zhang et al. (2017)), and ShufflenetV2 (Ma et al. (2018)) are the most representative approaches, and they use depthwise convolution and pointwise convolution as building blocks for designing convolution layers. Our proposed FALCON gives a thorough interpretation of depthwise convolution and pointwise convolution, and applies them to model compression, giving the best accuracies with regard to compression and computation. 6 CONCLUSION We propose FALCON, an accurate and lightweight convolution method to replace standard convolution.
By interpreting existing convolution methods based on depthwise separable convolution using the EHP operation, FALCON and its general version rank-k FALCON provide accurate and efficient compression of CNN. We also propose FALCON-branch, a variant of FALCON integrated into a branch architecture of CNN for model compression. Extensive experiments show that FALCON and its variants give the best accuracy for a given number of parameters or amount of computation, outperforming other convolution models based on depthwise separable convolution. Compared to the standard convolution, FALCON and FALCON-branch give up to 8× compression and 8× computation reduction while giving similar accuracy. We also show that rank-k FALCON provides better accuracy than the standard convolution, while using smaller numbers of parameters and computations. A CONVOLUTIONAL NEURAL NETWORK Convolutional Neural Network (CNN) is a type of deep neural network used mainly for structured data. CNN uses the convolution operation in convolution layers. In the following, we discuss CNN applied to typical image data with RGB channels. Each convolution layer has three components: input feature maps, a convolution kernel, and output feature maps. The input feature maps I ∈ R^{H×W×M} and the output feature maps O ∈ R^{H'×W'×N} are 3-dimensional tensors, and the convolution kernel K ∈ R^{D×D×M×N} is a 4-dimensional tensor. The convolution operation is defined as:

O_{h',w',n} = Σ_{i=1}^{D} Σ_{j=1}^{D} Σ_{m=1}^{M} K_{i,j,m,n} · I_{h_i,w_j,m}   (3)

where the relations between the height h_i and width w_j of the input and the height h' and width w' of the output are:

h_i = (h' − 1)s + i − p  and  w_j = (w' − 1)s + j − p   (4)

where s is the stride size and p is the padding size. The third and the fourth dimensions of the convolution kernel K must match the number M of input channels and the number N of output channels, respectively. The convolution kernel K can be seen as N 3-dimensional filters F_n ∈ R^{D×D×M}. Each filter F_n in kernel K performs the convolution operation while sliding over all spatial locations of the input feature maps, and each filter produces one output feature map. B PROOFS OF THEOREMS B.1 PROOF OF THEOREM 1 Proof. From the definition of EHP, K_{i,j,m,n} = D_{i,j,m} · P_{m,n}. Based on equation 3, we replace the kernel K_{i,j,m,n} with the depthwise convolution kernel D_{i,j,m} and the pointwise convolution kernel P_{m,n}:

O_{h',w',n} = Σ_{i=1}^{D} Σ_{j=1}^{D} Σ_{m=1}^{M} D_{i,j,m} · P_{m,n} · I_{h_i,w_j,m}

where I_{h_i,w_j,m} is the (h_i, w_j, m)-th entry of the input. We split the above equation into the following two equations:

O'_{h',w',m} = Σ_{i=1}^{D} Σ_{j=1}^{D} D_{i,j,m} · I_{h_i,w_j,m}   (5)

O_{h',w',n} = Σ_{m=1}^{M} P_{m,n} · O'_{h',w',m}   (6)

where O' ∈ R^{H'×W'×M} is an intermediate tensor. Note that equation 5 and equation 6 correspond to the depthwise convolution and the pointwise convolution, respectively. Therefore, the output O_{h',w',n} is equal to the output of applying the depthwise separable convolution used in Mobilenet. B.2 PROOF OF THEOREM 2 Proof. From equation 3, we replace the kernel K_{i,j,m,n} with the pointwise convolution kernel P and the depthwise convolution kernel D:

O_{h',w',n} = Σ_{m=1}^{M} Σ_{i=1}^{D} Σ_{j=1}^{D} P_{m,n} · D_{i,j,n} · I_{h_i,w_j,m}

where I_{h_i,w_j,m} is the (h_i, w_j, m)-th entry of the input I. We split the above equation into the following two equations:

O'_{h_i,w_j,n} = Σ_{m=1}^{M} P_{m,n} · I_{h_i,w_j,m}   (7)

O_{h',w',n} = Σ_{i=1}^{D} Σ_{j=1}^{D} D_{i,j,n} · O'_{h_i,w_j,n}   (8)

where I, O', and O are the input, intermediate, and output tensors of the convolution layer, respectively. Note that equation 7 and equation 8 correspond to the pointwise convolution and the depthwise convolution, respectively.
Therefore, the output O_{h',w',n} is equal to the output of applying FALCON. C QUANTITATIVE ANALYSIS In this section, we evaluate the compression and computation reduction of FALCON and rank-k FALCON. All the analysis is based on one convolution layer. The comparison of the numbers of parameters and FLOPs of FALCON and the other competitors is in Appendix G. C.1 FALCON We analyze the compression and computation reduction rates of FALCON in Theorems 3 and 4. Theorem 3. The Compression Rate (CR) of FALCON is given by

CR = (# of parameters in standard convolution) / (# of parameters in FALCON) = D²MN / (MN + D²N)

where D² is the size of the standard kernel, M is the number of input channels, and N is the number of output channels. Proof. The standard convolution kernel has D²MN parameters. FALCON consists of a pointwise convolution and a depthwise convolution, which require MN and D²N parameters, respectively. Thus, the compression rate of FALCON is CR = D²MN / (MN + D²N). Theorem 4. The Computation Reduction Rate (CRR) of FALCON is given by

CRR = (# of FLOPs in standard convolution) / (# of FLOPs in FALCON) = H'W'D²MN / (HWMN + H'W'D²N)

where H' and W' are the height and width of the output, and H and W are the height and width of the input. Proof. The standard convolution operation requires H'W'D²MN FLOPs (Molchanov et al. (2017)). FALCON consists of a pointwise convolution and a depthwise convolution. The pointwise convolution has kernel size D = 1 with stride s = 1 and no padding, so the intermediate tensor O' has the same height and width as the input feature maps; thus, the pointwise convolution needs HWMN FLOPs. The depthwise convolution has a single input channel per filter (M = 1), so it needs H'W'D²N FLOPs. The total number of FLOPs of FALCON is HWMN + H'W'D²N, thus the computation reduction rate of FALCON is CRR = H'W'D²MN / (HWMN + H'W'D²N). C.2 RANK-k FALCON We analyze the compression and computation reduction rates of rank-k FALCON in Theorem 5. Theorem 5. The Compression Rate (CR_k) and Computation Reduction Rate (CRR_k) of rank-k FALCON are:

CR_k = CR / k,   CRR_k = CRR / k.

Proof. The numbers of parameters and FLOPs increase k times, since rank-k FALCON duplicates FALCON k times. Thus, the compression rate and the computation reduction rate are CR_k = CR / k and CRR_k = CRR / k. D DESCRIPTION OF RELATED CONVOLUTION UNITS MobileNetV2. Sandler et al. (2018) proposed a new convolution architecture, which we call MobileConvV2, in their MobilenetV2 model. MobileConvV2 consists of three sub-layers, as shown in Figure 5(b). The first and the third sub-layers are pointwise convolutions for adjusting the number of channels. The first sub-layer expands the number of channels from M to tM, where t is an expansion ratio. The second sub-layer is a D×D depthwise convolution. Since depthwise convolution cannot change the number of channels, the third sub-layer adjusts the number of channels from tM to N. There is a shortcut connection between the input and the output of the third sub-layer to facilitate the flow of gradients across multiple layers. MobileConvV2 needs tM² + D²tM + tMN parameters and tHWM² + tH'W'D²M + tH'W'MN FLOPs. ShuffleNet. Zhang et al. (2017) proposed a computation-efficient CNN architecture named Shufflenet. As shown in Figure 5(c), each unit of Shufflenet (we call it ShuffleUnit) consists of three sub-layers, a first group pointwise convolution, a depthwise convolution, and a second group pointwise convolution, as well as a shortcut.
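The rate formulas in Theorems 3-5 are easy to sanity-check numerically; the layer shape in the example below is an arbitrary illustrative choice.

```python
def falcon_rates(D, M, N, H, W, Hp, Wp, k=1):
    """Compression rate and computation reduction rate of rank-k FALCON (Theorems 3-5)."""
    cr = (D * D * M * N) / (M * N + D * D * N)                               # Theorem 3
    crr = (Hp * Wp * D * D * M * N) / (H * W * M * N + Hp * Wp * D * D * N)  # Theorem 4
    return cr / k, crr / k                                                   # Theorem 5

# e.g., a 3x3 layer with 256 -> 256 channels on 32x32 maps (stride 1):
print(falcon_rates(D=3, M=256, N=256, H=32, W=32, Hp=32, Wp=32))  # roughly (8.69, 8.69)
```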
The number of channels of the depthwise convolution is 1/4 of the number of output channels N. ShuffleUnit uses group convolution in the two pointwise convolution layers to reduce parameters and FLOPs. However, it is hard to exchange information among groups when group convolutions are stacked. To deal with this problem, ShuffleUnit adds a channel shuffle layer after the first pointwise group convolution. The channel shuffle layer rearranges the order of the channels, making it possible to obtain information from different groups. The number of groups is denoted as g. ShuffleUnit needs (1/4g)MN + (1/4)D²N + (1/4g)N² parameters and (1/4g)HWMN + (1/4)H'W'D²N + (1/4g)H'W'N² FLOPs. ShufflenetV2. Ma et al. (2018) proposed a practically efficient CNN architecture, ShufflenetV2. As shown in Figure 5(d), each unit of ShufflenetV2 (we call it ShuffleUnitV2) consists of two branches. The left branch consists of two pointwise convolutions and one depthwise convolution, like MobileConvV2, and the right branch is an identity operation. Note that the outputs of both branches maintain the number of channels as M/2. The final output is produced by concatenating and shuffling the channels of the output tensors from both branches. ShuffleUnitV2 needs (1/2)(M² + D²M) parameters and (1/2)HW(M² + D²M) FLOPs. E GENERALITY OF EHP We show that EHP is a key operation for understanding other convolution architectures based on depthwise separable convolution. MobilenetV2. As shown in Figure 5(b), MobilenetV2 has an additional pointwise convolution before the depthwise convolution of Mobilenet: one layer of MobilenetV2 consists of two pointwise convolutions and one depthwise convolution. From another point of view, MobilenetV2 can be understood as FALCON followed by an additional pointwise convolution; i.e., MobilenetV2 performs the EHP operation as FALCON does, and performs an additional pointwise convolution after that. Shufflenet. As shown in Figure 5(c), Shufflenet consists of a depthwise convolution and a pointwise group convolution, which is a variant of pointwise convolution. We represent the convolution layer of Shufflenet using EHP as follows. Let g be the number of groups. We divide the standard convolution kernel K ∈ R^{D×D×M×N} into g group standard convolution kernels. Then, the relation of the g-th group standard convolution kernel K^g ∈ R^{D×D×(M/g)×(N/g)} with the g-th depthwise convolution kernel D^g ∈ R^{D×D×(M/g)} and the g-th pointwise group convolution kernel P^g ∈ R^{(M/g)×(N/g)} is

K^g = D^g ⊙_E P^g  s.t.  K^g_{i,j,m_g,n_g} = D^g_{i,j,m_g} · P^g_{m_g,n_g}

where m_g = 1, 2, ..., M/g and n_g = 1, 2, ..., N/g. Each group standard convolution is equivalent to the combination of a depthwise convolution and a pointwise convolution, and is thus easily expressed with EHP as in Mobilenet. Therefore, each layer of Shufflenet is equivalent to a layer consisting of one group convolution followed by a standard convolution. ShufflenetV2. As shown in Figure 5(d), the left branch of ShufflenetV2 has the same convolutions as MobilenetV2: it consists of two pointwise convolutions and one depthwise convolution. Like MobilenetV2, the left branch of ShufflenetV2 can be understood as FALCON followed by an additional pointwise convolution. F FITTING OTHER CONVOLUTION UNITS INTO MODELS DSConv. DSConv (shown in Figure 5(a)) has the architecture most similar to FALCON among the competitors, and thus DSConv has nearly the same number of parameters as FALCON. As in the setting of FALCON, the existence of BN and ReLU at the end of DSConv depends on that of StConv. MobileConvV2.
In the MobileConvV2 architecture (shown in Figure 5(b)), we adjust the numbers of parameters and FLOPs by changing the expansion ratio t described in Appendix D, which is represented as 'MobileConvV2-t'. We choose t = 0.5 as the baseline MobileConvV2 to compare with FALCON, since the two pointwise convolutions bring a lot of parameters and FLOPs to MobileConvV2. ShuffleUnit. In ShuffleUnit (shown in Figure 5(c)), we adjust the numbers of parameters and FLOPs by changing the width multiplier α (Howard et al. (2017)) and the number of groups g, which is represented as 'ShuffleUnit α×(g=g)'. Note that the width multiplier is used to adjust the number of input channels M and the number of output channels N of a convolution layer; if the width multiplier is α, the numbers of input and output channels become αM and αN, respectively. While experimenting with ResNet, we found that ShuffleUnit does not cooperate well with ResNet: ResNet34 with ShuffleUnit does not converge. We suspect that the residual block and ShuffleUnit may conflict with each other because of redundant residual connections: the gradient may not find the right path towards previous layers. For this reason, we delete the shortcuts of all residual blocks in ResNet34 when using ShuffleUnit. ShuffleUnitV2. In ShuffleUnitV2 (shown in Figure 5(d)), we also adjust the numbers of parameters and FLOPs by changing the width multiplier α, which is represented as 'ShuffleUnitV2 α×'. The other operations of ShuffleUnitV2 stay the same as in Ma et al. (2018). G PARAMETERS AND FLOPS We summarize the numbers of parameters and FLOPs for FALCON and its competitors in Table 5.
1. What is the main contribution of the paper in terms of CNN compression? 2. What is the significance of the proposed EHP operation in analyzing and generalizing depthwise separable convolution? 3. How does the proposed method improve upon existing depthwise separable convolution approaches? 4. What is the reviewer's assessment of the paper's experimental results? 5. How does the reviewer perceive the impact of the paper's findings on the field of CNN compression?
Review
Review The paper proposes a CNN compression method based on the so-called EHP operation, which can be used to analyze and generalize depthwise separable convolution. Based on EHP, the paper develops a depthwise separable convolution to compress CNNs, and extends it to a rank-k approach with further improved accuracy. Some analysis is provided about the operation equivalence. The experiments on standard benchmark datasets show the effectiveness of the method. Probably the most important contribution of this work is that it proposes a new operation that can summarize/generalize the existing depthwise separable convolution and reveal its relationship with the standard convolution. The work is meaningful to me because a compact representation of CNNs can largely reduce the required computational resources and storage. The experiments are extensive as well.
ICLR
Title FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN Abstract How can we efficiently compress Convolutional Neural Networks (CNN) while retaining their accuracy on classification tasks? A promising direction is based on depthwise separable convolution which replaces a standard convolution with a depthwise convolution and a pointwise convolution. However, previous works based on depthwise separable convolution are limited since 1) they are mostly heuristic approaches without a precise understanding of their relations to standard convolution, and 2) their accuracies do not match that of the standard convolution. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON is derived by interpreting existing convolution methods based on depthwise separable convolution using EHP, our proposed mathematical formulation to approximate the standard convolution kernel. Such interpretation leads to developing a generalized version rank-k FALCON which further improves the accuracy while sacrificing a bit of compression and computation reduction rates. In addition, we propose FALCON-branch by fitting FALCON into the previous state-of-the-art convolution unit ShuffleUnitV2 which gives even better accuracy. Experiments show that FALCON and FALCON-branch outperform 1) existing methods based on depthwise separable convolution and 2) standard CNN models by up to 8× compression and 8× computation reduction while ensuring similar accuracy. We also demonstrate that rank-k FALCON provides even better accuracy than standard convolution in many cases, while using a smaller number of parameters and floating-point operations. 1 INTRODUCTION How can we efficiently reduce size and energy consumption of Convolutional Neural Networks (CNN) while maintaining their accuracy on classification tasks? Nowadays, CNN is widely used in various areas including computer vision (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2017)), natural language processing (Yin et al. (2016)), recommendation system (Kim et al. (2016a)), etc. In addition, model compression has become an important technique due to an increase in the model capacity and the number of parameters in CNN. One recent and promising direction for compressing CNN is depthwise separable convolution (Sifre (2014)) which replaces standard convolution with depthwise and pointwise convolutions. The depthwise convolution applies a separate 2D convolution kernel for each input channel, and the pointwise convolution changes the channel size using 1×1 convolution (details in Section 2.1). Several recent methods (Howard et al. (2017); Sandler et al. (2018); Zhang et al. (2017)) based on depthwise separable convolution show reasonable performances in terms of compression and computation reduction. However, existing approaches based on depthwise separable convolution have several crucial limitations. First, they are heuristic methods, and their relation to the standard convolution is not clearly identified. Second, due to their heuristic nature, generalizing the methods is difficult. Third, although they give reasonable compression and computation reduction, their accuracy is not sufficient compared to that of standard-convolution-based models. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON overcomes the limitations of the previous methods based on the depthwise separable convolution using the following two main ideas. 
First, we precisely define the relationship between the standard convolution and the depthwise separable convolution using EHP (Extended Hadamard Product), our proposed mathematical formulation that correlates the standard convolution kernel with the depthwise convolution kernel and the pointwise convolution kernel. We then design FALCON by fine-tuning and reordering the results of EHP to improve the accuracy of convolution operations. Second, based on this precise definition, we generalize FALCON to design rank-k FALCON, which further improves accuracy while sacrificing a bit of the compression and computation reduction rates. We also propose FALCON-branch by fitting FALCON into the state-of-the-art convolution unit ShuffleUnitV2, which gives even higher accuracy. As a result, FALCON and FALCON-branch provide superior accuracy compared to other methods based on depthwise separable convolution, with similar compression and computation reduction rates, and rank-k FALCON further improves accuracy, outperforming even the original convolution in many cases. Our contributions are summarized as follows: • Generalization. We analyze and generalize depthwise separable convolution to our proposed EHP (Extended Hadamard Product) operation. This generalization enables a precise understanding of the relationship between depthwise separable convolution and standard convolution. Furthermore, with fine-tuning operations, it leads to our proposed method FALCON. • Algorithm. We propose FALCON, a CNN compression method based on depthwise separable convolution. FALCON is carefully designed to compress CNN with little accuracy loss. We also propose rank-k FALCON to further improve the accuracy with a small sacrifice in compression and computation reduction rates. FALCON can be easily integrated into other architectures, and we propose FALCON-branch, which combines FALCON with a branch architecture for better performance. We theoretically analyze the compression and computation reduction rates of FALCON and other competitors. • Experiments. We perform extensive experiments and show that FALCON 1) outperforms other state-of-the-art methods based on depthwise separable convolution for compressing CNN, and 2) provides up to 8× compression and computation reduction compared to the standard convolution while giving similar accuracy. Furthermore, we show that rank-k FALCON provides even better accuracy than the standard convolution in many cases while using a smaller number of parameters and floating-point operations. The rest of this paper is organized as follows. Section 2 explains preliminaries. Section 3 describes our proposed method FALCON. Section 4 presents experimental results. After discussing related works in Section 5, we conclude in Section 6. 2 PRELIMINARY We describe preliminaries on depthwise separable convolution and the methods based on it. Symbols used in this paper are described in Table 4 of the Appendix. 2.1 DEPTHWISE SEPARABLE CONVOLUTION Depthwise Separable Convolution (DSConv) consists of two sub-layers: depthwise convolution and pointwise convolution. The architecture of each convolution layer in DSConv is illustrated in Figure 5(a). The depthwise convolution (DWConv) kernel consists of several D×D 2-dimensional filters. The number of 2-dimensional filters is the same as that of the input feature maps. Each filter is applied to the corresponding input feature map, and produces an output feature map.
Pointwise convolution (PWConv), also known as 1×1 convolution, is a standard convolution with kernel size 1. DSConv is defined as follows:

$$O'_{h',w',m} = \sum_{i=1}^{D}\sum_{j=1}^{D} \mathbf{D}_{i,j,m} \cdot I_{h_i,w_j,m} \quad (1)$$

$$O_{h',w',n} = \sum_{m=1}^{M} \mathbf{P}_{m,n} \cdot O'_{h',w',m} \quad (2)$$

where $\mathbf{D}_{i,j,m}$ and $\mathbf{P}_{m,n}$ are the depthwise convolution kernel and the pointwise convolution kernel, respectively, and $O' \in \mathbb{R}^{H'\times W'\times M}$ denotes the intermediate feature maps. DSConv performs DWConv on input feature maps $I_{h_i,w_j,m}$ using equation 1, and generates intermediate feature maps $O'_{h',w',m}$. Then, DSConv performs PWConv on $O'_{h',w',m}$ using equation 2, and generates output feature maps $O_{h',w',n}$. 2.2 METHODS BASED ON DEPTHWISE SEPARABLE CONVOLUTION Several CNN methods based on Depthwise Separable Convolution (DSConv) have been proposed recently. DSConv was first introduced by Sifre (2014). Chollet (2016) built the Xception module using DSConv in a few layers. Howard et al. (2017) built Mobilenet with all convolution layers replaced by DSConv. Sandler et al. (2018) built MobileNetV2 with an inverted bottleneck block, denoted as MobileConvV2 in this paper. Zhang et al. (2017) built a CNN model with the Shufflenet Unit, denoted as ShuffleUnit in this paper. Ma et al. (2018) improved Shufflenet by designing the ShufflenetV2 Unit, denoted as ShuffleUnitV2 in this paper. The architectures and detailed descriptions are in Figure 5 and Appendix D. 3 PROPOSED METHOD We describe FALCON, our proposed method for compressing CNN. We first define Extended Hadamard Product (EHP), a key mathematical formulation to generalize depthwise separable convolution, in Section 3.1. We interpret the depthwise separable convolution used in Mobilenet via EHP in Section 3.2. We propose FALCON in Section 3.3 and explain why FALCON can replace standard convolution. Then, we propose rank-k FALCON, which extends the basic FALCON, in Section 3.4. We show that FALCON can be easily integrated into a branch architecture to compress it with little sacrifice of accuracy in Section 3.5. Finally, we theoretically analyze the performance of FALCON in Appendix C. 3.1 EXTENDED HADAMARD PRODUCT (EHP) We define Extended Hadamard Product (EHP), a generalized elementwise product for two operands of different shapes, to formulate the relation between standard convolution and depthwise separable convolution. Before generalizing the formulation, we give an example of formulating the relation between standard convolution and depthwise separable convolution. Suppose we have a 4th-order standard convolution kernel $\mathbf{K} \in \mathbb{R}^{I\times J\times M\times N}$, a 3rd-order depthwise convolution kernel $\mathbf{D} \in \mathbb{R}^{I\times J\times M}$, and a pointwise convolution kernel $\mathbf{P} \in \mathbb{R}^{M\times N}$. Let $\mathbf{K}_{i,j,m,n}$ be the $(i,j,m,n)$-th element of $\mathbf{K}$, $\mathbf{D}_{i,j,m}$ the $(i,j,m)$-th element of $\mathbf{D}$, and $\mathbf{P}_{m,n}$ the $(m,n)$-th element of $\mathbf{P}$. Then, it can be shown that applying depthwise convolution with $\mathbf{D}$ and pointwise convolution with $\mathbf{P}$ is equivalent to applying a standard convolution kernel $\mathbf{K}$ where $\mathbf{K}_{i,j,m,n} = \mathbf{D}_{i,j,m} \cdot \mathbf{P}_{m,n}$ (see Section 3.2 for a detailed proof). To formally express this relation, we define Extended Hadamard Product (EHP) as follows. Definition 1 (Extended Hadamard Product). Given a $p$-order tensor $\mathbf{D} \in \mathbb{R}^{I_1\times\cdots\times I_{p-1}\times M}$ and a $q$-order tensor $\mathbf{P} \in \mathbb{R}^{M\times J_1\times\cdots\times J_{q-1}}$, the Extended Hadamard Product $\mathbf{D} \odot_E \mathbf{P}$ of $\mathbf{D}$ and $\mathbf{P}$ is defined to be the tensor $\mathbf{K} \in \mathbb{R}^{I_1\times\cdots\times I_{p-1}\times M\times J_1\times\cdots\times J_{q-1}}$, where the last axis of $\mathbf{D}$ and the first axis of $\mathbf{P}$ are the common axes, such that $\mathbf{K}_{i_1,\ldots,i_{p-1},m,j_1,\ldots,j_{q-1}} = \mathbf{D}_{i_1,\ldots,i_{p-1},m} \cdot \mathbf{P}_{m,j_1,\ldots,j_{q-1}}$ for all elements of $\mathbf{K}$.
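To make Definition 1 concrete, the following is a minimal PyTorch sketch of the special case of EHP used throughout the paper (a third-order tensor and a matrix); the function name and shapes are our own illustration, not the authors' implementation.

import torch

def ehp(D: torch.Tensor, P: torch.Tensor) -> torch.Tensor:
    # Extended Hadamard Product for D of shape (I, J, M) and P of shape (M, N):
    # K[i, j, m, n] = D[i, j, m] * P[m, n]. The shared axis m is kept
    # (not summed over), which is what distinguishes EHP from matmul.
    return torch.einsum('ijm,mn->ijmn', D, P)

D = torch.randn(3, 3, 16)   # depthwise kernel, D x D x M
P = torch.randn(16, 32)     # pointwise kernel, M x N
K = ehp(D, P)               # reconstructed standard kernel, D x D x M x N
assert K.shape == (3, 3, 16, 32)
assert torch.allclose(K[1, 2, 5, 7], D[1, 2, 5] * P[5, 7])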
Contrary to the Hadamard Product, which is defined only if the shapes of the two operands are the same, the Extended Hadamard Product (EHP) deals with tensors of different shapes. Now, we define a special case of EHP for a third-order tensor and a matrix. Definition 2 (Extended Hadamard Product for a third-order tensor and a matrix). Given a third-order tensor $\mathbf{D} \in \mathbb{R}^{I\times J\times M}$ and a matrix $\mathbf{P} \in \mathbb{R}^{M\times N}$, the Extended Hadamard Product $\mathbf{D} \odot_E \mathbf{P}$ is defined to be the tensor $\mathbf{K} \in \mathbb{R}^{I\times J\times M\times N}$, where the third axis of the tensor $\mathbf{D}$ and the first axis of the matrix $\mathbf{P}$ are the common axes, such that $\mathbf{K}_{i,j,m,n} = \mathbf{D}_{i,j,m} \cdot \mathbf{P}_{m,n}$ for all elements of $\mathbf{K}$. We will see that the depthwise separable convolution in Mobilenet can be easily expressed with EHP in Section 3.2; we also propose a new architecture, FALCON, based on EHP in Section 3.3. EHP is also a core operation that helps us understand other convolution architectures including MobilenetV2 and Shufflenet (see Appendix E). 3.2 DEPTHWISE SEPARABLE CONVOLUTION AND EHP In this section, we discuss how to represent the convolution layer of Mobilenet as the Extended Hadamard Product (EHP) described in Section 3.1. We interpret the depthwise separable convolution, which is the convolution of Mobilenet, as an application of EHP. This interpretation leads to designing a better convolution architecture, FALCON, in Section 3.3. We represent the relationship between the standard convolution kernel $\mathbf{K} \in \mathbb{R}^{D\times D\times M\times N}$ and the depthwise separable convolution, consisting of a depthwise convolution kernel $\mathbf{D} \in \mathbb{R}^{D\times D\times M}$ and a pointwise convolution kernel $\mathbf{P} \in \mathbb{R}^{M\times N}$, using one EHP operation. Figure 2(a) illustrates the relationship between standard convolution and the depthwise separable convolution used in Mobilenet. We show that applying depthwise separable convolution with $\mathbf{D}$ and $\mathbf{P}$ is equivalent to applying standard convolution with a kernel $\mathbf{K}$ constructed from $\mathbf{D}$ and $\mathbf{P}$. Theorem 1. Applying depthwise separable convolution with depthwise convolution kernel $\mathbf{D} \in \mathbb{R}^{D\times D\times M}$ and pointwise convolution kernel $\mathbf{P} \in \mathbb{R}^{M\times N}$ is equivalent to applying standard convolution with kernel $\mathbf{K} = \mathbf{D} \odot_E \mathbf{P}$. Proof. See Appendix B.1. 3.3 FAST AND LIGHTWEIGHT CONVOLUTION (FALCON) We propose FALCON (FAst and Lightweight CONvolution), a novel lightweight convolution that replaces standard convolution. FALCON is an efficient method with fewer parameters and computations than the standard convolution requires. In addition, FALCON has better accuracy than competitors while having similar compression and computation reduction rates. The main ideas of FALCON are 1) to carefully order the depthwise and pointwise convolutions, and 2) to initialize the kernels using the convolution kernels of the trained standard model. We observe that a typical convolution has more output channels than input channels. In such a setting, performing depthwise convolution after pointwise convolution allows the depthwise convolution to extract more features from a richer feature space; on the other hand, performing pointwise convolution after depthwise convolution, as in Mobilenet, only combines features extracted from a limited feature space. Based on this observation, FALCON first applies pointwise convolution to generate an intermediate tensor $O' \in \mathbb{R}^{H\times W\times N}$ and then applies depthwise convolution. We represent the relationship between the standard convolution kernel $\mathbf{K} \in \mathbb{R}^{D\times D\times M\times N}$ and FALCON by applying an EHP operation on the pointwise convolution kernel $\mathbf{P} \in \mathbb{R}^{N\times M}$ and the depthwise convolution kernel $\mathbf{D} \in \mathbb{R}^{D\times D\times N}$ in Figure 2(b).
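As a hedged illustration of the pointwise-then-depthwise ordering just described, the following PyTorch sketch shows one way a FALCON unit could be written; the module name and defaults are our own, and the BN/ReLU placement follows the description in this section rather than the authors' exact code.

import torch.nn as nn

class Falcon(nn.Module):
    # Sketch of a FALCON unit: 1x1 pointwise convolution first (M -> N),
    # then depthwise convolution (groups == N gives one DxD filter per
    # output channel), i.e., the reverse order of DSConv.
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.dw = nn.Conv2d(out_ch, out_ch, kernel_size, stride=stride,
                            padding=padding, groups=out_ch, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.dw(self.pw(x))))

Replacing an existing nn.Conv2d(M, N, D) with Falcon(M, N, D) mirrors the drop-in substitution used in the experiments; rank-k FALCON (Section 3.4) would sum the outputs of k such pointwise-depthwise pairs before the BN and ReLU.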
In FALCON, the kernel $\mathbf{K}$ is represented by the EHP of $\mathbf{D}$ and $\mathbf{P}$ as follows:

$$\mathbf{K} = TT_{(1,2,4,3)}(\mathbf{D} \odot_E \mathbf{P}) \quad \text{s.t.} \quad \mathbf{K}_{i,j,m,n} = \mathbf{P}_{n,m} \cdot \mathbf{D}_{i,j,n}$$

where $TT_{(1,2,4,3)}$ indicates the tensor transpose operation that permutes the third and fourth dimensions of a tensor. Note that the common axis is the output channel axis of the standard convolution, unlike the EHP for depthwise separable convolution, where the common axis is the input channel axis of the standard convolution. As in Section 3.2, we show that applying FALCON is equivalent to applying standard convolution with a specially constructed kernel. Theorem 2. FALCON, which applies pointwise convolution with kernel $\mathbf{P} \in \mathbb{R}^{N\times M}$ and then depthwise convolution with kernel $\mathbf{D} \in \mathbb{R}^{D\times D\times N}$, is equivalent to applying standard convolution with kernel $\mathbf{K} = TT_{(1,2,4,3)}(\mathbf{D} \odot_E \mathbf{P})$. Proof. See Appendix B.2. Based on this equivalence, we initialize the pointwise and depthwise convolution kernels $\mathbf{D}$ and $\mathbf{P}$ of FALCON by fitting them to the convolution kernels of the trained standard model; i.e., $\mathbf{D},\mathbf{P} = \arg\min_{\mathbf{D}',\mathbf{P}'} \|\mathbf{K} - TT_{(1,2,4,3)}(\mathbf{D}' \odot_E \mathbf{P}')\|_F$. After the pointwise and depthwise convolutions, we add batch normalization and a ReLU activation function as shown in Figure 1(b). We note that FALCON significantly reduces the numbers of parameters and FLOPs compared to standard convolution, which we discuss in Appendix C. 3.4 RANK-k FALCON We propose rank-k FALCON, an extended version of FALCON that improves accuracy while sacrificing a bit of the compression and computation reduction rates. The main idea is to perform k independent FALCON operations and sum up the results. Then, we apply batch normalization (BN) and a ReLU activation function to the summed result. Since each FALCON operation requires independent parameters for its pointwise convolution and depthwise convolution, the number of parameters increases and thus the compression and computation reduction rates decrease; however, accuracy improves because the model capacity is enlarged. We formally define rank-k FALCON with EHP as follows. Definition 3 (Rank-k FALCON with Extended Hadamard Product). Rank-k FALCON expresses the standard convolution kernel $\mathbf{K} \in \mathbb{R}^{D\times D\times M\times N}$ as the EHP of depthwise convolution kernels $\mathbf{D}^{(l)} \in \mathbb{R}^{D\times D\times N}$ and pointwise convolution kernels $\mathbf{P}^{(l)} \in \mathbb{R}^{N\times M}$ for $l = 1, 2, \ldots, k$ such that

$$\mathbf{K} = \sum_{l=1}^{k} TT_{(1,2,4,3)}(\mathbf{D}^{(l)} \odot_E \mathbf{P}^{(l)}) \quad \text{s.t.} \quad \mathbf{K}_{i,j,m,n} = \sum_{l=1}^{k} \mathbf{P}^{(l)}_{n,m} \cdot \mathbf{D}^{(l)}_{i,j,n}$$

Figure 2(c) illustrates the relation between standard convolution and rank-k FALCON. For each $l = 1, 2, \ldots, k$, we construct the tensor $\mathbf{K}^{(l)}$ using the EHP of the depthwise convolution kernel $\mathbf{D}^{(l)}$ and the pointwise convolution kernel $\mathbf{P}^{(l)}$. Then, we construct the standard kernel $\mathbf{K}$ as the elementwise sum of the tensors $\mathbf{K}^{(l)}$ over all $l$. 3.5 FALCON-BRANCH FALCON can be easily integrated into a CNN architecture called standard convolution with a branch (StConv-branch), which consists of two branches: standard convolution on the left branch and a residual connection on the right branch (see Figure 1(c)). Ma et al. (2018) improved the performance of CNN by applying depthwise and pointwise convolutions on the left branch of StConv-branch. Since FALCON replaces standard convolution, we observe that StConv-branch can be easily compressed by applying FALCON on the left branch. StConv-branch first splits an input in half along the depth dimension. A standard convolution operation is applied to one half, and no operation to the other half. The two halves are concatenated along the depth dimension, and an output is produced by shuffling the channels of the concatenated tensor.
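The split/convolve/concatenate/shuffle pattern of StConv-branch can be sketched as follows; this is our own illustration with hypothetical helper names, and FALCON-branch is then obtained by passing a FALCON unit as the conv argument.

import torch

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    # Interleave channels across groups so information mixes between branches.
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

def stconv_branch(x: torch.Tensor, conv) -> torch.Tensor:
    # Split along the depth axis, convolve one half (channel-preserving),
    # pass the other half through, then concatenate and shuffle.
    left, right = x.chunk(2, dim=1)
    return channel_shuffle(torch.cat([conv(left), right], dim=1), groups=2)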
FALCON-branch (see Figure 1(d)) is constructed by replacing the standard convolution branch (left branch) of StConv-branch with FALCON. The advantages of FALCON-branch are that 1) the branch architecture improves efficiency since convolutions are applied to only half of the input feature maps, and 2) FALCON further compresses the left branch effectively. FALCON-branch is initialized by fitting FALCON to the standard convolution kernel of the left branch of StConv-branch. 4 EXPERIMENTS We validate the performance of FALCON through extensive experiments. We aim to answer the following questions: • Q1. Accuracy vs. Compression (Section 4.3). What are the accuracy and compression tradeoffs of FALCON, FALCON-branch, and competitors? Which method gives the best accuracy for a given compression rate? • Q2. Accuracy vs. Computation (Section 4.4). What are the accuracy and computation tradeoffs of FALCON, FALCON-branch, and competitors? Which method gives the best accuracy for a given amount of computation? • Q3. Rank-k FALCON (Section 4.5). How do the accuracy, the number of parameters, and the number of FLOPs change as the rank k increases in FALCON? 4.1 EXPERIMENTAL SETUP Datasets. We perform the image classification task on four well-known datasets: CIFAR10, CIFAR100, SVHN, and ImageNet. Detailed information on these datasets is given in Table 1. Models. For the CIFAR10, CIFAR100, and SVHN datasets, we choose VGG19 and ResNet34 to evaluate the performance. We shrink the sizes of both models since these three datasets are smaller than ImageNet. In VGG19, we reduce the number of fully connected layers and the number of features in the fully connected layers: the three large fully connected layers (4096-4096-1000) in VGG19 are replaced with two small fully connected layers (512-10 or 512-100). In ResNet34, we remove the first 7×7 convolution layer and the max-pooling layer since the input size (32×32) of these datasets is smaller than the input size (224×224) of ImageNet. In both models, we replace all standard convolution layers (except for the first convolution layer) with those of FALCON or other competitors in order to compress and accelerate the model. For ImageNet, we choose VGG16 BN (VGG16 with batch normalization after every convolution layer) and ResNet18. We use the pretrained models from the PyTorch model zoo as the baseline models with standard convolution, and replace the standard convolution with other types of convolutions. Competitors. We compare FALCON and FALCON-branch with four convolution units consisting of depthwise convolution and pointwise convolution: DSConv, MobileConvV2, ShuffleUnit, and ShuffleUnitV2 (see Figure 5, Section 2.1, and Appendix D for more details). To evaluate the effectiveness of fitting the depthwise and pointwise convolution kernels to the standard convolution kernel, we build EHP-in, which is DSConv where the kernels $\mathbf{D}$ and $\mathbf{P}$ are fitted from the pretrained standard convolution kernel $\mathbf{K}$; i.e., $\mathbf{D},\mathbf{P} = \arg\min_{\mathbf{D}',\mathbf{P}'} \|\mathbf{K} - \mathbf{D}' \odot_E \mathbf{P}'\|_F$. Implementation. We construct all models using the PyTorch framework. All models are trained and tested on a GeForce GTX 1080 Ti GPU. 4.2 FITTING CONVOLUTION UNIT INTO MODEL We evaluate the performance of FALCON against DSConv, MobileConvV2, ShuffleUnit, and ShuffleUnitV2. We take each standard convolution layer (StConv) as a unit, and replace StConv with the corresponding unit from FALCON or other competitors. We evaluate the classification accuracy, the number of parameters in the model, and the number of FLOPs needed for forwarding one image.
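For reference, the parameter counts reported in this evaluation can be reproduced with a one-line helper (ours, not the authors' script); FLOPs counting additionally depends on the input resolution and is usually done with profiler hooks.

def count_params(model) -> int:
    # Total number of trainable parameters in a PyTorch model.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)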
We only explain how to apply FALCON in this section. The details of how to fit the other convolution units into the models are described in Appendix F. FALCON. When replacing StConv with FALCON, we use the same setting as that of StConv. That is, if there are BN and ReLU after StConv, we add BN and ReLU at the end of FALCON; if there is only ReLU after StConv, we add only ReLU at the end of FALCON. This is because FALCON is initialized by approximating the StConv kernel using EHP, and using the same BN and ReLU setting as StConv makes it easier for FALCON to approximate StConv. We initialize the pointwise convolution kernel and the depthwise convolution kernel of FALCON by approximating the pretrained standard convolution kernel using EHP. The approximation process is as follows: 1) we first initialize the pointwise and depthwise convolution kernels randomly, and 2) the two kernels are updated using gradient descent such that the mean squared error between their EHP product and the standard convolution kernel is minimized. Rank-k FALCON uses the same initialization method. 4.3 ACCURACY VS. COMPRESSION We evaluate the accuracy and the compression rate of FALCON and competitors. Table 2 shows the results on the four image datasets. Note that FALCON or FALCON-branch provides the highest accuracy in 7 out of 8 cases while using a similar or smaller number of parameters than the competitors. Specifically, FALCON and FALCON-branch achieve up to 8× compression rates with less than 1% accuracy drop compared to that of the standard convolution (StConv). Figure 3 shows the tradeoff between accuracy and the number of parameters. Note that FALCON and FALCON-branch show the best tradeoff (closest to the “best” point) between accuracy and compression rate, giving the highest accuracy with similar compression rates. Ablation study. We perform an ablation study on two components: 1) the order of the depthwise and pointwise convolutions, and 2) the initialization. We observe that with a similar number of parameters, 1) FALCON and FALCON without initialization always result in better accuracy than EHP-in and DSConv, respectively, and 2) EHP-in always results in better accuracy than DSConv. Furthermore, FALCON results in better accuracy than FALCON without initialization in 6 out of 8 cases. These observations support our claims in Section 3.3 that 1) EHP-out (FALCON) is more efficient than EHP-in, and 2) fitting and initializing the kernels using EHP improves accuracy. Additionally, we observe that overall performance is more sensitive to the ordering than to the initialization. 4.4 ACCURACY VS. COMPUTATION We evaluate the accuracy and the amount of computation of FALCON and competitors. We use the number of multiply-add floating-point operations (FLOPs) needed for forwarding one image through a model as the metric of computation. Table 2 also shows the accuracies and the numbers of FLOPs of the methods on the four image datasets. Note that FALCON or FALCON-branch provides the highest accuracy in 7 out of 8 cases while using similar FLOPs as the competitors do. Compared to StConv, FALCON and FALCON-branch achieve up to 8× FLOPs reduction across different models on different datasets. Figure 4 shows the tradeoff between accuracy and the number of FLOPs. Note that FALCON and FALCON-branch show the best tradeoff (closest to the “best” point) between accuracy and computation, giving the highest accuracy with a similar number of FLOPs.
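Returning to the initialization procedure described in Section 4.2, here is a minimal sketch of fitting D and P to a pretrained standard kernel K by gradient descent; this is our own illustration, and the optimizer, learning rate, and step count are assumptions rather than the authors' settings.

import torch

def fit_falcon_kernels(K: torch.Tensor, steps: int = 2000, lr: float = 0.01):
    # Fit D (d x d x N) and P (N x M) so that the transposed EHP
    # approximates the standard kernel K (d x d x M x N), i.e.
    # K[i, j, m, n] ~ D[i, j, n] * P[n, m].
    d, _, M, N = K.shape
    D = torch.randn(d, d, N, requires_grad=True)
    P = torch.randn(N, M, requires_grad=True)
    opt = torch.optim.Adam([D, P], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # einsum with output order ijmn realizes TT_(1,2,4,3)(D EHP P).
        K_hat = torch.einsum('ijn,nm->ijmn', D, P)
        loss = (K_hat - K).pow(2).mean()   # mean squared error
        loss.backward()
        opt.step()
    return D.detach(), P.detach()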
4.5 RANK-K FALCON We evaluate the performance of rank-k FALCON by increasing the rank k and monitoring the changes in the numbers of parameters and FLOPs. In Table 3, we observe three trends as the rank k increases: 1) the accuracy becomes higher than that of rank-1 FALCON, 2) the number of parameters increases, and 3) the number of floating-point operations (FLOPs) increases. Although the rank k that gives the best tradeoff between accuracy and compression/computation reduction varies, rank-k FALCON improves the accuracy of FALCON in all cases. In particular, we note that rank-k FALCON often gives even higher accuracy than the standard convolution, while using a smaller number of parameters and FLOPs. For example, rank-3 FALCON applied to VGG19 on the CIFAR100 dataset shows 1.31 percentage points higher accuracy compared to the standard convolution, with a 2.8× smaller number of parameters and a 2.8× smaller number of FLOPs. Thus, rank-k FALCON is a versatile method to further improve the accuracy of FALCON while sacrificing a bit of compression and computation. 5 RELATED WORK Over the past several years, many studies have focused on compressing and accelerating DNNs to reduce model size, running time, and energy consumption, since DNNs are widely believed to be over-parameterized. Weight-sharing (Han et al. (2016); Ullrich et al. (2017); Chen et al. (2015); Choi et al. (2017); Agustsson et al. (2017)) is a common compression method which stores only the assignments and centroids of weights. While using the model, weights are loaded according to the assignments and centroids. Pruning (Han et al. (2014); Li et al. (2016)) aims at removing useless weights or setting them to zero. Although weight-sharing and pruning can significantly reduce the model size, they are not efficient in reducing the amount of computation. Quantizing (Courbariaux et al. (2015; 2016); Hou et al. (2017); Zhu et al. (2017)) the model into binary or ternary weights reduces the model size and computation simultaneously: replacing arithmetic operations with bit-wise operations remarkably accelerates the model. Layer-wise approaches are also employed to efficiently compress models. A typical example of such approaches is low-rank approximation (Lebedev et al. (2015); Kim et al. (2016b); Novikov et al. (2015)); it treats the weights as a tensor and uses general tensor approximation methods to compress the tensor. To reduce computation, the approximation method should be carefully chosen, since some approximation methods may increase the computation of the model. Compressing existing models has limitations since they are originally designed to be deep and large to give high accuracy. A recent trend is to design brand new architectures that are small and efficient. Mobilenet (Howard et al. (2017)), MobilenetV2 (Sandler et al. (2018)), Shufflenet (Zhang et al. (2017)), and ShufflenetV2 (Ma et al. (2018)) are the most representative approaches, and they use depthwise convolution and pointwise convolution as building blocks for designing convolution layers. Our proposed FALCON gives a thorough interpretation of depthwise convolution and pointwise convolution, and applies them to model compression, giving the best accuracies with regard to compression and computation. 6 CONCLUSION We propose FALCON, an accurate and lightweight convolution method to replace standard convolution.
By interpreting existing convolution methods based on depthwise separable convolution using the EHP operation, FALCON and its generalized version rank-k FALCON provide accurate and efficient compression of CNN. We also propose FALCON-branch, a variant of FALCON integrated into a branch architecture of CNN for model compression. Extensive experiments show that FALCON and its variants give the best accuracy for a given number of parameters or amount of computation, outperforming other convolution models based on depthwise separable convolution. Compared to the standard convolution, FALCON and FALCON-branch give up to 8× compression and 8× computation reduction while giving similar accuracy. We also show that rank-k FALCON provides better accuracy than the standard convolution, while using smaller numbers of parameters and computations. A CONVOLUTIONAL NEURAL NETWORK Convolutional Neural Network (CNN) is a type of deep neural network used mainly for structured data. CNN uses the convolution operation in convolution layers. In the following, we discuss CNN applied to typical image data with RGB channels. Each convolution layer has three components: input feature maps, a convolution kernel, and output feature maps. The input feature maps $I \in \mathbb{R}^{H\times W\times M}$ and the output feature maps $O \in \mathbb{R}^{H'\times W'\times N}$ are 3-dimensional tensors, and the convolution kernel $\mathbf{K} \in \mathbb{R}^{D\times D\times M\times N}$ is a 4-dimensional tensor. The convolution operation is defined as:

$$O_{h',w',n} = \sum_{i=1}^{D}\sum_{j=1}^{D}\sum_{m=1}^{M} \mathbf{K}_{i,j,m,n} \cdot I_{h_i,w_j,m} \quad (3)$$

where the relations between the height $h_i$ and width $w_j$ of the input, and the height $h'$ and width $w'$ of the output, are as follows:

$$h_i = (h'-1)s + i - p \quad \text{and} \quad w_j = (w'-1)s + j - p \quad (4)$$

where $s$ is the stride size and $p$ is the padding size. The third and fourth dimensions of the convolution kernel $\mathbf{K}$ must match the number $M$ of input channels and the number $N$ of output channels, respectively. The convolution kernel $\mathbf{K}$ can be seen as $N$ 3-dimensional filters $\mathbf{F}_n \in \mathbb{R}^{D\times D\times M}$. Each filter $\mathbf{F}_n$ in kernel $\mathbf{K}$ performs the convolution operation while sliding over all spatial locations of the input feature maps, and each filter produces one output feature map. B PROOFS OF THEOREMS B.1 PROOF OF THEOREM 1 Proof. From the definition of EHP, $\mathbf{K}_{i,j,m,n} = \mathbf{D}_{i,j,m} \cdot \mathbf{P}_{m,n}$. Based on equation 3, we replace the kernel $\mathbf{K}_{i,j,m,n}$ with the depthwise convolution kernel $\mathbf{D}_{i,j,m}$ and the pointwise convolution kernel $\mathbf{P}_{m,n}$:

$$O_{h',w',n} = \sum_{i=1}^{D}\sum_{j=1}^{D}\sum_{m=1}^{M} \mathbf{D}_{i,j,m} \cdot \mathbf{P}_{m,n} \cdot I_{h_i,w_j,m}$$

where $I_{h_i,w_j,m}$ is the $(h_i,w_j,m)$-th entry of the input. We split the above equation into the following two equations:

$$O'_{h',w',m} = \sum_{i=1}^{D}\sum_{j=1}^{D} \mathbf{D}_{i,j,m} \cdot I_{h_i,w_j,m} \quad (5)$$

$$O_{h',w',n} = \sum_{m=1}^{M} \mathbf{P}_{m,n} \cdot O'_{h',w',m} \quad (6)$$

where $O' \in \mathbb{R}^{H'\times W'\times M}$ is an intermediate tensor. Note that equation 5 and equation 6 correspond to the depthwise convolution and the pointwise convolution, respectively. Therefore, the output $O_{h',w',n}$ is equal to the output of applying the depthwise separable convolution used in Mobilenet. B.2 PROOF OF THEOREM 2 Proof. From equation 3, we replace the kernel $\mathbf{K}_{i,j,m,n}$ with the pointwise convolution kernel $\mathbf{P}$ and the depthwise convolution kernel $\mathbf{D}$:

$$O_{h',w',n} = \sum_{m=1}^{M}\sum_{i=1}^{D}\sum_{j=1}^{D} \mathbf{P}_{n,m} \cdot \mathbf{D}_{i,j,n} \cdot I_{h_i,w_j,m}$$

where $I_{h_i,w_j,m}$ is the $(h_i,w_j,m)$-th entry of the input $I$. We split the above equation into the following two equations:

$$O'_{h_i,w_j,n} = \sum_{m=1}^{M} \mathbf{P}_{n,m} \cdot I_{h_i,w_j,m} \quad (7)$$

$$O_{h',w',n} = \sum_{i=1}^{D}\sum_{j=1}^{D} \mathbf{D}_{i,j,n} \cdot O'_{h_i,w_j,n} \quad (8)$$

where $I$, $O'$, and $O$ are the input, intermediate, and output tensors of the convolution layer, respectively. Note that equation 7 and equation 8 correspond to the pointwise convolution and the depthwise convolution, respectively.
Therefore, the output $O_{h',w',n}$ is equal to the output of applying FALCON. C QUANTITATIVE ANALYSIS In this section, we evaluate the compression and computation reduction of FALCON and rank-k FALCON. All analysis is based on one convolution layer. The comparison of the numbers of parameters and FLOPs of FALCON and other competitors is in Appendix G. C.1 FALCON We analyze the compression and computation reduction rates of FALCON in Theorems 3 and 4. Theorem 3. The Compression Rate (CR) of FALCON is given by

$$CR = \frac{\#\text{ of parameters in standard convolution}}{\#\text{ of parameters in FALCON}} = \frac{D^2MN}{MN + D^2N}$$

where $D^2$ is the size of the standard kernel, $M$ is the number of input channels, and $N$ is the number of output channels. Proof. The standard convolution kernel has $D^2MN$ parameters. FALCON includes a pointwise convolution and a depthwise convolution, which require $MN$ and $D^2N$ parameters, respectively. Thus, the compression rate of FALCON is $CR = \frac{D^2MN}{MN + D^2N}$. Theorem 4. The Computation Reduction Rate (CRR) of FALCON is given by

$$CRR = \frac{\#\text{ of FLOPs in standard convolution}}{\#\text{ of FLOPs in FALCON}} = \frac{H'W'D^2MN}{HWMN + H'W'D^2N}$$

where $H'$ and $W'$ are the height and width of the output, respectively, and $H$ and $W$ are the height and width of the input, respectively. Proof. The standard convolution operation requires $H'W'D^2MN$ FLOPs (Molchanov et al. (2017)). FALCON includes a pointwise convolution and a depthwise convolution. The pointwise convolution has kernel size $D = 1$ with stride $s = 1$ and no padding, so the intermediate tensor $O'$ has the same height and width as the input feature maps; thus, the pointwise convolution needs $HWMN$ FLOPs. The depthwise convolution applies each filter to a single input channel, so it needs $H'W'D^2N$ FLOPs. The total FLOPs of FALCON is $HWMN + H'W'D^2N$; thus, the computation reduction rate of FALCON is $CRR = \frac{H'W'D^2MN}{HWMN + H'W'D^2N}$. C.2 RANK-K FALCON We analyze the compression and computation reduction rates of rank-k FALCON in Theorem 5. Theorem 5. The Compression Rate ($CR_k$) and Computation Reduction Rate ($CRR_k$) of rank-k FALCON are

$$CR_k = \frac{CR}{k} \qquad CRR_k = \frac{CRR}{k}$$

Proof. The numbers of parameters and FLOPs increase $k$ times since rank-k FALCON duplicates FALCON $k$ times. Thus, the compression rate and the computation reduction rate are $CR_k = \frac{CR}{k}$ and $CRR_k = \frac{CRR}{k}$. D DESCRIPTION OF RELATED CONVOLUTION UNITS MobileNetV2. Sandler et al. (2018) proposed a new convolution architecture, which we call MobileConvV2, in their MobilenetV2 model. MobileConvV2 consists of three sub-layers as shown in Figure 5(b). The first and the third sub-layers are pointwise convolutions for adjusting the number of channels. The first sub-layer expands the number of channels from $M$ to $tM$, where $t$ is an expansion ratio. The second sub-layer is a $D\times D$ depthwise convolution. Since depthwise convolution cannot change the number of channels, the third sub-layer adjusts the number of channels from $tM$ to $N$. There is a shortcut connection between the input and the output of the third sub-layer to facilitate the flow of gradients across multiple layers. MobileConvV2 needs $tM^2 + D^2tM + tMN$ parameters and $tHWM^2 + tH'W'D^2M + tH'W'MN$ FLOPs. ShuffleNet. Zhang et al. (2017) proposed a computation-efficient CNN architecture named Shufflenet. As shown in Figure 5(c), each unit of Shufflenet (we call it ShuffleUnit) consists of three sub-layers (a first group pointwise convolution, a depthwise convolution, and a second group pointwise convolution) as well as a shortcut.
The number of channels in the depthwise convolution is $\frac{1}{4}$ of the number of output channels $N$. ShuffleUnit uses group convolution in the two pointwise convolution layers to reduce parameters and FLOPs. However, it is hard to exchange information among groups when group convolutions are stacked. To deal with this problem, ShuffleUnit adds a channel shuffle layer after the first pointwise group convolution. The channel shuffle layer rearranges the order of the channels, making it possible to obtain information from different groups. The number of groups is denoted as $g$. ShuffleUnit needs $\frac{1}{4g}MN + \frac{1}{4}D^2N + \frac{1}{4g}N^2$ parameters and $\frac{1}{4g}HWMN + \frac{1}{4}H'W'D^2N + \frac{1}{4g}H'W'N^2$ FLOPs. ShufflenetV2. Ma et al. (2018) proposed a practically efficient CNN architecture, ShufflenetV2. As shown in Figure 5(d), each unit of ShufflenetV2 (we call it ShuffleUnitV2) consists of two branches. The left branch consists of two pointwise convolutions and one depthwise convolution, like MobileConvV2, and the right branch is an identity operation. Note that the outputs of both branches maintain the number of channels as $M/2$. The final output is produced by concatenating and shuffling the output tensors from both branches. ShuffleUnitV2 needs $\frac{1}{2}(M^2 + D^2M)$ parameters and $\frac{1}{2}HW(M^2 + D^2M)$ FLOPs. E GENERALITY OF EHP We show that EHP is a key operation for understanding other convolution architectures based on depthwise separable convolution. MobilenetV2. As shown in Figure 5(b), MobilenetV2 has an additional pointwise convolution before the depthwise convolution of Mobilenet: one layer of MobilenetV2 consists of two pointwise convolutions and one depthwise convolution. From another point of view, MobilenetV2 can be understood as FALCON followed by an additional pointwise convolution; i.e., MobilenetV2 performs the EHP operation as FALCON does, and performs an additional pointwise convolution after that. Shufflenet. As shown in Figure 5(c), Shufflenet consists of depthwise convolution and pointwise group convolution, a variant of pointwise convolution. We represent the convolution layer of Shufflenet using EHP as follows. Let $g$ be the number of groups. We divide the standard convolution kernel $\mathbf{K} \in \mathbb{R}^{D\times D\times M\times N}$ into $g$ group standard convolution kernels. Then, the relation of the $g$-th group standard convolution kernel $\mathbf{K}^g \in \mathbb{R}^{D\times D\times \frac{M}{g}\times \frac{N}{g}}$ to the $g$-th depthwise convolution kernel $\mathbf{D}^g \in \mathbb{R}^{D\times D\times \frac{M}{g}}$ and the $g$-th pointwise group convolution kernel $\mathbf{P}^g \in \mathbb{R}^{\frac{M}{g}\times \frac{N}{g}}$ is

$$\mathbf{K}^g = \mathbf{D}^g \odot_E \mathbf{P}^g \quad \text{s.t.} \quad \mathbf{K}^g_{i,j,m_g,n_g} = \mathbf{D}^g_{i,j,m_g} \cdot \mathbf{P}^g_{m_g,n_g}$$

where $m_g = 1, 2, \ldots, \frac{M}{g}$ and $n_g = 1, 2, \ldots, \frac{N}{g}$. Each group standard convolution is equivalent to the combination of a depthwise convolution and a pointwise convolution, and is thus easily expressed with EHP as in Mobilenet. Therefore, each layer of Shufflenet is equivalent to a layer consisting of one group convolution followed by a standard convolution. ShufflenetV2. As shown in Figure 5(d), the left branch of ShufflenetV2 has the same convolutions as in MobilenetV2: it consists of two pointwise convolutions and one depthwise convolution. Like MobilenetV2, the left branch of ShufflenetV2 can be understood as FALCON followed by an additional pointwise convolution. F FITTING OTHER CONVOLUTION UNITS INTO MODELS DSConv. DSConv (shown in Figure 5(a)) has the architecture most similar to FALCON among the competitors, and thus DSConv has nearly the same number of parameters as FALCON. As in the setting of FALCON, the presence of BN and ReLU at the end of DSConv depends on that of StConv. MobileConvV2.
In the MobileConvV2 architecture (shown in Figure 5(b)), we adjust the numbers of parameters and FLOPs by changing the expansion ratio $t$ as described in Appendix D, denoted as ‘MobileConvV2-t’. We choose $t = 0.5$ as the baseline MobileConvV2 to compare with FALCON, since the two pointwise convolutions bring a lot of parameters and FLOPs to MobileConvV2. ShuffleUnit. In ShuffleUnit (shown in Figure 5(c)), we adjust the numbers of parameters and FLOPs by changing the width multiplier $\alpha$ (Howard et al. (2017)) and the number of groups $g$, denoted as ‘ShuffleUnit $\alpha\times$(g=g)’. Note that the width multiplier is used to adjust the number of input channels $M$ and the number of output channels $N$ of a convolution layer; if the width multiplier is $\alpha$, the numbers of input and output channels become $\alpha M$ and $\alpha N$, respectively. While experimenting with ResNet, we find that ShuffleUnit does not cooperate well with ResNet: ResNet34 with ShuffleUnit does not converge. We suspect that the residual block and ShuffleUnit conflict with each other because of redundant residual connections: the gradient may not find the right path towards previous layers. For this reason, we delete the shortcut of all residual blocks in ResNet34 when using ShuffleUnit. ShuffleUnitV2. In ShuffleUnitV2 (shown in Figure 5(d)), we also adjust the numbers of parameters and FLOPs by changing the width multiplier $\alpha$, denoted as ‘ShuffleUnitV2 $\alpha\times$’. All other operations of ShuffleUnitV2 stay the same as in Ma et al. (2018). G PARAMETERS AND FLOPS We summarize the numbers of parameters and FLOPs for FALCON and competitors in Table 5.
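As a quick sanity check of the rates derived in Appendix C, the following snippet (ours, not part of the paper) evaluates CR and CRR from Theorems 3-5 for a representative layer; the example sizes are arbitrary.

def falcon_rates(D, M, N, H, W, Hp, Wp, k=1):
    # Compression rate and computation reduction rate of rank-k FALCON
    # for one layer (Theorems 3-5).
    cr = (D * D * M * N) / (M * N + D * D * N)
    crr = (Hp * Wp * D * D * M * N) / (H * W * M * N + Hp * Wp * D * D * N)
    return cr / k, crr / k

# A 3x3 layer mapping 128 -> 256 channels on 32x32 maps with stride 1:
print(falcon_rates(D=3, M=128, N=256, H=32, W=32, Hp=32, Wp=32))
# -> approximately (8.41, 8.41), i.e., roughly 8x fewer parameters and FLOPs.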
1. What is the focus of the paper, and what are the authors' main contributions? 2. What are the strengths of the proposed approach, particularly in terms of its efficiency and accuracy? 3. What are the weaknesses of the paper, especially regarding the experiment design and comparisons with other works? 4. How does the reviewer assess the clarity and organization of the paper's content? 5. Are there any minor concerns or suggestions for improvement that the reviewer has regarding the paper?
Review
Review Overview: The paper studies fast and lightweight convolution for efficient compression while retaining the original accuracy. The authors interpret existing convolution methods based on depthwise separable convolution and derive FALCON. They claim that FALCON mathematically approximates the standard convolution kernel and achieves a better accuracy/efficiency tradeoff. They conduct extensive experiments to show that FALCON-based methods 1) outperform previous state-of-the-art methods and 2) achieve up to 8× compression and computation reduction while maintaining similar test accuracy. Strength Bullets: 1. The authors give a detailed interpretation of the depthwise separable convolution method via EHP. They then propose the novel FALCON, which has better accuracy than competitors while having similar compression and computation reduction rates. 2. The paper is well organized and easy to read. The authors conduct extensive experiments to show that FALCON-based methods not only surpass other depthwise-separable-convolution models but also give up to 8× efficiency gains while retaining similar accuracy. Weakness Bullets: 1. The authors need to provide more ablation experiments for the two components of FALCON: 1) aligning depthwise and pointwise convolution, and 2) initializing kernels. Moreover, the claimed benefit of the order of performing depthwise convolution and pointwise convolution also needs experimental support. 2. I would like to see a comparison of the compression rate with previous filter compression methods, e.g., Soft weight-sharing (ICLR'17), Deep k-Means (ICML'18), and so on. 3. [Minor] Even if there is a computation reduction (FLOPs), it is not interesting enough by itself. I am very curious about how much efficiency gain (i.e., latency) the software implementation of FALCON provides. Recommendation: Although there are a few flaws in the experiment design, this is still a novel and good technique. This is a weak accept.
ICLR
Title FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN Abstract How can we efficiently compress Convolutional Neural Networks (CNN) while retaining their accuracy on classification tasks? A promising direction is based on depthwise separable convolution which replaces a standard convolution with a depthwise convolution and a pointwise convolution. However, previous works based on depthwise separable convolution are limited since 1) they are mostly heuristic approaches without a precise understanding of their relations to standard convolution, and 2) their accuracies do not match that of the standard convolution. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON is derived by interpreting existing convolution methods based on depthwise separable convolution using EHP, our proposed mathematical formulation to approximate the standard convolution kernel. Such interpretation leads to developing a generalized version rank-k FALCON which further improves the accuracy while sacrificing a bit of compression and computation reduction rates. In addition, we propose FALCON-branch by fitting FALCON into the previous state-of-the-art convolution unit ShuffleUnitV2 which gives even better accuracy. Experiments show that FALCON and FALCON-branch outperform 1) existing methods based on depthwise separable convolution and 2) standard CNN models by up to 8× compression and 8× computation reduction while ensuring similar accuracy. We also demonstrate that rank-k FALCON provides even better accuracy than standard convolution in many cases, while using a smaller number of parameters and floating-point operations. 1 INTRODUCTION How can we efficiently reduce size and energy consumption of Convolutional Neural Networks (CNN) while maintaining their accuracy on classification tasks? Nowadays, CNN is widely used in various areas including computer vision (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2017)), natural language processing (Yin et al. (2016)), recommendation system (Kim et al. (2016a)), etc. In addition, model compression has become an important technique due to an increase in the model capacity and the number of parameters in CNN. One recent and promising direction for compressing CNN is depthwise separable convolution (Sifre (2014)) which replaces standard convolution with depthwise and pointwise convolutions. The depthwise convolution applies a separate 2D convolution kernel for each input channel, and the pointwise convolution changes the channel size using 1×1 convolution (details in Section 2.1). Several recent methods (Howard et al. (2017); Sandler et al. (2018); Zhang et al. (2017)) based on depthwise separable convolution show reasonable performances in terms of compression and computation reduction. However, existing approaches based on depthwise separable convolution have several crucial limitations. First, they are heuristic methods, and their relation to the standard convolution is not clearly identified. Second, due to their heuristic nature, generalizing the methods is difficult. Third, although they give reasonable compression and computation reduction, their accuracy is not sufficient compared to that of standard-convolution-based models. In this paper, we propose FALCON, an accurate and lightweight method for compressing CNN. FALCON overcomes the limitations of the previous methods based on the depthwise separable convolution using the following two main ideas. 
First, we precisely define the relationship between the standard convolution and the depthwise separable convolution using EHP (Extended Hadamard Product), which is our proposed mathematical formulation to correlate the standard convolution kernel with the depthwise convolution kernel and the pointwise convolution kernel. We then design FALCON by fine-tuning and reordering the results of EHP to improve the accuracy of convolution operations. Second, based on the precise definition, we generalize the FALCON to design rank-k FALCON, which further improves accuracy while sacrificing a bit of compression and computation reduction rates. We also propose FALCON-branch by fitting FALCON into the state-of-the-art convolution unit ShuffleUnitV2 which gives even higher accuracy. As a result, FALCON and FALCON-branch provide a superior accuracy compared to other methods based on depthwise separable convolution, with similar compression and computation reduction rates, and rank-k FALCON further improves accuracy, outperforming even the original convolution in many cases. Our contributions are summarized as follows: • Generalization. We analyze and generalize depthwise separable convolution to our proposed EHP (Extended Hadamard Product) operation. This generalization enables a precise understanding of the relationship between depthwise separable convolution and standard convolution. Furthermore, with fine-tuning operations, it leads to our proposed method FALCON. • Algorithm. We propose FALCON, a CNN compression method based on depthwise separable convolution. FALCON is carefully designed to compress CNN with little accuracy loss. We also propose rank-k FALCON to further improve the accuracy with a little sacrifice in compression and computation reduction rates. FALCON can be easily integrated into other architectures, and we propose FALCON-branch which combines FALCON with a branch architecture for a better performance. We theoretically analyze the compression and computation reduction rates of FALCON and other competitors. • Experiments. We perform extensive experiments and show that FALCON 1) outperforms other state-of-the-art methods based on depthwise separable convolution for compressing CNN, and 2) provides up to 8× compression and computation reduction compared to the standard convolution while giving similar accuracies. Furthermore, we show that rank-k FALCON provides even better accuracy than the standard convolution in many cases while using a smaller number of parameters and floating-point operations. The rest of this paper is organized as follows. Section 2 explains preliminaries. Section 3 describes our proposed method FALCON. Section 4 presents experimental results. After discussing related works in Section 5, we conclude in Section 6. 2 PRELIMINARY We describe preliminaries on depthwise separable convolution and methods based on depthwise separable convolution. Symbols used in this paper are described in Table 4 of Appendix. 2.1 DEPTHWISE SEPARABLE CONVOLUTION Depthwise Separable Convolution (DSConv) consists of two sub-layers: depthwise convolution and pointwise convolution. The architecture of each convolution layer in DSConv is illustrated in Figure 5(a). Depthwise convolution (DWConv) kernel consists of several D×D 2-dimensional filters. The number of 2-dimension filters is the same as that of input feature maps. Each filter is applied on the corresponding input feature map, and produces an output feature map. 
Pointwise convolution (PWConv), known as 1× 1 convolution, is a standard convolution with kernel size 1. DSConv is defined as follows: O′h′,w′,m = D∑ i=1 D∑ j=1 Di,j,m · Ihi,wj ,m (1) Oh′,w′,n = M∑ m=1 Pm,n ·O′h′,w′,m (2) where Di,j,m and Pm,n are depthwise convolution kernel and pointwise convolution kernel, respectively. O′h′,w′,m ∈ RH ′×W ′×M denotes intermediate feature maps. DSConv performs DWConv on input feature maps Ihi,wj ,m using equation 1, and generates intermediate feature maps O ′ h′,w′,m. Then, DSConv performs PWConv on O′h′,w′,m using equation 2, and generates output feature maps Oh′,w′,n. 2.2 METHODS BASED ON DEPTHWISE SEPARABLE CONVOLUTION Several CNN methods have been proposed based on Depthwise Separable Convolution (DSConv) recently. DSConv was first introduced by Sifre (2014). Chollet & Franois (2016) built Xception module using DSConv in a few layers. Howard et al. (2017) built Mobilenet with all convolution layers replaced by DSConv. Sandler et al. (2018) built MobileNetV2 with inverted bottleneck block, denoted as MobileConvV2 in this paper. Zhang et al. (2017) built a CNN model with Shufflenet Unit, denoted as ShuffleUnit in this paper. Ma et al. (2018) improved Shufflenet by designing ShufflenetV2 Unit, denoted as ShuffleUnitV2 in this paper. The architecture and detailed descriptions are in Figure 5 and Appendix D. 3 PROPOSED METHOD We describe FALCON, our proposed method for compressing CNN. We first define Extended Hadamard Product (EHP), a key mathematical formulation to generalize depthwise separable convolution, in Section 3.1. We interpret depthwise separable convolution used in Mobilenet using EHP in Section 3.2. We propose FALCON in Section 3.3 and explain why FALCON can replace standard convolution. Then, we propose rank-k FALCON, which extends the basic FALCON, in Section 3.4. We show that FALCON can be easily integrated into a branch architecture to compress it with little sacrifice of accuracy in Section 3.5. Finally, we theoretically analyze the performance of FALCON in Appendix C. 3.1 EXTENDED HADAMARD PRODUCT (EHP) We define Extended Hadamard Product (EHP), a generalized elementwise product for two operands of different shapes, to generalize the formulation of the relation between standard convolution and depthwise separable convolution. Before generalizing the formulation, we give an example of formulating the relation between standard convolution and depthwise separable convolution. Suppose we have a 4-order standard convolution kernel K ∈ RI×J×M×N , a 3-order depthwise convolution kernel D ∈ RI×J×M , and a pointwise convolution kernel P ∈ RM×N . Let Ki,j,m,n be (i, j,m, n)-th element of K, Di,j,m be (i, j,m)-th element of D, and Pm,n be (m,n)-th element of P. Then, it can be shown that applying depthwise convolution with D and pointwise convolution with P is equivalent to applying standard convolution kernel K where Ki,j,m,n = Di,j,m · Pm,n (see Section 3.2 for detailed proof). To formally express this relation, we define Extended Hadamard Product (EHP) as follows. Definition 1 (Extended Hadamard Product). Given p-order tensor D ∈ RI1×···×Ip−1×M and qorder tensor P ∈ RM×J1×···×Jq−1 , the Extended Hadamard Product D E P of D and P is defined to be the tensor K ∈RI1×...×Ip−1×M×J1×...×Jq−1 where the last axis of D and the first axis of P are the common axes such that Ki1,...,ip−1,m,j1,...,jq−1 = Di1,...,ip−1,m ·Pm,j1,...,jq−1 for all elements of K. 
Contrary to Hadamard Product which is defined only if the shapes of the two operands are the same, Extended Hadamard Product (EHP) deals with tensors of different shapes. Now, we define a special case of Extended Hadamard Product (EHP) for a third-order tensor and a matrix. Definition 2 (Extended Hadamard Product for a third order tensor and a matrix). Given a thirdorder tensor D ∈ RI×J×M and a matrix P ∈ RM×N , the Extended Hadamard Product D E P is defined to be the tensor K ∈RI×J×M×N where the third axis of the tensor D and the first axis of the matrix P are the common axes such that Ki,j,m,n = Di,j,m ·Pm,n. for all elements of K. We will see that the depthwise separable convolution in Mobilenet can be easily expressed with EHP in Section 3.2; we also propose a new architecture FALCON based on EHP in Section 3.3. EHP is also a core operation that helps us understand other convolution architectures including MobilenetV2 and Shufflenet (see Appendix E). 3.2 DEPTHWISE SEPARABLE CONVOLUTION AND EHP In this section, we discuss how to represent the convolution layer of Mobilenet as Extended Hadamard Product (EHP) described in Section 3.1. We interpret the depthwise separable convolution, which is the convolution of Mobilenet, as an application of EHP. This interpretation leads to designing a better convolution architecture FALCON in Section 3.3. We represent the relationship between standard convolution kernel K ∈ RD×D×M×N and depthwise separable convolution consisting of depthwise convolution kernel D ∈ RD×D×M and pointwise convolution kernel P ∈ RM×N using one EHP operation. Figure 2(a) illustrates the relationship between standard convolution and depthwise separable convolution used in Mobilenet. We show that applying depthwise separable convolution with D and P is equivalent to applying standard convolution with a kernel K which is constructed from D and P. Theorem 1. Applying depthwise separable convolution with depthwise convolution kernel D ∈ RD×D×M and pointwise convolution kernel P ∈ RM×N is equivalent to applying standard convolution with kernel K = D E P. Proof. See Appendix B.1. 3.3 FAST AND LIGHTWEIGHT CONVOLUTION (FALCON) We propose FALCON (FAst and Lightweight CONvolution), a novel lightweight convolution that replaces standard convolution. FALCON is an efficient method with fewer parameters and computations than those that the standard convolution requires. In addition, FALCON has better accuracy than competitors while having similar compression and computation reduction rates. The main idea of FALCON is 1) to carefully align depthwise and pointwise convolutions, and 2) initialize kernels using the convolution kernels of the trained standard model. We observe that a typical convolution has more output channels than input channels. In such a setting, performing depthwise convolution after pointwise convolution would allow the depthwise convolution to extract more features from richer feature space; on the other hand, performing pointwise convolution after depthwise convolution as in Mobilenet only combines features extracted from a limited feature space. Based on the observation, FALCON first applies pointwise convolution to generate an intermediate tensor O′ ∈ RH×W×N and then applies depthwise convolution. We represent the relationship between standard convolution kernel K ∈ RD×D×M×N and FALCON by applying an EHP operation on pointwise convolution kernel P ∈ RN×M and depthwise convolution kernel D ∈ RD×D×N in Figure 2(b). 
In FALCON, the kernel K is represented by EHP of D and P as follows: K = TT(1,2,4,3)(D E P) s.t. Ki,j,m,n = Pn,m ·Di,j,n. where TT(1,2,4,3) indicates tensor transpose operation to permute the third and the fourth dimensions of a tensor. Note that the common axis is the output channel axis of the standard convolution, unlike EHP for depthwise separable convolution where the common axis is the input channel axis of the standard convolution. As in Section 3.2, we show that applying FALCON is equivalent to applying standard convolution with a specially constructed kernel. Theorem 2. FALCON which applies pointwise convolution with kernel P ∈ RN×M and then depthwise convolution with kernel D ∈ RD×D×N is equivalent to applying standard convolution with kernel K = TT(1,2,4,3)(D E P). Proof. See Appendix B.2. Based on the equivalence, we initialize pointwise convolution and depthwise convolution kernels D and P of FALCON by fitting them to the convolution kernels of the trained standard model; i.e., D,P = argminD′,P′ ||K−TT(1,2,4,3)(D′ E P′)||F . After pointwise convolution and depthwise convolution, we add batch-normalization and ReLU activation function as shown in Figure 1(b). We note that FALCON significantly reduces the numbers of parameters and FLOPs compared to standard convolution, which we discuss at Appendix C. 3.4 RANK-k FALCON We propose rank-k FALCON, an extended version of FALCON that improves accuracy while sacrificing a bit of compression and computation reduction rates. The main idea is to perform k independent FALCON operations and sum up the result. Then, we apply batch-normalization (BN) and ReLU activation function to the summed result. Since each FALCON operation requires independent parameters for pointwise convolution and depthwise convolution, the number of parameters increases and thus the compression and the computation reduction rates decrease; however, it improves accuracy by enlarging the model capacity. We formally define the rank-k FALCON with EHP as follows. Definition 3 (Rank-k FALCON with Extended Hadamard Product). Rank-k FALCON expresses standard convolution kernel K ∈ RD×D×M×N as EHP of depthwise convolution kernel D(i) ∈ RD×D×N and pointwise convolution kernel P(i) ∈ RN×M for i = 1, 2, ..., k such that K = k∑ i=1 TT(1,2,4,3)(D (i) E P(i)) s.t. Ki,j,m,n = k∑ i=1 P(i)n,m ·D (i) i,j,n Figure 2(c) illustrates the relation between standard convolution and rank-k FALCON. For each i = 1, 2, ..., k, we construct the tensor K(i) using EHP of the depthwise convolution kernel D(i) and the pointwise convolution kernel P(i). Then, we construct the standard kernel K by the elementwise sum of the tensors K(i) for all i. 3.5 FALCON-BRANCH FALCON can be easily integrated into a CNN architecture called standard convolution operation with a branch (StConv-branch), which consists of two branches: standard convolution on the left branch and a residual connection on the right branch (see Figure 1(c)). Ma et al. (2018) improved the performance of CNN by applying depthwise and pointwise convolutions on the left branch of StConv-branch. Since FALCON replaces standard convolution, we observe that StConv-branch can be easily compressed by applying FALCON on the left branch. StConv-branch first splits an input in half along the depth dimension. A standard convolution operation is applied to one half, and no operation to the other half. The two are concatenated along the depth dimension, and an output is produced by shuffling the channels of the concatenated tensor. 
FALCON-branch (see Figure 1(d)) is constructed by replacing the standard convolution branch (left branch) of StConv-branch with FALCON. Advantages of FALCON-branch are that 1) the branch architecture improves the efficiency since convolutions are applied to only half of input feature maps and that 2) FALCON further compresses the left branch effectively. FALCON-branch is initialized by fitting FALCON to the standard convolution kernel of the left branch of StConv-branch. 4 EXPERIMENTS We validate the performance of FALCON through extensive experiments. We aim to answer the following questions: • Q1. Accuracy vs. Compression (Section 4.3). What are the accuracy and the compression tradeoffs of FALCON, FALCON-branch, and competitors? Which method gives the best accuracy for a given compression rate? • Q2. Accuracy vs. Computation (Section 4.4). What are the accuracy and the computation tradeoffs of FALCON, FALCON-branch, and competitors? Which method gives the best accuracy for a given amount of computation? • Q3. Rank-k FALCON (Section 4.5). How do the accuracy, the number of parameters, and the number of FLOPs change as the rank k increases in FALCON? 4.1 EXPERIMENTAL SETUP Datasets. We perform image classification task on four famous datasets - CIFAR10, CIFAR100, SVHN, and ImageNet. Detailed information of these datasets is described in Table 1. Models. For CIFAR10, CIFAR100, and SVHN datasets, we choose VGG19 and ResNet34 to evaluate the performance. We shrink the sizes of both models since the sizes of these three datasets are smaller than that of Imagenet. In VGG19, we reduce the number of fully connected layers and the number of features in fully connected layers: three large fully connected layers (4096-4096-1000) in VGG19 are replaced with two small fully connected layers (512-10 or 512-100). In ResNet34, we remove the first 7 × 7 convolution layer and max-pooling layer since the input size (32 × 32) of these datasets is smaller than the input size (224 × 224) of ImageNet. On both models, we replace all standard convolution layers (except for the first convolution layer) with those of FALCON or other competitors in order to compress and accelerate the model. For ImageNet, we choose VGG16 BN (VGG16 with batch normalization after every convolution layer) and ResNet18. We use the pretrained model from Pytorch model zoo as the baseline model with standard convolution, and replace the standard convolution with other types of convolutions. Competitors. We compare FALCON and FALCON-branch with four convolution units consisting of depthwise convolution and pointwise convolution: DSConv, MobileConvV2, ShuffleUnit, and ShuffleUnitV2 (see Figure 5, Section 2.1, and Appendix D for more details). To evaluate the effectiveness of fitting depthwise and pointwise convolution kernels to standard convolution kernel, we build EHP-in which is DSConv where kernels D and P are fitted from the pretrained standard convolution kernel K; i.e., D,P = argminD′,P′ ||K−D′ E P′||F . Implementation. We construct all models using Pytorch framework. All the models are trained and tested on GeForce GTX 1080 Ti GPU. 4.2 FITTING CONVOLUTION UNIT INTO MODEL We evaluate the performance of FALCON against DSConv, MobileConvV2, ShuffleUnit, and ShuffleUnitV2. We take each standard convolution layer (StConv) as a unit, and replace StConv with those from FALCON or other competitors. We evaluate the classification accuracy, the number of parameters in the model, and the number of FLOPs needed for forwarding one image. 
We only explain how to apply FALCON in this section; the details of how to fit the other convolution units into the models are described in Appendix F.

FALCON. When replacing StConv with FALCON, we use the same setting as that of StConv. That is, if there are BN and ReLU after StConv, we add BN and ReLU at the end of FALCON; if there is only ReLU after StConv, we add only ReLU at the end of FALCON. This is because FALCON is initialized by approximating the StConv kernel using EHP, and using the same BN and ReLU setting as StConv makes it easier for FALCON to approximate StConv. We initialize the pointwise convolution kernel and the depthwise convolution kernel of FALCON by approximating the pretrained standard convolution kernel using EHP. The approximation process is as follows: 1) we first initialize the pointwise and depthwise convolution kernels randomly, and 2) the two kernels are updated using gradient descent such that the mean squared error between their EHP and the standard convolution kernel is minimized. Rank-k FALCON uses the same initialization method.
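A minimal sketch of this initialization follows, assuming PyTorch kernel layouts (a standard kernel of shape (N, M, D, D), a depthwise kernel of shape (N, 1, D, D), and a pointwise kernel of shape (N, M, 1, 1)); in these layouts the EHP of Theorem 2 reduces to a broadcasted elementwise product. The optimizer choice and step counts are illustrative assumptions.

```python
import torch

def fit_falcon_to_kernel(K, steps=1000, lr=1e-2):
    """Fit a depthwise kernel Dk and a pointwise kernel P so that their
    EHP reconstructs a pretrained standard kernel K (shape (N, M, D, D)),
    by minimizing the squared reconstruction error with gradient descent.
    A sketch; the actual training code may differ."""
    N, M, D, _ = K.shape
    Dk = torch.randn(N, 1, D, D, requires_grad=True)  # depthwise kernel
    P = torch.randn(N, M, 1, 1, requires_grad=True)   # pointwise kernel
    opt = torch.optim.Adam([Dk, P], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # EHP with the output-channel axis shared:
        # K_hat[n, m, i, j] = P[n, m] * Dk[n, i, j], via broadcasting.
        K_hat = P * Dk
        loss = ((K_hat - K) ** 2).mean()
        loss.backward()
        opt.step()
    return Dk.detach(), P.detach()
```

The fitted tensors directly match the weight layouts of the pointwise and depthwise nn.Conv2d layers of the FALCON unit, so they can be copied in as the initialization.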
4.3 ACCURACY VS. COMPRESSION

We evaluate the accuracy and the compression rate of FALCON and the competitors. Table 2 shows the results on the four image datasets. Note that FALCON or FALCON-branch provides the highest accuracy in 7 out of 8 cases while using a similar or smaller number of parameters than the competitors. Specifically, FALCON and FALCON-branch achieve up to 8× compression rates with less than a 1% accuracy drop compared to standard convolution (StConv). Figure 3 shows the tradeoff between accuracy and the number of parameters. Note that FALCON and FALCON-branch show the best tradeoff (closest to the “best” point) between accuracy and compression rate, giving the highest accuracy at similar compression rates.

Ablation study. We perform an ablation study on two components: 1) the order of depthwise and pointwise convolutions, and 2) initialization. We observe that with a similar number of parameters, 1) FALCON and FALCON without initialization always give better accuracy than EHP-in and DSConv, respectively, and 2) EHP-in always gives better accuracy than DSConv. Furthermore, FALCON gives better accuracy than FALCON without initialization in 6 out of 8 cases. These observations support our claims in Section 3.3 that 1) EHP-out (FALCON) is more efficient than EHP-in, and 2) fitting and initializing the kernels using EHP improves accuracy. Additionally, we observe that the overall performance is more sensitive to the ordering than to the initialization.

4.4 ACCURACY VS. COMPUTATION

We evaluate the accuracy and the amount of computation of FALCON and the competitors. We use the number of multiply-add floating point operations (FLOPs) needed for forwarding one image through a model as the metric of computation. Table 2 also shows the accuracies and the numbers of FLOPs of the methods on the four image datasets. Note that FALCON or FALCON-branch provides the highest accuracy in 7 out of 8 cases while using similar FLOPs as the competitors. Compared to StConv, FALCON and FALCON-branch achieve up to 8× FLOPs reduction across different models on different datasets. Figure 4 shows the tradeoff between accuracy and the number of FLOPs. Note that FALCON and FALCON-branch show the best tradeoff (closest to the “best” point) between accuracy and computation, giving the highest accuracy with a similar number of FLOPs.

4.5 RANK-k FALCON

We evaluate the performance of rank-k FALCON by increasing the rank k and monitoring the changes in the numbers of parameters and FLOPs. In Table 3, we observe three trends as the rank k increases: 1) the accuracy becomes higher than that of rank-1 FALCON, 2) the number of parameters increases, and 3) the number of floating point operations (FLOPs) increases. Although the rank k that gives the best tradeoff between accuracy and compression/computation reduction varies, rank-k FALCON improves the accuracy of FALCON in all cases. Notably, rank-k FALCON often gives even higher accuracy than standard convolution while using fewer parameters and FLOPs. For example, rank-3 FALCON applied to VGG19 on the CIFAR100 dataset shows 1.31 percentage points higher accuracy than standard convolution, with 2.8× fewer parameters and 2.8× fewer FLOPs. Thus, rank-k FALCON is a versatile method to further improve the accuracy of FALCON while sacrificing a bit of compression and computation.

5 RELATED WORK

Over the past several years, many studies have focused on compressing and accelerating DNNs to reduce model size, running time, and energy consumption, based on the belief that DNNs are over-parameterized. Weight-sharing (Han et al. (2016); Ullrich et al. (2017); Chen et al. (2015); Choi et al. (2017); Agustsson et al. (2017)) is a common compression method which stores only assignments and centroids of weights; at inference time, weights are reconstructed from the assignments and centroids. Pruning (Han et al. (2014); Li et al. (2016)) aims at removing unimportant weights or setting them to zero. Although weight-sharing and pruning can significantly reduce the model size, they are not efficient in reducing the amount of computation. Quantizing (Courbariaux et al. (2015; 2016); Hou et al. (2017); Zhu et al. (2017)) the model into binary or ternary weights reduces model size and computation simultaneously: replacing arithmetic operations with bit-wise operations remarkably accelerates the model.

Layer-wise approaches are also employed to efficiently compress models. A typical example is low-rank approximation (Lebedev et al. (2015); Kim et al. (2016b); Novikov et al. (2015)); it treats the weights as a tensor and uses general tensor approximation methods to compress it. To reduce computation, the approximation method should be chosen carefully, since some approximation methods may increase the computation of the model.

Compressing existing models has limitations since such models are originally designed to be deep and large to give high accuracy. A recent trend is to design brand-new architectures that are small and efficient. MobileNet (Howard et al. (2017)), MobileNetV2 (Sandler et al. (2018)), ShuffleNet (Zhang et al. (2017)), and ShuffleNetV2 (Ma et al. (2018)) are the most representative approaches, and they use depthwise convolution and pointwise convolution as building blocks for designing convolution layers. Our proposed FALCON gives a thorough interpretation of depthwise and pointwise convolutions and applies them to model compression, giving the best accuracy with regard to compression and computation.

6 CONCLUSION

We propose FALCON, an accurate and lightweight convolution method to replace standard convolution.
By interpreting existing convolution methods based on depthwise separable convolution through the EHP operation, FALCON and its general version rank-k FALCON provide accurate and efficient compression of CNNs. We also propose FALCON-branch, a variant of FALCON integrated into a branch architecture of CNN for model compression. Extensive experiments show that FALCON and its variants give the best accuracy for a given number of parameters or amount of computation, outperforming other convolution models based on depthwise separable convolution. Compared to standard convolution, FALCON and FALCON-branch give up to 8× compression and 8× computation reduction with similar accuracy. We also show that rank-k FALCON provides better accuracy than standard convolution while using fewer parameters and less computation.

A CONVOLUTIONAL NEURAL NETWORK

A Convolutional Neural Network (CNN) is a type of deep neural network used mainly for structured data. A CNN uses the convolution operation in its convolution layers. In the following, we discuss CNNs applied to typical image data with RGB channels. Each convolution layer has three components: input feature maps, a convolution kernel, and output feature maps. The input feature maps I ∈ R^{H×W×M} and the output feature maps O ∈ R^{H′×W′×N} are 3-dimensional tensors, and the convolution kernel K ∈ R^{D×D×M×N} is a 4-dimensional tensor. The convolution operation is defined as:

O_{h′,w′,n} = Σ_{i=1}^{D} Σ_{j=1}^{D} Σ_{m=1}^{M} K_{i,j,m,n} · I_{h_i,w_j,m}    (3)

where the relations between the height h_i and width w_j of the input, and the height h′ and width w′ of the output, are as follows:

h_i = (h′ − 1)s + i − p  and  w_j = (w′ − 1)s + j − p    (4)

where s is the stride size and p is the padding size. The third and fourth dimensions of the convolution kernel K must match the number M of input channels and the number N of output channels, respectively. The convolution kernel K can be seen as N 3-dimensional filters F_n ∈ R^{D×D×M}. Each filter F_n in kernel K performs the convolution operation while sliding over all spatial locations of the input feature maps, and each filter produces one output feature map.

B PROOFS OF THEOREMS

B.1 PROOF OF THEOREM 1

Proof. From the definition of EHP, K_{i,j,m,n} = D_{i,j,m} · P_{m,n}. Based on equation 3, we replace the kernel K_{i,j,m,n} with the depthwise convolution kernel D_{i,j,m} and the pointwise convolution kernel P_{m,n}:

O_{h′,w′,n} = Σ_{i=1}^{D} Σ_{j=1}^{D} Σ_{m=1}^{M} D_{i,j,m} · P_{m,n} · I_{h_i,w_j,m}

where I_{h_i,w_j,m} is the (h_i, w_j, m)-th entry of the input. We split the above equation into the following two equations:

O′_{h′,w′,m} = Σ_{i=1}^{D} Σ_{j=1}^{D} D_{i,j,m} · I_{h_i,w_j,m}    (5)

O_{h′,w′,n} = Σ_{m=1}^{M} P_{m,n} · O′_{h′,w′,m}    (6)

where O′ ∈ R^{H′×W′×M} is an intermediate tensor. Note that equation 5 and equation 6 correspond to the depthwise convolution and the pointwise convolution, respectively. Therefore, the output O_{h′,w′,n} is equal to the output of the depthwise separable convolution used in MobileNet.

B.2 PROOF OF THEOREM 2

Proof. From equation 3, we replace the kernel K_{i,j,m,n} with the pointwise convolution kernel P and the depthwise convolution kernel D:

O_{h′,w′,n} = Σ_{m=1}^{M} Σ_{i=1}^{D} Σ_{j=1}^{D} P_{n,m} · D_{i,j,n} · I_{h_i,w_j,m}

where I_{h_i,w_j,m} is the (h_i, w_j, m)-th entry of the input I. We split the above equation into the following two equations:

O′_{h_i,w_j,n} = Σ_{m=1}^{M} P_{n,m} · I_{h_i,w_j,m}    (7)

O_{h′,w′,n} = Σ_{i=1}^{D} Σ_{j=1}^{D} D_{i,j,n} · O′_{h_i,w_j,n}    (8)

where I, O′, and O are the input, intermediate, and output tensors of the convolution layer, respectively. Note that equation 7 and equation 8 correspond to pointwise convolution and depthwise convolution, respectively. Therefore, the output O_{h′,w′,n} is equal to the output of FALCON.
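The equivalence established in the two proofs can also be checked numerically; the following sketch verifies Theorem 2 for a stride-1 convolution with randomly drawn kernels (the sizes are arbitrary assumptions of ours).

```python
import torch
import torch.nn.functional as F

# Numerical sanity check of Theorem 2: pointwise (M -> N) followed by
# depthwise convolution equals standard convolution with the EHP kernel.
M, N, D, H, W = 3, 8, 3, 16, 16
x = torch.randn(1, M, H, W)
P = torch.randn(N, M, 1, 1)    # pointwise kernel
Dk = torch.randn(N, 1, D, D)   # depthwise kernel

# FALCON path: pointwise, then depthwise (groups=N).
out_falcon = F.conv2d(F.conv2d(x, P), Dk, padding=1, groups=N)

# Standard path: convolve with the EHP kernel K[n, m] = P[n, m] * Dk[n].
K = P * Dk                     # broadcasts to shape (N, M, D, D)
out_std = F.conv2d(x, K, padding=1)

# Expected: True (up to floating-point accumulation error).
print(torch.allclose(out_falcon, out_std, atol=1e-5))
```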
C QUANTITATIVE ANALYSIS

In this section, we evaluate the compression and the computation reduction of FALCON and rank-k FALCON. All the analysis is based on one convolution layer. The comparison of the numbers of parameters and FLOPs of FALCON and the other competitors is in Appendix G.

C.1 FALCON

We analyze the compression and computation reduction rates of FALCON in Theorems 3 and 4.

Theorem 3. The Compression Rate (CR) of FALCON is given by

CR = (# of parameters in standard convolution) / (# of parameters in FALCON) = D²MN / (MN + D²N)

where D² is the size of the standard kernel, M is the number of input channels, and N is the number of output channels.

Proof. The standard convolution kernel has D²MN parameters. FALCON consists of a pointwise convolution and a depthwise convolution, which require MN and D²N parameters, respectively. Thus, the compression rate of FALCON is CR = D²MN / (MN + D²N).

Theorem 4. The Computation Reduction Rate (CRR) of FALCON is given by

CRR = (# of FLOPs in standard convolution) / (# of FLOPs in FALCON) = H′W′D²MN / (HWMN + H′W′D²N)

where H′ and W′ are the height and width of the output, and H and W are the height and width of the input.

Proof. The standard convolution operation requires H′W′D²MN FLOPs (Molchanov et al. (2017)). FALCON consists of a pointwise convolution and a depthwise convolution. The pointwise convolution has kernel size D = 1 with stride s = 1 and no padding, so the intermediate tensor O′ has the same height and width as the input feature maps; thus the pointwise convolution needs HWMN FLOPs. The depthwise convolution has a single input channel per filter (M = 1), so it needs H′W′D²N FLOPs. The total number of FLOPs of FALCON is HWMN + H′W′D²N, thus the computation reduction rate of FALCON is CRR = H′W′D²MN / (HWMN + H′W′D²N).

C.2 RANK-K FALCON

We analyze the compression and computation reduction rates of rank-k FALCON in Theorem 5.

Theorem 5. The Compression Rate (CR_k) and Computation Reduction Rate (CRR_k) of rank-k FALCON are given by

CR_k = CR / k  and  CRR_k = CRR / k.

Proof. The numbers of parameters and FLOPs increase by a factor of k since rank-k FALCON duplicates FALCON k times. Thus, the compression rate and the computation reduction rate are CR_k = CR / k and CRR_k = CRR / k.
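The rates in Theorems 3-5 are straightforward to compute; a small helper (ours, for illustration) and a worked example follow.

```python
def falcon_compression(D, M, N, H, W, H_out, W_out, k=1):
    """Compression rate (CR) and computation reduction rate (CRR) of
    rank-k FALCON versus standard convolution (Theorems 3-5)."""
    std_params = D * D * M * N
    falcon_params = k * (M * N + D * D * N)
    std_flops = H_out * W_out * D * D * M * N
    falcon_flops = k * (H * W * M * N + H_out * W_out * D * D * N)
    return std_params / falcon_params, std_flops / falcon_flops

# Example: a 3x3 conv with 256 input / 256 output channels on a 32x32 map.
cr, crr = falcon_compression(D=3, M=256, N=256, H=32, W=32, H_out=32, W_out=32)
print(f"CR = {cr:.2f}x, CRR = {crr:.2f}x")  # roughly 8.7x for both here
```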
D DESCRIPTION OF RELATED CONVOLUTION UNITS

MobileNetV2. Sandler et al. (2018) proposed a new convolution architecture, which we call MobileConvV2, in their MobileNetV2 model. MobileConvV2 consists of three sub-layers as shown in Figure 5(b). The first and third sub-layers are pointwise convolutions for adjusting the number of channels. The first sub-layer expands the number of channels from M to tM, where t is an expansion ratio. The second sub-layer is a D×D depthwise convolution. Since depthwise convolution cannot change the number of channels, the third sub-layer adjusts the number of channels from tM to N. There is a shortcut connection between the input and the output of the third sub-layer to facilitate gradient flow across multiple layers. MobileConvV2 needs tM² + tD²M + tMN parameters and tHWM² + tH′W′D²M + tH′W′MN FLOPs.

ShuffleNet. Zhang et al. (2017) proposed a computation-efficient CNN architecture named ShuffleNet. As shown in Figure 5(c), each unit of ShuffleNet (which we call ShuffleUnit) consists of three sub-layers - a first grouped pointwise convolution, a depthwise convolution, and a second grouped pointwise convolution - as well as a shortcut. The number of depthwise convolution channels is 1/4 of the number of output channels N. ShuffleUnit uses group convolution in the two pointwise convolution layers to reduce the parameters and FLOPs. However, it is hard to exchange information among groups when group convolutions are stacked. To deal with this problem, ShuffleUnit adds a channel shuffle layer after the first grouped pointwise convolution; the channel shuffle layer rearranges the order of channels, making it possible to obtain information from different groups. The number of groups is denoted by g. ShuffleUnit needs (1/(4g))MN + (1/4)D²N + (1/(4g))N² parameters and (1/(4g))HWMN + (1/4)H′W′D²N + (1/(4g))H′W′N² FLOPs.

ShuffleNetV2. Ma et al. (2018) proposed a practically efficient CNN architecture, ShuffleNetV2. As shown in Figure 5(d), each unit of ShuffleNetV2 (which we call ShuffleUnitV2) consists of two branches. The left branch consists of two pointwise convolutions and one depthwise convolution like MobileConvV2, and the right branch is an identity operation. Note that the outputs of both branches keep the number of channels at M/2. The final output is produced by concatenating and shuffling the output tensors from the two branches. ShuffleUnitV2 needs (1/2)(M² + D²M) parameters and (1/2)HW(M² + D²M) FLOPs.

E GENERALITY OF EHP

We show that EHP is a key operation for understanding other convolution architectures based on depthwise separable convolution.

MobileNetV2. As shown in Figure 5(b), MobileNetV2 has an additional pointwise convolution before the depthwise convolution of MobileNet: one layer of MobileNetV2 consists of two pointwise convolutions and one depthwise convolution. From another point of view, MobileNetV2 can be understood as FALCON followed by an additional pointwise convolution; i.e., MobileNetV2 performs the EHP operation as FALCON does, and performs an additional pointwise convolution after that.

ShuffleNet. As shown in Figure 5(c), ShuffleNet consists of depthwise convolution and pointwise group convolution, a variant of pointwise convolution. We represent the convolution layer of ShuffleNet using EHP as follows. Let g be the number of groups. We divide the standard convolution kernel K ∈ R^{D×D×M×N} into g group standard convolution kernels. Then the relation of the g-th group standard convolution kernel K^g ∈ R^{D×D×(M/g)×(N/g)} to the g-th depthwise convolution kernel D^g ∈ R^{D×D×(M/g)} and the g-th pointwise group convolution kernel P^g ∈ R^{(M/g)×(N/g)} is

K^g = D^g E P^g  s.t.  K^g_{i,j,m_g,n_g} = D^g_{i,j,m_g} · P^g_{m_g,n_g}

where m_g = 1, 2, ..., M/g and n_g = 1, 2, ..., N/g. Each group standard convolution is equivalent to the combination of a depthwise convolution and a pointwise convolution, and thus is easily expressed with EHP as in MobileNet. Therefore, each layer of ShuffleNet is equivalent to a layer consisting of a group convolution followed by a standard convolution.

ShuffleNetV2. As shown in Figure 5(d), the left branch of ShuffleNetV2 has the same convolutions as in MobileNetV2: it consists of two pointwise convolutions and one depthwise convolution. Like MobileNetV2, the left branch of ShuffleNetV2 can be understood as FALCON followed by an additional pointwise convolution.

F FITTING OTHER CONVOLUTION UNITS INTO MODELS

DSConv. DSConv (shown in Figure 5(a)) has the architecture most similar to FALCON among the competitors, and thus has nearly the same number of parameters as FALCON. As in the setting of FALCON, the existence of BN and ReLU at the end of DSConv depends on that of StConv.
MobileConvV2. In the MobileConvV2 architecture (shown in Figure 5(b)), we adjust the numbers of parameters and FLOPs by changing the expansion ratio t described in Appendix D, which we denote as ‘MobileConvV2-t’. We choose t = 0.5 as the baseline MobileConvV2 to compare with FALCON, since the two pointwise convolutions add many parameters and FLOPs to MobileConvV2.

ShuffleUnit. In ShuffleUnit (shown in Figure 5(c)), we adjust the numbers of parameters and FLOPs by changing the width multiplier α (Howard et al. (2017)) and the number of groups g, which we denote as ‘ShuffleUnit α×(g=g)’. Note that the width multiplier is used to adjust the number of input channels M and the number of output channels N of a convolution layer; if the width multiplier is α, the numbers of input and output channels become αM and αN, respectively. While experimenting with ResNet, we find that ShuffleUnit does not cooperate well with ResNet: ResNet34 with ShuffleUnit does not converge. We suspect that the residual block and ShuffleUnit conflict with each other because of redundant residual connections: the gradient may not find the right path towards previous layers. For this reason, we delete the shortcut of all residual blocks in ResNet34 when using ShuffleUnit.

ShuffleUnitV2. In ShuffleUnitV2 (shown in Figure 5(d)), we also adjust the numbers of parameters and FLOPs by changing the width multiplier α, which we denote as ‘ShuffleUnitV2 α×’. All other operations of ShuffleUnitV2 stay the same as in Ma et al. (2018).

G PARAMETERS AND FLOPS

We summarize the numbers of parameters and FLOPs of FALCON and the competitors in Table 5.
1. What is the focus and contribution of the paper on model compression? 2. What are the strengths of the proposed approach, particularly in terms of memory saving and computational efficiency? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the training process and the choice of decomposition method?
Review
Review This paper proposes a model compression method: Falcon and rank-k Falcon. Both are used to compress CNN-type models by replacing each standard convolution layer with a compact Falcon or rank-k Falcon layer. Falcon's main idea is to decompose the traditional convolution kernel K into two smaller tensors, a depthwise convolution kernel D and a pointwise convolution kernel P, and D and P together reconstruct the original kernel K. Since D and P together require D*D*M + N*M parameters, which is smaller than the original size D*D*M*N, the memory saving can be large when N is large. The paper is in general well written and very easy to read. Below I have several concerns/suggestions for this paper: 1: Novelty. What is the main difference between this method and all the other tensor-decomposition-based methods for CNN compression? There are many tensor-decomposition-based methods for CNNs, and Falcon seems to be one of them. What is (perhaps) special about Falcon is that it only decomposes along one dimension. Why would this method perform better than other tensor-decomposition methods (some of which have an even smaller memory footprint, as they decompose more dimensions), or is Falcon a special case of them? 2: In Sections 3.3 and 3.4, the proposed Falcon and rank-k Falcon seem to fully recover the original K (see the equations above Theorem 2 and above Section 3.5); should they instead minimize the reconstruction error, as other tensor decomposition methods do? And how are the solutions P and D found from K? How is the model trained when using Falcon or rank-k Falcon? Is there a retraining step after the decomposition of K? 3: In the experiments, no other standard compression techniques such as quantization, low-rank approximation, weight-sharing, sparsity, etc. are compared. This makes us curious about the benefit of the proposed methods over those techniques. In summary, I am mostly worried about the novelty of the paper, and wondering how the model is trained, and about the comparison with other compression techniques.
ICLR
Title Autoregressive Latent Video Prediction with High-Fidelity Image Generator Abstract Video prediction is an important yet challenging problem, burdened with the tasks of both generating future frames and learning environment dynamics. Recently, autoregressive latent video models have proved to be a powerful video prediction tool, by separating video prediction into two sub-problems: pre-training an image generator model, followed by learning an autoregressive prediction model in the latent space of the image generator. However, successful generation of high-fidelity and high-resolution videos has yet to be seen. In this work, we investigate how to train an autoregressive latent video prediction model capable of predicting high-fidelity future frames with minimal modification to existing models, and produce high-resolution (256x256) videos. Specifically, we scale up prior models by employing a high-fidelity image generator (VQ-GAN) with a causal transformer model, and introduce the additional techniques of top-k sampling and data augmentation to further improve video prediction quality. Despite its simplicity, the proposed method achieves competitive performance to state-of-the-art approaches on standard video prediction benchmarks with fewer parameters, and enables high-resolution video prediction on complex and large-scale datasets. Videos are available at the anonymized website https://sites.google.com/view/harp-anonymous.

1 INTRODUCTION

Video prediction can enable agents to learn useful representations for predicting the future consequences of the decisions they make, which is crucial for solving tasks that require long-term planning, including robotic manipulation (Finn & Levine, 2017; Kalashnikov et al., 2018) and autonomous driving (Levinson et al., 2011; Xu et al., 2017). Despite recent advances in improving the quality of video prediction (Finn et al., 2016; Lotter et al., 2017; Liang et al., 2017; Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Byeon et al., 2018; Kumar et al., 2020; Weissenborn et al., 2020; Babaeizadeh et al., 2021), learning an accurate video prediction model remains a notoriously difficult problem and requires a lot of computing resources, especially when the inputs are high-resolution video sequences (Castrejon et al., 2019; Villegas et al., 2019; Clark et al., 2019; Luc et al., 2020; Walker et al., 2021). This is because the video prediction model should excel at both generating high-fidelity images and learning the dynamics of environments, though each task by itself is already very challenging.

Recently, autoregressive latent video prediction methods (Rakhimov et al., 2021; Yan et al., 2021) have been proposed to improve the efficiency of video prediction, by separating video prediction into two sub-problems: first pre-training an image generator (e.g., VQ-VAE; Oord et al. 2017), and then learning an autoregressive prediction model (Weissenborn et al., 2020; Chen et al., 2020) in the latent space of the pre-trained image generator. However, prior works are limited in that they only consider relatively low-resolution videos (up to 128 × 128 pixels) for demonstrating the efficiency of the approach; it is questionable whether such experiments can fully demonstrate the benefit of operating in the latent space of an image generator instead of the high-dimensional pixel-channel space.
In this paper, we present High-fidelity AutoRegressive latent video Prediction (HARP), which scales up previous autoregressive latent video prediction methods for high-fidelity video prediction. The main design principle of HARP is simplicity: we improve video prediction quality with minimal modification to existing methods. First, for image generation, we employ a high-fidelity image generator, i.e., a vector-quantized generative adversarial network (VQ-GAN; Esser et al. 2021). This improves video prediction by enabling high-fidelity image generation (up to 256 × 256 pixels) on various video datasets. Then a causal transformer model (Chen et al., 2020), which operates on top of discrete latent codes, is trained to predict the discrete codes from the VQ-GAN, and the autoregressive predictions made by the transformer model are decoded into future frames at inference time. Moreover, motivated by the sampling techniques widely used in language generation for making coherent and diverse predictions, we propose to utilize top-k sampling (Fan et al., 2018), which draws the next discrete code from the k most probable codes. Since the number of discrete codes the autoregressive model has to predict is very large, e.g., 6,400 codes on the KITTI driving dataset (Geiger et al., 2013), we find that discarding rare discrete codes helps the model predict diverse but high-quality videos, without any change to the training procedure.

We highlight the main contributions of this paper below:
• We show that our autoregressive latent video prediction model, HARP, can predict high-resolution (256×256 pixels) future frames on a simulated robotics dataset (i.e., Meta-World; Yu et al. 2020) and a large-scale real-world robotics dataset (i.e., RoboNet; Dasari et al. 2019).
• We show that HARP can leverage an image generator pre-trained on ImageNet (Deng et al., 2009) for training a high-resolution video prediction model on the complex, large-scale Kinetics-600 dataset (Carreira et al., 2018), significantly reducing the training cost.
• HARP achieves competitive or superior performance to prior state-of-the-art video prediction models with large end-to-end networks on the widely used BAIR Robot Pushing (Ebert et al., 2017) and KITTI driving (Geiger et al., 2013) video prediction benchmarks.
• We also show that the pre-trained representations of HARP can be useful for learning a multi-task imitation learning agent on the Meta-World MT50 benchmark (Yu et al., 2020).
2 RELATED WORK

Video prediction. Video prediction aims to predict future frames conditioned on images (Michalski et al., 2014; Ranzato et al., 2014; Srivastava et al., 2015; Vondrick et al., 2016; Lotter et al., 2017), texts (Wu et al., 2021b), and actions (Oh et al., 2015; Finn et al., 2016), which would be useful for several applications, e.g., model-based RL (Hafner et al., 2019; Kaiser et al., 2020; Rybkin et al., 2021) and simulator development (Kim et al., 2020; 2021). Various video prediction models have been proposed with different approaches, including generative adversarial networks (GANs; Goodfellow et al. 2014), known to generate high-fidelity images, with adversarial discriminators that also consider temporal or motion information (Aigner & Körner, 2018; Jang et al., 2018; Kwon & Park, 2019; Clark et al., 2019; Luc et al., 2020); latent video prediction models that operate on a latent space (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Villegas et al., 2019; Wu et al., 2021a; Babaeizadeh et al., 2021); and autoregressive video prediction models that operate on the pixel space by predicting the next pixels in an autoregressive way (Kalchbrenner et al., 2017; Reed et al., 2017; Weissenborn et al., 2020).

Autoregressive latent video prediction. Most closely related to our work are autoregressive latent video prediction models that separate the video prediction problem into image generation and dynamics learning. Walker et al. (2021) proposed to learn a hierarchical VQ-VAE (Razavi et al., 2019) that extracts multi-scale hierarchical latents, then train SNAIL blocks (Chen et al., 2018) that predict the hierarchical latent codes, enabling high-fidelity video prediction. However, this involves a complicated training pipeline and a video-specific architecture, which limits its applicability. As simple alternatives, Rakhimov et al. (2021) and Yan et al. (2021) proposed to first learn a VQ-VAE (Oord et al., 2017) and then train a causal transformer with 3D self-attention (Weissenborn et al., 2020) and factorized 2D self-attention (Child et al., 2019), respectively. These approaches, however, are limited in that they only consider low-resolution videos. We instead present a simple high-resolution video prediction method that incorporates the strengths of both prior approaches.

3 PRELIMINARIES

We consider the standard video prediction framework where the goal is to predict the future frames conditioned on the initial frames of a video. Specifically, conditioned on the first c frames of a video x_{<c} = (x_0, x_1, ..., x_{c−1}), we aim to learn a video prediction model that predicts the future frames x_{c:T} = (x_c, ..., x_{T−1}), where x_t ∈ R^{H×W×N_ch} is the frame at timestep t. Optionally, one can also condition the prediction model on actions a = (a_0, ..., a_{T−1}) that the agents in the video would take, i.e., action-conditioned video prediction.

Autoregressive video prediction model. Motivated by the recent success of pixel-level autoregressive models on image generation (Menick & Kalchbrenner, 2018), Weissenborn et al. (2020) introduced an autoregressive video prediction model that approximates the distribution of a video in a pixel-channel space. Specifically, given a video x ∈ R^{T×H×W×N_ch}, the joint distribution over pixels conditioned on the first c frames is modeled as the product over the N_ch channel intensities and all N_p = T·H·W pixels except the N_c = c·H·W pixels of the conditioning frames:

p(x_{c:T} | x_{<c}) = Π_{i=N_c}^{N_p−1} Π_{k=0}^{N_ch−1} p(x^k_{π(i)} | x_{π(<i)}, x^{<k}_{π(i)}),    (1)

where π is a raster-scan ordering over all pixels of the video (we refer to Weissenborn et al. (2020) for more details on the case where π is the combination of a subscale and raster-scan ordering, since we only utilize raster-scan ordering for our approach), x_{π(<i)} denotes all pixels before x_{π(i)}, x^k_{π(i)} is the k-th channel intensity of the pixel x_{π(i)}, and x^{<k}_{π(i)} denotes all channel intensities before x^k_{π(i)}.

Vector quantized variational autoencoder. VQ-VAE (Oord et al., 2017) consists of an encoder that compresses images into discrete representations, and a decoder that reconstructs images from these discrete representations.
Both the encoder and the decoder share a codebook of prototype vectors which are also learned throughout training. Formally, given an image x ∈ R^{H×W×N_ch}, the encoder E encodes x into a feature map z_e(x) ∈ R^{H′×W′×N_z} that consists of a series of latent vectors z_{π′(i)}(x) ∈ R^{N_z}, where π′ is a raster-scan ordering of the feature map z_e(x) of size |π′| = H′·W′. Then z_e(x) is quantized to discrete representations z_q(x) ∈ R^{|π′|×N_z} based on the distance of the latent vectors z_{π′(i)}(x) to the prototype vectors in the codebook C = {e_k}_{k=1}^{K} as follows:

z_q(x) = (e_{q(x,1)}, e_{q(x,2)}, ..., e_{q(x,|π′|)}),  where  q(x, i) = argmin_{k∈[K]} ‖z_{π′(i)}(x) − e_k‖_2,    (2)

where [K] is the set {1, ..., K}. Then the decoder G learns to reconstruct x from the discrete representations z_q(x). The VQ-VAE is trained by minimizing the following objective:

L_VQVAE(x) = ‖x − G(z_q(x))‖²_2 [L_recon] + ‖sg[z_e(x)] − z_q(x)‖²_2 [L_codebook] + β·‖sg[z_q(x)] − z_e(x)‖²_2 [L_commit],    (3)

where sg refers to the stop-gradient operator, L_recon is a reconstruction loss for learning representations useful for reconstructing images, L_codebook is a codebook loss that brings codebook representations closer to the corresponding encoder outputs z_e(x), and L_commit is a commitment loss weighted by β that prevents the encoder outputs from fluctuating frequently between different representations.

Vector quantized generative adversarial network. VQ-GAN (Esser et al., 2021) is a variant of VQ-VAE that (a) replaces L_recon in (3) with a perceptual loss L_LPIPS (Zhang et al., 2018), and (b) introduces an adversarial training scheme where a patch-level discriminator D (Isola et al., 2017) is trained to discriminate real and generated images by maximizing the following loss:

L_GAN(x) = log D(x) + log(1 − D(G(z_q(x)))).    (4)

Then, the objective for training the VQ-GAN model is defined as:

min_{E,G,C} max_D  E_{x∼p(x)} [(L_LPIPS + L_codebook + L_commit) + λ·L_GAN],    (5)

where λ = ∇_{G_L}[L_LPIPS] / (∇_{G_L}[L_GAN] + δ) is an adaptive weight, ∇_{G_L} is the gradient with respect to the inputs of the last layer G_L of the decoder, and δ = 10⁻⁶ is a scalar introduced for numerical stability.

4 METHOD

We present HARP, a video prediction model capable of predicting high-fidelity future frames. Our method is designed to fully exploit the benefit of autoregressive latent video prediction models that separate video prediction into image generation and dynamics learning. Specifically, we consider the combination of (a) a recently introduced high-fidelity image generator (Esser et al., 2021) and (b) an autoregressive latent video prediction model (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021) that operates on top of the pre-trained image generator. The full architecture of HARP is illustrated in Figure 2.

4.1 HIGH-FIDELITY IMAGE GENERATOR

We consider the VQ-GAN model (Esser et al., 2021), which has proven to be effective for high-resolution image generation, as our image generator (see Section 3 for the formulation of VQ-GAN). Similar to the motivation of Tian et al. (2021), who utilize a pre-trained image generator in the context of video synthesis, we first pre-train the image generator and then freeze the model throughout training to improve the efficiency of learning video prediction models. The notable difference to a prior work that utilizes 3D convolutions to temporally downsample the video for efficiency (Yan et al., 2021) is that our image generator operates on single images; hence it focuses solely on improving the quality of generated images. Importantly, this enables us to utilize a VQ-GAN model pre-trained on a wide range of natural images, e.g., ImageNet, without training the image generator on the target datasets, which can significantly reduce the training cost of a high-resolution video prediction model. Also, the representations from our video prediction model can be easily transferred to downstream tasks that require fine-grained control at each timestep, e.g., imitation learning (see Section 5.3 for supporting experimental results on multi-task imitation learning).
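For concreteness, the nearest-codebook quantization of Eq. (2) can be sketched in PyTorch as follows; the function name and tensor layouts are our own illustration rather than the VQ-GAN reference implementation.

```python
import torch

def quantize(z_e, codebook):
    """Nearest-codebook quantization as in Eq. (2): z_e is the encoder
    feature map of shape (B, H', W', Nz) and codebook has shape (K, Nz).
    Returns the discrete code indices q(x, i) and the quantized vectors."""
    B, H, W, Nz = z_e.shape
    flat = z_e.reshape(-1, Nz)              # (B*H'*W', Nz)
    dist = torch.cdist(flat, codebook)      # pairwise L2 distances, (B*H'*W', K)
    idx = dist.argmin(dim=1)                # nearest prototype per latent vector
    z_q = codebook[idx].reshape(B, H, W, Nz)
    return idx.reshape(B, H, W), z_q
```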
4.2 AUTOREGRESSIVE LATENT VIDEO PREDICTION MODEL

To leverage the VQ-GAN model for video prediction, we utilize an autoregressive latent video prediction architecture that operates on top of the discrete codes extracted from a video x. Specifically, we extract the discrete codes z(x) = (z(x_1), ..., z(x_T)) using the pre-trained VQ-GAN, where z(x_t) = (q(x_t,1), q(x_t,2), ..., q(x_t,|π′|)) is the discrete code extracted from the frame x_t as in (2). Then, instead of modeling the distribution of the video p(x) in the pixel-channel space as in (1), we learn the distribution of the video in the discrete latent representation space:

p(z(x_{c:T}) | x_{<c}) = Π_{i=0}^{N_d−1} p(z_{π′(i)}(x) | z_{π′(<i)}(x)),    (6)

where N_d = (T − c)·H′·W′ is the total number of codes from x_{c:T}. While the specific implementation for modeling p(z(x)) differs across prior works (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021), due to its simplicity, we utilize the causal transformer architecture (Yan et al., 2021) where the output logits from input codes are trained to predict the next discrete codes. We remark that our approach is also compatible with other architectures.

4.3 ADDITIONAL TECHNIQUES

Top-k sampling. To improve the video prediction quality of latent autoregressive models whose outputs are sampled from a probability distribution over a large number of discrete codes, we utilize top-k sampling (Fan et al., 2018), which randomly samples the output from the k most probable discrete codes. By preventing the model from sampling rare discrete codes from the long tail of the probability distribution and predicting future frames conditioned on such codes, we find that top-k sampling improves video prediction quality, especially given that the number of discrete codes required for future prediction is very large, e.g., from 2,560 on RoboNet (Dasari et al., 2019) up to 6,400 on KITTI (Geiger et al., 2013) in our experimental setup.

Data augmentation. We also investigate how data augmentation can be useful for improving the performance of autoregressive latent video prediction models. Since the image generator model is not trained with augmentation, we utilize a weak augmentation to avoid the instability coming from aggressive transformation of input frames, i.e., a translation augmentation that moves the input images by m pixels along the X or Y direction.
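A minimal sketch of the top-k sampling step used at inference time is given below; the function signature and the optional temperature are our own illustrative choices.

```python
import torch
import torch.nn.functional as F

def sample_top_k(logits, k=10, temperature=1.0):
    """Draw the next discrete code from the k most probable codes only
    (Fan et al., 2018); logits has shape (..., vocab_size)."""
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    # Renormalize over the k retained codes, discarding the long tail.
    probs = F.softmax(topk_vals / temperature, dim=-1)
    choice = torch.multinomial(probs.reshape(-1, k), num_samples=1)
    return topk_idx.reshape(-1, k).gather(1, choice).reshape(logits.shape[:-1])
```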
5 EXPERIMENTS

We design our experiments to investigate the following:
• Can HARP predict high-resolution future frames (up to 256 × 256 pixels) on various video datasets with different characteristics?
• How does HARP compare to state-of-the-art methods with large end-to-end networks on standard video prediction benchmarks in terms of quantitative evaluation?
• How do the proposed techniques affect the performance of HARP?
• Can HARP be transferred to solve multi-task imitation learning tasks?

5.1 HIGH-RESOLUTION VIDEO PREDICTION

Implementation. We utilize up to 8 Nvidia 2080Ti GPUs and 20 CPU cores for training each model. For training the VQ-GAN (Esser et al., 2021), we first train the model without the discriminator loss L_GAN, and then continue the training with the loss, following the suggestion of the authors. For all experiments, VQ-GAN downsamples each frame into 16 × 16 latent codes, i.e., by a factor of 4 for frames of size 64×64 and a factor of 16 for frames of size 256×256. While training the transformer model, the VQ-GAN model is frozen so that its parameters are not updated. We use Sparse Transformers (Child et al., 2019) as our transformer architecture to accelerate the training. As for hyperparameters, we use k = 10 for sampling at inference time, but no data augmentation for the high-resolution video prediction experiments. We report more detailed implementation details in Appendix A.

Meta-World experiments. To demonstrate that our method can predict high-resolution videos (256 × 256 pixels), we first use an action-free Meta-World dataset (Yu et al., 2020) consisting of 2,500 demonstrations from 50 different robotic manipulation tasks, which we collected using the deterministic scripted policies (the dataset is available at https://shorturl.at/acnxM). Specifically, we train a single model to predict future videos of all 50 tasks, without leveraging task-specific information such as the task index. For evaluation, we use 10% of the demonstrations as a held-out test dataset. As shown in Figure 3, our model can accurately predict the high-resolution future frames of diverse tasks, capturing all the small details. This shows that our model can effectively learn all the information of multiple tasks required for predicting future frames. We also remark that such representations can be useful to improve the performance of an imitation learner (see Section 5.3 for supporting experimental results).

RoboNet experiments. Now we investigate how our model works on the large-scale, real-world RoboNet dataset (Dasari et al., 2019) consisting of more than 15 million frames. While prior works successfully trained video prediction models on 64 × 64 videos (Wu et al., 2021a; Babaeizadeh et al., 2021), we show that our model can predict high-resolution 256 × 256 videos even with fewer parameters than the ones used in prior works for predicting 64×64 videos. Specifically, we first train a VQ-GAN model with 91.5M parameters, and then train a 12-layer causal transformer model with 74.2M parameters that predicts the future 10 frames conditioned on the first two frames and the future ten actions. The total number of parameters is 165.7M, which is smaller than the 303.3M of FitVid (Babaeizadeh et al., 2021) that predicts 64 × 64 videos. Figure 1 and Figure 4 show the predicted frames on held-out test videos, where the model predicts high-resolution future frames in which a robot arm moves around various objects of different colors and shapes.

Kinetics-600 experiments using ImageNet pre-trained VQ-GAN. Finally, we consider the very complex, large-scale Kinetics-600 dataset (Carreira et al., 2018) consisting of more than 400,000 videos, which requires a large amount of computing resources for training even at 64 × 64 resolution (Clark et al., 2019; Luc et al., 2020). To avoid the prohibitively expensive training cost of high-resolution video prediction models on this dataset and to fully exploit the benefit of employing a high-fidelity image generator, we utilize the VQ-GAN model pre-trained on the ImageNet dataset (Deng et al., 2009), available at https://github.com/CompVis/taming-transformers.
As we only train the transformer model for video prediction, this enables us to train a high-resolution video prediction model in a very efficient manner. Specifically, we train the transformer model for 60,000 steps on the training dataset, which takes less than a day on our machine. As shown in Figure 5, our model can predict future frames on the test videos (we collected videos covered by the CC-BY license, which permits putting the frames in a paper), which demonstrates that leveraging a large image generator pre-trained on a wide range of natural images can be a promising recipe for efficient video prediction on high-resolution, large-scale video datasets.

5.2 COMPARATIVE EVALUATION ON STANDARD BENCHMARKS

Datasets. For quantitative evaluation, we first consider the BAIR Robot Pushing dataset (Ebert et al., 2017) consisting of roughly 40k training and 256 test videos. We consider the action-free setup, hence video prediction models should be stochastic in order to predict the diverse possible movements of a robot arm and objects. Following the setup in prior works (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), we predict 15 future frames conditioned on one frame. We also evaluate our method on the KITTI driving dataset (Geiger et al., 2013), where the training and test datasets are split following the setup in Lotter et al. (2017). As the KITTI dataset is relatively small-scale compared to the other datasets, i.e., it has only 57 training videos, it provides a good testbed for investigating the effect of data augmentation. For hyperparameters, we use k = 10 for both datasets; data augmentation with m = 4 is applied only to KITTI, as there was no sign of overfitting on the BAIR dataset. For a fair comparison, we follow the setup of Villegas et al. (2019), where (i) a model is trained to predict the future ten frames conditioned on five frames and evaluated to predict the future 25 frames conditioned on five frames, and (ii) the test dataset consists of 148 video clips constructed by extracting 30-frame clips while skipping every 5 frames.

Metrics. We use two evaluation metrics: Learned Perceptual Image Patch Similarity (LPIPS; Zhang et al. 2018), a frame-wise metric designed to better represent the human perceptual similarity of two frames compared to traditional metrics (Wang et al., 2004; Huynh-Thu & Ghanbari, 2008), and Fréchet Video Distance (FVD; Unterthiner et al. 2018), a dynamics-based evaluation metric known to be better correlated with human evaluation than frame-wise metrics. FVD is computed by comparing the summary statistics of an I3D network trained on the Kinetics-400 dataset (Carreira & Zisserman, 2017), and LPIPS is computed using the features from AlexNet (Krizhevsky et al., 2012). For comparison with the scores reported in prior works, we exactly follow the evaluation setup of Villegas et al. (2019) and Babaeizadeh et al. (2021), which samples 100 future videos for each ground-truth test video, then reports the best score over the 100 videos for LPIPS, and the score using all videos for FVD, with a batch size of 256 for BAIR and 148 for KITTI.
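As a sketch of the best-of-100 LPIPS protocol, assuming the lpips PyTorch package with AlexNet features and frames scaled to [-1, 1]; the helper name and tensor layouts are our own.

```python
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')  # AlexNet-based perceptual distance

def best_lpips(gt_video, sampled_videos):
    """sampled_videos: (S, T, 3, H, W) with S=100 samples per test video;
    gt_video: (T, 3, H, W). Returns the best (lowest) average per-frame
    LPIPS over all samples, as in the best-of-100 protocol."""
    scores = []
    with torch.no_grad():
        for vid in sampled_videos:
            d = loss_fn(vid, gt_video)   # per-frame distances, (T, 1, 1, 1)
            scores.append(d.mean().item())
    return min(scores)
```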
Results. Table 1 shows the performance of our method and the baselines on the test sets of BAIR Robot Pushing and the KITTI driving dataset. We observe that our model achieves competitive or superior performance to state-of-the-art methods with large end-to-end networks; e.g., HARP outperforms FitVid, which has 302M parameters, on the KITTI driving dataset. Our model successfully extrapolates to an unseen number of future frames (i.e., 25 instead of the 10 used during training) on the KITTI dataset. This implies that transformer-based video prediction models can predict an arbitrary number of frames at inference time. In the case of the BAIR dataset, HARP achieves performance similar to FitVid with 302M parameters, even though our method only requires 89M parameters. We provide videos predicted by HARP on the BAIR and KITTI datasets in Appendix C.

Analysis. We investigate how top-k sampling, the number of layers, and the magnitude m of data augmentation affect the performance. Table 2a shows that smaller k leads to better performance, implying that the proposed top-k sampling is effective for improving the performance by discarding rare discrete codes that might degrade the prediction quality at inference time. As shown in Table 2b, we observe that more layers lead to better performance on the BAIR dataset, which implies that our model can be further improved by scaling up the networks. Finally, we find that (i) data augmentation on the KITTI dataset is important for achieving strong performance, similar to the observation of Babaeizadeh et al. (2021), and (ii) too aggressive augmentation leads to worse performance. We provide the learning curves with and without augmentation in Appendix B.

(Baselines in Table 1 are SVG (Villegas et al., 2019), GHVAE (Wu et al., 2021a), FitVid (Babaeizadeh et al., 2021), LVT (Rakhimov et al., 2021), SAVP (Lee et al., 2018), DVD-GAN-FP (Clark et al., 2019), VideoGPT (Yan et al., 2021), TrIVD-GAN-FP (Luc et al., 2020), and Video Transformer (Weissenborn et al., 2020).)

5.3 FINE-TUNING HARP FOR MULTI-TASK IMITATION LEARNING

Setup. In order to demonstrate that the pre-trained representations from HARP can be useful for solving downstream tasks, we evaluate the imitation learning performance of fine-tuned HARP on the MT50 benchmark from Meta-World. Specifically, we take the pre-trained HARP model (see Figure 3 for the video predictions from this model), and fine-tune it to predict expert actions by introducing a policy network on top of the transformer model. For comparative evaluation, we consider three baselines: (a) VQ-Transformer, which shares the same architecture as HARP but is trained from scratch, (b) CNN-LSTM, which extracts features using convolutional neural networks (CNNs) and LSTM networks, and (c) CNN-Transformer, which utilizes transformer networks instead of LSTM networks. For training and evaluation, we use the same training and test datasets used for the video prediction experiments. We report the average success rate over 10 trials for each task. More details are available in Appendix A.

Results. Table 3 shows the performance of the imitation learning policies on the MT50 test environments. We first observe that VQ-Transformer, which has the same architecture as HARP but is trained from scratch, completely fails to solve the tasks. This shows the difficulty of training useful representations with fixed discrete codes as inputs. In contrast, the fine-tuned HARP model outperforms the other baselines because its pre-trained representations contain information useful for long-term reasoning. This demonstrates that video prediction with HARP can be an effective self-supervised learning scheme for solving various control tasks.

6 DISCUSSION

In this work, we present HARP, a high-fidelity autoregressive latent video prediction model.
By employing a high-fidelity image generator and utilizing top-k sampling at inference time, HARP can predict high-resolution future frames and achieves competitive performance to state-of-the-art video prediction methods with large end-to-end networks. We also show that HARP can leverage an image generator pre-trained on a wide range of natural images for video prediction, similar to the approach in the context of video synthesis (Tian et al., 2021). We hope this work inspires more investigation into leveraging pre-trained image generators for video prediction, which can significantly reduce the cost of training high-resolution video prediction models by building on the recent success of high-fidelity image generation (Oord et al., 2017; Razavi et al., 2019; Esser et al., 2021; Child, 2020; Ho et al., 2020; Karras et al., 2020; Dhariwal & Nichol, 2021).

Finally, we report the failure cases of video prediction with HARP and discuss possible extensions to resolve them. A common failure case for video prediction on the RoboNet dataset is ignoring the interaction between a robot arm and objects. For example, in Figure 6a, our model ignores the objects and only predicts the movement of the robot arm. On the other hand, a common failure case on Kinetics-600 is degenerate video prediction, where the model simply repeats the conditioning frame without predicting the future, as shown in Figure 6b. These failure cases might be resolved by training larger networks, similar to observations in the field of natural language processing, e.g., GPT-3 (Brown et al., 2020), or might necessitate a new architecture that addresses the complexity of training autoregressive latent prediction models on video datasets.

ETHICS STATEMENT

While video prediction can be useful for various applications including robotic manipulation and autonomous driving, it might be misused by malicious users for unethical purposes, e.g., fake videos accusing politicians or sexual videos of individuals. As our work introduces a method for generating higher-resolution future frames, it may increase the chance of such videos being recognized as real. For this reason, in addition to developing video prediction methods that generate more realistic frames, it is important to be aware of the potential problems and to develop methods to detect generated videos (Gentine et al., 2018).

REPRODUCIBILITY STATEMENT

We describe the implementation and evaluation details in Section 5 and Appendix A. We also provide our code in the supplementary material.

A EXPERIMENTAL SETUP

A.1 DATASETS

Meta-World. Meta-World (Yu et al., 2020) is a robotic manipulation simulator that supports 50 different tasks. For our experiments, we collect 50 demonstrations for each task using the deterministic scripted policies,5 hence we use 2,500 demonstrations in total. For evaluation, we construct the held-out test dataset with 10% of the demonstrations. To improve the visibility of the rendered frames, we adjust the camera view using the publicly available source code.6 We provide the dataset we used in our experiments.7

RoboNet. RoboNet (Dasari et al., 2019) is a large-scale real-world robotics dataset consisting of more than 160,000 high-resolution videos. Since there is no test set for the RoboNet dataset, we follow the setup in Wu et al. (2021a) for constructing a held-out test dataset8 of size 256.
Following the setup in Wu et al. (2021a) and Babaeizadeh et al. (2021), we train a video prediction model to predict ten future frames conditioned on two initial frames and ten future actions. For preprocessing, we resize the original frames to 256×256 resolution without cropping. For downloading the dataset, we utilize the publicly available script.9

Kinetics-600. The Kinetics-600 dataset is a large-scale video dataset consisting of more than 400,000 videos across 600 action classes. Following the setup in Clark et al. (2019) and Luc et al. (2020), we train a video prediction model to predict the future 11 frames conditioned on the first five frames. For downloading the dataset, we use the publicly available repository.10

BAIR Robot Pushing. The BAIR Robot Pushing dataset (Ebert et al., 2017) consists of 43,264 training and 256 test videos. While the BAIR dataset contains the information of actions the robots take, the common setup for evaluation on the BAIR dataset is action-free (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), where a video prediction model is trained to predict the future 15 frames conditioned on the initial frame. For downloading and preprocessing the dataset, we utilize the publicly available script.11

KITTI driving dataset. The KITTI driving dataset (Geiger et al., 2013) contains a large number of high-resolution driving videos. For video prediction, however, we follow the setup in Lotter et al. (2017) and utilize 57 training videos and 3 test videos for evaluation. To avoid using overly similar video clips from the test dataset for evaluation, we follow the setup in Villegas et al. (2019) where 30-frame clips are extracted with an interval of 5 frames, which constructs a test set of size 148. For comparison with the baselines, following the setup in Villegas et al. (2019), we train a model to predict the future ten frames conditioned on five frames and evaluate the model to predict the future 25 frames conditioned on five frames. For downloading and preprocessing the dataset, we utilize the publicly available script.12

5 https://github.com/rlworkgroup/metaworld/tree/master/metaworld/policies
6 We use commit 9e3863d of https://github.com/rlworkgroup/metaworld.
7 https://shorturl.at/acnxM
8 We use the list of videos available at https://github.com/google-research/fitvid/blob/master/robonet_testset_filenames.txt
9 https://gist.github.com/soskek/d762751ce0aef4b2c7cf0a1537917016
10 https://github.com/cvdfoundation/kinetics-dataset
11 https://github.com/wilson1yan/VideoGPT
12 https://github.com/coxlab/prednet

A.2 IMPLEMENTATION DETAILS OF HARP

VQ-GAN. The training of HARP consists of two stages. First, for training a VQ-GAN model (Esser et al., 2021), we use the publicly available source code from the authors.13 Following the suggestion of the authors, we train the VQ-GAN model without a discriminator loss until it converges, then resume the training with the discriminator loss, for a total of {300000, 500000, 150000, 30000} training steps with batch sizes of {24, 24, 320, 96} on Meta-World, RoboNet, BAIR Robot Pushing, and the KITTI driving dataset, respectively. For the Kinetics-600 dataset, we leverage the publicly available VQ-GAN model14 pre-trained on ImageNet without training a VQ-GAN model from scratch. The size K of the codebook C in (2) is {8192, 8192, 1024, 1024, 1024} for the Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and KITTI datasets, respectively.
Causal transformer. We then train a 12-layer Sparse Transformer (Child et al., 2019) to predict the discrete codes from the VQ-GAN models in an autoregressive manner, building on the publicly available source code15 of VideoGPT (Yan et al., 2021). For conditioning on initial frames, we utilize the same architecture as in Yan et al. (2021), which uses a ResNet to extract a downsampled feature map; we use the ResNet-18 architecture for all experiments. We train the model until it converges on the Meta-World, BAIR, and KITTI datasets, but we do not find signs of overfitting on the large-scale RoboNet and Kinetics-600 datasets. Specifically, we train the model for {50000, 80000, 100000, 85000, 30000} training steps on Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and the KITTI dataset, respectively. We use data augmentation with magnitude m = 4 on the KITTI driving dataset, which has proven to be very effective for improving the performance (see Appendix B for learning curves with and without augmentation).

Inference with top-k sampling. We utilize top-k sampling (Fan et al., 2018) to improve the video prediction quality; for all datasets, we use k = 10. One major limitation of autoregressive prediction models is their slow inference. We utilize the implementation of Yan et al. (2021) that caches the previous keys and values and reuses them for fast inference. In order to enable our model to extrapolate to an unseen number of frames at evaluation time on the KITTI dataset (e.g., the model has to predict 25 frames instead of the 10 frames it is trained to predict), we first predict the frames x_{c:T−1} conditioned on the c initial frames x_{1:c}, then predict the next frames by (a) keeping x_{1:c} as the conditioning frames and (b) giving the last T−1 predicted frames as inputs to the causal transformer model. We repeat this process until all 25 future frames are predicted.

A.3 IMPLEMENTATION DETAILS OF ACTION-CONDITIONED HARP

In order to predict future frames conditioned on future actions on the RoboNet dataset (Dasari et al., 2019), we condition the prediction on actions by adding action embeddings to the embeddings of the discrete codes. Specifically, we introduce a linear layer that maps raw actions to action embeddings with the same dimension as the token embeddings, then add the action embedding of time step t+1 to the token embeddings used for predicting the tokens of time step t+1. At inference time, the procedure is exactly the same as for HARP, except that the future action embedding is added to the token embedding. We find that this simple modification to the original architecture enables HARP to predict future frames conditioned on actions. We provide an illustration of action-conditioned HARP in Figure 7.
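A minimal sketch of this action conditioning, with our own module name and tensor layouts assumed (E is the token-embedding dimension):

```python
import torch
import torch.nn as nn

class ActionConditioner(nn.Module):
    """Maps raw actions to embeddings of the token-embedding dimension,
    and adds the embedding of the action at step t+1 to every token
    embedding used to predict the codes of frame t+1, as described above."""
    def __init__(self, action_dim, embed_dim):
        super().__init__()
        self.proj = nn.Linear(action_dim, embed_dim)

    def forward(self, token_emb, actions, codes_per_frame):
        # token_emb: (B, T*codes_per_frame, E); actions: (B, T, action_dim)
        a = self.proj(actions)                           # (B, T, E)
        a = a.repeat_interleave(codes_per_frame, dim=1)  # (B, T*cpf, E)
        return token_emb + a
```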
In order to further improve the performance of all methods, we follow the idea of Dasari & Gupta (2020) and learn an inverse dynamics predictor that predicts the action given two consecutive frames. We train all methods for 20,000 steps with a batch size of 20 and data augmentation of magnitude m = 4. For the CNN-based methods, we use ResNet-50 as the feature extractor, a 4-layer LSTM16 for CNN-LSTM, and a 12-layer causal transformer for CNN-Transformer.

B EFFECTS OF AUGMENTATION

Figure 8 shows the test error during the training of HARP on the KITTI driving dataset (Geiger et al., 2013). One can see that the test error of a model trained without augmentation increases after the initial ∼2,500 training steps, which is a sign of overfitting. In contrast, the test error of a model trained with data augmentation keeps decreasing throughout training until it converges, which shows the effectiveness of data augmentation for learning video prediction models, similar to the observation in Babaeizadeh et al. (2021). One notable detail is that HARP overfits to the KITTI training dataset with a much smaller number of parameters (i.e., 89M) than FitVid, which utilizes 303M parameters.

16We tried deeper networks to match the number of trainable parameters, but we found deeper LSTM networks very unstable to train. Hence, we searched over {2, 4, 8, 16} layers and report the best results, achieved with the 4-layer LSTM.

C VIDEO PREDICTIONS ON BAIR AND KITTI

We provide the future frames predicted by HARP on BAIR Robot Pushing (Ebert et al., 2017) and the KITTI driving dataset (Geiger et al., 2013). The model is trained to predict the future 15 frames conditioned on the first frame on the BAIR dataset. In the case of the KITTI dataset, the model is trained to predict the future 10 frames conditioned on five frames and evaluated to predict the future 25 frames conditioned on five frames.

D MORE VIDEO PREDICTIONS ON META-WORLD

We provide more videos predicted by HARP on the Meta-World dataset (Yu et al., 2020). The model is trained to predict the future 11 frames conditioned on the first frame.

E MORE VIDEO PREDICTIONS ON ROBONET

We provide more videos predicted by HARP on the RoboNet dataset (Dasari et al., 2019). The model is trained to predict the future ten frames conditioned on the initial two frames and the future ten actions.

F MORE VIDEO PREDICTIONS ON KINETICS-600

G ATTRIBUTION

Figure 5 (top): "Windsurfing Session 2013 Iballa Moreno" by morenotwins. Accessible here. Figure 5 (bottom): "How to fry an egg in 5 simple steps" by What's Gaby Cooking. Accessible here. Figure 6 (right): "Prepare fruit cutting to pets #Shorts 4" by AP STUDIO. Accessible here. Figure 12 (first): "Windsurfing" by Dmitry Rudnev. Accessible here. Figure 12 (second): "Eric Cornelissen Windsurfing September 11, 2017" by Ron Van Dijk. Accessible here. Figure 12 (third): "Smart way to cut Fruits #6#" by MR. BEING SMART. Accessible here. Figure 12 (fourth): "How to make the Perfect Crunchy Half Cakes ( Kangumu ) ‖‖ Jinsi ya kupika Half Cakes za kupasuka." by Ayleen's Food & Vlogs. Accessible here.
1. What is the focus and contribution of the paper on high-resolution video prediction? 2. What are the strengths of the proposed approach, particularly in leveraging pre-trained models? 3. What are the weaknesses of the paper, especially regarding the experimental results and parameter usage? 4. Do you have any concerns about the domain gap between image and video datasets? 5. What are the limitations of the paper's evaluation and presentation of results?
Summary Of The Paper Review
Summary Of The Paper This paper introduces HARP for high-resolution video prediction. Specifically, the authors leverage a pre-trained image generator (VQ-GAN) and keep it fixed during the video prediction stage. The predicted codes are generated by an autoregressive causal transformer. Other techniques, such as top-k sampling and data augmentation, are further employed to improve diversity and quality. Experiments show it outperforms baselines on several datasets. Review Strengths: The paper is well-written and easy to follow. The idea is simple and clear. It is interesting to see that an ImageNet pre-trained VQ-GAN can be used for predicting Kinetics-600 videos. Weaknesses: Loss functions are missing. Though using fewer parameters, the proposed method, HARP, does not outperform the baselines on the BAIR dataset. As Table 2(b) shows that simply increasing the number of transformer layers can improve performance, it would be interesting to see whether HARP can outperform the baselines by increasing its parameters to the same level (~300M). The quantitative evaluation of video prediction is only conducted on the BAIR and KITTI datasets; as KITTI is a small-scale dataset, it would be better to evaluate HARP on more datasets. There are domain gaps between ImageNet images and Kinetics-600 videos; how does HARP deal with such domain gaps? Is fine-tuning needed? Also, the predicted videos in the paper are only shown for some selected categories. How about the performance on the whole Kinetics-600 dataset (e.g., examples from more categories and/or FVD scores on the whole dataset)?
ICLR
Title Autoregressive Latent Video Prediction with High-Fidelity Image Generator

Abstract Video prediction is an important yet challenging problem, burdened with the tasks of generating future frames and learning environment dynamics. Recently, autoregressive latent video models have proved to be a powerful video prediction tool, by separating video prediction into two sub-problems: pre-training an image generator model, followed by learning an autoregressive prediction model in the latent space of the image generator. However, successfully generating high-fidelity and high-resolution videos has yet to be seen. In this work, we investigate how to train an autoregressive latent video prediction model capable of predicting high-fidelity future frames with minimal modification to existing models, and produce high-resolution (256×256) videos. Specifically, we scale up prior models by employing a high-fidelity image generator (VQ-GAN) with a causal transformer model, and introduce additional techniques of top-k sampling and data augmentation to further improve video prediction quality. Despite its simplicity, the proposed method achieves competitive performance to state-of-the-art approaches on standard video prediction benchmarks with fewer parameters, and enables high-resolution video prediction on complex and large-scale datasets. Videos are available at the anonymized website https://sites.google.com/view/harp-anonymous.

1 INTRODUCTION

Video prediction can enable agents to learn useful representations for predicting the future consequences of the decisions they make, which is crucial for solving tasks that require long-term planning, including robotic manipulation (Finn & Levine, 2017; Kalashnikov et al., 2018) and autonomous driving (Levinson et al., 2011; Xu et al., 2017). Despite recent advances in improving the quality of video prediction (Finn et al., 2016; Lotter et al., 2017; Liang et al., 2017; Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Byeon et al., 2018; Kumar et al., 2020; Weissenborn et al., 2020; Babaeizadeh et al., 2021), learning an accurate video prediction model remains a notoriously difficult problem and requires substantial computing resources, especially when the inputs are high-resolution video sequences (Castrejon et al., 2019; Villegas et al., 2019; Clark et al., 2019; Luc et al., 2020; Walker et al., 2021). This is because the video prediction model should excel at both generating high-fidelity images and learning the dynamics of environments, though each task by itself is already very challenging. Recently, autoregressive latent video prediction methods (Rakhimov et al., 2021; Yan et al., 2021) have been proposed to improve the efficiency of video prediction, by separating video prediction into two sub-problems: first pre-training an image generator (e.g., VQ-VAE; Oord et al. 2017), and then learning the autoregressive prediction model (Weissenborn et al., 2020; Chen et al., 2020) in the latent space of the pre-trained image generator. However, the prior works are limited in that they only consider relatively low-resolution videos (up to 128 × 128 pixels) for demonstrating the efficiency of the approach; it is questionable whether such experiments can fully demonstrate the benefit of operating in the latent space of an image generator instead of the high-dimensional pixel-channel space.
In this paper, we present High-fidelity AutoRegressive latent video Prediction (HARP), which scales up previous autoregressive latent video prediction methods for high-fidelity video prediction. The main principle behind the design of HARP is simplicity: we improve video prediction quality with minimal modification to existing methods. First, for image generation, we employ a high-fidelity image generator, i.e., a vector-quantized generative adversarial network (VQ-GAN; Esser et al. 2021). This improves video prediction by enabling high-fidelity image generation (up to 256 × 256 pixels) on various video datasets. Then a causal transformer model (Chen et al., 2020), which operates on top of discrete latent codes, is trained to predict the discrete codes from VQ-GAN, and autoregressive predictions made by the transformer model are decoded into future frames at inference time. Moreover, motivated by the sampling techniques widely used in language generation for making coherent and diverse predictions, we propose to utilize top-k sampling (Fan et al., 2018), which draws the next discrete code from the k most probable codes. Since the number of discrete codes the autoregressive model has to predict is very large, e.g., 6,400 codes on the KITTI driving dataset (Geiger et al., 2013), we find that discarding rare discrete codes helps the model predict diverse but high-quality videos, without any change to the training procedure. We highlight the main contributions of this paper below: • We show that our autoregressive latent video prediction model, HARP, can predict high-resolution (256×256 pixels) future frames on a simulated robotics dataset (i.e., Meta-World; Yu et al. 2020) and a large-scale real-world robotics dataset (i.e., RoboNet; Dasari et al. 2019). • We show that HARP can leverage an image generator pre-trained on ImageNet (Deng et al., 2009) for training a high-resolution video prediction model on the complex, large-scale Kinetics-600 dataset (Carreira et al., 2018), significantly reducing the training cost. • HARP achieves competitive or superior performance to prior state-of-the-art video prediction models with large end-to-end networks on the widely used BAIR Robot Pushing (Ebert et al., 2017) and KITTI driving (Geiger et al., 2013) video prediction benchmarks. • We also show that the pre-trained representations of HARP can be useful for learning a multi-task imitation learning agent on the Meta-World MT-50 benchmark (Yu et al., 2020).

2 RELATED WORK

Video prediction. Video prediction aims to predict the future frames conditioned on images (Michalski et al., 2014; Ranzato et al., 2014; Srivastava et al., 2015; Vondrick et al., 2016; Lotter et al., 2017), texts (Wu et al., 2021b), and actions (Oh et al., 2015; Finn et al., 2016), which is useful for several applications, e.g., model-based RL (Hafner et al., 2019; Kaiser et al., 2020; Rybkin et al., 2021) and simulator development (Kim et al., 2020; 2021). Various video prediction models have been proposed with different approaches, including generative adversarial networks (GANs; Goodfellow et al.
2014) known to generate high-fidelity images, which introduce adversarial discriminators that also consider temporal or motion information (Aigner & Körner, 2018; Jang et al., 2018; Kwon & Park, 2019; Clark et al., 2019; Luc et al., 2020); latent video prediction models that operate on a latent space (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Villegas et al., 2019; Wu et al., 2021a; Babaeizadeh et al., 2021); and autoregressive video prediction models that operate on pixel space by predicting the next pixels autoregressively (Kalchbrenner et al., 2017; Reed et al., 2017; Weissenborn et al., 2020).

Autoregressive latent video prediction. Most closely related to our work are autoregressive latent video prediction models that separate the video prediction problem into image generation and dynamics learning. Walker et al. (2021) proposed to learn a hierarchical VQ-VAE (Razavi et al., 2019) that extracts multi-scale hierarchical latents and then train SNAIL blocks (Chen et al., 2018) that predict the hierarchical latent codes, enabling high-fidelity video prediction. However, this involves a complicated training pipeline and a video-specific architecture, which limits its applicability. As simple alternatives, Rakhimov et al. (2021) and Yan et al. (2021) proposed to first learn a VQ-VAE (Oord et al., 2017) and then train a causal transformer with 3D self-attention (Weissenborn et al., 2020) and factorized 2D self-attention (Child et al., 2019), respectively. These approaches, however, are limited in that they only consider low-resolution videos. We instead present a simple high-resolution video prediction method that incorporates the strengths of both prior approaches.

3 PRELIMINARIES

We consider the standard video prediction framework, where the goal is to predict the future frames conditioned on the initial frames of a video. Specifically, conditioned on the first $c$ frames of a video $\mathbf{x}_{<c} = (\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_{c-1})$, we aim to learn a video prediction model that predicts the future frames $\mathbf{x}_{c:T} = (\mathbf{x}_c, \ldots, \mathbf{x}_{T-1})$, where $\mathbf{x}_t \in \mathbb{R}^{H \times W \times N_{ch}}$ is the frame at timestep $t$. Optionally, one can also condition the prediction model on actions $\mathbf{a} = (\mathbf{a}_0, \ldots, \mathbf{a}_{T-1})$ that the agents in the video would take, i.e., action-conditioned video prediction.

Autoregressive video prediction model. Motivated by the recent success of pixel-level autoregressive models on image generation (Menick & Kalchbrenner, 2018), Weissenborn et al. (2020) introduced an autoregressive video prediction model that approximates the distribution of a video in pixel-channel space. Specifically, given a video $\mathbf{x} \in \mathbb{R}^{T \times H \times W \times N_{ch}}$, the joint distribution over pixels conditioned on the first $c$ frames is modelled as the product over the channel intensities $N_{ch}$ and all $N_p = T \cdot H \cdot W$ pixels except the $N_c = c \cdot H \cdot W$ pixels of the conditioning frames:
$$p(\mathbf{x}_{c:T} \mid \mathbf{x}_{<c}) = \prod_{i=N_c-1}^{N_p-1} \prod_{k=0}^{N_{ch}-1} p\big(x^{k}_{\pi(i)} \mid \mathbf{x}_{\pi(<i)},\, \mathbf{x}^{<k}_{\pi(i)}\big), \quad (1)$$
where $\pi$ is a raster-scan ordering over all pixels of the video (we refer to Weissenborn et al. (2020) for details on the case where $\pi$ is the combination of a subscale and raster-scan ordering, since we only utilize raster-scan ordering for our approach), $\mathbf{x}_{\pi(<i)}$ denotes all pixels before $\mathbf{x}_{\pi(i)}$, $x^{k}_{\pi(i)}$ is the $k$-th channel intensity of the pixel $\mathbf{x}_{\pi(i)}$, and $\mathbf{x}^{<k}_{\pi(i)}$ denotes all channel intensities before $x^{k}_{\pi(i)}$.
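To make the factorization in (1) concrete, here is a minimal PyTorch sketch of how the teacher-forced log-likelihood of a flattened video would be computed from per-position predictive logits. The flattening, shapes, and vocabulary size are illustrative assumptions, not the exact implementation of Weissenborn et al. (2020).

```python
import torch

def video_log_likelihood(logits: torch.Tensor, codes: torch.Tensor, n_cond: int) -> torch.Tensor:
    """Teacher-forced log-likelihood of a flattened video, in the spirit of Eq. (1).

    logits: (N, vocab) predictive scores, where position i conditions on positions < i.
    codes:  (N,) flattened target symbols (pixel intensities or discrete codes).
    n_cond: number of leading positions given as conditioning (not scored).
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, codes.unsqueeze(-1)).squeeze(-1)  # (N,)
    return token_ll[n_cond:].sum()  # sum only over predicted positions

# Hypothetical usage: 100 positions over a 256-symbol vocabulary, 10 conditioning positions.
logits = torch.randn(100, 256)
codes = torch.randint(0, 256, (100,))
ll = video_log_likelihood(logits, codes, n_cond=10)
```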
Vector quantized variational autoencoder. VQ-VAE (Oord et al., 2017) consists of an encoder that compresses images into discrete representations and a decoder that reconstructs images from these discrete representations. Both the encoder and decoder share a codebook of prototype vectors, which is also learned throughout training. Formally, given an image $\mathbf{x} \in \mathbb{R}^{H \times W \times N_{ch}}$, the encoder $E$ encodes $\mathbf{x}$ into a feature map $z_e(\mathbf{x}) \in \mathbb{R}^{H' \times W' \times N_z}$ that consists of a series of latent vectors $z_{\pi'(i)}(\mathbf{x}) \in \mathbb{R}^{N_z}$, where $\pi'$ is a raster-scan ordering of the feature map $z_e(\mathbf{x})$ of size $|\pi'| = H' \cdot W'$. Then $z_e(\mathbf{x})$ is quantized to discrete representations $z_q(\mathbf{x}) \in \mathbb{R}^{|\pi'| \times N_z}$ based on the distance of the latent vectors $z_{\pi'(i)}(\mathbf{x})$ to the prototype vectors in the codebook $\mathcal{C} = \{e_k\}_{k=1}^{K}$ as follows:
$$z_q(\mathbf{x}) = \big(e_{q(\mathbf{x},1)}, e_{q(\mathbf{x},2)}, \cdots, e_{q(\mathbf{x},|\pi'|)}\big), \quad \text{where } q(\mathbf{x}, i) = \operatorname*{argmin}_{k \in [K]} \big\| z_{\pi'(i)}(\mathbf{x}) - e_k \big\|_2, \quad (2)$$
where $[K]$ is the set $\{1, \cdots, K\}$. The decoder $G$ then learns to reconstruct $\mathbf{x}$ from the discrete representations $z_q(\mathbf{x})$. The VQ-VAE is trained by minimizing the following objective:
$$\mathcal{L}_{\text{VQVAE}}(\mathbf{x}) = \underbrace{\big\| \mathbf{x} - G(z_q(\mathbf{x})) \big\|_2^2}_{\mathcal{L}_{\text{recon}}} + \underbrace{\big\| \operatorname{sg}[z_e(\mathbf{x})] - z_q(\mathbf{x}) \big\|_2^2}_{\mathcal{L}_{\text{codebook}}} + \beta \cdot \underbrace{\big\| \operatorname{sg}[z_q(\mathbf{x})] - z_e(\mathbf{x}) \big\|_2^2}_{\mathcal{L}_{\text{commit}}}, \quad (3)$$
where $\operatorname{sg}$ denotes the stop-gradient operator, $\mathcal{L}_{\text{recon}}$ is a reconstruction loss for learning representations useful for reconstructing images, $\mathcal{L}_{\text{codebook}}$ is a codebook loss that brings codebook representations closer to the corresponding encoder outputs, and $\mathcal{L}_{\text{commit}}$ is a commitment loss weighted by $\beta$ that prevents encoder outputs from fluctuating frequently between different representations.

Vector quantized generative adversarial network. VQ-GAN (Esser et al., 2021) is a variant of VQ-VAE that (a) replaces $\mathcal{L}_{\text{recon}}$ in (3) with a perceptual loss $\mathcal{L}_{\text{LPIPS}}$ (Zhang et al., 2018), and (b) introduces an adversarial training scheme in which a patch-level discriminator $D$ (Isola et al., 2017) is trained to discriminate real and generated images by maximizing the following loss:
$$\mathcal{L}_{\text{GAN}}(\mathbf{x}) = \log D(\mathbf{x}) + \log\big(1 - D(G(z_q(\mathbf{x})))\big). \quad (4)$$
The objective for training the VQ-GAN model is then defined as:
$$\min_{E, G, \mathcal{C}} \max_{D} \; \mathbb{E}_{\mathbf{x} \sim p(\mathbf{x})} \big[ (\mathcal{L}_{\text{LPIPS}} + \mathcal{L}_{\text{codebook}} + \mathcal{L}_{\text{commit}}) + \lambda \cdot \mathcal{L}_{\text{GAN}} \big], \quad (5)$$
where $\lambda = \frac{\nabla_{G_L}[\mathcal{L}_{\text{LPIPS}}]}{\nabla_{G_L}[\mathcal{L}_{\text{GAN}}] + \delta}$ is an adaptive weight, $\nabla_{G_L}$ is the gradient with respect to the inputs of the last layer $G_L$ of the decoder, and $\delta = 10^{-6}$ is a scalar introduced for numerical stability.
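As a concrete illustration of the quantization step in (2), the following is a minimal PyTorch sketch of the nearest-codebook lookup. The 16×16 latent grid and codebook size match values reported later in the paper, but the code itself is a sketch, not the authors' implementation.

```python
import torch

def quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    """Nearest-neighbor codebook lookup as in Eq. (2).

    z_e:      encoder output flattened to shape (H' * W', N_z)
    codebook: prototype vectors of shape (K, N_z)
    Returns the discrete code indices q(x, i) and the quantized vectors z_q.
    """
    dists = torch.cdist(z_e, codebook)   # pairwise L2 distances, (H'*W', K)
    indices = dists.argmin(dim=-1)       # nearest prototype per position
    z_q = codebook[indices]              # (H'*W', N_z)
    return indices, z_q

# Example sizes: a 16x16 latent grid with K = 1024 codes of dimension N_z = 256 (assumed).
z_e = torch.randn(16 * 16, 256)
codebook = torch.randn(1024, 256)
indices, z_q = quantize(z_e, codebook)
```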
4 METHOD

We present HARP, a video prediction model capable of predicting high-fidelity future frames. Our method is designed to fully exploit the benefit of an autoregressive latent video prediction model that separates video prediction into image generation and dynamics learning. Specifically, we consider the combination of (a) the recently introduced high-fidelity image generator (Esser et al., 2021) and (b) an autoregressive latent video prediction model (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021) that operates on top of the pre-trained image generator. The full architecture of HARP is illustrated in Figure 2.

4.1 HIGH-FIDELITY IMAGE GENERATOR

We use the VQ-GAN model (Esser et al., 2021), which has proven effective for high-resolution image generation, as our image generator (see Section 3 for the formulation of VQ-GAN). Similar to the motivation of Tian et al. (2021), who utilize a pre-trained image generator in the context of video synthesis, we first pre-train the image generator and then freeze it throughout training to improve the efficiency of learning video prediction models. A notable difference from prior work that utilizes 3D convolutions to temporally downsample the video for efficiency (Yan et al., 2021) is that our image generator operates on single images; hence it focuses solely on improving the quality of generated images. Importantly, this enables us to utilize a VQ-GAN model pre-trained on a wide range of natural images, e.g., ImageNet, without training the image generator on the target datasets, which can significantly reduce the training cost of a high-resolution video prediction model. Also, the representations from our video prediction model can be easily transferred to downstream tasks that require fine-grained control at each timestep, e.g., imitation learning (see Section 5.3 for supporting experimental results on multi-task imitation learning).

4.2 AUTOREGRESSIVE LATENT VIDEO PREDICTION MODEL

To leverage the VQ-GAN model for video prediction, we utilize an autoregressive latent video prediction architecture that operates on top of the discrete codes extracted from a video $\mathbf{x}$. Specifically, we extract the discrete codes $z(\mathbf{x}) = (z(\mathbf{x}_1), \ldots, z(\mathbf{x}_T))$ using the pre-trained VQ-GAN, where $z(\mathbf{x}_t) = (q(\mathbf{x}_t, 1), q(\mathbf{x}_t, 2), \ldots, q(\mathbf{x}_t, |\pi'|))$ is the discrete code extracted from the frame $\mathbf{x}_t$ as in (2). Then, instead of modelling the distribution of the video $p(\mathbf{x})$ in pixel-channel space as in (1), we learn the distribution of the video in the discrete latent representation space:
$$p\big(z(\mathbf{x}_{c:T}) \mid \mathbf{x}_{<c}\big) = \prod_{i=0}^{N_d-1} p\big(z_{\pi'(i)}(\mathbf{x}) \mid z_{\pi'(<i)}(\mathbf{x})\big), \quad (6)$$
where $N_d = (T - c) \cdot H' \cdot W'$ is the total number of codes from $\mathbf{x}_{c:T}$. While the specific implementation for modelling $p(z(\mathbf{x}))$ differs across prior works (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021), due to its simplicity we utilize the causal transformer architecture (Yan et al., 2021), where the output logits from input codes are trained to predict the next discrete codes. We remark that our approach is also compatible with other architectures.

4.3 ADDITIONAL TECHNIQUES

Top-k sampling. To improve the video prediction quality of latent autoregressive models, whose outputs are sampled from a probability distribution over a large number of discrete codes, we utilize top-k sampling (Fan et al., 2018), which randomly samples the output from the k most probable discrete codes. By preventing the model from sampling rare discrete codes from the long tail of the probability distribution and predicting future frames conditioned on such codes, we find that top-k sampling improves video prediction quality, especially given that the number of discrete encodings required for future prediction is very large, e.g., 2,560 on RoboNet (Dasari et al., 2019) and up to 6,400 on the KITTI dataset (Geiger et al., 2013) in our experimental setup. A minimal sketch of this sampling step is given after this section.

Data augmentation. We also investigate how data augmentation can improve the performance of autoregressive latent video prediction models. Since the image generator model is not trained with augmentation, we utilize a weak augmentation to avoid the instability caused by aggressive transformation of input frames, i.e., a translation augmentation that moves the input images by m pixels along the X or Y direction.
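To make the top-k sampling step described above concrete, here is a minimal PyTorch sketch, assuming per-step logits over the codebook produced by the causal transformer; it is an illustration, not the exact implementation.

```python
import torch

def sample_top_k(logits: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Sample the next discrete code from the k most probable codes.

    logits: unnormalized scores over the codebook, shape (batch, K).
    Codes outside the top-k are discarded before sampling.
    """
    top_values, top_indices = logits.topk(k, dim=-1)       # (batch, k)
    probs = torch.softmax(top_values, dim=-1)              # renormalize over the top-k
    choice = torch.multinomial(probs, num_samples=1)       # (batch, 1)
    return top_indices.gather(-1, choice).squeeze(-1)      # sampled code indices

# Hypothetical usage with a codebook of size 1024 and k = 10, as used in the paper.
logits = torch.randn(4, 1024)
next_codes = sample_top_k(logits, k=10)
```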
5 EXPERIMENTS

We design our experiments to investigate the following: • Can HARP predict high-resolution future frames (up to 256 × 256 pixels) on various video datasets with different characteristics? • How does HARP compare to state-of-the-art methods with large end-to-end networks on standard video prediction benchmarks in terms of quantitative evaluation? • How do the proposed techniques affect the performance of HARP? • Can HARP be transferred to solve multi-task imitation learning tasks?

5.1 HIGH-RESOLUTION VIDEO PREDICTION

Implementation. We utilize up to 8 Nvidia 2080Ti GPUs and 20 CPU cores for training each model. For training VQ-GAN (Esser et al., 2021), we first train the model without the discriminator loss $\mathcal{L}_{\text{GAN}}$ and then continue training with the loss, following the suggestion of the authors. For all experiments, VQ-GAN downsamples each frame into 16 × 16 latent codes, i.e., by a factor of 4 for frames of size 64 × 64 and a factor of 16 for frames of size 256 × 256. For training the transformer model, the VQ-GAN model is frozen so that its parameters are not updated. We use Sparse Transformers (Child et al., 2019) as our transformer architecture to accelerate training. As for hyperparameters, we use k = 10 for sampling at inference time but no data augmentation for the high-resolution video prediction experiments. We report more detailed implementation details in Appendix A.

Meta-World experiments. To demonstrate that our method can predict high-resolution videos (256 × 256 pixels), we first use the action-free Meta-World dataset (Yu et al., 2020) consisting of 2,500 demonstrations from 50 different robotic manipulation tasks, which we collected using the deterministic scripted policies.1 Specifically, we train a single model to predict future videos of all 50 tasks, without leveraging task-specific information such as the task index. For evaluation, we use 10% of the demonstrations as a held-out test dataset. As shown in Figure 3, our model accurately predicts the high-resolution future frames of diverse tasks, capturing all the small details. This shows that our model can effectively learn the information of multiple tasks required for predicting future frames. We also remark that such representations can be useful for improving the performance of an imitation learner (see Section 5.3 for supporting experimental results).

RoboNet experiments. We now investigate how our model works on the large-scale, real-world RoboNet dataset (Dasari et al., 2019) consisting of more than 15 million frames. While prior works successfully trained video prediction models on 64 × 64 videos (Wu et al., 2021a; Babaeizadeh et al., 2021), we show that our model can predict high-resolution 256 × 256 videos even with fewer parameters than those used in the prior works for predicting 64 × 64 videos. Specifically, we first train a VQ-GAN model with 91.5M parameters and then train a 12-layer causal transformer model with 74.2M parameters that predicts the future 10 frames conditioned on the first two frames and the future ten actions. The total number of parameters is 165.7M, which is smaller than the 303.3M of FitVid (Babaeizadeh et al., 2021) for predicting 64 × 64 videos. Figure 1 and Figure 4 show the predicted frames on held-out test videos, where the model predicts high-resolution future frames in which a robot arm moves around various objects of different colors and shapes.

1The dataset is available at: https://shorturl.at/acnxM

Kinetics-600 experiments using ImageNet pre-trained VQ-GAN. Finally, we consider the very complex, large-scale Kinetics-600 dataset (Carreira et al., 2018) consisting of more than 400,000 videos, which requires a large amount of computing resources for training even at 64 × 64 resolution (Clark et al., 2019; Luc et al., 2020). To avoid the prohibitively expensive training cost of high-resolution video prediction models on this dataset and to fully exploit the benefit of employing a high-fidelity image generator, we utilize a VQ-GAN model pre-trained on the ImageNet dataset (Deng et al., 2009).2
As we only train the transformer model for video prediction, this enables us to train a high-resolution video prediction model very efficiently. Specifically, we train the transformer model for 60,000 steps on the training dataset, which takes less than a day on our machine. As shown in Figure 5, our model can predict future frames on the test videos,3 which demonstrates that leveraging a large image generator pre-trained on a wide range of natural images can be a promising recipe for efficient video prediction on high-resolution, large-scale video datasets.

5.2 COMPARATIVE EVALUATION ON STANDARD BENCHMARKS

Datasets. For quantitative evaluation, we first consider the BAIR Robot Pushing dataset (Ebert et al., 2017) consisting of roughly 40k training and 256 test videos. We consider the action-free setup; hence video prediction models should be stochastic to predict the diverse possible movements of the robot arm and objects. Following the setup in prior works (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), we predict 15 future frames conditioned on one frame. We also evaluate our method on the KITTI driving dataset (Geiger et al., 2013), where the training and test datasets are split following the setup in Lotter et al. (2017). As the KITTI dataset is relatively small-scale compared to the other datasets, i.e., 57 training videos, it provides a good testbed for investigating the effect of data augmentation. For hyperparameters, we use k = 10 for both datasets, and data augmentation with m = 4 is applied only to KITTI, as there was no sign of overfitting on the BAIR dataset. For a fair comparison, we follow the setup of Villegas et al. (2019), where (i) a model is trained to predict the future ten frames conditioned on five frames and evaluated to predict the future 25 frames conditioned on five frames, and (ii) the test dataset consists of 148 video clips constructed by extracting 30-frame clips at intervals of 5 frames.

2https://github.com/CompVis/taming-transformers 3We collected videos covered by the CC-BY license, which allows the frames to be shown in a paper.

Metrics. We use two evaluation metrics: Learned Perceptual Image Patch Similarity (LPIPS; Zhang et al. 2018), a frame-wise metric designed to better represent the human perceptual similarity of two frames compared to traditional metrics (Wang et al., 2004; Huynh-Thu & Ghanbari, 2008), and Fréchet Video Distance (FVD; Unterthiner et al. 2018), a dynamics-based evaluation metric known to be better correlated with human evaluation than frame-wise metrics. FVD is computed by comparing the summary statistics of an I3D network trained on the Kinetics-400 dataset (Carreira & Zisserman, 2017), and LPIPS is computed using the features from AlexNet (Krizhevsky et al., 2012). For comparison with the scores reported in prior works, we exactly follow the evaluation setup in Villegas et al. (2019) and Babaeizadeh et al. (2021), which samples 100 future videos for each ground-truth test video, then reports the best score over the 100 videos for LPIPS and the score using all videos for FVD, with a batch size of 256 for BAIR and 148 for KITTI.
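As an illustration of the best-of-100 LPIPS protocol described above, here is a minimal sketch using the publicly available `lpips` package; the tensor shapes and value ranges are assumptions, and this is not the authors' exact evaluation code.

```python
import torch
import lpips  # pip install lpips

# LPIPS with AlexNet features, as used for evaluation in the paper.
loss_fn = lpips.LPIPS(net='alex')

def best_lpips(gt_frames: torch.Tensor, sampled_videos: torch.Tensor) -> float:
    """Best (lowest) mean LPIPS over sampled future videos.

    gt_frames:      ground-truth future frames, (T, 3, H, W) in [-1, 1].
    sampled_videos: model samples, (S, T, 3, H, W) in [-1, 1], e.g., S = 100.
    """
    scores = []
    with torch.no_grad():
        for video in sampled_videos:
            per_frame = loss_fn(video, gt_frames)  # (T, 1, 1, 1) per-frame distances
            scores.append(per_frame.mean().item())
    return min(scores)  # report the best score over the samples
```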
Results. Table 1 shows the performance of our method and the baselines on the test sets of BAIR Robot Pushing and the KITTI driving dataset. We observe that our model achieves competitive or superior performance to state-of-the-art methods with large end-to-end networks; e.g., HARP outperforms FitVid, which has 302M parameters, on the KITTI driving dataset. Our model also successfully extrapolates to an unseen number of future frames (i.e., 25 instead of the 10 used during training) on the KITTI dataset. This implies that transformer-based video prediction models can predict an arbitrary number of frames at inference time. On the BAIR dataset, HARP achieves performance similar to FitVid with 302M parameters, even though our method requires only 89M parameters. We provide videos predicted by HARP on the BAIR and KITTI datasets in Appendix C.

Analysis. We investigate how top-k sampling, the number of layers, and the magnitude m of the data augmentation affect performance. Table 2a shows that a smaller k leads to better performance, implying that the proposed top-k sampling effectively improves performance by discarding rare discrete codes that might degrade prediction quality at inference time. As shown in Table 2b, we observe that more layers lead to better performance on the BAIR dataset, which implies our model can be further improved by scaling up the networks. Finally, we find that (i) data augmentation on the KITTI dataset is important for achieving strong performance, similar to the observation of Babaeizadeh et al. (2021), and (ii) too aggressive augmentation leads to worse performance. We provide the learning curves with and without augmentation in Appendix B.

4Baselines are SVG (Villegas et al., 2019), GHVAE (Wu et al., 2021a), FitVid (Babaeizadeh et al., 2021), LVT (Rakhimov et al., 2021), SAVP (Lee et al., 2018), DVD-GAN-FP (Clark et al., 2019), VideoGPT (Yan et al., 2021), TrIVD-GAN-FP (Luc et al., 2020), and Video Transformer (Weissenborn et al., 2020).

5.3 FINE-TUNING HARP FOR MULTI-TASK IMITATION LEARNING

Setup. To demonstrate that the pre-trained representations from HARP can be useful for solving downstream tasks, we evaluate the imitation learning performance of fine-tuned HARP on the MT-50 benchmark from Meta-World. Specifically, we take the pre-trained HARP model (see Figure 3 for the video predictions from this model) and fine-tune it to predict expert actions by introducing a policy network on top of the transformer model. For comparative evaluation, we consider three baselines: (a) VQ-Transformer, which shares the same architecture as HARP but is trained from scratch; (b) CNN-LSTM, which extracts features using convolutional neural networks (CNNs) and LSTM networks; and (c) CNN-Transformer, which utilizes transformer networks instead of LSTM networks. For training and evaluation, we use the same training and test datasets used for the video prediction experiments. We report the average success rate over 10 trials for each task. More details are available in Appendix A.

Results. Table 3 shows the performance of the imitation learning policies on the MT-50 test environments. We first observe that VQ-Transformer, which has the same architecture as HARP but is trained from scratch, completely fails to solve the tasks. This shows the difficulty of training useful representations with fixed discrete codes as inputs. In contrast, the fine-tuned HARP model outperforms the other baselines because its pre-trained representations contain useful information for long-term reasoning. This demonstrates that video prediction with HARP can be an effective self-supervised learning scheme for solving various control tasks.

6 DISCUSSION

In this work, we presented HARP, a high-fidelity autoregressive latent video prediction model.
By employing a high-fidelity image generator and utilizing top-k sampling at inference time, HARP can predict high-resolution future frames and achieves competitive performance to state-of-the-art video prediction methods with large end-to-end networks. We also show that HARP can leverage an image generator pre-trained on a wide range of natural images for video prediction, similar to the approach in the context of video synthesis (Tian et al., 2021). We hope this work inspires more investigation into leveraging pre-trained image generators for video prediction, which can significantly reduce the cost of training a high-resolution video prediction model by building on the recent success of high-fidelity image generation (Oord et al., 2017; Razavi et al., 2019; Esser et al., 2021; Child, 2020; Ho et al., 2020; Karras et al., 2020; Dhariwal & Nichol, 2021). Finally, we report failure cases of video prediction with HARP and discuss possible extensions to resolve them. A common failure case for video prediction on the RoboNet dataset is ignoring the interaction between the robot arm and objects. For example, in Figure 6a, our model ignores the objects and only predicts the movement of the robot arm. On the other hand, a common failure case on Kinetics-600 is degenerate video prediction, where the model simply repeats the conditioning frame without predicting the future, as shown in Figure 6b. These failure cases might be resolved by training larger networks, similar to observations in the field of natural language processing, e.g., GPT-3 (Brown et al., 2020), or might necessitate a new architecture for addressing the complexity of training autoregressive latent prediction models on video datasets.

ETHICS STATEMENT

While video prediction can be useful for various applications, including robotic manipulation and autonomous driving, it might be misused by malicious users for unethical purposes, e.g., fake videos accusing politicians or sexual videos of individuals. As our work introduces a method for generating higher-resolution future frames, it may increase the chance of such videos being mistaken for real ones. For this reason, in addition to developing video prediction methods that generate more realistic frames, it is important to be aware of potential problems and to develop methods to detect generated videos (Gentine et al., 2018).

REPRODUCIBILITY STATEMENT

We describe the implementation and evaluation details in Section 5 and Appendix A. We also provide our code in the supplementary material.

A EXPERIMENTAL SETUP

A.1 DATASETS

Meta-World. Meta-World (Yu et al., 2020) is a robotic manipulation simulator that supports 50 different tasks. For our experiments, we collect 50 demonstrations for each task using the deterministic scripted policies,5 for a total of 2,500 demonstrations. For evaluation, we construct a held-out test dataset from 10% of the demonstrations. To improve the visibility of the rendered frames, we adjust the camera view using the publicly available source code.6 We provide the dataset used in our experiments.7

RoboNet. RoboNet (Dasari et al., 2019) is a large-scale real-world robotics dataset consisting of more than 160,000 high-resolution videos. Since there is no test set for the RoboNet dataset, we follow the setup in Wu et al. (2021a) for constructing a held-out test dataset8 of size 256. Following the setup in Wu et al. (2021a); Babaeizadeh et al. (2021), we train a video prediction model to predict ten future frames conditioned on two initial frames and ten future actions. For preprocessing the frames, we resize the original frames to 256×256 resolution without cropping, and for downloading the dataset we utilize the publicly available script.9
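As a small illustration of this preprocessing step, below is a sketch of resizing raw frames to 256×256 without cropping; the [-1, 1] value-range scaling is an assumed convention, not one confirmed by the paper.

```python
import torch
import torchvision.transforms.functional as TF

def preprocess_frames(frames: torch.Tensor) -> torch.Tensor:
    """Resize raw video frames to 256x256 without cropping.

    frames: (T, 3, H, W) uint8 tensor of raw frames.
    """
    frames = frames.float() / 127.5 - 1.0                 # assumed scaling to [-1, 1]
    return TF.resize(frames, [256, 256], antialias=True)  # full-frame resize, no crop
```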
1. What is the main contribution of the paper in terms of combining transformer-based auto-regressive model with a pre-trained image generator? 2. What are the strengths and weaknesses of the proposed method compared to prior works in video prediction? 3. How does the reviewer assess the novelty and originality of the proposed method regarding its relation to other relevant works? 4. What are the similarities and differences between the proposed method and previous works such as [1], [5], and others? 5. Does the author provide sufficient discussion and comparison with other relevant works in the paper? If not, what specific aspects should be discussed or compared?
Summary Of The Paper Review
Summary Of The Paper This paper presents a video prediction method that combines a transformer-based auto-regressive model with a pre-trained image generator for high-resolution frame generation. The image generator is based on VQ-GAN. The auto-regressive model is trained to predict VQ-GAN's discrete latent codes, which are then used by the image generator to generate the video frames. Experiments on different video prediction benchmarks demonstrate the effectiveness of the proposed method for high-resolution (256x256) video prediction. Review Strengths: The paper is well written and easy to follow. The idea of leveraging a pre-trained image generator for high-fidelity frame outputs while using a transformer to focus on learning video dynamics is well motivated. Experiments on a wide range of video prediction benchmarks demonstrate that the proposed method can generate visually good results and performs well compared to state-of-the-art methods on this challenging problem. Weaknesses: While I appreciate the good results achieved by the proposed method, my major concern is that its novelty is somewhat limited. The idea of combining an image generator with a latent-space auto-regressive model has been widely explored in previous works ([1], [2], [3], [4]). At a high level, the idea of the proposed method is similar to [1], only with VQ-VAE replaced by VQ-GAN. It is not surprising that the visual quality of the prediction results improves with a better image generator, and I don't feel that it provides much novelty by itself. I also feel that the paper lacks in-depth discussion of the relation between the proposed method and other relevant works. In particular, the overall framework followed by the proposed method, including the top-k sampling strategy, is very similar to that of [5]. Yet there is no discussion of [5] except a mention of the use of the same VQ-GAN model. Can the proposed method be considered a direct extension of [5] from image generation to video generation? Is there any fundamental challenge in that extension, and is there any fundamental change that the authors incorporated to address such a challenge? [1] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. VideoGPT: Video generation using VQ-VAE and transformers. arXiv preprint arXiv:2104.10157, 2021. [2] Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video models. In International Conference on Learning Representations, 2020. [3] Ruslan Rakhimov, Denis Volkhonskiy, Alexey Artemov, Denis Zorin, and Evgeny Burnaev. Latent video transformer. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2021. [4] Yu Tian, Jian Ren, Menglei Chai, Kyle Olszewski, Xi Peng, Dimitris N Metaxas, and Sergey Tulyakov. A good image generator is what you need for high-resolution video synthesis. In International Conference on Learning Representations, 2021. [5] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
ICLR
Title Autoregressive Latent Video Prediction with High-Fidelity Image Generator Abstract Video prediction is an important yet challenging problem; burdened with the tasks of generating future frames and learning environment dynamics. Recently, autoregressive latent video models have proved to be a powerful video prediction tool, by separating the video prediction into two sub-problems: pre-training an image generator model, followed by learning an autoregressive prediction model in the latent space of the image generator. However, successfully generating highfidelity and high-resolution videos has yet to be seen. In this work, we investigate how to train an autoregressive latent video prediction model capable of predicting high-fidelity future frames with minimal modification to existing models, and produce high-resolution (256x256) videos. Specifically, we scale up prior models by employing a high-fidelity image generator (VQ-GAN) with a causal transformer model, and introduce additional techniques of top-k sampling and data augmentation to further improve video prediction quality. Despite the simplicity, the proposed method achieves competitive performance to state-of-the-art approaches on standard video prediction benchmarks with fewer parameters, and enables highresolution video prediction on complex and large-scale datasets. Videos are available at the anonymized website https://sites.google.com/view/harp-anonymous. 1 INTRODUCTION Video prediction can enable agents to learn useful representations for predicting the future consequences of the decisions they make, which is crucial for solving the tasks that require long-term planning, including robotic manipulation (Finn & Levine, 2017; Kalashnikov et al., 2018) and autonomous driving (Levinson et al., 2011; Xu et al., 2017). Despite the recent advances in improving the quality of video prediction (Finn et al., 2016; Lotter et al., 2017; Liang et al., 2017; Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Byeon et al., 2018; Kumar et al., 2020; Weissenborn et al., 2020; Babaeizadeh et al., 2021), learning an accurate video prediction model remains notoriously difficult problem and requires a lot of computing resources, especially when the inputs are video sequences with high-resolution (Castrejon et al., 2019; Villegas et al., 2019; Clark et al., 2019; Luc et al., 2020; Walker et al., 2021). This is because the video prediction model should excel at both tasks of generating high-fidelity images and learning the dynamics of environments, though each task itself is already a very challenging problem. Recently, autoregressive latent video prediction methods (Rakhimov et al., 2021; Yan et al., 2021) have been proposed to improve the efficiency of video prediction, by separating video prediction into two sub-problems: first pre-training an image generator (e.g., VQ-VAE; Oord et al. 2017), and then learning the autoregressive prediction model (Weissenborn et al., 2020; Chen et al., 2020) in the latent space of the pre-trained image generator. However, the prior works are limited in that they only consider relatively low-resolution videos (up to 128 × 128 pixels) for demonstrating the efficiency of the approach; it is questionable that such experiments can fully demonstrate the benefit of operating in the latent space of image generator instead of high-dimensional pixel-channel space. 
In this paper, we present High-fidelity AutoRegressive latent video Prediction (HARP), which scales up the previous autoregressive latent video prediction methods for high-fidelity video prediction. The main principle for the design of HARP is simplicity: we improve the video prediction quality with minimal modification to existing methods. First, for image generation, we employ a highfidelity image generator, i.e., vector-quantized generative adversarial network (VQ-GAN; Esser et al. 2021). This improves video prediction by enabling high-fidelity image generation (up to 256× 256 pixels) on various video datasets. Then a causal transformer model (Chen et al., 2020), which operates on top of discrete latent codes, is trained to predict the discrete codes from VQ-GAN, and autoregressive predictions made by the transformer model are decoded into future frames at inference time. Moreover, motivated by the sampling techniques widely-used in language generation for making coherent and diverse predictions, we propose to utilize top-k sampling (Fan et al., 2018) that draws the next discrete code from the k-most probable codes. Since the number of discrete codes the autoregressive model has to predict is very large, e.g., 6,400 codes on KITTI driving dataset (Geiger et al., 2013), we find that discarding rare discrete codes helps the model predict diverse but high-quality videos, without any change to the training procedure. We highlight the main contributions of this paper below: • We show that our autogressive latent video prediction model, HARP, can predict high-resolution (256×256 pixels) future frames on simulated robotics dataset (i.e., Meta-World; Yu et al. 2020) and large-scale real-world robotics dataset (i.e., RoboNet; Dasari et al. 2019). • We show that HARP can leverage the image generator pre-trained on ImageNet (Deng et al., 2009) for training a high-resolution video prediction model on complex, large-scale Kinetics600 dataset (Carreira et al., 2018), significantly reducing the training cost. • HARP achieves competitive or superior performance to prior state-of-the-art video prediction models with large end-to-end networks on widely-used BAIR Robot Pushing (Ebert et al., 2017) and KITTI driving (Geiger et al., 2013) video prediction benchmarks. • We also show that the pre-trained representations of HARP can be useful for learning multi-task imitation learning agent on Meta-World MT50 benchmark (Yu et al., 2020). 2 RELATED WORK Video prediction. Video prediction aims to predict the future frames conditioned on images (Michalski et al., 2014; Ranzato et al., 2014; Srivastava et al., 2015; Vondrick et al., 2016; Lotter et al., 2017), texts (Wu et al., 2021b), and actions (Oh et al., 2015; Finn et al., 2016), which would be useful for several applications, e.g., model-based RL (Hafner et al., 2019; Kaiser et al., 2020; Rybkin et al., 2021), and simulator development (Kim et al., 2020; 2021). Various video prediction models have been proposed with different approaches, including generative adversarial networks (GANs; Goodfellow et al. 
2014) known to generate high-fidelity images by introducing adversarial discriminators that also considers temporal or motion information (Aigner & Körner, 2018; Jang et al., 2018; Kwon & Park, 2019; Clark et al., 2019; Luc et al., 2020), latent video prediction models that operates on the latent space (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Villegas et al., 2019; Wu et al., 2021a; Babaeizadeh et al., 2021), and autoregressive video prediction models that operates on pixel space by predicting the next pixels in an autoregressive way (Kalchbrenner et al., 2017; Reed et al., 2017; Weissenborn et al., 2020). Autoregressive latent video prediction. Most closely related to our work are autoregressive latent video prediction models that separate the video prediction problem into image generation and dynamics learning. Walker et al. (2021) proposed to learn a hierarchical VQ-VAE (Razavi et al., 2019) that extracts multi-scale hierarchical latents then train SNAIL blocks (Chen et al., 2018) that predict hierarchical latent codes, enabling high-fidelity video prediction. However, this involves a complicated training pipeline and a video-specific architecture, which limits its applicability. As simple alternatives, Rakhimov et al. (2021); Yan et al. (2021) proposed to first learn a VQ-VAE (Oord et al., 2017) and train a causal transformer with 3D self-attention (Weissenborn et al., 2020) and factorized 2D self-attention (Child et al., 2019), respectively. These approaches, however, are limited in that they only consider low-resolution videos. We instead present a simple high-resolution video prediction method that incorporates the strengths of both prior approaches. 3 PRELIMINARIES We consider the standard video prediction framework where the goal is to predict the future frames conditioned on the initial frames of a video. Specifically, conditioned on the first c frames of a video x<c = (x0,x1, ...,xc−1), we aim to learn a video prediction model that predicts the future frames xc:T = (xc, ...,xT−1), where xt ∈ RH×W×Nch is the frame at timestep t. Optionally, one can also consider conditioning the prediction model on actions a = (a0, ...,aT−1) that the agents in the video would take, i.e., action-conditioned video prediction. Autoregressive video prediction model. Motivated by the recent success of pixel-level autoregressive models on image generation (Menick & Kalchbrenner, 2018), Weissenborn et al. (2020) introduced an autoregressive video prediction model that approximates the distribution of a video in a pixel-channel space. Specifically, given a video x ∈ RT×H×W×Nch , the joint distribution over pixels conditioned on the first c frames is modelled as the product of channel intensities Nch and all Np = T ·H ·W pixels except Nc = c ·H ·W pixels of conditioning frames: p(xc:T |x<c) = Np−1∏ i=Nc−1 Nch−1∏ k=0 p(xkπ(i)|xπ(<i),x <k π(i)), (1) where π is a raster-scan ordering over all pixels from the video (we refer to Weissenborn et al. (2020) for more details on the case where π is the combination of a subscale and raster-scan ordering since we only utilize raster-scan ordering for our approach), xπ(<i) is all pixels before xπ(i), xkπ(i) is the k-th channel intensity of the pixel xπ(i), and x<kπ(i) is all channel intensities before x k π(i). Vector quantized variational autoencoder. VQ-VAE (Oord et al., 2017) consists of an encoder that compresses images into discrete representations, and a decoder that reconstructs images from these discrete representations. 
Vector quantized variational autoencoder. VQ-VAE (Oord et al., 2017) consists of an encoder that compresses images into discrete representations and a decoder that reconstructs images from these discrete representations. Both the encoder and decoder share a codebook of prototype vectors, which is also learned throughout training. Formally, given an image x ∈ R^{H×W×N_ch}, the encoder E encodes x into a feature map z_e(x) ∈ R^{H′×W′×N_z} that consists of a series of latent vectors z_{π′(i)}(x) ∈ R^{N_z}, where π′ is a raster-scan ordering of the feature map z_e(x) of size |π′| = H′·W′. Then z_e(x) is quantized into discrete representations z_q(x) ∈ R^{|π′|×N_z} based on the distance of the latent vectors z_{π′(i)}(x) to the prototype vectors in the codebook C = {e_k}_{k=1}^{K} as follows:

$$z_q(x) = \big(e_{q(x,1)}, e_{q(x,2)}, \cdots, e_{q(x,|\pi'|)}\big), \quad \text{where} \quad q(x, i) = \underset{k \in [K]}{\arg\min}\ \|z_{\pi'(i)}(x) - e_k\|_2, \qquad (2)$$

where [K] is the set {1, ..., K}. The decoder G then learns to reconstruct x from the discrete representations z_q(x). The VQ-VAE is trained by minimizing the following objective:

$$\mathcal{L}_{\text{VQVAE}}(x) = \underbrace{\|x - G(z_q(x))\|_2^2}_{\mathcal{L}_{\text{recon}}} + \underbrace{\|\mathrm{sg}[z_e(x)] - z_q(x)\|_2^2}_{\mathcal{L}_{\text{codebook}}} + \beta \cdot \underbrace{\|\mathrm{sg}[z_q(x)] - z_e(x)\|_2^2}_{\mathcal{L}_{\text{commit}}}, \qquad (3)$$

where sg[·] denotes the stop-gradient operator, L_recon is a reconstruction loss for learning representations useful for reconstructing images, L_codebook is a codebook loss that brings the codebook representations closer to the corresponding encoder outputs z_e(x), and L_commit is a commitment loss, weighted by β, that prevents the encoder outputs from fluctuating frequently between different representations.

Vector quantized generative adversarial network. VQ-GAN (Esser et al., 2021) is a variant of VQ-VAE that (a) replaces L_recon in (3) with a perceptual loss L_LPIPS (Zhang et al., 2018), and (b) introduces an adversarial training scheme in which a patch-level discriminator D (Isola et al., 2017) is trained to discriminate real and generated images by maximizing the following loss:

$$\mathcal{L}_{\text{GAN}}(x) = \log D(x) + \log\big(1 - D(G(z_q(x)))\big). \qquad (4)$$

The objective for training the VQ-GAN model is then defined as:

$$\min_{E, G, C} \max_{D}\ \mathbb{E}_{x \sim p(x)} \big[ (\mathcal{L}_{\text{LPIPS}} + \mathcal{L}_{\text{codebook}} + \mathcal{L}_{\text{commit}}) + \lambda \cdot \mathcal{L}_{\text{GAN}} \big], \qquad (5)$$

where $\lambda = \frac{\nabla_{G_L}[\mathcal{L}_{\text{LPIPS}}]}{\nabla_{G_L}[\mathcal{L}_{\text{GAN}}] + \delta}$ is an adaptive weight, ∇_{G_L} is the gradient with respect to the inputs of the last layer G_L of the decoder, and δ = 10^{−6} is a scalar introduced for numerical stability.
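The sketch below illustrates the quantization step of (2) together with the codebook and commitment terms of (3). The straight-through gradient trick is standard VQ-VAE practice rather than something spelled out in (3), and β = 0.25 is an illustrative default.

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    """z_e: (N, Nz) encoder outputs; codebook: (K, Nz) prototype vectors e_k.
    Returns quantized latents, code indices q(x, i), and the VQ losses."""
    dists = torch.cdist(z_e, codebook)             # (N, K) pairwise L2 distances
    idx = dists.argmin(dim=-1)                     # nearest prototype, eq. (2)
    z_q = codebook[idx]                            # (N, Nz) quantized latents
    codebook_loss = F.mse_loss(z_q, z_e.detach())  # ||sg[z_e] - z_q||^2 term
    commit_loss = F.mse_loss(z_e, z_q.detach())    # ||sg[z_q] - z_e||^2 term
    z_q = z_e + (z_q - z_e).detach()               # straight-through estimator
    return z_q, idx, codebook_loss + beta * commit_loss
```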
4 METHOD

We present HARP, a video prediction model capable of predicting high-fidelity future frames. Our method is designed to fully exploit the benefit of an autoregressive latent video prediction model that separates video prediction into image generation and dynamics learning. Specifically, we consider the combination of (a) the recently introduced high-fidelity image generator (Esser et al., 2021) and (b) an autoregressive latent video prediction model (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021) that operates on top of the pre-trained image generator. The full architecture of HARP is illustrated in Figure 2.

4.1 HIGH-FIDELITY IMAGE GENERATOR

We adopt the VQ-GAN model (Esser et al., 2021), which has proven effective for high-resolution image generation, as our image generator (see Section 3 for the formulation of VQ-GAN). Similar to the motivation of Tian et al. (2021), who utilize a pre-trained image generator in the context of video synthesis, we first pre-train the image generator and then freeze it throughout training to improve the efficiency of learning video prediction models. The notable difference from prior work that utilizes 3D convolutions to temporally downsample the video for efficiency (Yan et al., 2021) is that our image generator operates on single images; hence, it can focus solely on improving the quality of generated images. Importantly, this enables us to utilize a VQ-GAN model pre-trained on a wide range of natural images, e.g., ImageNet, without training the image generator on the target datasets, which can significantly reduce the training cost of a high-resolution video prediction model. Also, the representations from our video prediction model can be easily transferred to downstream tasks that require fine-grained control at each timestep, e.g., imitation learning (see Section 5.3 for supporting experimental results on multi-task imitation learning).

4.2 AUTOREGRESSIVE LATENT VIDEO PREDICTION MODEL

To leverage the VQ-GAN model for video prediction, we utilize an autoregressive latent video prediction architecture that operates on top of the discrete codes extracted from a video x. Specifically, we extract the discrete codes z(x) = (z(x_1), ..., z(x_T)) using the pre-trained VQ-GAN, where z(x_t) = (q(x_t, 1), q(x_t, 2), ..., q(x_t, |π′|)) is the discrete code extracted from the frame x_t as in (2). Then, instead of modelling the distribution p(x) of the video in the pixel-channel space as in (1), we learn the distribution of the video in the discrete latent representation space:

$$p\big(z(\mathbf{x}_{c:T}) \,|\, \mathbf{x}_{<c}\big) = \prod_{i=0}^{N_d-1} p\big(z_{\pi'(i)}(x) \,|\, z_{\pi'(<i)}(x)\big), \qquad (6)$$

where N_d = (T − c)·H′·W′ is the total number of codes from x_{c:T}. While the specific implementation for modelling p(z(x)) differs across prior works (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021), we utilize the causal transformer architecture (Yan et al., 2021), due to its simplicity, where the output logits from the input codes are trained to predict the next discrete codes. We remark that our approach is also compatible with other architectures.

4.3 ADDITIONAL TECHNIQUES

Top-k sampling. To improve the video prediction quality of latent autoregressive models, whose outputs are sampled from a probability distribution over a large number of discrete codes, we utilize top-k sampling (Fan et al., 2018), which randomly samples the output from the k most probable discrete codes (a minimal sketch follows at the end of this subsection). By preventing the model from sampling rare discrete codes from the long tail of the probability distribution and predicting future frames conditioned on such codes, we find that top-k sampling improves video prediction quality, especially given that the number of discrete codes required for future prediction is very large, e.g., from 2,560 on RoboNet (Dasari et al., 2019) up to 6,400 on the KITTI dataset (Geiger et al., 2013) in our experimental setup.

Data augmentation. We also investigate how data augmentation can improve the performance of autoregressive latent video prediction models. Since the image generator is not trained with augmentation, we utilize only a weak augmentation to avoid the instability caused by aggressive transformations of the input frames, i.e., a translation augmentation that shifts the input images by m pixels along the X or Y direction.
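A minimal sketch of both techniques follows. Here k = 10 matches the value used in our experiments, while the use of torch.roll for translation is an illustrative simplification of the shift augmentation.

```python
import torch

def sample_top_k(logits, k=10):
    """Top-k sampling (Fan et al., 2018): keep the k most probable discrete
    codes, renormalize, and sample. logits: (B, K) over the codebook."""
    topk_vals, topk_idx = torch.topk(logits, k, dim=-1)
    probs = torch.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)  # index into top-k set
    return topk_idx.gather(-1, choice).squeeze(-1)    # original code indices

def translate(frames, m=4):
    """Weak translation augmentation: shift frames by up to m pixels along
    X or Y; torch.roll (wrap-around) stands in for the paper's translation."""
    dx, dy = torch.randint(-m, m + 1, (2,)).tolist()
    return torch.roll(frames, shifts=(dy, dx), dims=(-2, -1))
```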
5 EXPERIMENTS

We design our experiments to investigate the following:
• Can HARP predict high-resolution future frames (up to 256 × 256 pixels) on various video datasets with different characteristics?
• How does HARP compare to state-of-the-art methods with large end-to-end networks on standard video prediction benchmarks in terms of quantitative evaluation?
• How do the proposed techniques affect the performance of HARP?
• Can HARP be transferred to solve multi-task imitation learning tasks?

5.1 HIGH-RESOLUTION VIDEO PREDICTION

Implementation. We utilize up to 8 Nvidia 2080Ti GPUs and 20 CPU cores for training each model. For training VQ-GAN (Esser et al., 2021), we first train the model without the discriminator loss L_GAN and then continue training with the loss, following the suggestion of the authors. For all experiments, VQ-GAN downsamples each frame into 16 × 16 latent codes, i.e., by a factor of 4 for frames of size 64 × 64 and by a factor of 16 for frames of size 256 × 256. For training the transformer model, the VQ-GAN model is frozen so that its parameters are not updated. We use Sparse Transformers (Child et al., 2019) as our transformer architecture to accelerate training. As for hyperparameters, we use k = 10 for sampling at inference time and no data augmentation for the high-resolution video prediction experiments. We report more detailed implementation details in Appendix A.

Meta-World experiments. To demonstrate that our method can predict high-resolution videos (256 × 256 pixels), we first use an action-free Meta-World dataset (Yu et al., 2020) consisting of 2,500 demonstrations from 50 different robotics manipulation tasks, which we collected using the deterministic scripted policies.1 Specifically, we train a single model to predict future videos of all 50 tasks, without leveraging task-specific information such as the task index. For evaluation, we use 10% of the demonstrations as a held-out test dataset. As shown in Figure 3, our model can accurately predict the high-resolution future frames of diverse tasks, capturing all the small details. This shows that our model can effectively learn all the information about the multiple tasks required for predicting future frames. We also remark that such representations can be useful for improving the performance of an imitation learner (see Section 5.3 for supporting experimental results).

1The dataset is available at: https://shorturl.at/acnxM

RoboNet experiments. Next, we investigate how our model works on the large-scale, real-world RoboNet dataset (Dasari et al., 2019) consisting of more than 15 million frames. While prior works successfully trained video prediction models on 64 × 64 videos (Wu et al., 2021a; Babaeizadeh et al., 2021), we show that our model can predict high-resolution 256 × 256 videos even with fewer parameters than those used in the prior works for predicting 64 × 64 videos. Specifically, we first train a VQ-GAN model with 91.5M parameters and then train a 12-layer causal transformer model with 74.2M parameters that predicts the future 10 frames conditioned on the first two frames and the future ten actions. The total number of parameters is 165.7M, which is smaller than the 303.3M of FitVid (Babaeizadeh et al., 2021), which predicts 64 × 64 videos. Figure 1 and Figure 4 show the predicted frames on held-out test videos, where the model predicts high-resolution future frames of a robot arm moving around various objects of different colors and shapes.

Kinetics-600 experiments using ImageNet pre-trained VQ-GAN. Finally, we consider the very complex, large-scale Kinetics-600 dataset (Carreira et al., 2018) consisting of more than 400,000 videos, which requires a large amount of computing resources for training even at 64 × 64 resolution (Clark et al., 2019; Luc et al., 2020). To avoid the prohibitively expensive training cost of high-resolution video prediction models on this dataset and to fully exploit the benefit of employing a high-fidelity image generator, we utilize the VQ-GAN model pre-trained on the ImageNet dataset (Deng et al., 2009).2
As we only train the transformer model for video prediction, this enables us to train a high-resolution video prediction model in a very efficient manner. Specifically, we train the transformer model for 60,000 steps on the training dataset, which takes less than a day on our machine. As shown in Figure 5, our model can predict future frames on the test videos3, which demonstrates that leveraging a large image generator pre-trained on a wide range of natural images can be a promising recipe for efficient video prediction on high-resolution, large-scale video datasets.

2https://github.com/CompVis/taming-transformers
3We collected videos covered by the CC-BY license, which allows us to include the frames in the paper.

5.2 COMPARATIVE EVALUATION ON STANDARD BENCHMARKS

Datasets. For quantitative evaluation, we first consider the BAIR Robot Pushing dataset (Ebert et al., 2017) consisting of roughly 40k training and 256 test videos. We consider the action-free setup; hence, video prediction models should be stochastic to predict the diverse possible movements of the robot arm and objects. Following the setup in prior works (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), we predict 15 future frames conditioned on one frame. We also evaluate our method on the KITTI driving dataset (Geiger et al., 2013), where the training and test datasets are split following the setup in Lotter et al. (2017). As the KITTI dataset is relatively small-scale compared to the other datasets, i.e., 57 training videos, it provides a good testbed for investigating the effect of data augmentation. For hyperparameters, we use k = 10 for both datasets; data augmentation with m = 4 is applied only to KITTI, as there was no sign of overfitting on the BAIR dataset. For a fair comparison, we follow the setup of Villegas et al. (2019), where (i) the model is trained to predict the future ten frames conditioned on five frames and evaluated on predicting the future 25 frames conditioned on five frames, and (ii) the test dataset consists of 148 video clips constructed by extracting 30-frame clips and skipping every 5 frames.

Metrics. We use two evaluation metrics: Learned Perceptual Image Patch Similarity (LPIPS; Zhang et al. 2018), a frame-wise metric designed to better represent the human perceptual similarity of two frames than traditional metrics (Wang et al., 2004; Huynh-Thu & Ghanbari, 2008), and Fréchet Video Distance (FVD; Unterthiner et al. 2018), a dynamics-based evaluation metric known to be better correlated with human evaluation than frame-wise metrics. FVD is computed by comparing the summary statistics of an I3D network trained on the Kinetics-400 dataset (Carreira & Zisserman, 2017), and LPIPS is computed using features from AlexNet (Krizhevsky et al., 2012). For comparison with the scores reported in prior works, we exactly follow the evaluation setup in Villegas et al. (2019) and Babaeizadeh et al. (2021), which samples 100 future videos for each ground-truth test video, then reports the best score over the 100 videos for LPIPS and the score using all videos for FVD, with a batch size of 256 for BAIR and 148 for KITTI.
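The LPIPS side of this protocol can be summarized with the sketch below; lpips_fn is an assumed frame-wise perceptual-distance callable (e.g., an AlexNet-based LPIPS module), and the candidate futures are assumed to have already been sampled from the model. FVD, in contrast, is computed once over all samples rather than per video.

```python
import torch

def best_lpips(lpips_fn, gt_future, sampled_futures):
    """Best (lowest) mean LPIPS over sampled futures for one test video,
    following the evaluation setup of Villegas et al. (2019)."""
    scores = []
    for sample in sampled_futures:  # e.g., 100 candidate futures
        dists = [lpips_fn(pred, gt) for pred, gt in zip(sample, gt_future)]
        scores.append(torch.stack(dists).mean())
    return torch.stack(scores).min()
```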
Results. Table 1 shows the performance of our method and baselines4 on the test sets of the BAIR Robot Pushing and KITTI driving datasets. We observe that our model achieves competitive or superior performance to state-of-the-art methods with large end-to-end networks; e.g., HARP outperforms FitVid with 302M parameters on the KITTI driving dataset. Our model also successfully extrapolates to an unseen number of future frames on the KITTI dataset (i.e., 25 instead of the 10 future frames used in training). This implies that transformer-based video prediction models can predict an arbitrary number of frames at inference time. On the BAIR dataset, HARP achieves performance similar to FitVid with 302M parameters, even though our method only requires 89M parameters. We provide videos predicted by HARP on the BAIR and KITTI datasets in Appendix C.

Analysis. We investigate how the top-k sampling, the number of layers, and the magnitude m of data augmentation affect the performance. Table 2a shows that smaller k leads to better performance, implying that the proposed top-k sampling is effective at improving performance by discarding rare discrete codes that might degrade prediction quality at inference time. As shown in Table 2b, we observe that more layers lead to better performance on the BAIR dataset, which implies that our model can be further improved by scaling up the networks. Finally, we find that (i) data augmentation on the KITTI dataset is important for achieving strong performance, similar to the observation of Babaeizadeh et al. (2021), and (ii) overly aggressive augmentation leads to worse performance. We provide the learning curves with and without augmentation in Appendix B.

4Baselines are SVG (Villegas et al., 2019), GHVAE (Wu et al., 2021a), FitVid (Babaeizadeh et al., 2021), LVT (Rakhimov et al., 2021), SAVP (Lee et al., 2018), DVD-GAN-FP (Clark et al., 2019), VideoGPT (Yan et al., 2021), TrIVD-GAN-FP (Luc et al., 2020), and Video Transformer (Weissenborn et al., 2020).

5.3 FINE-TUNING HARP FOR MULTI-TASK IMITATION LEARNING

Setup. To demonstrate that the pre-trained representations from HARP can be useful for solving downstream tasks, we evaluate the imitation learning performance of fine-tuned HARP on the MT50 benchmark from Meta-World. Specifically, we take the pre-trained HARP model (see Figure 3 for video predictions from this model) and fine-tune it to predict expert actions by introducing a policy network on top of the transformer model. For comparative evaluation, we consider three baselines: (a) VQ-Transformer, which shares the same architecture as HARP but is trained from scratch, (b) CNN-LSTM, which extracts features using convolutional neural networks (CNNs) and LSTM networks, and (c) CNN-Transformer, which utilizes transformer networks instead of LSTM networks. For training and evaluation, we use the same training and test datasets as in the video prediction experiments. We report the average success rate over 10 trials for each task. More details are available in Appendix A.

Results. Table 3 shows the performance of the imitation learning policies on the MT50 test environments. We first observe that VQ-Transformer, which has the same architecture as HARP but is trained from scratch, completely fails to solve the tasks. This shows the difficulty of training useful representations with fixed discrete codes as inputs. In contrast, the fine-tuned HARP model outperforms all other baselines, as its pre-trained representations contain useful information for long-term reasoning. This demonstrates that video prediction with HARP can be an effective self-supervised learning scheme for solving various control tasks.
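As a sketch of this fine-tuning setup (the hidden width and the 4-dimensional Meta-World action space are illustrative assumptions about the implementation):

```python
import torch
import torch.nn as nn

class PolicyHead(nn.Module):
    """Two-layer policy network placed on top of the pre-trained HARP
    transformer and trained with behavioral cloning."""
    def __init__(self, d_model=512, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, features):  # (B, T, d_model) transformer features
        return self.net(features)

def bc_loss(policy, features, expert_actions):
    """Mean squared error to expert actions, as in Section 5.3 / Appendix A.4."""
    return nn.functional.mse_loss(policy(features), expert_actions)
```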
6 DISCUSSION

In this work, we present HARP, a high-fidelity autoregressive latent video prediction model. By employing a high-fidelity image generator and utilizing top-k sampling at inference time, HARP can predict high-resolution future frames and achieves competitive performance to state-of-the-art video prediction methods with large end-to-end networks. We also show that HARP can leverage an image generator pre-trained on a wide range of natural images for video prediction, similar to the approach of Tian et al. (2021) in the context of video synthesis. We hope this work inspires more investigation into leveraging pre-trained image generators for video prediction, which can significantly reduce the cost of training high-resolution video prediction models by building on the recent success of high-fidelity image generation (Oord et al., 2017; Razavi et al., 2019; Esser et al., 2021; Child, 2020; Ho et al., 2020; Karras et al., 2020; Dhariwal & Nichol, 2021).

Finally, we report failure cases of video prediction with HARP and discuss possible extensions to resolve them. A common failure case on the RoboNet dataset is ignoring the interaction between the robot arm and objects. For example, in Figure 6a, our model ignores the objects and only predicts the movement of the robot arm. On the other hand, a common failure case on Kinetics-600 is degenerate video prediction, where the model simply repeats the conditioning frame without predicting the future, as shown in Figure 6b. These failure cases might be resolved by training larger networks, similar to observations in the field of natural language processing, e.g., GPT-3 (Brown et al., 2020), or might necessitate a new architecture for addressing the complexity of training autoregressive latent prediction models on video datasets.

ETHICS STATEMENT

While video prediction can be useful for various applications including robotic manipulation and autonomous driving, it might be misused by malicious users for unethical purposes, e.g., fake videos accusing politicians or sexual videos of individuals. As our work introduces a method for generating higher-resolution future frames, it may increase the chance of such videos being recognized as real. For this reason, in addition to developing video prediction methods that generate more realistic frames, it is important to be aware of potential problems and to develop methods for detecting generated videos (Gentine et al., 2018).

REPRODUCIBILITY STATEMENT

We describe the implementation and evaluation details in Section 5 and Appendix A. We also provide our code in the supplementary material.

A EXPERIMENTAL SETUP

A.1 DATASETS

Meta-World. Meta-World (Yu et al., 2020) is a robotics manipulation simulator that supports 50 different tasks. For our experiments, we collect 50 demonstrations for each task using the deterministic scripted policies,5 giving 2,500 demonstrations in total. For evaluation, we construct the held-out test dataset from 10% of the demonstrations. To improve the visibility of the rendered frames, we adjust the camera view using the publicly available source code.6 We provide the dataset used in our experiments.7

RoboNet. RoboNet (Dasari et al., 2019) is a large-scale real-world robotics dataset consisting of more than 160,000 high-resolution videos. Since there is no official test set for the RoboNet dataset, we follow the setup in Wu et al. (2021a) for constructing a held-out test dataset8 of size 256. Following the setup in Wu et al. (2021a) and Babaeizadeh et al.
(2021), we train a video prediction model to predict ten future frames conditioned on two initial frames and ten future actions. For preprocessing, we resize the original frames to 256 × 256 resolution without cropping. For downloading the dataset, we utilize the publicly available script.9

Kinetics-600. The Kinetics-600 dataset is a large-scale video dataset consisting of more than 400,000 videos across 600 action classes. Following the setup in Clark et al. (2019) and Luc et al. (2020), we train a video prediction model to predict the future 11 frames conditioned on the first five frames. For downloading the dataset, we use the publicly available repository.10

BAIR Robot Pushing. The BAIR Robot Pushing dataset (Ebert et al., 2017) consists of 43,264 training and 256 test videos. While the BAIR dataset contains the actions the robot takes, the common evaluation setup on BAIR is action-free (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), where a video prediction model is trained to predict the future 15 frames conditioned on the initial frame. For downloading and preprocessing the dataset, we utilize the publicly available script.11

KITTI driving dataset. The KITTI driving dataset (Geiger et al., 2013) contains a large number of high-resolution driving videos. For video prediction, we follow the setup in Lotter et al. (2017) and utilize 57 training videos and 3 test videos for evaluation. To avoid utilizing overly similar video clips from the test dataset for evaluation, we follow the setup in Villegas et al. (2019), where a 30-frame clip is extracted at an interval of 5 frames, which constructs a test set of size 148. For comparison with baselines, following the setup in Villegas et al. (2019), we train the model to predict the future ten frames conditioned on five frames and evaluate it on predicting the future 25 frames conditioned on five frames. For downloading and preprocessing the dataset, we utilize the publicly available script.12

5https://github.com/rlworkgroup/metaworld/tree/master/metaworld/policies
6We use the commit 9e3863d in https://github.com/rlworkgroup/metaworld.
7https://shorturl.at/acnxM
8We use the list of videos available at https://github.com/google-research/fitvid/blob/master/robonet_testset_filenames.txt
9https://gist.github.com/soskek/d762751ce0aef4b2c7cf0a1537917016
10https://github.com/cvdfoundation/kinetics-dataset
11https://github.com/wilson1yan/VideoGPT
12https://github.com/coxlab/prednet

A.2 IMPLEMENTATION DETAILS OF HARP

VQ-GAN. The training of HARP consists of two stages. First, for training a VQ-GAN model (Esser et al., 2021), we use the publicly available source code from the authors.13 Following the suggestion of the authors, we train the VQ-GAN model without the discriminator loss until it converges, then resume training with the discriminator loss, for a total of {300000, 500000, 150000, 30000} training steps with batch sizes of {24, 24, 320, 96} on the Meta-World, RoboNet, BAIR Robot Pushing, and KITTI driving datasets, respectively. For the Kinetics-600 dataset, we leverage the publicly available VQ-GAN model14 pre-trained on ImageNet, without training a VQ-GAN model from scratch. The size K of the codebook C in (2) is {8192, 8192, 1024, 1024, 1024} for the Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and KITTI datasets, respectively.
Causal transformer. We then train a 12-layer Sparse Transformer (Child et al., 2019) to predict the discrete codes from the VQ-GAN models in an autoregressive manner, building on the publicly available source code15 of VideoGPT (Yan et al., 2021). For conditioning on initial frames, we utilize the same architecture as Yan et al. (2021), which uses a ResNet to extract a downsampled feature map; we utilize the ResNet-18 architecture for all experiments. We train the model until it converges on the Meta-World, BAIR, and KITTI datasets, but find no sign of overfitting on the large-scale RoboNet and Kinetics-600 datasets. Specifically, we train the model for {50000, 80000, 100000, 85000, 30000} training steps on the Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and KITTI datasets, respectively. We use data augmentation with a magnitude of m = 4 on the KITTI driving dataset, which has proven very effective for improving performance (see Appendix B for learning curves with and without augmentation).

Inference with top-k sampling. We utilize top-k sampling (Fan et al., 2018) to improve the video prediction quality; for all datasets, we use k = 10. One major limitation of autoregressive prediction models is slow inference. We utilize the implementation of Yan et al. (2021), which caches the previous keys and values and reuses them for fast inference. To enable our model to extrapolate to an unseen number of frames at evaluation time on the KITTI dataset (i.e., the model has to predict 25 frames instead of the 10 frames it is trained to predict), we first predict T frames x_{c:T−1} conditioned on the c initial frames x_{1:c}, then predict the next frames by (a) keeping x_{1:c} as conditioning frames and (b) feeding the last T − 1 predicted frames as inputs to the causal transformer model. We repeat this process until all 25 future frames are predicted.

A.3 IMPLEMENTATION DETAILS OF ACTION-CONDITIONED HARP

To predict future frames conditioned on future actions on the RoboNet dataset (Dasari et al., 2019), we condition the prediction on actions by adding action embeddings to the embeddings of the discrete codes. Specifically, we introduce a linear layer that maps raw actions to action embeddings with the same dimension as the token embeddings, then add the action embeddings of timestep t + 1 to the token embeddings used for predicting the tokens of timestep t + 1. At inference time, the procedure is exactly the same as for HARP, except that the future action embedding is added to the token embedding. We find that this simple modification to the original architecture enables HARP to predict future frames conditioned on actions. We provide an illustration of action-conditioned HARP in Figure 7, and a minimal sketch of the conditioning step below.

13https://github.com/CompVis/taming-transformers
14https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/
15https://github.com/wilson1yan/VideoGPT
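The sketch referenced above follows; the action dimension, model width, and codes_per_frame default are illustrative assumptions rather than the exact values of our implementation.

```python
import torch
import torch.nn as nn

class ActionConditioning(nn.Module):
    """Project raw actions to the token-embedding width and add the embedding
    of the action at timestep t + 1 to every token embedding used to predict
    the codes of frame t + 1 (frame-major token ordering assumed)."""
    def __init__(self, action_dim=5, d_model=512):
        super().__init__()
        self.proj = nn.Linear(action_dim, d_model)

    def forward(self, token_emb, actions, codes_per_frame=256):
        # token_emb: (B, T * codes_per_frame, d_model); actions: (B, T, action_dim)
        act_emb = self.proj(actions)                              # (B, T, d_model)
        act_emb = act_emb.repeat_interleave(codes_per_frame, dim=1)
        return token_emb + act_emb
```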
A.4 IMPLEMENTATION DETAILS OF HARP FINE-TUNING

To fine-tune the HARP model for solving multi-task imitation learning tasks on Meta-World (Yu et al., 2020), we first pre-train a model to predict the future 11 frames conditioned on the first frame using the dataset from all 50 tasks (see Appendix A.1 for details on the dataset). Then we fine-tune the model to predict expert actions by introducing a two-layer policy network on top of the causal transformer model. Specifically, we train a behavioral cloning policy to minimize the mean squared error between the predicted actions and the ground-truth expert actions. To further improve the performance of all methods, we follow the idea of Dasari & Gupta (2020) and learn an inverse dynamics predictor that predicts the action given two consecutive frames. We train all methods for 20,000 steps with a batch size of 20 and data augmentation of magnitude m = 4. For the CNN-based methods, we use ResNet-50 as the feature extractor, a 4-layer LSTM16 for CNN-LSTM, and a 12-layer causal transformer for CNN-Transformer.

B EFFECTS OF AUGMENTATION

Figure 8 shows the test error during the training of HARP on the KITTI driving dataset (Geiger et al., 2013). One can see that the test error of the model trained without augmentation increases after the initial ∼2,500 training steps, which is a sign of overfitting. In contrast, the test error of the model trained with data augmentation keeps decreasing throughout training until it converges, which shows the effectiveness of data augmentation for learning video prediction models, similar to the observation in Babaeizadeh et al. (2021). One notable detail here is that HARP overfits to the KITTI training dataset with a much smaller number of parameters (i.e., 89M) than FitVid, which utilizes 303M parameters.

16We tried deeper networks to match the number of trainable parameters, but found deeper LSTM networks very unstable to train. Hence we searched over {2, 4, 8, 16} layers and report the best results, achieved with the 4-layer LSTM.

C VIDEO PREDICTIONS ON BAIR AND KITTI

We provide the future frames predicted by HARP on BAIR Robot Pushing (Ebert et al., 2017) and the KITTI driving dataset (Geiger et al., 2013). On the BAIR dataset, the model is trained to predict the future 15 frames conditioned on the first frame. On the KITTI dataset, the model is trained to predict the future 10 frames conditioned on five frames, and evaluated on predicting the future 25 frames conditioned on five frames.

D MORE VIDEO PREDICTIONS ON META-WORLD

We provide more videos predicted by HARP on the Meta-World dataset (Yu et al., 2020). The model is trained to predict the future 11 frames conditioned on the first frame.

E MORE VIDEO PREDICTIONS ON ROBONET

We provide more videos predicted by HARP on the RoboNet dataset (Dasari et al., 2019). The model is trained to predict the future ten frames conditioned on the initial two frames and the future ten actions.

F MORE VIDEO PREDICTIONS ON KINETICS-600

G ATTRIBUTION

Figure 5 (top): "Windsurfing Session 2013 Iballa Moreno" by morenotwins. Accessible here
Figure 5 (bottom): "How to fry an egg in 5 simple steps" by What's Gaby Cooking. Accessible here
Figure 6 (right): "Prepare fruit cutting to pets #Shorts 4" by AP STUDIO. Accessible here
Figure 12 (first): "Windsurfing" by Dmitry Rudnev. Accessible here
Figure 12 (second): "Eric Cornelissen Windsurfing September 11, 2017" by Ron Van Dijk. Accessible here
Figure 12 (third): "Smart way to cut Fruits #6#" by MR. BEING SMART. Accessible here
Figure 12 (fourth): "How to make the Perfect Crunchy Half Cakes ( Kangumu ) ‖‖ Jinsi ya kupika Half Cakes za kupasuka." by Ayleen's Food & Vlogs. Accessible here
1. What is the main contribution of the paper in the field of video synthesis?
2. What are the strengths and weaknesses of the proposed approach HARP compared to other works in the literature?
3. Do you have any questions or concerns regarding the technical details of the baseline models used in the paper?
4. How does the reviewer assess the novelty and efficiency claims made in the paper?
5. Are there any missing analyses or comparisons that could further support the paper's findings?
Summary Of The Paper

This paper proposes a very simple method (dubbed HARP) for video synthesis: by relying on a pre-trained VQGAN which encodes (and decodes) each frame separately to (and from) a highly compressed representation, a (sparse) transformer model can be trained as an autoregressive generative model on the sequence of discrete image representations. The paper shows that despite its simplicity, the proposed approach can compete with the state of the art and produce high-quality video at 256 × 256 resolution. As a side contribution, it is shown that the proposed model can be fine-tuned towards multi-task imitation learning.

Review

Strengths: I think this work fits well into the recent literature on generative models, which divides the learning process of perceptually highly redundant data (such as images, audio, or videos) into a two-stage procedure (e.g., [1, 2, 3]), where the first stage is trained to achieve maximum compression while still yielding perceptually close representations, and the actual generative model is then trained on this compressed representation in a second stage. This has the distinct advantage that the generative model does not have to "relearn" the compression itself (which is particularly difficult in mode-covering approaches such as AR likelihood models) and can focus on salient high-level features. However, this also reveals a weakness of the present work: the "compressibility" is only exploited for single frames, but not for the temporal dimension, which is particularly wasteful for video data.

Weaknesses: The novelty of HARP over previous works is quite limited: there is a large body of work such as [4, 5, 6] that uses a two-stage procedure for video generation. The main advantage over VideoGPT is that HARP does not rely on a hierarchical representation and uses a VQGAN instead of a VQVAE. Also, I don't quite understand why top-k sampling is "sold" so much; it is an established technique in dealing with autoregressive models. What about other pruning techniques like nucleus sampling or temperature rescaling of logits? In addition, the work should also qualitatively demonstrate sensitivity to the specific value of k. Finally, a quantitative analysis of the "high resolution" experiments (256 × 256 px) is missing.

Mixed Comments: The technical details regarding the baseline models in Sec. 5.3 are very sparse. What does "on top of the transformer model" mean? How exactly does the CNN+Transformer model differ from the VQ+Transformer model? Is it possible to train only the policy network while fixing the (pre-trained) transformer model? Does the VQGAN's codebook collapse affect training, and if so, how is it taken into account? How long does it take to sample the different models? How does it compare to other approaches? What is m in Sec. 5.2? Sec. 5.2 and Tab. 2 heavily rely on the fact that HARP uses fewer parameters than some other models, but do not explicitly state why this is useful. See also [7] on this "efficiency misnomer".

References
[1]: Esser, Patrick, Robin Rombach, and Bjorn Ommer. "Taming transformers for high-resolution image synthesis." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[2]: Iashin, Vladimir, and Esa Rahtu. "Taming Visually Guided Sound Generation." arXiv preprint arXiv:2110.08791 (2021).
[3]: Mentzer, Fabian, et al. "High-fidelity generative image compression." arXiv preprint arXiv:2006.09965 (2020).
[4]: Yan, Wilson, et al. "VideoGPT: Video Generation using VQ-VAE and Transformers."
arXiv preprint arXiv:2104.10157 (2021).
[5]: Rakhimov, Ruslan, et al. "Latent video transformer." arXiv preprint arXiv:2006.10704 (2020).
[6]: Dorkenwald, Michael, et al. "Stochastic Image-to-Video Synthesis using cINNs." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[7]: Dehghani, Mostafa, et al. "The Efficiency Misnomer." arXiv preprint arXiv:2110.12894 (2021).
In this paper, we present High-fidelity AutoRegressive latent video Prediction (HARP), which scales up the previous autoregressive latent video prediction methods for high-fidelity video prediction. The main principle for the design of HARP is simplicity: we improve the video prediction quality with minimal modification to existing methods. First, for image generation, we employ a highfidelity image generator, i.e., vector-quantized generative adversarial network (VQ-GAN; Esser et al. 2021). This improves video prediction by enabling high-fidelity image generation (up to 256× 256 pixels) on various video datasets. Then a causal transformer model (Chen et al., 2020), which operates on top of discrete latent codes, is trained to predict the discrete codes from VQ-GAN, and autoregressive predictions made by the transformer model are decoded into future frames at inference time. Moreover, motivated by the sampling techniques widely-used in language generation for making coherent and diverse predictions, we propose to utilize top-k sampling (Fan et al., 2018) that draws the next discrete code from the k-most probable codes. Since the number of discrete codes the autoregressive model has to predict is very large, e.g., 6,400 codes on KITTI driving dataset (Geiger et al., 2013), we find that discarding rare discrete codes helps the model predict diverse but high-quality videos, without any change to the training procedure. We highlight the main contributions of this paper below: • We show that our autogressive latent video prediction model, HARP, can predict high-resolution (256×256 pixels) future frames on simulated robotics dataset (i.e., Meta-World; Yu et al. 2020) and large-scale real-world robotics dataset (i.e., RoboNet; Dasari et al. 2019). • We show that HARP can leverage the image generator pre-trained on ImageNet (Deng et al., 2009) for training a high-resolution video prediction model on complex, large-scale Kinetics600 dataset (Carreira et al., 2018), significantly reducing the training cost. • HARP achieves competitive or superior performance to prior state-of-the-art video prediction models with large end-to-end networks on widely-used BAIR Robot Pushing (Ebert et al., 2017) and KITTI driving (Geiger et al., 2013) video prediction benchmarks. • We also show that the pre-trained representations of HARP can be useful for learning multi-task imitation learning agent on Meta-World MT50 benchmark (Yu et al., 2020). 2 RELATED WORK Video prediction. Video prediction aims to predict the future frames conditioned on images (Michalski et al., 2014; Ranzato et al., 2014; Srivastava et al., 2015; Vondrick et al., 2016; Lotter et al., 2017), texts (Wu et al., 2021b), and actions (Oh et al., 2015; Finn et al., 2016), which would be useful for several applications, e.g., model-based RL (Hafner et al., 2019; Kaiser et al., 2020; Rybkin et al., 2021), and simulator development (Kim et al., 2020; 2021). Various video prediction models have been proposed with different approaches, including generative adversarial networks (GANs; Goodfellow et al. 
2014) known to generate high-fidelity images by introducing adversarial discriminators that also considers temporal or motion information (Aigner & Körner, 2018; Jang et al., 2018; Kwon & Park, 2019; Clark et al., 2019; Luc et al., 2020), latent video prediction models that operates on the latent space (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Villegas et al., 2019; Wu et al., 2021a; Babaeizadeh et al., 2021), and autoregressive video prediction models that operates on pixel space by predicting the next pixels in an autoregressive way (Kalchbrenner et al., 2017; Reed et al., 2017; Weissenborn et al., 2020). Autoregressive latent video prediction. Most closely related to our work are autoregressive latent video prediction models that separate the video prediction problem into image generation and dynamics learning. Walker et al. (2021) proposed to learn a hierarchical VQ-VAE (Razavi et al., 2019) that extracts multi-scale hierarchical latents then train SNAIL blocks (Chen et al., 2018) that predict hierarchical latent codes, enabling high-fidelity video prediction. However, this involves a complicated training pipeline and a video-specific architecture, which limits its applicability. As simple alternatives, Rakhimov et al. (2021); Yan et al. (2021) proposed to first learn a VQ-VAE (Oord et al., 2017) and train a causal transformer with 3D self-attention (Weissenborn et al., 2020) and factorized 2D self-attention (Child et al., 2019), respectively. These approaches, however, are limited in that they only consider low-resolution videos. We instead present a simple high-resolution video prediction method that incorporates the strengths of both prior approaches. 3 PRELIMINARIES We consider the standard video prediction framework where the goal is to predict the future frames conditioned on the initial frames of a video. Specifically, conditioned on the first c frames of a video x<c = (x0,x1, ...,xc−1), we aim to learn a video prediction model that predicts the future frames xc:T = (xc, ...,xT−1), where xt ∈ RH×W×Nch is the frame at timestep t. Optionally, one can also consider conditioning the prediction model on actions a = (a0, ...,aT−1) that the agents in the video would take, i.e., action-conditioned video prediction. Autoregressive video prediction model. Motivated by the recent success of pixel-level autoregressive models on image generation (Menick & Kalchbrenner, 2018), Weissenborn et al. (2020) introduced an autoregressive video prediction model that approximates the distribution of a video in a pixel-channel space. Specifically, given a video x ∈ RT×H×W×Nch , the joint distribution over pixels conditioned on the first c frames is modelled as the product of channel intensities Nch and all Np = T ·H ·W pixels except Nc = c ·H ·W pixels of conditioning frames: p(xc:T |x<c) = Np−1∏ i=Nc−1 Nch−1∏ k=0 p(xkπ(i)|xπ(<i),x <k π(i)), (1) where π is a raster-scan ordering over all pixels from the video (we refer to Weissenborn et al. (2020) for more details on the case where π is the combination of a subscale and raster-scan ordering since we only utilize raster-scan ordering for our approach), xπ(<i) is all pixels before xπ(i), xkπ(i) is the k-th channel intensity of the pixel xπ(i), and x<kπ(i) is all channel intensities before x k π(i). Vector quantized variational autoencoder. VQ-VAE (Oord et al., 2017) consists of an encoder that compresses images into discrete representations, and a decoder that reconstructs images from these discrete representations. 
Both encoder and decoder share a codebook of prototype vectors which are also learned throughout training. Formally, given an image x ∈ RH×W×Nch , the encoder E encodes x into a feature map ze(x) ∈ RH ′×W ′×Nz that consists of a series of latent vectors zπ′(i)(x) ∈ RNz , where π′ is a raster-scan ordering of the feature map ze(x) of size |π′| = H ′ ·W ′. Then ze(x) is quantized to discrete representations zq(x) ∈ R|π ′|×Nz based on the distance of latent vectors zπ′(i)(x) to the prototype vectors in the codebook C = {ek}Kk=1 as follows: zq(x) = (eq(x,1), eq(x,2), · · · , eq(x,|π′|)), where q(x, i) = argmin k∈[K] ‖zπ′(i)(x)− ek‖2, (2) where [K] is the set {1, · · · ,K}. Then the decoder G learns to reconstruct x from discrete representations zq(x). The VQ-VAE is trained by minimizing the following objective: LVQVAE(x) = ‖x−G(zq(x))‖22︸ ︷︷ ︸ Lrecon + ‖sg [ze(x)]− zq(x)‖22︸ ︷︷ ︸ Lcodebook +β · ‖sg [zq(x)]− ze(x)‖22︸ ︷︷ ︸ Lcommit , (3) where the operator sg refers to a stop-gradient operator, Lrecon is a reconstruction loss for learning representations useful for reconstructing images, Lcodebook is a codebook loss to bring codebook representations closer to corresponding encoder outputs h, and Lcommit is a commitment loss weighted by β to prevent encoder outputs from fluctuating frequently between different representations. Vector quantized generative adversarial network. VQ-GAN (Esser et al., 2021) is a variant of VQ-VAE that (a) replaces the Lrecon in (3) by a perceptual loss LLPIPS (Zhang et al., 2018), and (b) introduces an adversarial training scheme where a patch-level discriminator D (Isola et al., 2017) is trained to discriminate real and generated images by maximizing following loss: LGAN(x) = [logD(x) + log(1−D(G(zq(x)))]. (4) Then, the objective for training the VQ-GAN model is defined as: min E,G,C max D Ex∼p(x) [(LLPIPS + Lcodebook + Lcommit) + λ · LGAN] , (5) where λ = ∇GL [LLPIPS]∇GL [LGAN]+δ is an adaptive weight, ∇GL is the gradient of the inputs to the last layer of the decoder GL, and δ = 10−6 is a scalar introduced for numerical stability. 4 METHOD We present HARP, a video prediction model capable of predicting high-fidelity future frames. Our method is designed to fully exploit the benefit of autoregressive latent video prediction model that separates the video prediction into image generation and dynamics learning. Specifically, we consider the combination of (a) the recently introduced high-fidelity image generator (Esser et al., 2021) and (b) an autoregressive latent video prediction model (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021) that operates on top of the pre-trained image generator. The full architecture of HARP is illustrated in Figure 2. 4.1 HIGH-FIDELITY IMAGE GENERATOR We consider the VQ-GAN model (Esser et al., 2021) that has proven to be effective for highresolution image generation as our image generator (see Section 3 for the formulation of VQ-GAN). Similar to the motivation of Tian et al. (2021) that utilizes a pre-trained image generator in the context of video synthesis, we first pre-train the image generator then freeze the model throughout training to improve the efficiency of learning video prediction models. The notable difference to a prior work that utilize 3D convolutions to temporally downsample the video for efficiency (Yan et al., 2021) is that our image generator operates on single images; hence our image generator solely focus on improving the quality of generated images. 
Importantly, this enables us to utilize the VQ-GAN model pre-trained on a wide range of natural images, e.g., ImageNet, without training the image generator on the target datasets, which can significantly reduce the training cost of high-resolution video prediction model, Also, the representations from our video prediction model can be easily transferred to downstream tasks that require fine-grained control at each timestep, e.g., imitation learning (see Section 5.3 for supporting experimental results on multi-task imitation learning). 4.2 AUTOREGRESSIVE LATENT VIDEO PREDICTION MODEL To leverage the VQ-GAN model for video prediction, we utilize the autoregressive latent video prediction architecture that operates on top of the discrete codes extracted from a video x. Specifically, we extract the discrete codes z(x) = (z(x1), ..., z(xT )) using the pre-trained VQ-GAN, where z(xt) = (q(xt,1), q(xt,2), ..., q(xt,|π′|)) is the discrete code extracted from the frame xt as in (2). Then, instead of modelling the distribution of video p(x) in the pixel-channel space as in (1), we learn the distribution of the video in the discrete latent representation space: p(z(xc:T |x<c)) = Nd−1∏ i=0 p(zπ′(i)(x)|zπ′(<i)(x)), (6) where Nd = (T − C) ·H ′ ·W ′ is the total number of codes from xc:T . While the specific implementation for modelling p(z(x)) differs in prior works (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021), due to its simplicity, we utilize the causal transformer architecture (Yan et al., 2021) where the output logits from input codes are trained to predict the next discrete codes. We remark that our approach is also compatible with other architectures. 4.3 ADDITIONAL TECHNIQUES Top-k sampling. To improve the video prediction quality of latent autoregressive models whose outputs are sampled from the probability distribution over a large number of discrete codes, we utilize the top-k sampling (Fan et al., 2018) that randomly samples the output from the top-k probable discrete codes. By preventing the model from sampling rare discrete codes from the long-tail of a probability distribution and predicting future frames conditioned on such discrete codes, we find that top-k sampling improves video prediction quality, especially given that the number of discrete encodings required for future prediction is very large, e.g., 2,560 on RoboNet (Dasari et al., 2019) up to 6,400 on KITTI dataset (Geiger et al., 2013) in our experimental setup. Data augmentation. We also investigate how data augmentation can be useful for improving the performance of autoregressive latent video prediction models. Since the image generator model is not trained with augmentation, we utilize a weak augmentation to avoid the instability coming from aggressive transformation of input frames, i.e., translation augmentation that moves the input images by m pixels along the X or Y direction. 5 EXPERIMENTS We design our experiments to investigate the following: • Can HARP predict high-resolution future frames (up to 256 × 256 pixels) on various video datasets with different characteristics? • How does HARP compare to state-of-the-art methods with large end-to-end networks on standard video prediction benchmarks in terms of quantitative evaluation? • How does the proposed techniques affect the performance of HARP? • Can HARP be transferred to solve multi-task imitation learning tasks? 5.1 HIGH-RESOLUTION VIDEO PREDICTION Implementation. 
We utilize up to 8 Nvidia 2080Ti GPU and 20 CPU cores for training each model. For training VQ-GAN (Esser et al., 2021), we first train the model without a discriminator lossLGAN, and then continue the training with the loss following the suggestion of the authors. For all experiments, VQ-GAN downsamples each frame into 16× 16 latent codes, i.e., by a factor of 4 for frames of size 64×64 frames, and 16 for frames of size 256×256. For training a transformer model, the VQ-GAN model is frozen so that its parameters are not updated. We use Sparse Transformers (Child et al., 2019) as our transformer architecture to accelerate the training. As for hyperparameter, we use k = 10 for sampling at inference time, but no data augmentation for high-resolution video prediction experiments. We report more detailed implementation details in Appendix A. Meta-World experiments. To demonstrate that our method can predict high-resolution videos (256× 256 pixels), we first use action-free Meta-World dataset (Yu et al., 2020) consisting of 2,500 demonstrations from 50 different robotics manipulation tasks, which we collected using the deterministic scripted policies1. Specifically, we train a single model to predict future videos of all 50 tasks, without leveraging task-specific information, such as task index. For evaluation, we use 10% of the demonstrations as a held-out test dataset. As shown in Figure 3, our model can accurately predict the high-resolution future frames of diverse tasks, capturing all the small details. This shows that our model can effectively learn all the information of multiple tasks required for predicting future frames. We also remark that such representations can be useful to improve the performance of imitation learner (see Section 5.3 for supporting experimental results). RoboNet experiments. Now we investigate how our model works on large-scale, real-world RoboNet dataset (Dasari et al., 2019) consisting of more than 15 million frames. While prior works successfully trained a video prediction model with 64 × 64 videos (Wu et al., 2021a; Babaeizadeh 1The dataset is available at: https://shorturl.at/acnxM et al., 2021), we show that our model can predict high-resolution 256× 256 videos even with fewer number of parameters than the one used in the prior works for predicting 64×64 videos. Specifically, we first train a VQ-GAN model with 91.5M parameters, and then train a 12-layer causal transformer model with 74.2M parameters that predicts future 10 frames conditioned on first two frames and future ten actions. Total number of parameters is 165.7M, which is smaller than 303.3M of FitVid (Babaeizadeh et al., 2021) that predicts 64 × 64 videos. Figure 1 and Figure 4 show the predicted frames on the held-out test video, where the model predicts the high-resolution future frames where a robot arm is moving around various objects of different colors and shapes. Kinetics-600 experiments using ImageNet pre-trained VQ-GAN. Finally, we consider a very complex, large-scale Kinetics-600 dataset (Carreira et al., 2018) consisting of more than 400,000 videos, which requires a large amount of computing resources for training even on 64 × 64 resolution (Clark et al., 2019; Luc et al., 2020). To avoid the prohibitively expensive training cost of high-resolution video prediction models on this dataset and fully exploit the benefit of employing a high-fidelity image generator, we utilize the VQ-GAN model pre-trained on ImageNet dataset (Deng et al., 2009). 
2 As we only train the transformer model for video prediction, this enables us to train a high-resolution video prediction model in a very efficient manner. Specifically, we train the transformer model for 60,000 steps on training dataset, which takes less than a day using our machine. As shown in Figure 5, our model can predict future frames on the test videos3, which demonstrates that leveraging the large image generator pre-trained on a wide range of natural images can be a promising recipe for efficient video prediction on high-resolution, large-scale video datasets. 5.2 COMPARATIVE EVALUATION ON STANDARD BENCHMARKS Datasets. For quantitative evaluation, we first consider the BAIR robot pushing dataset (Ebert et al., 2017) consisting of roughly 40k training and 256 test videos. We consider action-free setup, hence video prediction models should be stochastic for predicting the diverse possible movement of a robot arm and objects. Following the setup in prior works (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), we predict 15 future frames conditioned on one frame. We also evaluate our method on KITTI driving dataset (Geiger et al., 2013), where the training and test datasets are split by following the setup in Lotter et al. (2017). As the KITTI dataset is relatively small-scale compared to other datasets, i.e., 57 training videos, it provides a good testbed for investigating the effect of data augmentation. For hyperparameters, we use k = 10 for both datasets and data augmentation with m = 4 is only applied to KITTI as there was no sign of overfitting on BAIR dataset. For a fair comparison, we follow the setup of Villegas et al. (2019), where (i) a model is trained to predict future ten frames conditioned on five frames and evaluated to predict future 25 frames conditioned on five frames, and (ii) test dataset consists of 148 video clips constructed by extracting 30-frame clips and skipping every 5 frames. 2https://github.com/CompVis/taming-transformers 3We collected videos covered by CC-BY license, which are available to put the frames on a paper. Metrics. We use two evaluation metrics: Learned Perceptual Image Patch Similarity (LPIPS; Zhang et al. 2018), a frame-wise metric designed to better represent the human perceptual similarity of two frames compared to traditional metrics (Wang et al., 2004; Huynh-Thu & Ghanbari, 2008), and Frèchet Video Distance (FVD; Unterthiner et al. 2018), a dynamics-based evaluation metric known to be better correlated with the human evaluation compared to frame-wise evaluation metrics. FVD is computed by comparing the summary statistics of I3D network trained on Kinetics400 dataset (Carreira & Zisserman, 2017), and LPIPS is computed using the features from AlexNet (Krizhevsky et al., 2012). For comparison with the scores reported in prior works, we exactly follow the evaluation setup in Villegas et al. (2019) and Babaeizadeh et al. (2021) that samples 100 future videos for each ground-truth test video, then reports the best score over 100 videos for LPIPS, and the score using all videos for FVD, with the batch size of 256 for BAIR and 148 for KITTI. Results. Table 1 shows the performances of our method and baselines on test sets of BAIR Robot Pushing and KITTI driving dataset. We observe that our model achieves competitive or superior performance to state-of-the-art methods with large end-to-end networks, e.g., HARP outperforms FitVid with 302M parameters on KITTI driving dataset. 
Results. Table 1 shows the performance of our method and baselines on the test sets of the BAIR Robot Pushing and KITTI driving datasets. We observe that our model achieves competitive or superior performance to state-of-the-art methods with large end-to-end networks, e.g., HARP outperforms FitVid with 302M parameters on the KITTI driving dataset. Our model also successfully extrapolates to an unseen number of future frames (i.e., 25 instead of the 10 future frames used during training) on the KITTI dataset. This implies that transformer-based video prediction models can predict an arbitrary number of frames at inference time. On the BAIR dataset, HARP achieves performance similar to FitVid with its 302M parameters, even though our method only requires 89M parameters. We provide videos predicted by HARP on the BAIR and KITTI datasets in Appendix C.
Analysis. We investigate how top-k sampling, the number of layers, and the magnitude m of data augmentation affect performance. Table 2a shows that smaller k leads to better performance, implying that the proposed top-k sampling is effective for improving performance by discarding rare discrete codes that might degrade prediction quality at inference time. As shown in Table 2b, we observe that more layers lead to better performance on the BAIR dataset, which implies our model can be further improved by scaling up the networks. Finally, we find that (i) data augmentation on the KITTI dataset is important for achieving strong performance, similar to the observation of Babaeizadeh et al. (2021), and (ii) too aggressive augmentation leads to worse performance. We provide the learning curves with and without augmentation in Appendix B.
4Baselines are SVG (Villegas et al., 2019), GHVAE (Wu et al., 2021a), FitVid (Babaeizadeh et al., 2021), LVT (Rakhimov et al., 2021), SAVP (Lee et al., 2018), DVD-GAN-FP (Clark et al., 2019), VideoGPT (Yan et al., 2021), TrIVD-GAN-FP (Luc et al., 2020), and Video Transformer (Weissenborn et al., 2020).
5.3 FINE-TUNING HARP FOR MULTI-TASK IMITATION LEARNING
Setup. To demonstrate that the pre-trained representations from HARP can be useful for solving downstream tasks, we evaluate the imitation learning performance of fine-tuned HARP on the MT-50 benchmark from Meta-World. Specifically, we take the pre-trained HARP model (see Figure 3 for the video predictions from this model), and fine-tune it to predict expert actions by introducing a policy network on top of the transformer model. For comparative evaluation, we consider three baselines: (a) VQ-Transformer, which shares the same architecture as HARP but is trained from scratch, (b) CNN-LSTM, which extracts features using convolutional neural networks (CNNs) and LSTM networks, and (c) CNN-Transformer, which utilizes transformer networks instead of LSTM networks. For training and evaluation, we use the same training and test datasets used for the video prediction experiments. We report the average success rate over 10 trials for each task. More details are available in Appendix A.
Results. Table 3 shows the performance of imitation learning policies on the MT50 test environments. We first observe that VQ-Transformer, which has the same architecture as HARP but is trained from scratch, completely fails to solve the tasks. This shows the difficulty of training useful representations with fixed discrete codes as inputs. In contrast, the fine-tuned HARP model outperforms the other baselines, as its pre-trained representations contain useful information for long-term reasoning. This demonstrates that video prediction with HARP can be an effective self-supervised learning scheme for solving various control tasks.
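As a concrete illustration of this fine-tuning scheme, the sketch below adds a small policy head on top of a pre-trained transformer and trains it with a behavioral cloning loss; the module interface (`pretrained_transformer` returning per-step features) and the dimensions (`feat_dim=512`, `action_dim=4`) are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Two-layer policy head on top of a pre-trained HARP-style transformer."""
    def __init__(self, pretrained_transformer, feat_dim=512, action_dim=4):
        super().__init__()
        self.backbone = pretrained_transformer  # fine-tuned, not frozen
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, action_dim))

    def forward(self, codes):
        feats = self.backbone(codes)     # assumed shape (B, T, feat_dim)
        return self.head(feats[:, -1])   # predict action from the last step

def bc_loss(policy, codes, expert_actions):
    # Behavioral cloning: mean squared error against expert actions.
    return nn.functional.mse_loss(policy(codes), expert_actions)
```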
6 DISCUSSION
In this work, we present HARP, a high-fidelity autoregressive latent video prediction model. By employing a high-fidelity image generator and utilizing top-k sampling at inference time, HARP can predict high-resolution future frames and achieve competitive performance with state-of-the-art video prediction methods that use large end-to-end networks. We also show that HARP can leverage an image generator pre-trained on a wide range of natural images for video prediction, similar to the approach taken in the context of video synthesis (Tian et al., 2021). We hope this work inspires more investigation into leveraging pre-trained image generators for video prediction, which can significantly reduce the cost of training a high-resolution video prediction model by building on the recent success of high-fidelity image generation (Oord et al., 2017; Razavi et al., 2019; Esser et al., 2021; Child, 2020; Ho et al., 2020; Karras et al., 2020; Dhariwal & Nichol, 2021). Finally, we report the failure cases of video prediction with HARP and discuss possible extensions to resolve them. A common failure case for video prediction on the RoboNet dataset is ignoring the interaction between a robot arm and objects. For example, in Figure 6a, our model ignores the objects and only predicts the movement of the robot arm. On the other hand, a common failure case on Kinetics-600 is degenerate video prediction, where the model simply repeats the conditioning frame without predicting the future, as shown in Figure 6b. These failure cases might be resolved by training larger networks, similar to observations in the field of natural language processing, e.g., GPT-3 (Brown et al., 2020), or might necessitate a new architecture for addressing the complexity of training autoregressive latent prediction models on video datasets.
ETHICS STATEMENT
While video prediction can be useful for various applications including robotic manipulation and autonomous driving, it might be misused by malicious users for unethical purposes, e.g., fake videos accusing politicians or sexual videos of individuals. As our work introduces a method for generating higher-resolution future frames, it may increase the chance of such videos being mistaken for real videos. For this reason, in addition to developing video prediction methods that generate more realistic frames, it is important to be aware of potential problems and to develop methods to detect generated videos (Gentine et al., 2018).
REPRODUCIBILITY STATEMENT
We describe the implementation and evaluation details in Section 5 and Appendix A. We also provide our code in the supplementary material.
A EXPERIMENTAL SETUP
A.1 DATASETS
Meta-World. Meta-World (Yu et al., 2020) is a robotic manipulation simulator that supports 50 different tasks. For our experiments, we collect 50 demonstrations for each task using the deterministic scripted policies,5 for a total of 2,500 demonstrations. For evaluation, we construct the held-out test dataset from 10% of the demonstrations. To improve the visibility of rendered frames, we adjust the camera view using the publicly available source code.6 We provide the dataset used in our experiments.7
RoboNet. RoboNet (Dasari et al., 2019) is a large-scale real-world robotics dataset consisting of more than 160,000 high-resolution videos. Since there is no test set for the RoboNet dataset, we follow the setup in Wu et al. (2021a) for constructing a held-out test dataset8 of size 256. Following the setup in Wu et al. (2021a) and Babaeizadeh et al. (2021), we train a video prediction model to predict ten future frames conditioned on two initial frames and ten future actions.
For preprocessing the frames, we resize the original frames to 256 × 256 resolution without cropping. For downloading the dataset, we utilize the publicly available script.9
Kinetics-600. The Kinetics-600 dataset is a large-scale video dataset consisting of more than 400,000 videos spanning 600 action classes. Following the setup in Clark et al. (2019) and Luc et al. (2020), we train a video prediction model to predict the future 11 frames conditioned on the first five frames. For downloading the dataset, we use the publicly available repository.10
BAIR Robot Pushing. The BAIR Robot Pushing dataset (Ebert et al., 2017) consists of 43,264 training and 256 test videos. While the BAIR dataset contains the actions robots take, the common evaluation setup on BAIR is action-free (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), where a video prediction model is trained to predict the future 15 frames conditioned on the initial frame. For downloading and preprocessing the dataset, we utilize the publicly available script.11
KITTI driving dataset. The KITTI driving dataset (Geiger et al., 2013) contains a large number of high-resolution driving videos. For video prediction, we follow the setup in Lotter et al. (2017) and utilize 57 training videos and 3 test videos for evaluation. To avoid using overly similar video clips from the test dataset for evaluation, we follow the setup in Villegas et al. (2019), where 30-frame clips are extracted with an interval of 5 frames, which constructs a test set of size 148. For comparison with baselines, following the setup in Villegas et al. (2019), we train a model to predict ten future frames conditioned on five frames and evaluate it to predict 25 future frames conditioned on five frames. For downloading and preprocessing the dataset, we utilize the publicly available script.12
5https://github.com/rlworkgroup/metaworld/tree/master/metaworld/policies
6We use commit 9e3863d of https://github.com/rlworkgroup/metaworld.
7https://shorturl.at/acnxM
8We use the list of videos available at https://github.com/google-research/fitvid/blob/master/robonet_testset_filenames.txt
9https://gist.github.com/soskek/d762751ce0aef4b2c7cf0a1537917016
10https://github.com/cvdfoundation/kinetics-dataset
11https://github.com/wilson1yan/VideoGPT
12https://github.com/coxlab/prednet
13https://github.com/CompVis/taming-transformers
14https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/
A.2 IMPLEMENTATION DETAILS OF HARP
VQ-GAN. The training of HARP consists of two stages. First, for training a VQ-GAN model (Esser et al., 2021), we use the publicly available source code from the authors.13 Following the suggestion of the authors, we train the VQ-GAN model without the discriminator loss until it converges, then resume training with the discriminator loss, for a total of {300000, 500000, 150000, 30000} training steps with batch sizes of {24, 24, 320, 96} on the Meta-World, RoboNet, BAIR Robot Pushing, and KITTI driving datasets, respectively. For the Kinetics-600 dataset, we leverage the publicly available VQ-GAN model14 pre-trained on ImageNet without training a VQ-GAN model from scratch. The size K of the codebook C in (2) is {8192, 8192, 1024, 1024, 1024} for the Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and KITTI datasets, respectively.
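For readability, these per-dataset schedules can be summarized in a single configuration table; the sketch below merely restates the numbers above in code form (the key names are our own).

```python
# Per-dataset VQ-GAN training schedules from Appendix A.2, restated as config.
# "steps" counts the second stage, after resuming with the discriminator loss.
VQGAN_CONFIG = {
    "meta_world":  {"steps": 300_000, "batch_size": 24,  "codebook_size": 8192},
    "robonet":     {"steps": 500_000, "batch_size": 24,  "codebook_size": 8192},
    "bair":        {"steps": 150_000, "batch_size": 320, "codebook_size": 1024},
    "kitti":       {"steps": 30_000,  "batch_size": 96,  "codebook_size": 1024},
    # Kinetics-600 reuses the ImageNet pre-trained VQ-GAN (codebook size 1024).
    "kinetics600": {"steps": 0,       "batch_size": None, "codebook_size": 1024},
}
```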
Causal transformer. We then train a 12-layer Sparse Transformer (Child et al., 2019) to predict the discrete codes from the VQ-GAN models in an autoregressive manner, building on the publicly available source code15 of VideoGPT (Yan et al., 2021). For conditioning on initial frames, we utilize the same architecture as Yan et al. (2021), which uses a ResNet to extract a downsampled feature map; we use the ResNet-18 architecture for all experiments. We train the model until it converges on the Meta-World, BAIR, and KITTI datasets, but we find no sign of overfitting on the large-scale RoboNet and Kinetics-600 datasets. Specifically, we train the model for {50000, 80000, 100000, 85000, 30000} training steps on the Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and KITTI datasets, respectively. We use data augmentation with magnitude m = 4 on the KITTI driving dataset, which we find to be very effective for improving performance (see Appendix B for learning curves with and without augmentation).
15https://github.com/wilson1yan/VideoGPT
Inference with top-k sampling. We utilize top-k sampling (Fan et al., 2018) to improve video prediction quality; for all datasets, we use k = 10. One major limitation of autoregressive prediction models is their slow inference time, so we utilize the implementation of Yan et al. (2021) that caches the previous keys and values and reuses them for fast inference. To enable our model to extrapolate to an unseen number of frames at evaluation time on the KITTI dataset (e.g., the model has to predict 25 frames instead of the 10 frames it is trained to predict), we first predict the frames xc:T−1 conditioned on the c initial frames x1:c, then predict the next frames by (a) keeping x1:c as conditioning frames and (b) feeding the last T − 1 predicted frames as inputs to the causal transformer model. We repeat this process until all 25 future frames are predicted.
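A minimal sketch of the top-k code sampling described above is given below; it is our own illustration of the technique rather than the paper's exact implementation, and `transformer` is a hypothetical module returning next-code logits.

```python
import torch

@torch.no_grad()
def sample_next_code(transformer, codes, k=10, temperature=1.0):
    """Sample the next discrete latent code from the k most probable codes."""
    logits = transformer(codes)[:, -1] / temperature  # (B, codebook_size)
    topk_vals, topk_idx = torch.topk(logits, k, dim=-1)
    probs = torch.softmax(topk_vals, dim=-1)          # renormalize over top-k
    choice = torch.multinomial(probs, num_samples=1)  # sample within top-k
    return topk_idx.gather(-1, choice)                # map back to code ids
```

Sampling each of the 16 × 16 codes of a frame in raster-scan order with such a function yields one frame's worth of latents, which the frozen VQ-GAN decoder then maps back to pixels.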
A.3 IMPLEMENTATION DETAILS OF ACTION-CONDITIONED HARP
To predict future frames conditioned on future actions on the RoboNet dataset (Dasari et al., 2019), we condition the prediction on actions by adding action embeddings to the embeddings of the discrete codes. Specifically, we introduce a linear layer that maps raw actions to action embeddings with the same dimension as the token embeddings, then add the action embeddings of time step t + 1 to the token embeddings used for predicting the tokens of time step t + 1. At inference time, the procedure is exactly the same as for HARP, except that the future action embedding is added to the token embedding. We find that this simple modification to the original architecture enables HARP to predict future frames conditioned on actions. We provide an illustration of action-conditioned HARP in Figure 7.
A.4 IMPLEMENTATION DETAILS OF HARP FINE-TUNING
To fine-tune the HARP model for solving multi-task imitation learning tasks on Meta-World (Yu et al., 2020), we first pre-train a model to predict the future 11 frames conditioned on the first frame using the dataset from all 50 tasks (see Appendix A.1 for details on the dataset). Then we fine-tune the model to predict expert actions by introducing a two-layer policy network on top of the causal transformer model. Specifically, we train a behavioral cloning policy to minimize the mean squared error between the actions predicted by the policy network and the ground-truth expert actions. To further improve the performance of all methods, we follow the idea of Dasari & Gupta (2020) and learn an inverse dynamics predictor that predicts the action given two consecutive frames. We train all methods for 20,000 steps with a batch size of 20 and data augmentation of magnitude m = 4. For the CNN-based methods, we use ResNet-50 as the feature extractor, a 4-layer LSTM16 for CNN-LSTM, and a 12-layer causal transformer for CNN-Transformer.
B EFFECTS OF AUGMENTATION
Figure 8 shows the test error during the training of HARP on the KITTI driving dataset (Geiger et al., 2013). One can see that the test error of the model trained without augmentation increases after the initial ∼2,500 training steps, which is a sign of overfitting. In contrast, the test error of the model trained with data augmentation keeps decreasing throughout training until it converges, which shows the effectiveness of data augmentation for learning video prediction models, similar to the observation in Babaeizadeh et al. (2021). One notable detail is that HARP overfits to the KITTI training dataset with a much smaller number of parameters (89M) than FitVid, which uses 303M parameters.
16We tried deeper networks to match the number of trainable parameters, but found deeper LSTM networks very unstable to train. Hence we searched over {2, 4, 8, 16} layers and report the best results, achieved with the 4-layer LSTM.
C VIDEO PREDICTIONS ON BAIR AND KITTI
We provide the future frames predicted by HARP on the BAIR Robot Pushing (Ebert et al., 2017) and KITTI driving (Geiger et al., 2013) datasets. The model is trained to predict the future 15 frames conditioned on the first frame on the BAIR dataset. On the KITTI dataset, the model is trained to predict the future 10 frames conditioned on five frames, and evaluated to predict the future 25 frames conditioned on five frames.
D MORE VIDEO PREDICTIONS ON META-WORLD
We provide more videos predicted by HARP on the Meta-World dataset (Yu et al., 2020). The model is trained to predict the future 11 frames conditioned on the first frame.
E MORE VIDEO PREDICTIONS ON ROBONET
We provide more videos predicted by HARP on the RoboNet dataset (Dasari et al., 2019). The model is trained to predict the future ten frames conditioned on the initial two frames and ten future actions.
F MORE VIDEO PREDICTIONS ON KINETICS-600
G ATTRIBUTION
Figure 5 (top): "Windsurfing Session 2013 Iballa Moreno" by morenotwins. Accessible here
Figure 5 (bottom): "How to fry an egg in 5 simple steps" by What's Gaby Cooking. Accessible here
Figure 6 (right): "Prepare fruit cutting to pets #Shorts 4" by AP STUDIO. Accessible here
Figure 12 (first): "Windsurfing" by Dmitry Rudnev. Accessible here
Figure 12 (second): "Eric Cornelissen Windsurfing September 11, 2017" by Ron Van Dijk. Accessible here
Figure 12 (third): "Smart way to cut Fruits #6#" by MR. BEING SMART. Accessible here
Figure 12 (fourth): "How to make the Perfect Crunchy Half Cakes ( Kangumu ) ‖‖ Jinsi ya kupika Half Cakes za kupasuka." by Ayleen's Food & Vlogs. Accessible here
1. What is the main contribution of the paper, and how does it differ from prior works? 2. How effective is the proposed method in reducing the complexity of learning video prediction models? 3. How does the paper's experimental evaluation support the claimed benefits of the proposed method? 4. What are some limitations regarding the novelty and experimental evaluation of the paper? 5. How does the paper compare to other recent works in the field, such as VideoGPT and VideoFlow?
Summary Of The Paper Review
Summary Of The Paper
This paper introduces a new video prediction model, HARP, whose training operates in two stages: pretraining an image autoencoder with discrete latent states on the frames, then learning a transformer predictor on the resulting latent space instead of the pixel space. The authors complement this procedure with data augmentation during training and a modification of latent code sampling during inference in order to improve the performance of the model. This experimentally results in an efficient training procedure, with a model that is demonstrated to generate high-resolution videos and match or outperform the state of the art in the domain on multiple datasets and tasks.
Review
The proposed method is an interesting contribution to the community, since it reduces the complexity of learning video prediction models with appealing results. The paper contains a simple, actionable and effective methodology which is supported by the experiments. High-resolution (256 × 256) forecasting is one of the main advantages of HARP, as it is rare in the literature, especially using reasonable computing power. The empirical evaluation partly illustrates the benefits of the method, thanks to results matching the state of the art with a limited parameter count. Overall, the paper is well written and easy to read, and the experiments seem to be reproducible. However, I find the contribution of this paper to be insufficient for two main reasons.
Novelty
The first limitation deals with the novelty of the method. As acknowledged by the authors, the proposed two-stage training has already been proposed in prior works: Rakhimov et al. (2021) and Yan et al. (2021). While these articles are recent, the reviewed paper is not concurrent with them and clearly builds on them. Indeed, to my understanding, the proposed method amounts to instantiating e.g. VideoGPT (Yan et al., 2021) with another, more powerful image generator of the same nature (VQ-GAN instead of VQ-VAE) and the same transformer architecture. This significantly reduces the originality of the approach.
Experiments
The second limitation lies in the experimental evaluation, which does not compensate for the lack of novelty, thereby limiting the significance of the overall contribution. The issues are multiple and described below. Regarding the forecasting experiments on Meta-World, RoboNet and Kinetics-600, the presented results seem satisfying, but the paper lacks points of comparison to assess their significance. This could consist of a performance comparison with another baseline (like Video Transformer, which was tested on Kinetics in the original article, or another autoregressive model like VideoGPT) or, because of scalability or computational power issues, a more detailed analysis of the computational advantages of HARP over the other baselines. The only provided comparison is on parameter count vs. a single other model, which is insufficient (cf. also the forthcoming remarks on parameter count). I do believe that HARP is computationally appealing, but a factual comparison is needed to confirm this impression. Regarding the forecasting experiments on BAIR and KITTI, the presented results do not allow us to unambiguously conclude about the advantages of HARP. The only comparison with a similar model (VideoGPT) is not conclusive, since the parameter count and performance of both models are similar. More generally, there are fairness issues in the comparisons that prevent the reader from correctly assessing the performance of HARP.
Indeed, top-k sampling and data augmentation are beneficial to the model, but it is unclear how the other models could benefit from them as well and how this would change the performed comparison. For example, the authors may discuss the impact of data augmentation on the considered baselines, as well as the links of top-k sampling with temperature reduction in generative models, as in VideoFlow (Kumar et al., 2021), which could similarly improve the performance of the other models. Finally, I fear that the sole indication of parameter count to evaluate the efficiency of a model may be insufficient for these comparisons, as it is an imperfect proxy of the actual capacity or computational requirements of a model (unless the numbers of parameters are orders of magnitude apart). More information like GPU, memory and training time requirements for the baselines would circumvent this issue. Ideally, a study on the influence of the number of parameters on the performance of HARP and selected baselines would greatly improve the comparison. Less importantly, I would optionally suggest the authors complement the numerical evaluation with the standard PSNR metric, as in e.g. Denton et al. (2018) and Lee et al. (2018). Despite its flaws, PSNR is complementary to the perceptual scores FVD and LPIPS, as it can better reveal dynamics prediction errors. Considering a synthetic dataset like Moving MNIST, which remains challenging for long-term prediction, would also help in this regard to confirm the ability of transformers to accurately predict dynamics on pretrained representations as proposed in the paper.
ICLR
Title
Autoregressive Latent Video Prediction with High-Fidelity Image Generator
Abstract
Video prediction is an important yet challenging problem, burdened with the tasks of generating future frames and learning environment dynamics. Recently, autoregressive latent video models have proved to be a powerful video prediction tool, by separating the video prediction into two sub-problems: pre-training an image generator model, followed by learning an autoregressive prediction model in the latent space of the image generator. However, successfully generating high-fidelity and high-resolution videos has yet to be seen. In this work, we investigate how to train an autoregressive latent video prediction model capable of predicting high-fidelity future frames with minimal modification to existing models, and of producing high-resolution (256×256) videos. Specifically, we scale up prior models by employing a high-fidelity image generator (VQ-GAN) with a causal transformer model, and introduce the additional techniques of top-k sampling and data augmentation to further improve video prediction quality. Despite its simplicity, the proposed method achieves competitive performance with state-of-the-art approaches on standard video prediction benchmarks with fewer parameters, and enables high-resolution video prediction on complex and large-scale datasets. Videos are available at the anonymized website https://sites.google.com/view/harp-anonymous.
1 INTRODUCTION
Video prediction can enable agents to learn useful representations for predicting the future consequences of the decisions they make, which is crucial for solving tasks that require long-term planning, including robotic manipulation (Finn & Levine, 2017; Kalashnikov et al., 2018) and autonomous driving (Levinson et al., 2011; Xu et al., 2017). Despite recent advances in improving the quality of video prediction (Finn et al., 2016; Lotter et al., 2017; Liang et al., 2017; Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Byeon et al., 2018; Kumar et al., 2020; Weissenborn et al., 2020; Babaeizadeh et al., 2021), learning an accurate video prediction model remains a notoriously difficult problem that requires a lot of computing resources, especially when the inputs are high-resolution video sequences (Castrejon et al., 2019; Villegas et al., 2019; Clark et al., 2019; Luc et al., 2020; Walker et al., 2021). This is because a video prediction model must excel at both generating high-fidelity images and learning the dynamics of environments, even though each task by itself is already a very challenging problem. Recently, autoregressive latent video prediction methods (Rakhimov et al., 2021; Yan et al., 2021) have been proposed to improve the efficiency of video prediction by separating video prediction into two sub-problems: first pre-training an image generator (e.g., VQ-VAE; Oord et al. 2017), and then learning an autoregressive prediction model (Weissenborn et al., 2020; Chen et al., 2020) in the latent space of the pre-trained image generator. However, prior works are limited in that they only consider relatively low-resolution videos (up to 128 × 128 pixels) for demonstrating the efficiency of the approach; it is questionable whether such experiments can fully demonstrate the benefit of operating in the latent space of an image generator instead of the high-dimensional pixel-channel space.
In this paper, we present High-fidelity AutoRegressive latent video Prediction (HARP), which scales up previous autoregressive latent video prediction methods for high-fidelity video prediction. The main principle in the design of HARP is simplicity: we improve video prediction quality with minimal modification to existing methods. First, for image generation, we employ a high-fidelity image generator, i.e., a vector-quantized generative adversarial network (VQ-GAN; Esser et al. 2021). This improves video prediction by enabling high-fidelity image generation (up to 256 × 256 pixels) on various video datasets. Then a causal transformer model (Chen et al., 2020), which operates on top of discrete latent codes, is trained to predict the discrete codes from VQ-GAN, and the autoregressive predictions made by the transformer model are decoded into future frames at inference time. Moreover, motivated by the sampling techniques widely used in language generation for making coherent and diverse predictions, we propose to utilize top-k sampling (Fan et al., 2018), which draws the next discrete code from the k most probable codes. Since the number of discrete codes the autoregressive model has to predict is very large, e.g., 6,400 codes on the KITTI driving dataset (Geiger et al., 2013), we find that discarding rare discrete codes helps the model predict diverse but high-quality videos, without any change to the training procedure. We highlight the main contributions of this paper below:
• We show that our autoregressive latent video prediction model, HARP, can predict high-resolution (256×256 pixels) future frames on a simulated robotics dataset (i.e., Meta-World; Yu et al. 2020) and a large-scale real-world robotics dataset (i.e., RoboNet; Dasari et al. 2019).
• We show that HARP can leverage an image generator pre-trained on ImageNet (Deng et al., 2009) for training a high-resolution video prediction model on the complex, large-scale Kinetics-600 dataset (Carreira et al., 2018), significantly reducing the training cost.
• HARP achieves competitive or superior performance to prior state-of-the-art video prediction models with large end-to-end networks on the widely used BAIR Robot Pushing (Ebert et al., 2017) and KITTI driving (Geiger et al., 2013) video prediction benchmarks.
• We also show that the pre-trained representations of HARP can be useful for learning a multi-task imitation learning agent on the Meta-World MT50 benchmark (Yu et al., 2020).
2 RELATED WORK
Video prediction. Video prediction aims to predict future frames conditioned on images (Michalski et al., 2014; Ranzato et al., 2014; Srivastava et al., 2015; Vondrick et al., 2016; Lotter et al., 2017), texts (Wu et al., 2021b), and actions (Oh et al., 2015; Finn et al., 2016), which is useful for several applications, e.g., model-based RL (Hafner et al., 2019; Kaiser et al., 2020; Rybkin et al., 2021) and simulator development (Kim et al., 2020; 2021). Various video prediction models have been proposed with different approaches, including generative adversarial networks (GANs; Goodfellow et al. 2014),
which are known to generate high-fidelity images by introducing adversarial discriminators that also consider temporal or motion information (Aigner & Körner, 2018; Jang et al., 2018; Kwon & Park, 2019; Clark et al., 2019; Luc et al., 2020); latent video prediction models that operate in a latent space (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018; Villegas et al., 2019; Wu et al., 2021a; Babaeizadeh et al., 2021); and autoregressive video prediction models that operate in pixel space by predicting the next pixels in an autoregressive way (Kalchbrenner et al., 2017; Reed et al., 2017; Weissenborn et al., 2020).
Autoregressive latent video prediction. Most closely related to our work are autoregressive latent video prediction models that separate the video prediction problem into image generation and dynamics learning. Walker et al. (2021) proposed to learn a hierarchical VQ-VAE (Razavi et al., 2019) that extracts multi-scale hierarchical latents, then train SNAIL blocks (Chen et al., 2018) that predict hierarchical latent codes, enabling high-fidelity video prediction. However, this involves a complicated training pipeline and a video-specific architecture, which limits its applicability. As simple alternatives, Rakhimov et al. (2021) and Yan et al. (2021) proposed to first learn a VQ-VAE (Oord et al., 2017) and then train a causal transformer with 3D self-attention (Weissenborn et al., 2020) and factorized 2D self-attention (Child et al., 2019), respectively. These approaches, however, are limited in that they only consider low-resolution videos. We instead present a simple high-resolution video prediction method that incorporates the strengths of both prior approaches.
3 PRELIMINARIES
We consider the standard video prediction framework where the goal is to predict the future frames conditioned on the initial frames of a video. Specifically, conditioned on the first c frames of a video x_{<c} = (x_0, x_1, ..., x_{c−1}), we aim to learn a video prediction model that predicts the future frames x_{c:T} = (x_c, ..., x_{T−1}), where x_t ∈ R^{H×W×N_ch} is the frame at timestep t. Optionally, one can also condition the prediction model on actions a = (a_0, ..., a_{T−1}) that the agents in the video would take, i.e., action-conditioned video prediction.
Autoregressive video prediction model. Motivated by the recent success of pixel-level autoregressive models on image generation (Menick & Kalchbrenner, 2018), Weissenborn et al. (2020) introduced an autoregressive video prediction model that approximates the distribution of a video in a pixel-channel space. Specifically, given a video x ∈ R^{T×H×W×N_ch}, the joint distribution over pixels conditioned on the first c frames is modeled as the product over the N_ch channel intensities and all N_p = T · H · W pixels except the N_c = c · H · W pixels of the conditioning frames:

p(x_{c:T} \mid x_{<c}) = \prod_{i=N_c-1}^{N_p-1} \prod_{k=0}^{N_{ch}-1} p\big(x^k_{\pi(i)} \mid x_{\pi(<i)}, x^{<k}_{\pi(i)}\big), \quad (1)

where π is a raster-scan ordering over all pixels of the video (we refer to Weissenborn et al. (2020) for details on the case where π combines a subscale and raster-scan ordering, since we only utilize the raster-scan ordering in our approach), x_{\pi(<i)} denotes all pixels before x_{\pi(i)}, x^k_{\pi(i)} is the k-th channel intensity of the pixel x_{\pi(i)}, and x^{<k}_{\pi(i)} denotes all channel intensities before x^k_{\pi(i)}.
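To make the factorization in (1) concrete, a minimal sketch of how such a model scores a video is given below; this is our own illustration, and `cond_logprob` is a hypothetical stand-in for the network's per-step conditional.

```python
def video_log_likelihood(pixels, num_cond, cond_logprob):
    """Chain-rule log-likelihood over raster-scan pixels and channels, as in (1).

    pixels:       flat list of per-pixel channel tuples in raster-scan order pi
    num_cond:     number of pixels belonging to the conditioning frames (N_c)
    cond_logprob: hypothetical model call returning log p(x^k | history)
    """
    total = 0.0
    for i in range(num_cond, len(pixels)):       # only the predicted pixels
        for k in range(len(pixels[i])):          # channel intensities
            total += cond_logprob(pixels, i, k)  # log of one factor in (1)
    return total  # log p(x_{c:T} | x_{<c})
```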
Vector quantized variational autoencoder. VQ-VAE (Oord et al., 2017) consists of an encoder that compresses images into discrete representations and a decoder that reconstructs images from these discrete representations. Both encoder and decoder share a codebook of prototype vectors, which are also learned throughout training. Formally, given an image x ∈ R^{H×W×N_ch}, the encoder E encodes x into a feature map z_e(x) ∈ R^{H'×W'×N_z} that consists of a series of latent vectors z_{\pi'(i)}(x) ∈ R^{N_z}, where π' is a raster-scan ordering of the feature map z_e(x) of size |π'| = H' · W'. Then z_e(x) is quantized to discrete representations z_q(x) ∈ R^{|π'|×N_z} based on the distance of the latent vectors z_{\pi'(i)}(x) to the prototype vectors in the codebook C = \{e_k\}_{k=1}^{K} as follows:

z_q(x) = (e_{q(x,1)}, e_{q(x,2)}, \cdots, e_{q(x,|\pi'|)}), \quad \text{where } q(x, i) = \arg\min_{k \in [K]} \|z_{\pi'(i)}(x) - e_k\|_2, \quad (2)

where [K] is the set {1, ..., K}. The decoder G then learns to reconstruct x from the discrete representations z_q(x). The VQ-VAE is trained by minimizing the following objective:

\mathcal{L}_{\text{VQVAE}}(x) = \underbrace{\|x - G(z_q(x))\|_2^2}_{\mathcal{L}_{\text{recon}}} + \underbrace{\|\text{sg}[z_e(x)] - z_q(x)\|_2^2}_{\mathcal{L}_{\text{codebook}}} + \beta \cdot \underbrace{\|\text{sg}[z_q(x)] - z_e(x)\|_2^2}_{\mathcal{L}_{\text{commit}}}, \quad (3)

where sg denotes the stop-gradient operator, L_recon is a reconstruction loss for learning representations useful for reconstructing images, L_codebook is a codebook loss that brings codebook representations closer to the corresponding encoder outputs, and L_commit is a commitment loss weighted by β that prevents encoder outputs from fluctuating frequently between different representations.
Vector quantized generative adversarial network. VQ-GAN (Esser et al., 2021) is a variant of VQ-VAE that (a) replaces L_recon in (3) with a perceptual loss L_LPIPS (Zhang et al., 2018), and (b) introduces an adversarial training scheme where a patch-level discriminator D (Isola et al., 2017) is trained to discriminate real and generated images by maximizing the following loss:

\mathcal{L}_{\text{GAN}}(x) = \log D(x) + \log\big(1 - D(G(z_q(x)))\big). \quad (4)

The objective for training the VQ-GAN model is then defined as:

\min_{E,G,C} \max_{D} \; \mathbb{E}_{x \sim p(x)} \big[ (\mathcal{L}_{\text{LPIPS}} + \mathcal{L}_{\text{codebook}} + \mathcal{L}_{\text{commit}}) + \lambda \cdot \mathcal{L}_{\text{GAN}} \big], \quad (5)

where \lambda = \frac{\nabla_{G_L}[\mathcal{L}_{\text{LPIPS}}]}{\nabla_{G_L}[\mathcal{L}_{\text{GAN}}] + \delta} is an adaptive weight, \nabla_{G_L} denotes the gradient with respect to the inputs of the decoder's last layer G_L, and δ = 10^{−6} is a scalar introduced for numerical stability.
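A minimal sketch of the nearest-neighbor quantization in (2), including the straight-through gradient trick commonly used to train VQ models, is shown below; this is our own illustration rather than the VQ-GAN reference implementation.

```python
import torch

def quantize(z_e, codebook):
    """Nearest-codebook lookup as in (2).

    z_e:      encoder output, shape (B, H', W', N_z)
    codebook: prototype vectors e_k, shape (K, N_z)
    """
    flat = z_e.reshape(-1, z_e.shape[-1])      # (B * H' * W', N_z)
    dists = torch.cdist(flat, codebook)        # L2 distances to all e_k
    idx = dists.argmin(dim=-1)                 # q(x, i) for every position
    z_q = codebook[idx].reshape(z_e.shape)     # quantized feature map
    # Straight-through estimator: copy gradients from z_q back to z_e.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx
```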
4 METHOD
We present HARP, a video prediction model capable of predicting high-fidelity future frames. Our method is designed to fully exploit the benefit of an autoregressive latent video prediction model that separates video prediction into image generation and dynamics learning. Specifically, we consider the combination of (a) the recently introduced high-fidelity image generator (Esser et al., 2021) and (b) an autoregressive latent video prediction model (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021) that operates on top of the pre-trained image generator. The full architecture of HARP is illustrated in Figure 2.
4.1 HIGH-FIDELITY IMAGE GENERATOR
We adopt the VQ-GAN model (Esser et al., 2021), which has proven to be effective for high-resolution image generation, as our image generator (see Section 3 for the formulation of VQ-GAN). Similar to the motivation of Tian et al. (2021), who utilize a pre-trained image generator in the context of video synthesis, we first pre-train the image generator and then freeze it throughout training to improve the efficiency of learning video prediction models. A notable difference from prior work that utilizes 3D convolutions to temporally downsample the video for efficiency (Yan et al., 2021) is that our image generator operates on single images; hence it focuses solely on improving the quality of generated images. Importantly, this enables us to utilize a VQ-GAN model pre-trained on a wide range of natural images, e.g., ImageNet, without training the image generator on the target datasets, which can significantly reduce the training cost of a high-resolution video prediction model. Also, the representations from our video prediction model can be easily transferred to downstream tasks that require fine-grained control at each timestep, e.g., imitation learning (see Section 5.3 for supporting experimental results on multi-task imitation learning).
4.2 AUTOREGRESSIVE LATENT VIDEO PREDICTION MODEL
To leverage the VQ-GAN model for video prediction, we utilize an autoregressive latent video prediction architecture that operates on top of the discrete codes extracted from a video x. Specifically, we extract the discrete codes z(x) = (z(x_1), ..., z(x_T)) using the pre-trained VQ-GAN, where z(x_t) = (q(x_t, 1), q(x_t, 2), ..., q(x_t, |π'|)) is the discrete code extracted from the frame x_t as in (2). Then, instead of modeling the distribution of the video p(x) in the pixel-channel space as in (1), we learn the distribution of the video in the discrete latent representation space:

p\big(z(x_{c:T}) \mid z(x_{<c})\big) = \prod_{i=0}^{N_d-1} p\big(z_{\pi'(i)}(x) \mid z_{\pi'(<i)}(x)\big), \quad (6)

where N_d = (T − c) · H' · W' is the total number of codes from x_{c:T}. While the specific implementation for modeling p(z(x)) differs across prior works (Oord et al., 2017; Rakhimov et al., 2021; Walker et al., 2021; Yan et al., 2021), due to its simplicity we utilize the causal transformer architecture (Yan et al., 2021), where the output logits from input codes are trained to predict the next discrete codes. We remark that our approach is also compatible with other architectures.
4.3 ADDITIONAL TECHNIQUES
Top-k sampling. To improve the video prediction quality of latent autoregressive models, whose outputs are sampled from a probability distribution over a large number of discrete codes, we utilize top-k sampling (Fan et al., 2018), which randomly samples the output from the k most probable discrete codes. By preventing the model from sampling rare discrete codes from the long tail of the probability distribution and predicting future frames conditioned on such codes, we find that top-k sampling improves video prediction quality, especially given that the number of discrete codes required for future prediction is very large, e.g., 2,560 on RoboNet (Dasari et al., 2019) and up to 6,400 on the KITTI dataset (Geiger et al., 2013) in our experimental setup.
Data augmentation. We also investigate how data augmentation can be useful for improving the performance of autoregressive latent video prediction models. Since the image generator is not trained with augmentation, we utilize a weak augmentation to avoid the instability coming from aggressive transformation of the input frames, i.e., a translation augmentation that moves the input images by m pixels along the X or Y direction (a minimal sketch of this augmentation is given after the list of questions below).
5 EXPERIMENTS
We design our experiments to investigate the following:
• Can HARP predict high-resolution future frames (up to 256 × 256 pixels) on various video datasets with different characteristics?
• How does HARP compare to state-of-the-art methods with large end-to-end networks on standard video prediction benchmarks in terms of quantitative evaluation?
• How do the proposed techniques affect the performance of HARP?
• Can HARP be transferred to solve multi-task imitation learning tasks?
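As referenced in Section 4.3 above, the following is a minimal sketch of the translation augmentation; it is our own illustration, using `torch.roll` as a simple wrap-around approximation of shifting the frames (a padded translation would also work).

```python
import torch

def translate_frames(video: torch.Tensor, m: int = 4) -> torch.Tensor:
    """Translation augmentation: shift a clip by up to m pixels along X or Y.

    video: tensor of shape (T, C, H, W). The same shift is applied to every
    frame so the motion within the clip stays consistent.
    """
    shift = int(torch.randint(-m, m + 1, (1,)))  # pixels to shift
    dim = int(torch.randint(2, 4, (1,)))         # 2 = height (Y), 3 = width (X)
    return torch.roll(video, shifts=shift, dims=dim)
```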
We utilize up to 8 Nvidia 2080Ti GPU and 20 CPU cores for training each model. For training VQ-GAN (Esser et al., 2021), we first train the model without a discriminator lossLGAN, and then continue the training with the loss following the suggestion of the authors. For all experiments, VQ-GAN downsamples each frame into 16× 16 latent codes, i.e., by a factor of 4 for frames of size 64×64 frames, and 16 for frames of size 256×256. For training a transformer model, the VQ-GAN model is frozen so that its parameters are not updated. We use Sparse Transformers (Child et al., 2019) as our transformer architecture to accelerate the training. As for hyperparameter, we use k = 10 for sampling at inference time, but no data augmentation for high-resolution video prediction experiments. We report more detailed implementation details in Appendix A. Meta-World experiments. To demonstrate that our method can predict high-resolution videos (256× 256 pixels), we first use action-free Meta-World dataset (Yu et al., 2020) consisting of 2,500 demonstrations from 50 different robotics manipulation tasks, which we collected using the deterministic scripted policies1. Specifically, we train a single model to predict future videos of all 50 tasks, without leveraging task-specific information, such as task index. For evaluation, we use 10% of the demonstrations as a held-out test dataset. As shown in Figure 3, our model can accurately predict the high-resolution future frames of diverse tasks, capturing all the small details. This shows that our model can effectively learn all the information of multiple tasks required for predicting future frames. We also remark that such representations can be useful to improve the performance of imitation learner (see Section 5.3 for supporting experimental results). RoboNet experiments. Now we investigate how our model works on large-scale, real-world RoboNet dataset (Dasari et al., 2019) consisting of more than 15 million frames. While prior works successfully trained a video prediction model with 64 × 64 videos (Wu et al., 2021a; Babaeizadeh 1The dataset is available at: https://shorturl.at/acnxM et al., 2021), we show that our model can predict high-resolution 256× 256 videos even with fewer number of parameters than the one used in the prior works for predicting 64×64 videos. Specifically, we first train a VQ-GAN model with 91.5M parameters, and then train a 12-layer causal transformer model with 74.2M parameters that predicts future 10 frames conditioned on first two frames and future ten actions. Total number of parameters is 165.7M, which is smaller than 303.3M of FitVid (Babaeizadeh et al., 2021) that predicts 64 × 64 videos. Figure 1 and Figure 4 show the predicted frames on the held-out test video, where the model predicts the high-resolution future frames where a robot arm is moving around various objects of different colors and shapes. Kinetics-600 experiments using ImageNet pre-trained VQ-GAN. Finally, we consider a very complex, large-scale Kinetics-600 dataset (Carreira et al., 2018) consisting of more than 400,000 videos, which requires a large amount of computing resources for training even on 64 × 64 resolution (Clark et al., 2019; Luc et al., 2020). To avoid the prohibitively expensive training cost of high-resolution video prediction models on this dataset and fully exploit the benefit of employing a high-fidelity image generator, we utilize the VQ-GAN model pre-trained on ImageNet dataset (Deng et al., 2009). 
2 As we only train the transformer model for video prediction, this enables us to train a high-resolution video prediction model in a very efficient manner. Specifically, we train the transformer model for 60,000 steps on training dataset, which takes less than a day using our machine. As shown in Figure 5, our model can predict future frames on the test videos3, which demonstrates that leveraging the large image generator pre-trained on a wide range of natural images can be a promising recipe for efficient video prediction on high-resolution, large-scale video datasets. 5.2 COMPARATIVE EVALUATION ON STANDARD BENCHMARKS Datasets. For quantitative evaluation, we first consider the BAIR robot pushing dataset (Ebert et al., 2017) consisting of roughly 40k training and 256 test videos. We consider action-free setup, hence video prediction models should be stochastic for predicting the diverse possible movement of a robot arm and objects. Following the setup in prior works (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), we predict 15 future frames conditioned on one frame. We also evaluate our method on KITTI driving dataset (Geiger et al., 2013), where the training and test datasets are split by following the setup in Lotter et al. (2017). As the KITTI dataset is relatively small-scale compared to other datasets, i.e., 57 training videos, it provides a good testbed for investigating the effect of data augmentation. For hyperparameters, we use k = 10 for both datasets and data augmentation with m = 4 is only applied to KITTI as there was no sign of overfitting on BAIR dataset. For a fair comparison, we follow the setup of Villegas et al. (2019), where (i) a model is trained to predict future ten frames conditioned on five frames and evaluated to predict future 25 frames conditioned on five frames, and (ii) test dataset consists of 148 video clips constructed by extracting 30-frame clips and skipping every 5 frames. 2https://github.com/CompVis/taming-transformers 3We collected videos covered by CC-BY license, which are available to put the frames on a paper. Metrics. We use two evaluation metrics: Learned Perceptual Image Patch Similarity (LPIPS; Zhang et al. 2018), a frame-wise metric designed to better represent the human perceptual similarity of two frames compared to traditional metrics (Wang et al., 2004; Huynh-Thu & Ghanbari, 2008), and Frèchet Video Distance (FVD; Unterthiner et al. 2018), a dynamics-based evaluation metric known to be better correlated with the human evaluation compared to frame-wise evaluation metrics. FVD is computed by comparing the summary statistics of I3D network trained on Kinetics400 dataset (Carreira & Zisserman, 2017), and LPIPS is computed using the features from AlexNet (Krizhevsky et al., 2012). For comparison with the scores reported in prior works, we exactly follow the evaluation setup in Villegas et al. (2019) and Babaeizadeh et al. (2021) that samples 100 future videos for each ground-truth test video, then reports the best score over 100 videos for LPIPS, and the score using all videos for FVD, with the batch size of 256 for BAIR and 148 for KITTI. Results. Table 1 shows the performances of our method and baselines on test sets of BAIR Robot Pushing and KITTI driving dataset. We observe that our model achieves competitive or superior performance to state-of-the-art methods with large end-to-end networks, e.g., HARP outperforms FitVid with 302M parameters on KITTI driving dataset. 
Our model successfully extrapolates to unseen number of future frames (i.e., 25) instead of 10 future frames used in training on KITTI dataset. This implies that transformer-based video prediction models can also predict arbitrary number of frames at inference time. In the case of BAIR dataset, HARP achieves the similar performance of FitVid with 302M parameters, even though our method only requires 89M parameters. We provide videos predicted by HARP on BAIR and KITTI datasets in Appendix C. Analysis. We investigate how the top-k sampling, number of layers, and magnitude m of data augmentation affect the performance. Table 2a shows that smaller k leads to better performance, implying that the proposed top-k sampling is effective for improving the performance by discarding rare discrete codes that might degrade the prediction quality at inference time. As shown in Table 2b, we observe that more layers leads to better performance on BAIR dataset, which implies our model can be further improved by scaling up the networks. Finally, we find that (i) data augmentation on KITTI dataset is important for achieving strong performance, similar to the observation of Babaeizadeh et al. (2021), and (ii) too aggressive augmentation leads to worse performance. We provide the learning curves with and without augmentation in Appendix B. 4Baselines are SVG (Villegas et al., 2019), GHVAE (Wu et al., 2021a), FitVid (Babaeizadeh et al., 2021), LVT (Rakhimov et al., 2021), SAVP (Lee et al., 2018), DVD-GAN-FP (Clark et al., 2019), VideoGPT (Yan et al., 2021), TrIVD-GAN-FP (Luc et al., 2020), and Video Transformer (Weissenborn et al., 2020). 5.3 FINE-TUNING HARP FOR MULTI-TASK IMITATION LEARNING Setup. In order to demonstrate that the pre-trained representations from HARP can be useful for solving downstream tasks, we evaluate the imitation learning performance of fine-tuned HARP on MT-50 benchmark from Meta-World. Specifically, we take the pre-trained HARP model (see Figure 3 for the video predictions from this model), and fine-tune the model to predict expert actions by introducing a policy network on top of the transformer model. For comparative evaluation, we consider three baselines: (a) VQ-Transformer, which shares the same architecture with HARP but trained from scratch, (b) CNN-LSTM, which extracts features using convolutional neural networks (CNN) and LSTM networks, and (c) CNN-Transformer, that utilizes transformer networks instead of LSTM networks. For training and evaluation, we use the same training and test dataset used for video prediction experiments. We report the average success rate over 10 trials for each task. More details are available in Appendix A. Results. Table 3 shows the performance of imitation learning policies on MT50 test environments. We first observe that VQ-Transformer, which has a same architecture with HARP but trained from scratch, completely fails to solve the tasks. This shows the difficulty of training useful representations with fixed discrete codes as inputs. However, finetuned HARP model successfully outperforms other baselines because pre-trained representations contain useful information for long-term reasoning. This demonstrates that video prediction with HARP can be an effective self-supervised learning scheme for solving various control tasks. 6 DISCUSSION In this work, we present HARP, a high-fidelity autoregressive latent video prediction model. 
By employing a high-fidelity image generator and utilizing top-k sampling at inference time, HARP can predict high-resolution future frames, and achieve competitive performance to state-of-the-art video prediction methods with large end-to-end networks. We also show that HARP can leverage the image generator pre-trained on a wide range of natural images for video prediction, similar to the approach in the context of video synthesis (Tian et al., 2021). We hope this work inspires more investigation into leveraging a pre-trained image generator for video prediction, which can significantly reduce the cost for training a high-resolution video prediction model by building on the recent success of high-fidelity image generation (Oord et al., 2017; Razavi et al., 2019; Esser et al., 2021; Child, 2020; Ho et al., 2020; Karras et al., 2020; Dhariwal & Nichol, 2021). Finally, we report the failure cases of video prediction with HARP and discuss the possible extensions to resolve the issue. A common failure case for video prediction on RoboNet dataset is ignoring the interaction between a robot arm and objects. For example, in Figure 6a, our model ignores the objects and only predicts the movement of a robot arm. On the other hand, common failure case for Kinetics-600 is a degenerate video prediction, where a model just repeats the conditioning frame without predicting the future, as shown in Figure 6b. These failure cases might be resolved by training more larger networks similar to the observation in the field of natural language processing, e.g., GPT-3 (Brown et al., 2020), or might necessitate a new architecture for addressing the complexity of training autoregressive latent prediction models on video datasets. ETHICS STATEMENT While video prediction can be useful for various applications including robotic manipulation and autonomous driving, it might be misused by malicious users for unethical purposes, e.g., fake videos accusing politicians or sexual videos of any individuals. As our work introduces a method for generating more high-resolution future frames, our method may improve the chance of such videos being recognized as real videos. For this reason, in addition to developing a video prediction method that generates more realistic frames, it is important to be aware of potential problems and develop a method to detect generated videos (Gentine et al., 2018). REPRODUCIBILITY STATEMENT We describe the implementation and evaluation details in Section 5 and Appendix A. We also provide our code in the supplementary material. A EXPERIMENTAL SETUP A.1 DATASETS Meta-World. Meta-World (Yu et al., 2020) is a robotics manipulation simulator that supports 50 different tasks. For our experiments, we collect 50 demonstrations for each task using the deterministic scripted policies.5, hence we use total 2500 demonstrations. For evaluation, we construct the held-out test dataset with 10% of the demonstrations. To improve the visibility of rendered frames, we adjust the camera view of the camera using the publicly available source codes.6 We provide the dataset we used in our experiments7. RoboNet. RoboNet (Dasari et al., 2019) is a large-scale real-world robotics dataset consisting of more than 160,000 high-resolution videos. Since there is no test set for RoboNet dataset, we follow the setup in Wu et al. (2021a) for constructing a held-out test dataset8 of size 256. Following the setup in Wu et al. (2021a); Babaeizadeh et al. 
(2021), we train a video prediction model to predict ten future frames conditioned on two initial frames and ten future actions. For preprocessing the frames, we resize the original frames to 256×256 resolution frames without cropping. For downloading the dataset, we utilize the publicly available script.9 Kinetics-600. Kinetics-600 dataset is a large-scale video dataset consisting of more than 400,000 videos of total 600 action classes. Following the setup in Clark et al. (2019); Luc et al. (2020), we train a video prediction model to predict future 11 frames conditioned on the first five frames. For downloading the dataset, we use the publicly available repository10. BAIR Robot Pushing. BAIR Robot Pushing dataset (Ebert et al., 2017) consists of 43,264 training and 256 test videos. While BAIR dataset contains the information of actions robots take, common setup for evaluation on BAIR dataset is action-free (Clark et al., 2019; Weissenborn et al., 2020; Luc et al., 2020; Yan et al., 2021), where a video prediction model is trained to predict future 15 frames conditioned on the initial frame. For downloading and preprocessing the dataset, we utilize the publicly available script.11 KITTI driving dataset. KITTI driving dataset (Geiger et al., 2013) is the dataset that contains a large number of high-resolution driving videos. However, for video prediction, we follow the setup in Lotter et al. (2017) and utilize 57 training videos and 3 test videos for evaluation. To avoid utilizing too similar video clips from the test dataset for evaluation, we follow the setup in Villegas et al. (2019) where 30-frame clip is extracted with the interval of 5 frames, which constructs a test of size 148. For comparison with baselines, following the setup in Villegas et al. (2019), we train a model to predict future ten frames conditioned on five frames and evaluate the model to predict future 25 frames conditioned on five frames. For downloading and preprocessing the dataset, we utilize the publicly available script.12 5https://github.com/rlworkgroup/metaworld/tree/master/metaworld/policies 6we use the of 9e3863d in https://github.com/rlworkgroup/metaworld. 7https://shorturl.at/acnxM 8We use the list of videos available at https://github.com/google-research/fitvid/blob/master/robonet testset filenames.txt 9https://gist.github.com/soskek/d762751ce0aef4b2c7cf0a1537917016 10https://github.com/cvdfoundation/kinetics-dataset 11https://github.com/wilson1yan/VideoGPT 12https://github.com/coxlab/prednet A.2 IMPLEMENTATION DETAILS OF HARP VQ-GAN. The training of HARP consists of two-stages. First, for training a VQ-GAN model (Esser et al., 2021) we use the publicly available source code from the authors13. Following the suggestion of the authors, we trained a VQ-GAN model without a discriminator loss until it converges, then resume the training with a discriminator loss for total {300000, 500000, 150000, 30000} training steps with batch size of {24, 24, 320, 96} on Meta-World, RoboNet, BAIR Robot Pushing, and KITTI driving dataset, respectively. For Kinetics-600 dataset, we leverage the publicly available VQ-GAN model14 pre-trained on ImageNet without training a VQ-GAN model from scratch. The size k of codebook C in (2) is {8192, 8192, 1024, 1024, 1024} for Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and KITTI dataset, respectively. Causal transformer. 
Causal transformer. We then train a 12-layer Sparse Transformer (Child et al., 2019) to predict the discrete codes from the VQ-GAN model in an autoregressive manner, building on the publicly available source code15 of VideoGPT (Yan et al., 2021). For conditioning on initial frames, we use the same architecture as Yan et al. (2021), which employs a ResNet to extract a downsampled feature map; we use the ResNet-18 architecture for all experiments. We train the model until it converges on the Meta-World, BAIR, and KITTI datasets, but we find no sign of overfitting on the large-scale RoboNet and Kinetics-600 datasets. Specifically, we train the model for {50000, 80000, 100000, 85000, 30000} training steps on Meta-World, RoboNet, Kinetics-600, BAIR Robot Pushing, and the KITTI dataset, respectively. We use data augmentation with magnitude m = 4 on the KITTI driving dataset, which we find to be very effective for improving performance (see Appendix B for learning curves with and without augmentation).

Inference with top-k sampling. We utilize top-k sampling (Fan et al., 2018) to improve video prediction quality; for all datasets, we use k = 10. One major limitation of autoregressive prediction models is slow inference. We utilize the implementation of Yan et al. (2021), which caches the previous keys and values and reuses them for fast inference. In order to enable our model to extrapolate to an unseen number of frames at evaluation time on the KITTI dataset (e.g., the model has to predict 25 frames instead of the 10 frames it was trained to predict), we first predict T frames x_{c:T−1} conditioned on the c initial frames x_{1:c}, then predict the next frames by (a) keeping x_{1:c} as conditioning frames and (b) giving the last T−1 predicted frames as inputs to the causal transformer model. We repeat this process until all 25 future frames are predicted.

A.3 IMPLEMENTATION DETAILS OF ACTION-CONDITIONED HARP

In order to predict future frames conditioned on future actions on the RoboNet dataset (Dasari et al., 2019), we condition the prediction on actions by adding action embeddings to the embeddings of the discrete codes. Specifically, we introduce a linear layer that maps raw actions to action embeddings of the same dimension as the token embeddings, then add the action embeddings of time step t+1 to the token embeddings used for predicting the tokens of time step t+1. At inference time, the procedure is exactly the same as for HARP, except that the future action embedding is added to the token embedding. We find that this simple modification to the original architecture enables HARP to predict future frames conditioned on actions; a minimal sketch of the idea is given below. We provide an illustration of action-conditioned HARP in Figure 7.
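As described above, the action conditioning reduces to one extra linear layer. The sketch below illustrates the idea under assumed, hypothetical dimensions and names; it is not the authors' code.

```python
import torch
import torch.nn as nn

class ActionConditionedEmbedding(nn.Module):
    """Add the action embedding of step t+1 to the token embeddings used to
    predict the discrete codes of step t+1 (sketch of Appendix A.3)."""

    def __init__(self, vocab_size=8192, embed_dim=512, action_dim=5):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.action_proj = nn.Linear(action_dim, embed_dim)  # raw action -> embedding

    def forward(self, tokens, actions):
        # tokens:  (B, T, L) discrete codes, L codes per frame
        # actions: (B, T, action_dim) action leading into each frame
        tok = self.token_embed(tokens)                # (B, T, L, D)
        act = self.action_proj(actions).unsqueeze(2)  # (B, T, 1, D)
        return tok + act                              # broadcast over the L codes

emb = ActionConditionedEmbedding()
out = emb(torch.randint(0, 8192, (2, 10, 64)), torch.randn(2, 10, 5))
```

The vocabulary size of 8192 here mirrors the RoboNet codebook size reported in Appendix A.2; the action dimension is an arbitrary placeholder.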
13 https://github.com/CompVis/taming-transformers
14 https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/
15 https://github.com/wilson1yan/VideoGPT

A.4 IMPLEMENTATION DETAILS OF HARP FINE-TUNING

In order to fine-tune the HARP model for solving multi-task imitation learning tasks on Meta-World (Yu et al., 2020), we first pre-train a model to predict 11 future frames conditioned on the first frame using the dataset from all 50 tasks (see Appendix A.1 for details on the dataset). Then we fine-tune the model to predict expert actions by introducing a two-layer policy network on top of the causal transformer model. Specifically, we train a behavioral cloning policy to minimize the mean squared error between the actions predicted by the policy network and the ground-truth expert actions. To further improve the performance of all methods, we follow the idea of Dasari & Gupta (2020) and learn an inverse dynamics predictor that predicts the action given two consecutive frames. We train all methods for 20,000 steps with a batch size of 20 and data augmentation of magnitude m = 4. For CNN-based methods, we use ResNet-50 as the feature extractor, a 4-layer LSTM16 for CNN-LSTM, and a 12-layer causal transformer for CNN-Transformer.

B EFFECTS OF AUGMENTATION

Figure 8 shows the test error during the training of HARP on the KITTI driving dataset (Geiger et al., 2013). One can see that the test error of a model trained without augmentation increases after the initial ∼2500 training steps, which is a sign of overfitting. In contrast, the test error of a model trained with data augmentation keeps decreasing throughout training until it converges, which shows the effectiveness of data augmentation for learning video prediction models, similar to the observation in Babaeizadeh et al. (2021). One notable detail is that HARP overfits to the KITTI training dataset with far fewer parameters (i.e., 89M) than FitVid, which uses 303M parameters.

16 We tried deeper networks to match the number of trainable parameters, but we find that deeper LSTM networks are very unstable to train. Hence, we searched over {2, 4, 8, 16} layers and report the best results, achieved with a 4-layer LSTM.

C VIDEO PREDICTIONS ON BAIR AND KITTI

We provide future frames predicted by HARP on BAIR Robot Pushing (Ebert et al., 2017) and the KITTI driving dataset (Geiger et al., 2013). On the BAIR dataset, the model is trained to predict 15 future frames conditioned on the first frame. On the KITTI dataset, the model is trained to predict 10 future frames conditioned on five frames and evaluated by predicting 25 future frames conditioned on five frames.

D MORE VIDEO PREDICTIONS ON META-WORLD

We provide more videos predicted by HARP on the Meta-World dataset (Yu et al., 2020). The model is trained to predict 11 future frames conditioned on the first frame.

E MORE VIDEO PREDICTIONS ON ROBONET

We provide more videos predicted by HARP on the RoboNet dataset (Dasari et al., 2019). The model is trained to predict ten future frames conditioned on the initial two frames and ten future actions.

F MORE VIDEO PREDICTIONS ON KINETICS-600

G ATTRIBUTION

Figure 5 (top): "Windsurfing Session 2013 Iballa Moreno" by morenotwins. Accessible here
Figure 5 (bottom): "How to fry an egg in 5 simple steps" by What's Gaby Cooking. Accessible here
Figure 6 (right): "Prepare fruit cutting to pets #Shorts 4" by AP STUDIO. Accessible here
Figure 12 (first): "Windsurfing" by Dmitry Rudnev. Accessible here
Figure 12 (second): "Eric Cornelissen Windsurfing September 11, 2017" by Ron Van Dijk. Accessible here
Figure 12 (third): "Smart way to cut Fruits #6#" by MR. BEING SMART. Accessible here
Figure 12 (fourth): "How to make the Perfect Crunchy Half Cakes ( Kangumu ) ‖‖ Jinsi ya kupika Half Cakes za kupasuka." by Ayleen's Food & Vlogs. Accessible here
1. What is the main contribution of the paper regarding autoregressive latent video prediction?
2. What are the strengths and weaknesses of the proposed framework, particularly in its technical novelty and design choices?
3. Do you have any questions regarding the paper's experiments and results, such as the impact of replacing VQ-GAN with VQ-VAE, reporting quantitative results on Kinetics600 64x64, and comparing performances without top-k sampling?
4. Can the authors provide more information on how they tuned HARP and arrived at the codebook sizes used in their experimental setup?
Summary Of The Paper Review
Summary Of The Paper

The paper proposes an autoregressive latent video prediction model with two components: an image generation model (VQ-GAN) trained to compress an image into 2-D latent codes, and a 1-D autoregressive transformer that generates the next latent code given the flattened history of the previous latent codes, i.e., a sequence of length H×W×T. This latent code is then transformed into the next frame of the sequence by the decoder of the VQ-GAN model. Quantitative results are shown on 64x64 images on the BAIR Robot Pushing dataset and the KITTI driving dataset. Some performance improvement is shown using top-k sampling on the BAIR and KITTI datasets and data augmentation (pixel translation) on the KITTI dataset. Qualitative results are shown on high-resolution (256x256) images on the RoboNet, MetaWorld, and Kinetics600 datasets.

Review

Strengths
- The paper is well written and easy to read, and the proposed framework is conceptually simple.
- The authors show qualitative results on 256x256 images, which is quite challenging.
- The reported results on KITTI are strong.

Weaknesses
- The main technical novelty seems to be the usage of the VQ-GAN model (instead of the VQ-VAE model) to produce latent codes, but there is no ablation that quantifies the impact of this design choice. The authors should replace VQ-GAN with the standard VQ-VAE model and report the improvements (if any).
- The paper claims that the prior related works are limited to generation of 128x128 images, but it is unclear what design decisions allow HARP to generate 256x256-resolution images. In terms of complexity, the proposed HARP should be similar to LVT (Rakhimov et al., 2021) and VideoGPT (Yan et al., 2021), since both rely on attention-based architectures in latent space. Is it that the prior works could generate 256x256 images but with poor fidelity due to the reliance on VQ-VAE (instead of VQ-GAN)? This is simple to verify by just replacing the VQ-GAN with VQ-VAE and reporting the results on 256x256 images.
- The authors should report their quantitative results on Kinetics600 64x64 to provide another point of comparison. Additionally, they should report their numbers on Kinetics600 256x256 with and without ImageNet pre-training as a future baseline. Further, Clark et al. (2019) report results on high-resolution Kinetics600 in Table 1 of their paper, which should be compared to.
- The final comparison in Table 2 is reported using top-k sampling, which is unfair to both LVT and the Video Transformer, as top-k sampling is readily applicable to these models as well. After removing top-k sampling, the performance of HARP is comparable to VideoGPT.
- What is "number of layers" in the Results section? Is it the number of layers in the VQ-GAN or in the 1-D transformer?
- How was HARP tuned? The datasets in Section 5.2 only talk about the train and test splits but no validation split.
- Can the authors provide some ablations on the codebook size? It is specifically stated that "given that the number of discrete encodings required for future prediction is very large, e.g., 2,560 on RoboNet (Dasari et al., 2019) up to 6,400 on KITTI dataset (Geiger et al., 2013) in our experimental setup." How were these codebook numbers arrived at?
ICLR
Title Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation

Abstract Self-explaining models are models that reveal their decision-making parameters in an interpretable manner, so that the model's reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining, due to the non-linearity of the activation functions, complex architectures, and an obscure feature extraction and transformation process. In this work, we illustrate that existing deep architectures are hard to interpret because each hidden layer carries a mix of low level and high level features. As a solution, we propose a novel feature leveling architecture that isolates low level features from high level features on a per-layer basis to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models are able to achieve results competitive with mainstream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproduction.†

† Public repo URL anonymized for review purposes; see the code folder for detailed implementation.

1 INTRODUCTION

Deep Neural Networks (DNNs) are viewed as black-box models because of their obscure decision-making process. One reason deep neural networks are hard to interpret is that they are able to magically extract abstract concepts through multi-layer non-linear activations and end-to-end training. From a human perspective, it is hard to understand how features are extracted by different hidden layers and what features are used for final decision making. In response to the challenge of interpretability, two paths have been taken to unbox neural networks' decision learning process. One is to design verification algorithms that can be applied to existing models to back-trace their decision learning process. The other is to design models that "explain" the decision-making process automatically. The second direction is promising in that the interpretability is built in architecturally, so verification feedback can be directly used to improve the model. One class of self-explaining models borrows the interpretability of General Linear Models (GLMs) such as linear regression. GLMs are naturally interpretable in that complicated interactions of non-linear activations are not involved. The contribution of each feature to the final decision output can simply be analyzed by examining the corresponding weight parameters. Therefore, we take a step forward to investigate ways to make DNNs as similar to GLMs as possible for interpretability purposes while maintaining competitive performance. Fortunately, a GLM naturally exists in the last layer of most discriminative DNN architectures (see appendix A.3 for the reason that the last layer is a GLM layer). However, the GLM can only account for the output generated by the last hidden layer, and this output is not easy to interpret because it potentially contains mixed levels of features. In the following section, we use empirical results to demonstrate this mixture effect. Based on this observation, one way to naturally improve interpretation is to prevent features extracted by different layers from mixing together.
Thus, we † Public Repo URL annonymized for review purpose-See code folder for detailed implementation directly pass features extracted by each layer to the final GLM layer. This can further improve interpretability by leveraging the weights of the GLM layer to explain the decision making process. Motivated by this observation, we design a feature leveling network structure that can automatically separate low level features from high level features to avoid mixture effect. In other words, if the low level features extracted by the kth hidden layer can be readily used by the GLM layer, we should directly pass these features to the GLM rather than feeding them to the k + 1th hidden layer. We also propose a feature leveling scale to measure the complexity of different sets of features’ in an unambiguous manner rather than simply using vague terms such as ”low” and ”high” to describe these features. In the following sections, we will first lay out the proposed definition of feature leveling. We then will illustrate how different levels of features reside in the same feature space. Based on the above observations, we propose feature leveling network, an architectural modification on existing models that can isolate low level features from high level features within different layers of the neural network in an unsupervised manner. In the experiment section, we will use empirical results to show that this modification can also be applied to reduce the number of layers in an architecture and thus reduce the complexity of the network. In this paper, we focus primarily on fully connected neural networks(FCNN) with ReLU activation function in the hidden layers. Our main contributions are as follows: • We take a step forward to quantify feature complexity for DNNs. • We investigate the mixture effect between features of different complexities in the hidden layers of DNNs. • We propose a feature leveling architecture that is able to isolate low level features from high level features in each layer to improve interpretation. • We further show that the proposed architecture is able to prune redundant hidden layers to reduce DNNs’ complexity with little compromise on performance. The remaining content is organized as follows: In section 2, we first introduce our definitions of feature leveling and use a toy example to show the mixture effect of features in hidden layers. In section 3, we give a detailed account of our proposed feature leveling network that could effectively isolate different levels of features. In section 4, we provide a high level introduction to some related works that motivated our architectural design. In Section 5, we test and analyze our proposed architecture on various real world datasets and show that our architecture is able to achieve competitive performance while improving interpretability. In section 6, we show that our model is also able to automatically prune redundant hidden layers, thus reducing the complexity of DNNs. 2 FEATURE LEVELING FOR NEURAL NETWORKS The concepts of low level and high level features are often brought up within the machine learning literature. However, their definitions are vague and not precise enough for applications. Intuitively, low level features are usually ”simple” concepts or patterns whereas high level features are ”abstract” or ”implicit” features. Within the scope of this paper, we take a step forward to give a formal definition of feature leveling that quantizes feature complexity in an absolute scale. 
This notion of a feature's scale is better than simply having "low" and "high" as descriptions because it reveals an unambiguous ordering between different sets of features. We will use a toy example to demonstrate how features can have different levels and explain why separating different levels of features can improve interpretability.

2.1 A TOY EXAMPLE

We create a toy dataset called Independent XOR (IXOR). IXOR consists of a set of uniformly distributed features X : {(x^1, x^2, x^3) | x^1 ∈ [−2, 2], x^2 ∈ [−2, 2], x^3 ∈ [0, 1]} and a set of labels Y : {0, 1}. We use superscript indices for the attributes of this toy example. The labels are assigned as:

y = 1 if x^1 × x^2 > 0 ∧ x^3 > 0.5, and y = 0 otherwise.

In this dataset, (x^1, x^2, x^3) clearly contains different levels of features. x^3 can be directly used by the GLM layer, as it has a linear decision boundary. (x^1, x^2) is more complex: the pair forms an XOR pattern and cannot be linearly separated, thus requiring further decomposition before it is sufficient for the GLM layer. To make correct decisions, the DNN should use one layer to decompose the XOR into lower level features, and directly transport x^3's value into the GLM layer. A small script that generates this dataset is shown below.
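For concreteness, the IXOR construction above can be generated in a few lines; this is our own sketch of the dataset, not the authors' released script (src/independent xor in their supplemental material).

```python
import numpy as np

def make_ixor(n: int, seed: int = 0):
    """Generate the Independent XOR (IXOR) toy dataset: x^1, x^2 ~ U[-2, 2]
    form an XOR pattern, x^3 ~ U[0, 1] is linearly separable on its own, and
    y = 1 iff x^1 * x^2 > 0 and x^3 > 0.5."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(-2, 2, n)
    x2 = rng.uniform(-2, 2, n)
    x3 = rng.uniform(0, 1, n)
    y = ((x1 * x2 > 0) & (x3 > 0.5)).astype(np.int64)
    return np.stack([x1, x2, x3], axis=1), y

X, y = make_ixor(10000)   # X has shape (10000, 3), y in {0, 1}
```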
2.2 CHARACTERIZING LOW AND HIGH LEVEL FEATURES WITH FEATURE LEVELING

From IXOR we can see that not all features have the same "complexity". Some can be directly fed into the GLM layer; others may need to go through one or more hidden layers to be transformed into features that can directly contribute to decision making. Thus, instead of using "low" and "high" level to characterize features, we propose to frame the complexity of different features with the definition of feature leveling.

Consider a dataset D consisting of N i.i.d. samples with features and their corresponding labels {(a_1, y_1), ..., (a_N, y_N)}. We assume that the samples a_i ∈ D contain features that require at most K hidden layers of transformation to perform optimal inference. For a DNN trained with K hidden layers and a GLM layer, we define the set of k-th level features as the set of features that requires k − 1 hidden layers of extraction, under the current network setup, to be sufficiently utilized by the GLM layer. In the following paragraphs, we denote by l_k ∈ L_k the k-th level features extracted from one sample, where L_k denotes the set of all k-th level features to be learned in the target distribution. The remaining high level features are denoted by H_k; these should be passed to the k-th layer to extract further-level features. L_k and H_k should be disjoint, that is, L_k ∩ H_k = ∅.

In the case of the toy example, x^3 is l_1, a level one feature, as the first hidden layer learns to directly transport its value to the GLM layer. (x^1, x^2) is h_1. The XOR can be decomposed by one hidden layer with a sufficient number of parameters, so as to be directly usable by the GLM layer for accurate decisions. Assuming the first hidden layer f_1 has sufficient parameters, it should take in h_1 and output l_2.

2.3 HOW THE PROPOSED MODEL SOLVES THE MIXTURE EFFECT AND BOOSTS INTERPRETATION

A common FCNN, however, does not separate each level of features explicitly. Figure 2 shows the heatmaps of the weight vectors for both the FCNN baseline and the proposed feature leveling network trained on the IXOR dataset. We observe that in the FCNN, x^3's value is preserved by the last column of the weight vector of the first layer but is mixed with all other features in the second layer before passing into the GLM layer. Our proposed model, on the other hand, cleanly separates x^3 and preserves its identity as an input to the GLM layer. In addition, our model identifies that the interaction between (x^1, x^2) can be captured by a single layer; the model therefore eliminates the second layer and passes the (x^1, x^2) features extracted by the first hidden layer directly to the GLM layer. From the results on the toy example, we can clearly see that the proposed model solves the mixture effect and assigns correct levels to features of different complexities in the context of the original problem. The model is therefore more interpretable in that it creates a clear path of reasoning, and the contribution of each level of features can be understood from the weight parameters in the GLM.

3 OUR PROPOSED ARCHITECTURE

Inspired by our definition of feature leveling, and to resolve the mixture-of-features problem, we design an architecture that recursively filters the k-th level features from the k-th layer inputs and allows them to be passed directly to the final GLM layer. We start with a definition of an FCNN and extend it to our model: we aim to learn a function F parametrized by a neural network with K hidden layers. The function F can be written as:

F = d(f_K(f_{K−1}(...f_1(a; θ_1)); θ_K))   (1)

where f_k is the k-th hidden layer function with parameters θ_k, and d(·) is the GLM model used for either classification or regression. Thus, the goal is to learn the function F such that:

R(θ) = (1/N) ∑_{i=1}^{N} L(F(a_i; θ), y_i),   θ* = argmin_θ R(θ)   (2)

In our formulation, each hidden layer can be viewed as a separator for the k-th level features and an extractor for higher level features. Thus, the output of f_k has two parts: l_k, the set of k-th level features extracted from the inputs, which can be readily transported to the GLM layer for decision making; and h_k, the abstract features that require further transformation. In formal language, we can describe our network with the following equation ("\" denotes set subtraction):

F = d(l_1, l_2, ..., l_K, f_K(f_{K−1}(...f_1(a \ l_1; θ_1)) \ l_K))   (3)

In order for f_k to learn a mutually exclusive separation, we propose a gating system for layer k, parametrized by φ_k, that is responsible for determining whether a given dimension of the input features should be in l_k or h_k. For a layer with input dimension J, {z_k^1, ..., z_k^J} forms the corresponding gate, where z_k^j ∈ {0, 1}. φ_k learns the probability for the gate z_k^j to take value 1, in which case the input feature at the j-th dimension is allocated to h_k, and to l_k otherwise. To maintain mutual exclusiveness between l_k and h_k, we aim to learn φ_k such that a feature passes to l_k if and only if its gate is exactly zero; otherwise, the gate is 1 and the feature goes to h_k. Thus, we can rewrite the neural network F with the gating mechanism for the i-th sample a_i from the dataset:

F = d(B(z_1) ⊙ a_i, B(z_2) ⊙ f_1(z_1 ⊙ a_i), ..., f_K(z_K ⊙ f_{K−1}(z_{K−1} ⊙ f_{K−2}(...f_1(z_1 ⊙ a_i)))))   (4)

Here, ⊙ denotes element-wise multiplication. The function B acts as a binary indicator that returns 1 if and only if the gate value is exactly 0 (i.e., the feature does not flow into the next layer at all), and 0 otherwise. The function B thus allows the level k features l_k = B(z_k) ⊙ f_{k−1} to be passed to the GLM layer if and only if they do not flow into the next layer at all. A sketch of this gated forward pass is given below.
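The following is a minimal sketch of the forward pass in Eq. (4), with fixed random binary gates for illustration; in the paper the gates are learned through the L0 relaxation described next, and all names and dimensions here are our own, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedLevelingNet(nn.Module):
    """Sketch of the gated forward pass in Eq. (4): per layer, dimensions with
    gate 0 are routed straight to the GLM layer as l_k, and dimensions with
    gate 1 continue into the next hidden layer as h_k."""

    def __init__(self, dims=(13, 64, 32), out_dim=1):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])
        # Fixed random binary gates for illustration only.
        self.gates = [torch.randint(0, 2, (d,)).float() for d in dims[:-1]]
        self.glm = nn.Linear(sum(dims), out_dim)  # sees l_1, ..., l_K and h_K

    def forward(self, x):
        levels, h = [], x
        for z, layer in zip(self.gates, self.layers):
            levels.append((1 - z) * h)     # l_k: gated off (z = 0), to the GLM
            h = torch.relu(layer(z * h))   # h_k: gated on (z = 1), transformed
        levels.append(h)                   # output of the last hidden layer
        return self.glm(torch.cat(levels, dim=-1))

net = GatedLevelingNet()
y_hat = net(torch.randn(8, 13))            # (8, 1)
```

Note that the GLM layer receives the concatenation of every level l_k together with the last hidden layer's output, so its weights directly expose each level's contribution.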
With this gating mechanism in place, the optimization objective becomes:

R(θ, φ) = (1/N) ∑_{i=1}^{N} L(F(a_i, z; θ, φ, B), y_i) + λ ∑_{k=1}^{K} ||z_k||_0,   z_k = g(φ_k)   (5)

with an additional L0 regularization term that encourages fewer h_k to pass into the next layer and more l_k to flow directly to the GLM layer. Here g(φ) is a transformation function that maps the parameter φ to the corresponding gate value. To achieve this discrete gate construction, we propose to learn the gating parameters in the context of L0 regularization. To update the parameter values through backpropagation, we use the approximation technique for differentiable L0 regularization developed by Louizos et al. (2017). We direct interested readers to the original work for the full development of the L0 approximation and summarize the key concept in terms of our gating mechanism below.

Although the gate value z ∈ {0, 1} is discrete and the probability of a gate being 0 or 1 is typically treated as a Bernoulli distribution, the probability space can be relaxed as follows. Consider a continuous random variable s with distribution q(s|φ) parameterized by φ. The gate is obtained through the transformation function m(·):

s ∼ q(s|φ),   z = m(s) = min(1, max(0, s))   (6)

The underlying probability space is then continuous, since s is continuous, while the gate can still attain a value of exactly 0. The probability of the gate being non-zero is given by the cumulative distribution function Q:

q(z ≠ 0 | φ) = 1 − Q(s ≤ 0 | φ)   (7)

The authors further use the reparameterization trick with noise ε ∼ p(ε) to obtain s = n(ε, φ) for a differentiable transformation function n(·); thus g(·) is equivalent to m ◦ n, where ◦ denotes function composition. The objective function of our feature leveling network then becomes:

R(θ, φ) = (1/N) ∑_{i=1}^{N} L(F(a_i, z; θ, φ, B, g), y_i) + (λ/K) ∑_{k=1}^{K} (1 − Q(s_k ≤ 0 | φ)),
z_k = g(φ_k, ε),   g(φ_k, ε) = m ◦ n(φ_k, ε),   ε ∼ p(ε)   (8)

A minimal implementation sketch of this relaxed gate is given below.
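In practice, Eqs. (6)-(8) are commonly realized with the hard-concrete distribution of Louizos et al. (2017). The following sketch assumes that realization; the stretch constants (β, γ, ζ) are the defaults from the original paper, and the surrounding variable names are our own illustration.

```python
import torch

# Stretch parameters from Louizos et al. (2017).
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def hard_concrete_gate(log_alpha, training=True):
    """Sample relaxed gates z = min(1, max(0, s)) (Eqs. 6-7): continuous in
    (0, 1) but able to take the values 0 and 1 exactly."""
    if training:
        u = torch.rand_like(log_alpha)  # reparameterized noise, eps ~ U(0, 1)
        s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / BETA)
    else:
        s = torch.sigmoid(log_alpha)
    return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

def expected_l0(log_alpha):
    """Penalty term of Eq. (8): sum over gates of q(z != 0) = 1 - Q(s <= 0)."""
    c = BETA * torch.log(torch.tensor(-GAMMA / ZETA))
    return torch.sigmoid(log_alpha - c).sum()

phi = torch.zeros(64, requires_grad=True)   # one gate parameter per feature
z = hard_concrete_gate(phi)                 # differentiable w.r.t. phi
penalty = expected_l0(phi)                  # add lambda * penalty to the loss
```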
4 RELATED WORK

Interpreting existing models: The ability to explain the reasoning process within a neural network is essential to validate the robustness of the model and to ensure that the network is secure against adversarial attacks (Moosavi-Dezfooli et al., 2016; Brown et al., 2017; Gehr et al., 2018). In recent years, many works have sought to explain the reasoning process of an existing neural network, either by extracting the decision boundary (Bastani et al., 2018; Verma et al., 2018; Wang et al., 2018; Zakrzewski, 2001) or through a variety of visualization methods (Mahendran & Vedaldi, 2015; Zeiler & Fergus, 2014; Li et al., 2015). Most of these methods are designed for validation purposes; however, their results cannot easily be used to improve the original models.

Self-explaining models were proposed by Alvarez Melis & Jaakkola (2018); the term refers to models whose reasoning process is easy to interpret. This class of models does not require a separate validation process. Many works have focused on designing self-explaining architectures that can be trained end-to-end (Zhang et al., 2018; Worrall et al., 2017; Li et al., 2018; Kim & Mnih, 2018; Higgins et al., 2017). However, most self-explaining models sacrifice a certain amount of performance for interpretability. Two noticeable models among these are able to achieve competitive performance on standard tasks while maintaining interpretability. The NIT framework (Tsang et al., 2018) interprets the neural decision process by detecting feature interactions in a Generalized Additive Model style. The framework achieves competitive performance but can only disentangle up to K groups of interactions, and the value of K needs to be searched manually during the training process. The SENN framework proposed by Alvarez Melis & Jaakkola (2018) focuses on abstract concept prototyping; it aggregates abstract concepts with a linear and interpretable model. Compared to our model, SENN requires an additional step of training an autoencoding network to prototype concepts and is not able to disentangle simple concepts from more abstract ones on a per-layer basis.

Sparse neural network training refers to various methods developed to reduce the number of parameters of a neural model. Much work has investigated using L2 or L1 regularization (Han et al., 2015; Ng, 2004; Wen et al., 2016; Girosi et al., 1995) to prune neural networks while maintaining differentiability for backpropagation. Another choice for regularization and creating sparsity is L0 regularization; however, due to its discrete nature, it does not support parameter learning through backpropagation. A continuous approximation of L0 was proposed to resolve this problem and has shown effectiveness in pruning both FCNNs and Convolutional Neural Networks (CNNs) in an end-to-end manner (Louizos et al., 2017). This regularization technique has further been applied not only to neural architecture pruning but also to feature selection (Yamada et al., 2018). Our work applies the L0 regularization's feature selection ability in a novel context: selecting the maximum amount of features as direct inputs to the GLM layer.

Compared to residual structures, our model is able to explain features at different levels and their contributions separately due to the linear nature of the GLM. ResNet (He et al., 2016) and Highway Networks (Srivastava et al., 2015) cannot isolate each level, as their skip features are further entangled by pooling, non-linear activations, and the following blocks. Unlike ResNet, which has full connections to all features, we propose to learn which features to pass to the GLM from a probabilistic perspective.

5 EXPERIMENTS

We validate our proposed architecture on commonly used datasets: MNIST and California Housing. For each task, we use the same initial architecture to compare our proposed model and the FCNN baseline. Due to the gating effect of our model, some of the neurons in the middle layers are effectively pruned; the architecture we report in this section for our proposed model is the pruned version after training with the gates. The second-to-last layer of our proposed models is labeled with a star to denote concatenation of all previous l_k with the output of the last hidden layer. For example, in the California Housing architecture, both the proposed model and the FCNN baseline start with 13−64−32−1 as the initial architecture, but due to the gating effect on deeper layers, the layer with 32* neurons in effect has 32 + (13 − 10) + (64 − 28) = 71 neurons, accounting for previously gated features (13 − 10 = 3 for l_1, 64 − 28 = 36 for l_2).

The two objectives of our experiments are: 1) To test whether our model can achieve competitive results, under the same initial architecture, compared to the FCNN baseline and other recently proposed self-explaining models. This test is conducted by comparing model metrics such as root mean square error (RMSE) for regression tasks and classification accuracy for multi-class datasets.
2) Because our model can separately account for each layer's contribution, we can compute the gradient with respect to each layer's direct input to the GLM and obtain the feature level our model assigns to each part of the input. Experiment implementation details are deferred to Appendix A.7-A.10.

5.1 DATASETS & PERFORMANCES

The MNIST handwriting dataset (LeCun et al., 2010) consists of pictures of handwritten digits from 0 to 9 in 28×28 grayscale format. We use a 784−300−100−10 architecture for both the FCNN baseline and the proposed model; this is the same architecture used in the original implementation of Louizos et al. (2017). Our model achieves results similar to state-of-the-art ReLU-activated FCNN architectures, with fewer layers. The feature gates completely eliminate message passing to the 100-neuron layer, which implies that our model only needs level 1 and level 2 layers for feature extraction to learn the MNIST dataset effectively.

The California Housing dataset (Pace & Barry, 1997) is a regression task that contains various metrics, such as longitude and owners' age, used to predict the price of a house. It contains 8 features, one of which is nominal. We convert the nominal feature into a one-hot encoding, giving 13 features in total. Since California Housing does not contain a standard test set, we split the dataset randomly with a 4:1 train-test ratio. Our proposed model beats the FCNN baseline with the same initial architecture. Only 3 of the 13 original features are directly passed to the GLM layer, implying that California Housing's input features are mostly second and third level.

5.2 DISENTANGLING THE CONTRIBUTION OF EACH LEVEL OF FEATURES

MNIST: Taking digit 4 as an example, compared to FCNN and SENN-FCNN, our model's l_1 identifies the contour of digit 4 and the corners of the 4 (larger in gradient) as first level features. l_2 shows concentrated negative gradients in the middle of the digit, which correspond to the "hole" in digit 4.

Cal Housing: We compare with NIT (Tsang et al., 2018). The left figure shows our model with different colors indicating feature gradients from different layers; NIT's colors indicate different groups. Compared to NIT, our model's l_1 identifies "longitude" (long) as a feature that relates linearly to housing price, since in California longitude is a major determining factor for housing price compared to latitude. According to the gradients, l_2 and l_3 emphasize different parts of the input, confirming that our model divides the features into different sets. For NIT, however, the gradients of most groups are similar, indicating that the features are not sufficiently disentangled among groups. In contrast, our model identifies the most important features with stronger weights and assigns zero or minimal weight to irrelevant ones. Because the final layer is linear in the concatenated levels, each level's contribution to the output can also be read off directly, as in the sketch below.
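The following is a minimal sketch of that per-level readout, based on the decomposition y = ∑_k w_k^T l_k described in Appendix A.3; the function name, shapes, and the assumption that the trained model exposes the level activations l_k are our own illustration.

```python
import torch

def per_level_contributions(levels, glm_weight):
    """Decompose a linear GLM output y = sum_k w_k^T l_k into per-level terms.

    levels:     list of K tensors l_k, each of shape (B, d_k).
    glm_weight: weight vector of the final GLM layer, shape (sum_k d_k,).
    Returns a (B, K) tensor whose k-th column is w_k^T l_k.
    """
    contribs, offset = [], 0
    for l_k in levels:
        w_k = glm_weight[offset:offset + l_k.shape[1]]
        contribs.append(l_k @ w_k)                 # (B,)
        offset += l_k.shape[1]
    return torch.stack(contribs, dim=1)

# Example with level sizes 3, 36, 32 (the Cal Housing model above, 71 total).
levels = [torch.randn(8, d) for d in (3, 36, 32)]
contrib = per_level_contributions(levels, torch.randn(71))
```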
5.3 SCALABILITY

During the training stage, our model requires more computational resources, as features from higher layers are passed both onward through the network and to the final GLM layer. However, at inference time, once the gates are learned, each input feature to a neural layer is computed only once due to the mutual exclusion of our gating setup. The weight parameters related to the "zeroed out" features can also be eliminated. In most cases our model results in a lower parameter count. In Appendix A.2 we show the number of parameters our framework needs for the reported inference models.

5.4 EXTENSION TO CONVOLUTIONAL NEURAL NETWORKS

Our framework can also be applied to convolutional architectures. To modify it, we simply apply the gate to the input features. Appendix A.11 and Figure 5 show that our model can also clearly isolate features from different levels. To reduce the gated feature size, we apply convolutions with no activation, reducing the dimension while maintaining linearity.

6 STRENGTH IN PRUNING REDUNDANT HIDDEN LAYERS

Due to our proposed model's ability to encourage linearity, it can also reduce its network complexity automatically by decreasing the number of hidden layers. Taking MNIST classification as an example, when the dataset's feature level is less than the number of hidden layers, our proposed model learns to prune the excess hidden layers automatically, as the network learns not to pass information to the further hidden layers. As a result, the number of hidden layers is effectively reduced. We therefore believe that our framework is helpful for architectural design, by helping researchers probe the ideal number of hidden layers to use as well as understand the complexity of a given task.

7 DISCUSSION

In this work we propose a novel architecture that performs feature leveling automatically to boost interpretability. We use a toy example to demonstrate that not all features are equal in complexity and that most DNNs take mixed levels of features as input, decreasing interpretability. We then characterize absolute feature complexity by the number of layers a feature requires to be extracted before the GLM can use it for decisions. To boost interpretability by isolating the k-th level features, we propose the feature leveling network, with a gating mechanism and an end-to-end training process that allow the k-th level features to be passed directly to the GLM layer. We perform various experiments to show that our feature leveling network successfully separates out the k-th level features without compromising performance.

A APPENDIX

A.1 ADDITIONAL EXPERIMENT

The CIFAR-10 dataset (Krizhevsky et al., 2014) consists of 32×32 RGB images of 10 different classes. We test our model's ability to extract abstract concepts. For comparison, we follow the experiments in the NIT paper and choose the classes cat and deer for binary classification. The resulting architecture shows that for FCNN networks, the two chosen classes are mainly differentiated through their second level features; none of the raw inputs are used for direct classification. This corresponds to the assumption that RGB images of animals consist of relatively high level features.

A.2 NUMBER OF PARAMETERS REQUIRED FOR THE REPORTED MODELS

Because we were unable to reproduce a result reported in the NIT paper on CIFAR-10, we used the original architecture from the NIT paper; consequently, we did not include its number of parameters in our table.

A.3 REVISIT GLM FOR INTERPRETATIONS OF DEEP NEURAL NETWORKS

Consider training a linear model with dataset {X, Y}, where X is the set of features and Y is the corresponding set of labels. The goal is to learn a function f(x) from (x_i, y_i) ∈ {X, Y} subject to a criterion function L_θ(x_i, y_i) with parameter set θ. In the classical setting of linear models, θ usually refers to a matrix w such that:

ŷ = f(x) = T(w^⊤ x + β)   (9)

Here, ŷ refers to the predicted label given a sample instance of a set of features x, and T refers to a function from the set containing the logistic, softmax, and identity functions.
A GLM is easy to interpret because the contribution of each individual dimension of x to the decision output y is given by its corresponding weight. Therefore, we hope to emulate the GLM's interpretability in a DNN setting by creating a method to efficiently back-trace the contribution of different features. We argue that our proposed architecture is similar to a GLM in that our final layer makes decisions based on the weights assigned to each level of input features. Our model is linear with respect to the various levels of features: given K levels of features, our model makes decisions with y = [w_1^⊤ l_1, w_2^⊤ l_2, ..., w_K^⊤ l_K], where each weight parameter w_k indicates the influence of that level. With this construction, we can easily interpret how each level of features contributes to decision making. This insight can help us understand whether a given task is more "low level" or "high level" and thus can also help us understand the complexity of the task with a precise characterization.

A.4 THE LAST LAYER OF COMMON NEURAL NETWORKS IS A GLM LAYER

The "classical" DNN architecture consists of a set of hidden layers with non-linear activations and a final layer that aggregates the result through a sigmoid, softmax, or linear function. The final layer is in fact a GLM layer, since it has the same form and optimization objective.

A.5 OUR NOVELTY COMPARED TO RESNET AND OTHER SIMILAR ARCHITECTURES

Compared to residual networks, our model can explain features at different levels and their contributions separately due to the linear nature of the GLM. ResNet and DenseNet cannot isolate each level, as their skip features are further entangled by pooling, non-linear activations, and the following hidden layers. Unlike ResNet, which has full connections to all features, we propose to learn which features to pass to the GLM from a probabilistic perspective. Specifically, we introduce the L0 regularization for the purpose of performing effective feature leveling; in contrast, ResNet and DenseNet do not perform such layer-wise regularization.

A.6 POWER TO EXTRACT FEATURE COMPLEXITY THROUGH PRUNING

To demonstrate that our network achieves effective pruning and can help practitioners determine the complexity of a given problem, we use Cal Housing as an example and train our models with 2-5 hidden layers. Each intermediate hidden layer has a 32-32 structure. To show that our model can find the optimal structure, we first run the baseline model (without gating) with 2-5 hidden layers separately. We observe that the MSE is 0.2364 for 3 layers, 0.2618 for 4, and 0.4807 for 5; thus the 3-layer model is sufficient to make accurate predictions. We then train our model with gate selection and observe that, when started with more than 4 hidden layers, our model is completely reduced to a 3-layer model after training, which is indeed the best structure for the Cal Housing task, as verified with the complete models. Thus, our model is able to discover the optimal number of hidden layers for making accurate predictions of housing prices. This further shows that our model can help architecture engineers decide on the optimal number of layers for a given task.

A.7 REPRODUCING EMPIRICAL RESULTS: GENERAL CONFIGURATION

All models are implemented in TensorFlow (Abadi et al., 2016), and hyperparameter configurations can be found in our public repository and supplemental code. A model name with a citation denotes that the result is obtained from the original paper.
The SENN architecture listed is the prototyping network; we use a similar architecture for the autoencoder parts. All SENN models are re-implemented with fully connected networks for comparison purposes.

A.8 DATASETS AND PREPROCESSING

MNIST is a dataset that contains 60,000 training and 10,000 testing images of handwritten digits from 0 to 9. Experimental results were tested against the allocated test set. CIFAR-10 is a dataset consisting of 10 classes of images, each with 10,000 training and 2,000 testing images; we used the allocated test set for reporting results. For MNIST and CIFAR-10, we rescale the color channels by a divisor of 255 to map pixel values into [0, 1]. For Cal Housing, we drop all samples with any empty value entry and normalize all numerical values by mean and standard deviation. The IXOR dataset is generated with the script attached in the supplemental material under src/independent_xor.

A.9 HYPERPARAMETERS

The only tunable hyperparameter in our model is λ, for which we usually consider values from 0.5 to 0.01. All λ values used for the displayed results are in the model scripts of the attached folder. Generally, lower λ values are better for training more complicated datasets such as CIFAR-10, to prevent too much gating at an early stage.

A.10 EXACT NUMBER OF TRAINING ITERATIONS

MNIST: 280,000
CIFAR-10: 680,000
California Housing: 988,000

A.11 RESULTS OF OUR MODEL ON CONVOLUTIONAL NEURAL NETWORKS
1. What is the reviewer trying to understand regarding the paper's motivation?
2. What does the reviewer find unclear about the definitions of \mathbb{L_k} and \mathbb{H_k}?
3. Why does the reviewer question the necessity of the function B(.) and g(.)?
4. What does the reviewer find unclear about the sentences after equation 7?
5. How does the reviewer think the toy example can be improved to demonstrate the usefulness of the proposed method?
6. What does the reviewer think is missing in the experiments presented in the paper?
7. Does the reviewer have any questions or concerns about the paper's references?
Review
Review

This paper proposes a categorization of inner-layer weights as being linearly or non-linearly correlated with the output. The motivation for why this is important is somewhat weak in the paper. But I could see cases where this is important, if there is supporting evidence that it helps with interpretability. However, I did not see that in this draft, unfortunately.

"GLMs ... interactions of non-linear activations are not involved" - the link function in all GLMs except linear regression is non-linear, so I am not sure what this statement means.

The definitions of \mathbb{L_k} and \mathbb{H_k} are not clear when they are introduced. The example given talks about l_1 and h_1, not about \mathbb{L_k} and \mathbb{H_k}.

I didn't get the point of the function B(.). Why not directly use z \in {0,1}? Similarly, I did not get the point of introducing g(.); why not just push the entire mapping into \phi? The notation is made unnecessarily complicated.

The sentences after eq 7 are unclear to me. What does \epsilon \sim p(\epsilon) even mean? Is \epsilon the parameter or the random variable here? What is \epsilon used for? What is m(.)?

The toy example is simple, but it does not address why such a classification into h_k and l_k is helpful and how it can be used. Can the authors motivate the use cases where such an interpretation (using the toy example) is useful?

The experiments are not convincing. The MNIST experiment regarding identification of contours is very vague; there are tons of other methods that give better interpretation. The identification of the "long" feature as linearly correlated with the output can be done in a much faster and easier way, by simply checking individual feature correlations. The strength of the method the authors are proposing would be to extract useful information for cases where the input features are /not/ linearly correlated, while some inner-layer features /are/ linearly correlated. This could help in understanding the landscape of the classification better, but I did not see that happening here. Happy to be proved wrong by the authors or other reviewers if I am missing something.

Similarly, there is a lot of work on sparsification/pruning of NNs, in light of which I am not sure what Section 6 adds.

Appendix A.4 is redundant; softmax/sigmoid being a GLM is well known. Also, in the main text (e.g., the last paragraph of page 1), the reference is to Appendix A.3, while it should be to A.4.
ICLR
Title Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation Abstract Self-explaining models are models that reveal decision making parameters in an interpretable manner so that the model reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining due to the non-linearity of the activation functions, complex architectures, obscure feature extraction and transformation process. In this work, we illustrate the fact that existing deep architectures are hard to interpret because each hidden layer carries a mix of low level features and high level features. As a solution, we propose a novel feature leveling architecture that isolates low level features from high level features on a per-layer basis to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models are able to achieve competitive results comparing to main-stream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproductions†. 1 INTRODUCTION Deep Neural Networks (DNNs) are viewed as back-box models because of their obscure decision making process. One reason that makes deep neural networks hard to interpret is that they are able to magically extract abstract concepts through multi-layer non-linear activations and end-toend training. From a human perspective, it is hard to understand how features are extracted from different hidden layers and what features are used for final decision making. In response to the challenge of interpretability, two paths are taken to unbox neural networks’ decision learning process. One method is to design verifying algorithms that can be applied to existing models to back-trace their decision learning process. Another method is to design models that ”explain” the decision making process automatically. The second direction is promising in that the interpretability is built-in architecturally. Thus, the verification feedback can be directly used to improve the model. One class of the self-explaining models borrows the interpretability of General Linear Models (GLMs) such as linear regression. GLMs are naturally interpretable in that complicated interactions of non-linear activations are not involved. The contribution of each feature to the final decision output can simply be analyzed by examining the corresponding weight parameters. Therefore, we take a step forward to investigate ways to make DNNs as similar to GLMs as possible for interpretability purpose while maintaining competitive performance. Fortunately, a GLM model naturally exists in the last layer of most discriminative architectures of DNNs (See appendix A.3 for the reason that the last layer is a GLM layer). However, the GLM could only account for the output generated by the last layer and this output is not easy to interpret because it potentially contains mixed levels of features. In the following section, we use empirical results to demonstrate this mixture effect. Based on this observation, one way to naturally improve interpretation is to prevent features extracted by different layers from mixing together. 
Thus, we † Public Repo URL annonymized for review purpose-See code folder for detailed implementation directly pass features extracted by each layer to the final GLM layer. This can further improve interpretability by leveraging the weights of the GLM layer to explain the decision making process. Motivated by this observation, we design a feature leveling network structure that can automatically separate low level features from high level features to avoid mixture effect. In other words, if the low level features extracted by the kth hidden layer can be readily used by the GLM layer, we should directly pass these features to the GLM rather than feeding them to the k + 1th hidden layer. We also propose a feature leveling scale to measure the complexity of different sets of features’ in an unambiguous manner rather than simply using vague terms such as ”low” and ”high” to describe these features. In the following sections, we will first lay out the proposed definition of feature leveling. We then will illustrate how different levels of features reside in the same feature space. Based on the above observations, we propose feature leveling network, an architectural modification on existing models that can isolate low level features from high level features within different layers of the neural network in an unsupervised manner. In the experiment section, we will use empirical results to show that this modification can also be applied to reduce the number of layers in an architecture and thus reduce the complexity of the network. In this paper, we focus primarily on fully connected neural networks(FCNN) with ReLU activation function in the hidden layers. Our main contributions are as follows: • We take a step forward to quantify feature complexity for DNNs. • We investigate the mixture effect between features of different complexities in the hidden layers of DNNs. • We propose a feature leveling architecture that is able to isolate low level features from high level features in each layer to improve interpretation. • We further show that the proposed architecture is able to prune redundant hidden layers to reduce DNNs’ complexity with little compromise on performance. The remaining content is organized as follows: In section 2, we first introduce our definitions of feature leveling and use a toy example to show the mixture effect of features in hidden layers. In section 3, we give a detailed account of our proposed feature leveling network that could effectively isolate different levels of features. In section 4, we provide a high level introduction to some related works that motivated our architectural design. In Section 5, we test and analyze our proposed architecture on various real world datasets and show that our architecture is able to achieve competitive performance while improving interpretability. In section 6, we show that our model is also able to automatically prune redundant hidden layers, thus reducing the complexity of DNNs. 2 FEATURE LEVELING FOR NEURAL NETWORKS The concepts of low level and high level features are often brought up within the machine learning literature. However, their definitions are vague and not precise enough for applications. Intuitively, low level features are usually ”simple” concepts or patterns whereas high level features are ”abstract” or ”implicit” features. Within the scope of this paper, we take a step forward to give a formal definition of feature leveling that quantizes feature complexity in an absolute scale. 
This concept of a features’ scale is better than simply having ”low” and ”high” as descriptions because it reveals an unambiguous ordering between different sets of features. We will use a toy example to demonstrate how features can have different levels and explain why separating different levels of features could improve interpretability. 2.1 A TOY EXAMPLE We create a toy dataset called Independent XOR(IXOR). IXOR consists of a set of uniformally distributed features X : {(x1, x2, x3)|x1 ∈ [−2, 2], x2 ∈ [−2, 2], x3 ∈ [0, 1]} and a set of labels Y : {0, 1}. We use top indices for attributes of this toy example. The labels are assigned as:{ y = 1 x1 × x2 > 0 ∧ x3 > 0.5 y = 0 otherwise In this dataset, (x1, x2, x3) clearly have different levels of feature. x3 can be directly used by the GLM layer as it has a linear decision boundary. (x1, x2) is more complex as they form an XOR pattern and cannot be linearly separated, thus requiring further decomposition to be made sufficient for the GLM layer. To make correct decisions, the DNN should use one layer to decompose the XOR into lower level features, and directly transport x3’s value to into the GLM layer. 2.2 CHARACTERIZE LOW AND HIGH LEVEL FEATURES WITH FEATURE LEVELING From IXOR we can see that not all features have the same level of ”complexity”. Some could be directly fed into the GLM layer, others may need to go through one or more hidden layers to be transformed to features that can directly contribute to decision making. Thus, instead of using ”low” and ”high” level to characterize features, we propose to frame the complexity of different features with the definition of feature leveling. For a dataset D consisting of N i.i.d samples with features and their corresponding labels {(a1,y1), ..., (aN ,yN )}. We assume that samples ai ∈ D contains features that requires at most K hidden layers to be transformed to perform optimal inference. For a DNN trained with K hidden layers and a GLM layer, we define the set of kth level feature as the set of features that requires k − 1 hidden layers to extract under the current network setup to be sufficiently utilized by the GLM layer. In the following paragraphs, we denote lk ∈ Lk as the kth level features extracted from one sample and Lk denotes the set of all kth level feature to be learned in the target distribution. The rest of high level features are denoted by Hk that should be passed to the kth layer to extract further level features. In this case, Lk and Hk should be disjoint, that is Lk ⋂ Hk = ∅. In the case of the toy example, x3 is l1, level one feature, as it is learned by the first hidden layer to directly transport its value to the GLM layer. (x1, x2) is h1. The XOR can be decomposed by one hidden layer with sufficient number of parameters to be directly used by the GLM layer to make accurate decisions. Assuming the first hidden layer f1 has sufficient parameters, it should take in h1 and output l2. 2.3 HOW THE PROPOSED MODEL SOLVES THE MIXTURE EFFECT AND BOOSTS INTERPRETATION However, common FCNN does not separate each level of feature explicitly. Figure 2 shows the heatmaps of the weight vectors for both FCNN baseline and proposed feature leveling network trained on the IXOR dataset. We observe from FCNN that x3’s value is able to be preserved by the last column of the weight vector from the first layer but is mixed with all other features in the second layer, before passing into the GLM layer. 
Our proposed model, on the other hand, is able to cleanly separate x3 and preserve its identity as an input to the GLM layer. In addition, our model is able to identify that the interaction between (x1, x2) can be captured by one single layer. Thus, the model eliminates the second layer and pass (x1, x2) features extracted by the first hidden layer directly to the GLM layer. Looking at the results obtained from the toy example, we can clearly see that the proposed model is able to solve the mixture effect of features and gives out correct levels for features with different complexities in the context of the original problem. Therefore, the model is more interpretable in that it creates a clear path of reasoning and the contirbution of each level of features can be understood from the weight parameters in the GLM. 3 OUR PROPOSED ARCHITECTURE Inspired by our definition of feature leveling and to resolve the mixture of features problem, we design an architecture that is able to recursively filter the kth level features from the kth layer inputs and allow them to be directly passed to the final GLM layer. We start with a definition of a FCNN and extend that to our model: we aim to learn a function F parametrized by a neural network with K hidden layers. The function F can be written as: F = d ( fK(fK−1(...f1(a; θ1)); θK) ) (1) fk is the kth hidden layer function with parameters θk. d(·) is the GLM model used for either classification, or regression. Thus, the goal is to learn the function F such that: R(θ) = 1 N ( N∑ i=1 L(F(ai; θ),yi) ) θ∗ = argmin θ (R(θ)) (2) In our formulation, each hidden layer can be viewed as separator for the kth level features and extractor for higher level features. Thus, the output of fk has two parts: lk is the set of kth level feature extracted from inputs and can be readily transported to the GLM layer for decision making. And hk is the abstract features that require further transformations by fk. In formal language, we can describe our network with the following equation (”\”denotes set subtraction): F = d ( l1, l2, ...lK , fK(fK−1(...f1(a\l1; θ1))− lK) ) (3) In order for fk to learn mutually exclusive separation, we propose a gating system for layer k, paramatrized by φk, that is responsible for determining whether a certain dimension of the input feature should be in lk or hk. For a layer with input dimension J , the gate {z1k, ...zJk } forms the corresponding gate where zjk ∈ {0, 1}. φk is the parameter that learns the probability for the gate zjk to have value 1 for the input feature at j th dimension to be allocated to hk and lk otherwise. In order to maintain mutual exclusiveness between lk and hk, we aim to learn φk such that the it allows a feature to pass to lk if and only if the gate is exactly zero. Otherwise, the gate is 1 and the feature goes to hk. Thus, we can rewrite the neural network F with the gating mechanism for the ith sample ai from the dataset: F = d ( B(z1) ai, B(z2) f1(z1 ai), ..., fK(zK fK−1(zK−1 fK−2(...f1(z1 ai))))) ) (4) Here, acts as element-wise multiplication. The functionB acts as a binary activation function that returns 1 if and only if the value of z is 1 and 0 otherwise. The function B allows level k feature lk = B(zk) fk−1 to be filter out if and only if it does not flow into the next layer at all. 
Then the optimization objective becomes:

$$R(\theta, \phi) = \frac{1}{N}\sum_{i=1}^{N} L(\mathcal{F}(a_i, z; \theta, \phi, B), y_i) + \lambda \sum_{k=1}^{K} \|z_k\|_0, \qquad z_k = g(\phi_k) \qquad (5)$$

The additional L_0 regularization term encourages fewer h_k to pass into the next layer and more l_k to flow directly to the GLM layer. g(φ) acts as a transformation function that maps the parameter φ to the corresponding gate value. To achieve this discrete gate construction, we propose to learn the gating parameters in the context of L_0 regularization. To be able to update parameter values through backpropagation, we use the approximation technique for differentiable L_0 regularization developed by (Louizos et al., 2017). We direct interested readers to the original work for the full derivation of the L_0 approximation and summarize the key concept in terms of our gating mechanism below.

Although the gate value z ∈ {0, 1} is discrete, and the probability for a certain gate to be 0 or 1 is typically treated as a Bernoulli distribution, the probability space can be relaxed as follows. Consider s to be a continuous random variable with distribution q(s | φ) parameterized by φ. The gate can be obtained through the transformation function m(·) as:

$$s \sim q(s \mid \phi), \qquad z = m(s) = \min(1, \max(0, s)) \qquad (6)$$

The underlying probability space is then continuous because s is continuous, while the gate can still attain the exact value 0. The probability for the gate to be non-zero is given by the cumulative distribution function Q:

$$q(z \neq 0 \mid \phi) = 1 - Q(s \le 0 \mid \phi) \qquad (7)$$

The authors further use the reparameterization trick, drawing a noise variable ε ∼ p(ε) that is free of the parameters, to obtain s = n(ε, φ) through a differentiable transformation function n(·); thus g(·) is equivalent to m ◦ n, where ◦ denotes function composition. The objective function of our feature leveling network then becomes:

$$R(\theta, \phi) = \frac{1}{N}\sum_{i=1}^{N} L(\mathcal{F}(a_i, z; \theta, \phi, B, g), y_i) + \frac{\lambda}{K}\sum_{k=1}^{K}\big(1 - Q(s_k \le 0 \mid \phi)\big), \qquad z_k = g(\phi_k, \epsilon) = (m \circ n)(\phi_k, \epsilon), \quad \epsilon \sim p(\epsilon) \qquad (8)$$
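The relaxed gates of equations (6)-(8) can be realized with the hard concrete distribution of (Louizos et al., 2017). The sketch below follows that paper's published recipe; the stretch constants γ, ζ and temperature β are taken from it, and the class itself is an illustrative assumption rather than the authors' exact code.

import tensorflow as tf

GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0  # stretch limits and temperature

class HardConcreteGate(tf.keras.layers.Layer):
    """Relaxed L0 gate z = m(s) = min(1, max(0, s)), one entry per dimension."""

    def __init__(self, dim):
        super().__init__()
        # phi (here a log-alpha vector) parametrizes q(s | phi).
        self.log_alpha = tf.Variable(tf.zeros(dim), name="log_alpha")

    def call(self, training=True):
        if training:
            # Reparameterization: the noise u does not depend on phi.
            u = tf.random.uniform(tf.shape(self.log_alpha), 1e-6, 1.0 - 1e-6)
            s = tf.sigmoid((tf.math.log(u) - tf.math.log(1.0 - u)
                            + self.log_alpha) / BETA)
        else:
            s = tf.sigmoid(self.log_alpha)
        s = s * (ZETA - GAMMA) + GAMMA          # stretch beyond [0, 1]
        return tf.clip_by_value(s, 0.0, 1.0)    # exact 0s (and 1s) possible

    def l0_penalty(self):
        # Closed form of q(z != 0 | phi) = 1 - Q(s <= 0 | phi).
        return tf.reduce_sum(
            tf.sigmoid(self.log_alpha - BETA * tf.math.log(-GAMMA / ZETA)))

At training time, λ times the summed l0_penalty of all K gates is added to the task loss, matching the regularizer of equations (5) and (8); at test time the deterministic gates make the l_k/h_k split exact.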
4 RELATED WORK

Interpreting existing models: The ability to explain the reasoning process within a neural network is essential for validating the robustness of the model and for ensuring that the network is secure against adversarial attacks (Moosavi-Dezfooli et al., 2016; Brown et al., 2017; Gehr et al., 2018). In recent years, many works have sought to explain the reasoning process of an existing neural network, either by extracting the decision boundary (Bastani et al., 2018; Verma et al., 2018; Wang et al., 2018; Zakrzewski, 2001) or through a variety of visualization methods (Mahendran & Vedaldi, 2015; Zeiler & Fergus, 2014; Li et al., 2015). Most of those methods are designed for validation purposes; however, their results cannot easily be used to improve the original models.

Self-explaining models were proposed by (Alvarez Melis & Jaakkola, 2018); the term refers to models whose reasoning process is easy to interpret. This class of models does not require a separate validation process. Many works have focused on designing self-explaining architectures that can be trained end-to-end (Zhang et al., 2018; Worrall et al., 2017; Li et al., 2018; Kim & Mnih, 2018; Higgins et al., 2017). However, most self-explaining models sacrifice a certain amount of performance for interpretability. Two noticeable models among these are able to achieve competitive performance on standard tasks while maintaining interpretability. The NIT framework (Tsang et al., 2018) interprets the neural decision process by detecting feature interactions in a Generalized Additive Model style. The framework achieves competitive performance but can only disentangle up to K groups of interactions, and the value of K needs to be searched manually during the training process. The SENN framework proposed by (Alvarez Melis & Jaakkola, 2018) focuses on abstract concept prototyping. It aggregates abstract concepts with a linear and interpretable model. Compared to our model, SENN requires an additional step of training an autoencoding network to prototype concepts and is not able to disentangle simple concepts from more abstract ones on a per-layer basis.

Sparse neural network training refers to various methods developed to reduce the number of parameters of a neural model. Many investigations have used L_2 or L_1 regularization (Han et al., 2015; Ng, 2004; Wen et al., 2016; Girosi et al., 1995) to prune neural networks while maintaining differentiability for backpropagation. Another choice for regularization and creating sparsity is L_0 regularization. However, due to its discrete nature, it does not support parameter learning through backpropagation. A continuous approximation of L_0 was proposed to resolve this problem and has shown effectiveness in pruning both FCNNs and Convolutional Neural Networks (CNNs) in an end-to-end manner (Louizos et al., 2017). This regularization technique has further been applied not only to neural architecture pruning but also to feature selection (Yamada et al., 2018). Our work applies the feature selection ability of L_0 regularization in a novel context: selecting the maximum number of features as direct inputs to the GLM layer.

Compared to residual structures, our model is able to explain features at different levels and their contributions separately, due to the linear nature of the GLM. ResNet (He et al., 2016) and Highway Networks (Srivastava et al., 2015) cannot isolate each level, as their skip features are further entangled by pooling, non-linear activations, and the following blocks. Different from ResNet, with its full connection to all features, we propose to learn which features to pass to the GLM from a probabilistic perspective.

5 EXPERIMENTS

We validate our proposed architecture on three commonly used datasets: MNIST, California Housing, and CIFAR-10 (the CIFAR-10 results are reported in appendix A.1). For each task, we use the same initial architecture to compare our proposed model and the FCNN baseline. However, due to the gating effect of our model, some of the neurons in the middle layers are effectively pruned. The architecture we report in this section for our proposed model is the pruned version after training with the gates. The second to last layer of our proposed models is labeled with a star to denote concatenation of all previous l_k with the output of the last hidden layer. For example, in the California Housing architecture, both the proposed model and the FCNN baseline start with 13-64-32-1 as the initial architecture, but due to the gating effect on deeper layers, the layer with 32* neurons in effect has 32 + (13 − 10) + (64 − 28) = 71 neurons, accounting for previously gated features (13 − 10 = 3 for l_1, 64 − 28 = 36 for l_2).

The two objectives of our experiments are: 1) To test whether our model is able to achieve competitive results, under the same initial architecture, compared to the FCNN baseline and other recently proposed self-explaining models. This test is conducted by comparing model metrics such as root mean square error (RMSE) for regression tasks and classification accuracy for multi-class datasets.
2) Because our model can separately account for each layer's contribution, we can take the gradient with respect to each layer and obtain the level of features our model recognizes for each part of the input. Experiment implementation details are deferred to appendices A.7-A.10.

5.1 DATASETS & PERFORMANCES

The MNIST handwriting dataset (LeCun et al., 2010) consists of pictures of handwritten digits from 0 to 9 in 28×28 grayscale format. We use a 784-300-100-10 architecture for both the FCNN baseline and the proposed model. This is the same architecture used in the original implementation of (Louizos et al., 2017). Our model achieves results similar, with fewer layers, to those of state-of-the-art architectures using ReLU-activated FCNNs. The feature gates completely eliminated message passing to the 100-neuron layer, which implies that our model only needs level 1 and level 2 layers for feature extraction to learn the MNIST dataset effectively.

The California Housing dataset (Pace & Barry, 1997) is a regression task that uses various metrics, such as longitude and owners' age, to predict the price of a house. It contains 8 features, one of which is nominal. We converted the nominal feature into a one-hot encoding, giving 13 features in total. Since the California Housing dataset does not contain a standard test set, we split the dataset randomly with a 4:1 train-test ratio. Our proposed model beats the FCNN baseline with the same initial architecture. Only 3 of the 13 original features are directly passed to the GLM layer, implying that California Housing's input features are mostly second and third level.

5.2 DISENTANGLING THE CONTRIBUTION OF EACH LEVEL OF FEATURES

MNIST: With digit 4 as an example, compared to FCNN and SENN-FCNN, our model's l_1 identifies the contour of digit 4 and the corners of the digit (larger in gradient) as first level features; l_2 shows concentrated negative gradients in the middle of the digit, which correspond to the "hole" in digit 4.

Cal Housing: We compare with NIT (Tsang et al., 2018). The left figure shows our model, with different colors indicating feature gradients from different layers; NIT's colors indicate different groups. Compared to NIT, our model's l_1 identifies "longitude" (long) as a feature that relates linearly to housing price, since in California longitude is a major determining factor for housing price compared to latitude. According to the gradients, l_2 and l_3 emphasize different parts of the input, showing that our model can divide the features into different sets. For NIT, however, the gradients of most groups are similar, indicating that the features are not sufficiently disentangled among groups. In contrast, our model identifies the most important features with stronger weights and zero or minimal weight for irrelevant ones.
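As a concrete sketch of this per-level attribution: because each l_k enters the GLM layer directly, its contribution can be probed with an ordinary gradient, masked by the corresponding gate. The snippet below assumes a model organized like the feature_leveling_forward sketch in Section 3; the function name and the way the gate is passed in are illustrative.

import tensorflow as tf

def level_one_attribution(model, gate_1, x, class_index):
    """Gradient of one output logit w.r.t. the raw input, restricted to
    the dimensions that gate 1 routes directly to the GLM layer (l_1)."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        logits = model(x)
        target = logits[:, class_index]
    grads = tape.gradient(target, x)
    # z = 0 marks l_1 dimensions, so (1 - z) keeps only level-one entries;
    # deeper levels are probed analogously through their layers' inputs.
    return grads * (1.0 - gate_1)

The heatmaps discussed above are obtained by reshaping such masked gradients back to the input layout (e.g., 28×28 for MNIST).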
5.3 SCALABILITY

During the training stage, our model requires more computation resources, as features are passed both to the next hidden layer and to the final GLM layer. However, at inference time, once the gates are learned, each feature input to a neural layer is computed only once, due to the mutual exclusiveness of our gating setup. The weight parameters related to the "zeroed out" features can also be eliminated. In most cases, our model results in a lower parameter count. In appendix A.2, we show the number of parameters our framework needs for the reported inference models.

5.4 EXTENSION TO CONVOLUTIONAL NEURAL NETWORKS

Our framework can also be applied to convolutional architectures. To adapt it, we simply apply the gate to the input features. Appendix A.11 and Figure 5 show that our model can also clearly isolate features from different levels. To reduce the gated feature size, we apply convolutions with no activation, which reduces dimension while maintaining linearity.

6 STRENGTH IN PRUNING REDUNDANT HIDDEN LAYERS

Due to our proposed model's ability to encourage linearity, it is also able to reduce its network complexity automatically by decreasing the number of hidden layers. In MNIST classification, for example, when the dataset's feature level is lower than the number of hidden layers, our proposed model can learn to prune the excess hidden layers automatically, as the network learns not to pass information to the deeper hidden layers. As a result, the number of hidden layers is effectively reduced. We therefore believe that our framework is helpful for architectural design, both by helping researchers probe the ideal number of hidden layers to use and by aiding the understanding of the complexity of a given task.

7 DISCUSSION

In this work we propose a novel architecture that performs feature leveling automatically to boost interpretability. We use a toy example to demonstrate that not all features are equal in complexity and that most DNNs take mixed levels of features as input, decreasing interpretability. We then characterize absolute feature complexity by the number of layers a feature requires to be extracted for the GLM decision. To boost interpretability by isolating the kth level features, we propose the feature leveling network, with a gating mechanism and an end-to-end training process that allow the kth level features to be passed directly to the GLM layer. We perform various experiments to show that our feature leveling network is able to successfully separate out the kth level features without compromising performance.

A APPENDIX

A.1 ADDITIONAL EXPERIMENT

The CIFAR-10 dataset (Krizhevsky et al., 2014) consists of 32×32 RGB images of 10 different classes. We test our model's ability to extract abstract concepts. For comparison, we follow the experiments in the NIT paper and choose the classes cat and deer to perform binary classification. The resulting architecture shows that, for FCNN networks, the two chosen classes are mainly differentiated through their second level features. None of the raw inputs are used for direct classification. This corresponds to the assumption that the features needed to distinguish RGB images of animals are of relatively high level.

A.2 NUMBER OF PARAMETERS REQUIRED FOR THE REPORTED MODELS

Because we were unable to reproduce a result reported by the NIT paper on CIFAR-10, we used the original architecture from the NIT paper. As a result, we did not include its number of parameters in our table.

A.3 REVISIT GLM FOR INTERPRETATIONS OF DEEP NEURAL NETWORKS

Consider training a linear model on a dataset {X, Y}, where X is the set of features and Y is the corresponding set of labels. The goal is to learn a function f(x) from (x_i, y_i) ∈ {X, Y} subject to a criterion function L_θ(x_i, y_i) with parameter set θ. In a classical setting of linear models, θ usually refers to a matrix w such that:

$$\hat{y} = f(x) = T(w^\top x + \beta) \qquad (9)$$

Here, ŷ refers to the predicted label given a sample instance with feature vector x, and T refers to a set of link functions such as the logistic, softmax, and identity functions.
GLM is easy to interpret because the contribution of each individual dimension of x to the decision output y is given by its corresponding weight. We therefore hope to emulate the GLM's interpretability in a DNN setting by creating a method to efficiently back-trace the contributions of different features.

We argue that our proposed architecture is similar to a GLM in that our final layer makes decisions based on the weights assigned to each level of input features. Our model is linear with respect to the various levels of features. Given K levels of features, our model makes decisions with y = [w_1^⊤ l_1, w_2^⊤ l_2, ..., w_K^⊤ l_K], where each weight parameter w_k indicates the influence of that level. With this construction, we can easily interpret how each level of features contributes to decision making. This insight can help us understand whether a given task is more "low level" or "high level", and thus can also help us understand the complexity of the task with a precise characterization.

A.4 THE LAST LAYER OF COMMON NEURAL NETWORKS IS A GLM LAYER

The "classical" DNN architecture consists of a set of hidden layers with non-linear activations and a final layer that aggregates the result through a sigmoid, a softmax, or a linear function. The final layer is in fact a GLM layer, since it has the same form and optimization objective.

A.5 OUR NOVELTY COMPARED TO RESNET AND OTHER SIMILAR ARCHITECTURES

Compared to Residual Networks, our model is able to explain features at different levels and their contributions separately, due to the linear nature of the GLM. ResNet and DenseNet cannot isolate each level, as their skip features are further entangled by pooling, non-linear activations, and the following hidden layers. Different from ResNet, with its full connection to all features, we propose to learn which features to pass to the GLM from a probabilistic perspective. Specifically, we introduce L_0 regularization for the purpose of performing effective feature leveling. In contrast, ResNet and DenseNet do not perform such layer-wise regularization.

A.6 POWER TO EXTRACT FEATURE COMPLEXITY THROUGH PRUNING

To demonstrate that our network achieves effective pruning and can help practitioners determine the complexity of a given problem, we use Cal Housing as an example and train our models with 2-5 hidden layers. Each intermediate hidden layer has 32 neurons. To show that our model can find the optimal structure, we first run the baseline model (without gating) with 2-5 hidden layers separately. We observe that the MSE is 0.2364 for 3 layers, 0.2618 for 4, and 0.4807 for 5. Thus, the 3-layer model is sufficient for accurate prediction. We then train our model with gate selection and observe that, when started with 4 or 5 hidden layers, our model is completely reduced to a 3-layer model after training, which is indeed the best structure for the Cal Housing task, as verified with the complete models. Thus, we argue that our model is able to discover the optimal number of hidden layers for accurate prediction of housing prices. This further shows that our model can help architecture engineers decide on the optimal number of layers for a given task.

A.7 REPRODUCING EMPIRICAL RESULTS: GENERAL CONFIGURATION

All models are implemented in TensorFlow (Abadi et al., 2016), and hyperparameter configurations can be found in our public repository and supplemental code. A model name with a citation denotes that the result is obtained from the original paper.
The SENN architecture listed is the prototyping network; we use a similar architecture for the autoencoder parts. All SENN models are re-implemented with fully connected networks for comparison purposes.

A.8 DATASET AND PREPROCESSING

MNIST is a dataset that contains 60,000 training and 10,000 testing images of handwritten digits from 0 to 9. Experimental results were evaluated on the allocated test set. CIFAR-10 is a dataset consisting of 10 classes of images, with 5,000 training and 1,000 testing images per class; the binary cat-vs-deer task of appendix A.1 therefore has 10,000 training and 2,000 testing images. We used the allocated test set for reporting results.

For MNIST and CIFAR-10, we rescaled the color channels by dividing by 255 so that pixel values lie in [0, 1]. For Cal Housing, we dropped all samples with any empty value entry and normalized all numerical values by mean and standard deviation. The IXOR dataset is generated with the script attached in the supplemental material under src/independent_xor.

A.9 HYPERPARAMETER

The only tunable hyperparameter in our model is λ, for which we usually consider values from 0.01 to 0.5. The λ values used for the displayed results are in the model scripts of the attached folder. Generally, a lower λ is better for training on more complicated datasets such as CIFAR-10, to prevent too much gating at an early stage.

A.10 EXACT NUMBER OF TRAINING ITERATIONS

MNIST: 280,000; CIFAR-10: 680,000; California Housing: 988,000.

A.11 RESULT OF OUR MODEL ON CONVOLUTIONAL NEURAL NETWORKS
1. What is the main contribution of the paper regarding feature separation in neural networks?
2. How does the proposed architecture differ from previous works using gating structures?
3. What are some minor issues with the paper's quality, such as typos and inconsistent capitalization?
4. What are some major concerns regarding the interpretability of features and the gates themselves?
5. Would it be simpler to explain the network by looking at which features are passed to the GLM layer, rather than finding gradients?
6. How do the claims about the number of layers required for sufficient classification stand without thorough ablation studies?
7. What other attribution methods could supplement their claims about interpretability?
8. Why were only a few datasets used in the experiments, and how might the architecture perform on larger vision datasets such as ImageNet?
Review
Summary: This paper proposes a novel architecture that is able to separate different types of features learned at each layer of a neural network through a gating structure -- features that are sufficiently processed by the network are immediately sent to the final output layer. In addition, they provide reasonable definitions of levels of features, in contrast to the standard "low" to "high" descriptions. Lastly, in order to make the model more interpretable, they utilize an L0 loss on the gates of each layer to prioritize lower level features being used in the final layer.

Significance: Although gating is not novel, its use to send kth-level features to the final GLM layer is. Other than that, not much is contributed, as their differentiability trick, as mentioned, has already been done. The motivation to separate different types of features is interesting and definitely an issue that should be studied more.

Quality: The paper is easy to follow and nicely written, but with a few minor typo issues:
1. Page 1, refers to appendix A.3 but should be A.4
2. Page 2, "section" is inconsistently capitalized
3. Page 6, mentions three commonly used datasets but only mentions MNIST and California Housing.
4. Page 8, mentions Appendix A.11 for CNN but this section is empty.

In regards to content quality, a few things stand out that could be improved:
1. A major issue is that the interpretability of features with k > 1 is still not explained -- all we know is that they don't need to be sent further through the network (i.e. this solves the separation issue but leaves gaps in interpretability).
2. Since the gates themselves can be studied, rather than finding gradients, wouldn't a simpler way to explain the network be to look at which features are passed to the GLM layer? This would especially be helpful in the first layer when looking at the original input features.
3. Currently it is not clear if the architecture learns when features (l_k) are directly "useful" for classification, or if they are just not compatible with the features passed on to the next layer (h_k).
4. In terms of interpretability, only a few other methods are tested, and gradients are the only way they compare. An exploration of other attribution methods could have further supplemented their claims.
5. Claims are made about how many layers a certain dataset needs for sufficient classification through heuristic experiments; however, they are not thorough enough in terms of ablation to fully make this claim. Widths of layers are chosen but not analyzed; how is gating affected by the width of the network? For example, in MNIST, would only 3 layers be needed if the width is increased or decreased? This isn't immediately clear.
6. Extensiveness of experiments -- I do like the toy dataset as an example, but to show the effectiveness of this framework, a larger breadth of datasets could have been used. As an example, the SENN paper utilizes breast cancer and COMPAS, but these were not tested on this architecture. In addition, results from convolutional layers would be much preferred, since the best performing architectures on large vision datasets such as ImageNet primarily use convolutions.
ICLR
Title
Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation

Abstract
Self-explaining models are models that reveal their decision making parameters in an interpretable manner, so that the model reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining, due to the non-linearity of the activation functions, complex architectures, and the obscure feature extraction and transformation process. In this work, we illustrate the fact that existing deep architectures are hard to interpret because each hidden layer carries a mix of low level and high level features. As a solution, we propose a novel feature leveling architecture that isolates low level features from high level features on a per-layer basis to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models are able to achieve competitive results compared to mainstream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproduction†.

1 INTRODUCTION

Deep Neural Networks (DNNs) are viewed as black-box models because of their obscure decision making process. One reason that deep neural networks are hard to interpret is that they are able to magically extract abstract concepts through multi-layer non-linear activations and end-to-end training. From a human perspective, it is hard to understand how features are extracted by different hidden layers and what features are used for final decision making. In response to the challenge of interpretability, two paths have been taken to unbox neural networks' decision learning process. One method is to design verification algorithms that can be applied to existing models to back-trace their decision learning process. The other is to design models that "explain" the decision making process automatically. The second direction is promising in that the interpretability is built in architecturally. Thus, the verification feedback can be directly used to improve the model.

One class of self-explaining models borrows the interpretability of General Linear Models (GLMs), such as linear regression. GLMs are naturally interpretable in that complicated interactions of non-linear activations are not involved. The contribution of each feature to the final decision output can simply be analyzed by examining the corresponding weight parameters. Therefore, we take a step forward to investigate ways to make DNNs as similar to GLMs as possible for interpretability purposes while maintaining competitive performance. Fortunately, a GLM naturally exists in the last layer of most discriminative DNN architectures (see appendix A.4 for the reason that the last layer is a GLM layer). However, the GLM can only account for the output generated by the last layer, and this output is not easy to interpret because it potentially contains mixed levels of features. In the following section, we use empirical results to demonstrate this mixture effect. Based on this observation, one way to naturally improve interpretation is to prevent features extracted by different layers from mixing together.
Thus, we directly pass the features extracted by each layer to the final GLM layer. This can further improve interpretability by leveraging the weights of the GLM layer to explain the decision making process.

† Public Repo URL anonymized for review purposes; see the code folder for the detailed implementation.

Motivated by this observation, we design a feature leveling network structure that can automatically separate low level features from high level features to avoid the mixture effect. In other words, if the low level features extracted by the kth hidden layer can be readily used by the GLM layer, we should directly pass these features to the GLM rather than feeding them to the (k+1)th hidden layer. We also propose a feature leveling scale to measure the complexity of different sets of features in an unambiguous manner, rather than simply using vague terms such as "low" and "high" to describe these features.

In the following sections, we will first lay out the proposed definition of feature leveling. We will then illustrate how different levels of features reside in the same feature space. Based on the above observations, we propose the feature leveling network, an architectural modification of existing models that can isolate low level features from high level features within different layers of the neural network in an unsupervised manner. In the experiment section, we will use empirical results to show that this modification can also be applied to reduce the number of layers in an architecture and thus reduce the complexity of the network. In this paper, we focus primarily on fully connected neural networks (FCNNs) with the ReLU activation function in the hidden layers.

Our main contributions are as follows:
• We take a step forward to quantify feature complexity for DNNs.
• We investigate the mixture effect between features of different complexities in the hidden layers of DNNs.
• We propose a feature leveling architecture that is able to isolate low level features from high level features in each layer to improve interpretation.
• We further show that the proposed architecture is able to prune redundant hidden layers to reduce DNNs' complexity with little compromise on performance.

The remaining content is organized as follows: In Section 2, we first introduce our definition of feature leveling and use a toy example to show the mixture effect of features in hidden layers. In Section 3, we give a detailed account of our proposed feature leveling network, which can effectively isolate different levels of features. In Section 4, we provide a high level introduction to some related works that motivated our architectural design. In Section 5, we test and analyze our proposed architecture on various real world datasets and show that it achieves competitive performance while improving interpretability. In Section 6, we show that our model is also able to automatically prune redundant hidden layers, thus reducing the complexity of DNNs.

2 FEATURE LEVELING FOR NEURAL NETWORKS

The concepts of low level and high level features are often brought up in the machine learning literature. However, their definitions are vague and not precise enough for applications. Intuitively, low level features are usually "simple" concepts or patterns, whereas high level features are "abstract" or "implicit" features. Within the scope of this paper, we take a step forward to give a formal definition of feature leveling that quantifies feature complexity on an absolute scale.
1. What is the main contribution of the paper regarding feature leveling in deep fully connected networks?
2. What are the strengths and weaknesses of the proposed method compared to existing algorithms for sparse neural network training?
3. Do you have any concerns or questions regarding the novelty and significance of the paper's contributions?
4. Can you clarify some parts of the paper, such as the disjoint requirement between l_k and h_k, the inverse binary activation function in (4), and the meaning of g(.)?
5. How does the reviewer assess the clarity, quality, novelty, and significance of the paper's content?
Review
This paper proposes a feature leveling technique to improve the self-explainability of deep fully connected neural networks. The authors propose to learn a gate for each feature dimension that decides whether to send the feature directly to the final linear layer. The gates are trained with the L0 regularization technique proposed in (Louizos et al., 2017) to encourage more low-level features to be passed to the final layer. Experimental results on MNIST, California Housing, and CIFAR-10 show that the proposed method can achieve performance comparable with existing algorithms for sparse neural network training.

Quality: Overall, the paper is well written, with some minor formatting errors. The toy example demonstrates the idea of this paper clearly. However, the novelty of this paper, when compared to NIT, is that the L0 regularization is used to pass features to the last layer. Considering the self-explaining aspect, this work can only explain that some of the input features are suited for the final layer, while there is no explanation of the other features, since they are used to construct higher level features.

Clarity: Some parts of this paper are not clear:
1. Why do l_k and h_k have to be disjoint? A feature suited for final classification does not mean that it can't also be used to construct higher-level features.
2. In (4), shouldn't B() be an inverse binary activation function: (1-z)?
3. Is g(.) the Bernoulli distribution?
4. In section 5.2, why compare the gradients of a specific input example when one can directly look at z_k, the gate function?
5. I assume the fully connected layers have bias terms. If so, (4) suggests that the gated locations will also be added with a learned bias, which is different from what the paper proposes.

Novelty: The novelty of this paper lies in the sparse training objective becoming one of passing as many lower-level features to the final layer as possible, instead of zeroing out the intermediate weights. However, the key technique, L0 regularization, has been proposed and used before, as stated in the related work. While the authors state that the application of L0 in a novel context to select features is different from prior work, the novelty is rather incremental.

Significance: This work demonstrates that the L0 regularization technique for sparse neural network training can also be applied to learn skip-layer connections. However, from the novelty, performance, and self-explaining perspectives, this work does not introduce much to the field.

Pros:
1. The paper is well written.
2. The toy example showcases the issue that this work tries to tackle.
3. The experimental results show performance comparable to existing works.

Cons:
1. The novelty is not sufficient considering the prior works on sparse neural network training.
2. There are some clarification issues, as mentioned before.
3. The performance is only comparable to existing works.
4. The self-explaining contribution is not clear, since only a few input features can be explained if they are passed to the final layers.
5. There is no experiment on how \lambda affects the resulting network architecture.

Minor corrections:
1. First paragraph of Sec. 5: "three datasets" should be "two datasets" (or mention that the third is in the appendix).
2. Sec. 5.2, comparison to NIT: the citations are in the wrong format. Also, the reference for NIT is corrupted.
ICLR
Title Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation

Abstract Self-explaining models reveal their decision-making parameters in an interpretable manner, so that the model's reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining due to the non-linearity of the activation functions, complex architectures, and obscure feature extraction and transformation processes. In this work, we illustrate that existing deep architectures are hard to interpret because each hidden layer carries a mix of low-level and high-level features. As a solution, we propose a novel feature leveling architecture that isolates low-level features from high-level features on a per-layer basis, to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models achieve competitive results compared to mainstream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproduction†.

1 INTRODUCTION

Deep Neural Networks (DNNs) are viewed as black-box models because of their obscure decision-making process. One reason deep neural networks are hard to interpret is that they are able to extract abstract concepts almost magically through multi-layer non-linear activations and end-to-end training. From a human perspective, it is hard to understand how features are extracted in different hidden layers and which features are used for final decision making. In response to the challenge of interpretability, two paths have been taken to unbox neural networks' decision-making process. One is to design verification algorithms that can be applied to existing models to back-trace their decisions. The other is to design models that "explain" the decision-making process automatically. The second direction is promising in that interpretability is built in architecturally, so verification feedback can be directly used to improve the model. One class of self-explaining models borrows the interpretability of General Linear Models (GLMs), such as linear regression. GLMs are naturally interpretable in that complicated interactions of non-linear activations are not involved: the contribution of each feature to the final output can simply be analyzed by examining the corresponding weight parameters. Therefore, we take a step forward and investigate ways to make DNNs as similar to GLMs as possible for interpretability purposes while maintaining competitive performance. Fortunately, a GLM naturally exists in the last layer of most discriminative DNN architectures (see Appendix A.3 for why the last layer is a GLM layer). However, the GLM can only account for the output generated by the last layer, and this output is not easy to interpret because it potentially contains mixed levels of features. In the following section, we use empirical results to demonstrate this mixture effect. Based on this observation, one way to naturally improve interpretation is to prevent features extracted by different layers from mixing together.
Thus, we directly pass features extracted by each layer to the final GLM layer. This can further improve interpretability by leveraging the weights of the GLM layer to explain the decision-making process. († Public repo URL anonymized for review purposes; see the code folder for the detailed implementation.)

Motivated by this observation, we design a feature leveling network structure that can automatically separate low-level from high-level features to avoid the mixture effect. In other words, if the low-level features extracted by the k-th hidden layer can be readily used by the GLM layer, we should pass these features directly to the GLM rather than feeding them to the (k+1)-th hidden layer. We also propose a feature leveling scale that measures the complexity of different sets of features in an unambiguous manner, rather than describing them with vague terms such as "low" and "high".

In the following sections, we first lay out the proposed definition of feature leveling. We then illustrate how different levels of features reside in the same feature space. Based on these observations, we propose the feature leveling network, an architectural modification of existing models that can isolate low-level features from high-level features within different layers of the neural network in an unsupervised manner. In the experiment section, we use empirical results to show that this modification can also be applied to reduce the number of layers in an architecture and thus the complexity of the network. In this paper, we focus primarily on fully connected neural networks (FCNNs) with ReLU activation functions in the hidden layers.

Our main contributions are as follows:
• We take a step forward to quantify feature complexity for DNNs.
• We investigate the mixture effect between features of different complexities in the hidden layers of DNNs.
• We propose a feature leveling architecture that is able to isolate low-level features from high-level features in each layer to improve interpretation.
• We further show that the proposed architecture is able to prune redundant hidden layers to reduce DNNs' complexity with little compromise on performance.

The remaining content is organized as follows. In Section 2, we introduce our definition of feature leveling and use a toy example to show the mixture effect of features in hidden layers. In Section 3, we give a detailed account of our proposed feature leveling network, which can effectively isolate different levels of features. In Section 4, we provide a high-level introduction to related works that motivated our architectural design. In Section 5, we test and analyze our proposed architecture on various real-world datasets and show that it achieves competitive performance while improving interpretability. In Section 6, we show that our model is also able to automatically prune redundant hidden layers, thus reducing the complexity of DNNs.

2 FEATURE LEVELING FOR NEURAL NETWORKS

The concepts of low-level and high-level features are often brought up in the machine learning literature. However, their definitions are vague and not precise enough for applications. Intuitively, low-level features are usually "simple" concepts or patterns, whereas high-level features are "abstract" or "implicit". Within the scope of this paper, we take a step forward to give a formal definition of feature leveling that quantifies feature complexity on an absolute scale.
This concept of a feature scale is better than simply using "low" and "high" as descriptions, because it reveals an unambiguous ordering between different sets of features. We will use a toy example to demonstrate how features can have different levels and explain why separating different levels of features can improve interpretability.

2.1 A TOY EXAMPLE

We create a toy dataset called Independent XOR (IXOR). IXOR consists of a set of uniformly distributed features $\mathcal{X} = \{(x^1, x^2, x^3) \mid x^1 \in [-2, 2],\ x^2 \in [-2, 2],\ x^3 \in [0, 1]\}$ and a set of labels $\mathcal{Y} = \{0, 1\}$. We use superscript indices for the attributes of this toy example. The labels are assigned as
$$y = \begin{cases} 1 & x^1 \cdot x^2 > 0 \ \wedge\ x^3 > 0.5 \\ 0 & \text{otherwise.} \end{cases}$$
In this dataset, $(x^1, x^2, x^3)$ clearly have different feature levels. $x^3$ can be used directly by the GLM layer, as it has a linear decision boundary. $(x^1, x^2)$ are more complex: they form an XOR pattern and cannot be linearly separated, thus requiring further decomposition before they are usable by the GLM layer. To make correct decisions, the DNN should use one layer to decompose the XOR into lower-level features and directly transport the value of $x^3$ into the GLM layer.

2.2 CHARACTERIZING LOW- AND HIGH-LEVEL FEATURES WITH FEATURE LEVELING

From IXOR we can see that not all features have the same level of "complexity". Some can be fed directly into the GLM layer; others may need to go through one or more hidden layers to be transformed into features that can directly contribute to decision making. Thus, instead of using "low" and "high" levels to characterize features, we propose to frame the complexity of different features with the definition of feature leveling. Consider a dataset $D$ consisting of $N$ i.i.d. samples with features and their corresponding labels $\{(a_1, y_1), \ldots, (a_N, y_N)\}$. We assume that the samples $a_i \in D$ contain features that require at most $K$ hidden layers of transformation to perform optimal inference. For a DNN trained with $K$ hidden layers and a GLM layer, we define the set of $k$-th level features as the features that require $k-1$ hidden layers of extraction, under the current network setup, to be sufficiently utilized by the GLM layer. In the following, we denote by $l_k \in L_k$ the $k$-th level features extracted from one sample, where $L_k$ is the set of all $k$-th level features to be learned in the target distribution. The remaining high-level features are denoted by $h_k \in H_k$ and should be passed to the $k$-th layer to extract features of further levels. In this case, $L_k$ and $H_k$ should be disjoint, that is, $L_k \cap H_k = \emptyset$. In the toy example, $x^3$ is $l_1$, a level-one feature, as the first hidden layer learns to transport its value directly to the GLM layer. $(x^1, x^2)$ is $h_1$: the XOR can be decomposed by one hidden layer with a sufficient number of parameters so that it can be used directly by the GLM layer to make accurate decisions. Assuming the first hidden layer $f_1$ has sufficient parameters, it should take in $h_1$ and output $l_2$.
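For concreteness, below is a minimal NumPy sketch of how the IXOR dataset of Sec. 2.1 might be generated; the sample count and seed are illustrative assumptions, as the paper does not state them (its generation script is in the supplemental material).

```python
import numpy as np

def make_ixor(n_samples=10000, seed=0):
    """Generate the Independent XOR (IXOR) toy dataset of Sec. 2.1.

    x^1, x^2 ~ U[-2, 2] form an XOR pattern (not linearly separable),
    while x^3 ~ U[0, 1] is linearly separable on its own.
    The label is 1 iff x^1 * x^2 > 0 and x^3 > 0.5.
    """
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(-2.0, 2.0, n_samples)
    x2 = rng.uniform(-2.0, 2.0, n_samples)
    x3 = rng.uniform(0.0, 1.0, n_samples)
    X = np.stack([x1, x2, x3], axis=1)
    y = ((x1 * x2 > 0) & (x3 > 0.5)).astype(np.int64)
    return X, y

X, y = make_ixor()
print(X.shape, y.mean())  # (10000, 3), roughly 0.25 positives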
2.3 HOW THE PROPOSED MODEL SOLVES THE MIXTURE EFFECT AND BOOSTS INTERPRETATION

A common FCNN, however, does not separate each level of features explicitly. Figure 2 shows the heatmaps of the weight vectors for both the FCNN baseline and the proposed feature leveling network trained on the IXOR dataset. For the FCNN, we observe that the value of $x^3$ is preserved by the last column of the weight vector of the first layer but is mixed with all other features in the second layer before passing into the GLM layer. Our proposed model, on the other hand, cleanly separates $x^3$ and preserves its identity as an input to the GLM layer. In addition, our model identifies that the interaction between $(x^1, x^2)$ can be captured by a single layer. Thus, the model eliminates the second layer and passes the $(x^1, x^2)$ features extracted by the first hidden layer directly to the GLM layer. Looking at the results of the toy example, we can clearly see that the proposed model solves the mixture effect and assigns correct levels to features of different complexities in the context of the original problem. The model is therefore more interpretable in that it creates a clear path of reasoning, and the contribution of each level of features can be understood from the weight parameters of the GLM.

3 OUR PROPOSED ARCHITECTURE

Inspired by our definition of feature leveling, and to resolve the feature mixture problem, we design an architecture that recursively filters the $k$-th level features from the inputs of the $k$-th layer and passes them directly to the final GLM layer. We start with the definition of an FCNN and extend it to our model: we aim to learn a function $\mathcal{F}$ parametrized by a neural network with $K$ hidden layers. The function $\mathcal{F}$ can be written as
$$\mathcal{F} = d\big(f_K(f_{K-1}(\cdots f_1(a; \theta_1) \cdots); \theta_K)\big), \tag{1}$$
where $f_k$ is the $k$-th hidden layer function with parameters $\theta_k$, and $d(\cdot)$ is the GLM model used for either classification or regression. The goal is to learn the function $\mathcal{F}$ such that
$$R(\theta) = \frac{1}{N} \sum_{i=1}^{N} L(\mathcal{F}(a_i; \theta), y_i), \qquad \theta^* = \arg\min_{\theta} R(\theta). \tag{2}$$
In our formulation, each hidden layer can be viewed as a separator for the $k$-th level features and an extractor of higher-level features. Thus, the output of $f_k$ has two parts: $l_k$, the set of $k$-th level features extracted from the inputs, which can be readily transported to the GLM layer for decision making, and $h_k$, the abstract features that require further transformation. In formal notation, we can describe our network with the following equation ("\" denotes set subtraction):
$$\mathcal{F} = d\big(l_1, l_2, \ldots, l_K,\ f_K(f_{K-1}(\cdots f_1(a \setminus l_1; \theta_1) \cdots) \setminus l_K)\big). \tag{3}$$
In order for $f_k$ to learn a mutually exclusive separation, we propose a gating system for layer $k$, parametrized by $\phi_k$, that is responsible for determining whether a given dimension of the input features belongs to $l_k$ or $h_k$. For a layer with input dimension $J$, $\{z_k^1, \ldots, z_k^J\}$ forms the corresponding gate, where $z_k^j \in \{0, 1\}$. The parameter $\phi_k$ learns the probability of the gate $z_k^j$ taking value 1, in which case the input feature at the $j$-th dimension is allocated to $h_k$, and to $l_k$ otherwise. To maintain mutual exclusiveness between $l_k$ and $h_k$, we aim to learn $\phi_k$ such that a feature passes to $l_k$ if and only if its gate is exactly zero; otherwise, the gate is 1 and the feature goes to $h_k$. Thus, we can rewrite the neural network $\mathcal{F}$ with the gating mechanism for the $i$-th sample $a_i$ of the dataset:
$$\mathcal{F} = d\big(B(z_1) \odot a_i,\ B(z_2) \odot f_1(z_1 \odot a_i),\ \ldots,\ f_K(z_K \odot f_{K-1}(z_{K-1} \odot f_{K-2}(\cdots f_1(z_1 \odot a_i))))\big). \tag{4}$$
Here, $\odot$ denotes element-wise multiplication. The function $B$ acts as an inverse binary activation that returns 1 if and only if the value of $z$ is 0, and 0 otherwise. The function $B$ allows the level-$k$ features $l_k = B(z_k) \odot f_{k-1}$ to be filtered out if and only if they do not flow into the next layer at all. Then the optimization objective becomes
$$R(\theta, \phi) = \frac{1}{N} \sum_{i=1}^{N} L(\mathcal{F}(a_i, z; \theta, \phi, B), y_i) + \lambda \sum_{k=1}^{K} \|z_k\|_0, \qquad z_k = g(\phi_k), \tag{5}$$
where the additional $L_0$ regularization term encourages less of $h_k$ to pass into the next layer and more of $l_k$ to flow directly to the GLM layer, and $g(\phi)$ acts as a transformation function that maps the parameter $\phi$ to the corresponding gate value. To achieve this discrete gate construction, we propose to learn the gating parameters in the context of $L_0$ regularization. To be able to update parameter values through backpropagation, we use the approximation technique for differentiable $L_0$ regularization developed by (Louizos et al., 2017). We direct interested readers to the original work for the full derivation and summarize the key concept in terms of our gating mechanism below. Although the gate value $z \in \{0, 1\}$ is discrete and the probability of a gate being 0 or 1 is typically treated as a Bernoulli distribution, the probability space can be relaxed as follows. Consider $s$ to be a continuous random variable with distribution $q(s \mid \phi)$ parametrized by $\phi$. The gate can be obtained through the transformation function $m(\cdot)$ as
$$s \sim q(s \mid \phi), \qquad z = m(s) = \min(1, \max(0, s)). \tag{6}$$
The underlying probability space is then continuous, because $s$ is continuous, and the gate can attain the value exactly 0. The probability of the gate being non-zero is given by the cumulative distribution function $Q$:
$$q(z \neq 0 \mid \phi) = 1 - Q(s \leq 0 \mid \phi). \tag{7}$$
The authors further use the reparameterization trick with a sampling-free noise $\epsilon \sim p(\epsilon)$ to obtain $s = n(\epsilon, \phi)$ through a differentiable transformation function $n(\cdot)$; thus $g(\cdot)$ is equivalent to $m \circ n$, where $\circ$ denotes function composition. The objective function of our feature leveling network then becomes
$$R(\theta, \phi) = \frac{1}{N} \sum_{i=1}^{N} L(\mathcal{F}(a_i, z; \theta, \phi, B, g), y_i) + \frac{\lambda}{K} \sum_{k=1}^{K} \big(1 - Q(s_k \leq 0 \mid \phi)\big), \qquad z_k = g(\phi_k, \epsilon) = (m \circ n)(\phi_k, \epsilon), \quad \epsilon \sim p(\epsilon). \tag{8}$$
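To make Eqs. (4)-(8) concrete, the following is a minimal PyTorch sketch of the gated forward pass together with the hard-concrete gate of Louizos et al. (2017). It is an illustrative reading of the paper, not the authors' implementation (which is in TensorFlow): features whose gate is exactly 0 are routed to the GLM layer via the relaxation B(z) ≈ 1 − z, gated dimensions are zeroed rather than physically removed (matching the element-wise form of Eq. (4)), and the stretch limits γ = −0.1, ζ = 1.1 and temperature β = 2/3 are the defaults from Louizos et al., not values stated here.

```python
import math
import torch
import torch.nn as nn

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, training=True):
    """Sample z = m(n(eps, phi)) as in Eqs. (6)-(8): a stretched, clipped
    binary-concrete variable that can attain the exact values 0 and 1."""
    if training:
        u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)        # eps ~ p(eps)
        s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / beta)
    else:
        s = torch.sigmoid(log_alpha)
    return (s * (zeta - gamma) + gamma).clamp(0.0, 1.0)             # z = min(1, max(0, s))

def l0_penalty(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Closed form of q(z != 0 | phi) = 1 - Q(s <= 0 | phi) from Eq. (7)."""
    return torch.sigmoid(log_alpha - beta * math.log(-gamma / zeta)).sum()

class FeatureLevelingNet(nn.Module):
    """Illustrative forward pass of Eq. (4) for a 13-64-32-1 architecture."""

    def __init__(self, dims=(13, 64, 32), out_dim=1):
        super().__init__()
        self.hidden = nn.ModuleList(nn.Linear(i, o) for i, o in zip(dims[:-1], dims[1:]))
        self.log_alpha = nn.ParameterList(nn.Parameter(torch.zeros(d)) for d in dims[:-1])
        self.glm = nn.Linear(sum(dims), out_dim)   # all levels l_1..l_K plus last output

    def forward(self, a):
        levels, h = [], a
        for f_k, la in zip(self.hidden, self.log_alpha):
            z_k = hard_concrete_gate(la, training=self.training)
            levels.append((1.0 - z_k) * h)         # B(z_k) * h: the k-th level l_k
            h = torch.relu(f_k(z_k * h))           # gated input flows to the next layer
        levels.append(h)                           # output of the last hidden layer
        return self.glm(torch.cat(levels, dim=-1))

    def l0_loss(self, lam=0.1):
        """The lambda/K-weighted expected-L0 term of Eq. (8)."""
        return lam / len(self.log_alpha) * sum(l0_penalty(la) for la in self.log_alpha)
```

The total training objective then adds `model.l0_loss()` to the task loss, matching Eq. (8); at test time, dimensions whose gate probability q(z ≠ 0 | φ) falls below a threshold can be pruned outright, which yields the reduced architectures reported in Sec. 5.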
4 RELATED WORK

Interpreting existing models: The ability to explain the reasoning process of a neural network is essential for validating the robustness of the model and for ensuring that the network is secure against adversarial attacks (Moosavi-Dezfooli et al., 2016; Brown et al., 2017; Gehr et al., 2018). In recent years, many works have attempted to explain the reasoning process of an existing neural network, either by extracting the decision boundary (Bastani et al., 2018; Verma et al., 2018; Wang et al., 2018; Zakrzewski, 2001) or through a variety of visualization methods (Mahendran & Vedaldi, 2015; Zeiler & Fergus, 2014; Li et al., 2015). Most of these methods are designed for validation purposes; their results cannot easily be used to improve the original models.

Self-explaining models were proposed by (Alvarez Melis & Jaakkola, 2018) and refer to models whose reasoning process is easy to interpret. This class of models does not require a separate validation process. Many works have focused on designing self-explaining architectures that can be trained end-to-end (Zhang et al., 2018; Worrall et al., 2017; Li et al., 2018; Kim & Mnih, 2018; Higgins et al., 2017). However, most self-explaining models sacrifice a certain amount of performance for interpretability. Two notable models are able to achieve competitive performance on standard tasks while maintaining interpretability. The NIT framework (Tsang et al., 2018) interprets the neural decision process by detecting feature interactions in the style of a Generalized Additive Model.
The framework achieves competitive performance but can only disentangle up to K groups of interactions, and the value K needs to be searched manually during training. The SENN framework proposed by (Alvarez Melis & Jaakkola, 2018) focuses on prototyping abstract concepts and aggregates them with a linear, interpretable model. Compared to our model, SENN requires an additional step of training an autoencoding network to prototype concepts and cannot disentangle simple concepts from more abstract ones on a per-layer basis.

Sparse neural network training refers to various methods developed to reduce the number of parameters of a neural model. Much work has investigated L2 or L1 regularization (Han et al., 2015; Ng, 2004; Wen et al., 2016; Girosi et al., 1995) to prune neural networks while maintaining differentiability for backpropagation. Another choice for regularization and inducing sparsity is L0 regularization. However, due to its discrete nature, it does not support parameter learning through backpropagation. A continuous approximation of L0 was proposed to resolve this problem and has shown effectiveness in pruning both FCNNs and Convolutional Neural Networks (CNNs) in an end-to-end manner (Louizos et al., 2017). This regularization technique has been applied not only to neural architecture pruning but also to feature selection (Yamada et al., 2018). Our work applies the feature selection ability of L0 regularization in a novel context, selecting the maximum number of features as direct inputs to the GLM layer.

Compared to residual structures, our model can explain features at different levels and their contributions separately, owing to the linear nature of the GLM. ResNet (He et al., 2016) and Highway Networks (Srivastava et al., 2015) cannot isolate each level, as their skip features are further entangled by pooling, non-linear activations, and the following blocks. Unlike ResNet, which is fully connected to all features, we propose to learn which features to pass to the GLM from a probabilistic perspective.

5 EXPERIMENTS

We validate our proposed architecture on two commonly used datasets, MNIST and California Housing; an additional experiment on CIFAR-10 is deferred to the appendix. For each task, we use the same initial architecture for our proposed model and the FCNN baseline. However, due to the gating effect of our model, some of the neurons in the middle layers are effectively pruned; the architecture we report in this section for our proposed model is the pruned version obtained after training with the gates. The second-to-last layer of our proposed models is labeled with a star to denote concatenation of all previous $l_k$ with the output of the last hidden layer. For example, in the California Housing architecture, both the proposed model and the FCNN baseline start with 13−64−32−1 as the initial architecture, but due to the gating effect on deeper layers, the layer with 32* neurons in effect has 32 + (13 − 10) + (64 − 28) = 71 neurons when accounting for previously gated features (13 − 10 = 3 for $l_1$, 64 − 28 = 36 for $l_2$).

The two objectives of our experiments are: 1) To test whether our model achieves competitive results, under the same initial architecture, compared to the FCNN baseline and other recently proposed self-explaining models. This test is conducted by comparing model metrics such as root mean square error (RMSE) for regression tasks and classification accuracy for multi-class datasets.
2) Because our model separately accounts for each layer's contribution, we can take the gradient with respect to each layer and obtain the level of features our model assigns to each part of the input. Experiment implementation details are deferred to Appendix A.7–A.10.

5.1 DATASETS & PERFORMANCE

The MNIST handwriting dataset (LeCun et al., 2010) consists of pictures of handwritten digits from 0 to 9 in 28×28 grayscale format. We use a 784−300−100−10 architecture for both the FCNN baseline and the proposed model; this is the same architecture used in the original implementation of (Louizos et al., 2017). Our model achieves results similar to state-of-the-art ReLU-activated FCNN architectures while using fewer layers. The feature gates completely eliminated message passing to the 100-neuron layer, which implies that our model needs only level-1 and level-2 layers for feature extraction to learn the MNIST dataset effectively.

The California Housing dataset (Pace & Barry, 1997) is a regression task that uses various attributes, such as longitude and owners' age, to predict the price of a house. It contains 8 features, one of which is nominal; we converted the nominal feature into a one-hot encoding, giving 13 features in total. Since the California Housing dataset does not provide a standard test set, we split the dataset randomly with a 4:1 train-test ratio. Our proposed model beats the FCNN baseline with the same initial architecture. Only 3 of the 13 original features are passed directly to the GLM layer, implying that California Housing's input features are mostly of the second and third level.

5.2 DISENTANGLING THE CONTRIBUTION OF EACH LEVEL OF FEATURES

MNIST: Taking digit 4 as an example, and compared to FCNN and SENN-FCNN, our model's $l_1$ identifies the contour of digit 4 and its corners (larger in gradient) as first-level features. $l_2$ shows concentrated negative gradients in the middle of the digit, corresponding to the "hole" in digit 4.

Cal Housing: We compare with NIT (Tsang et al., 2018). The left figure shows our model, with different colors indicating feature gradients from different layers; for NIT, colors indicate different groups. Compared to NIT, our model's $l_1$ identifies "longitude" (long) as a feature that relates linearly to housing price, since in California longitude is a major determining factor for housing price compared to latitude. According to the gradients, $l_2$ and $l_3$ emphasize different parts of the input, confirming that our model can divide the features into different sets. For NIT, in contrast, the gradients of most groups are similar, indicating that the features are not sufficiently disentangled among groups. Our model instead identifies the most important features with stronger weights and assigns zero or minimal weight to irrelevant ones.

5.3 SCALABILITY

During the training stage, our model requires more computational resources, as features from higher layers are passed both to the next layer and to the final GLM layer. However, at inference time, once the gates are learned, each feature input to a neural layer is computed only once due to the mutual exclusiveness of our gating setup. The weight parameters related to the "zeroed-out" features can also be eliminated. In most cases our model results in a lower parameter count. In Appendix A.2 we report the number of parameters our framework needs for the reported inference models.
5.4 EXTENSION TO CONVOLUTIONAL NEURAL NETWORKS

Our framework can also be applied to convolutional architectures: we simply apply the gate to the input features. Appendix A.11 and Figure 5 show that our model can again clearly isolate features from different levels. To reduce the gated feature size, we apply convolutions with no activation, which reduces dimensionality while maintaining linearity.

6 STRENGTH IN PRUNING REDUNDANT HIDDEN LAYERS

Because our proposed model encourages linearity, it can also automatically reduce its network complexity by decreasing the number of hidden layers. Taking MNIST classification as an example, when the dataset's feature level is lower than the number of hidden layers, our proposed model learns to prune the excess hidden layers automatically, as the network learns not to pass information to the deeper hidden layers. As a result, the number of hidden layers is effectively reduced. We therefore believe that our framework is helpful for architectural design, helping researchers probe the ideal number of hidden layers and understand the complexity of a given task.

7 DISCUSSION

In this work we propose a novel architecture that performs feature leveling automatically to boost interpretability. We use a toy example to demonstrate that not all features are equal in complexity and that most DNNs take mixed levels of features as input, decreasing interpretability. We then characterize absolute feature complexity by the number of layers required to extract a feature before the GLM can use it for decisions. To boost interpretability by isolating the $k$-th level features, we propose the feature leveling network, with a gating mechanism and an end-to-end training process that allow the $k$-th level features to be passed directly to the GLM layer. We perform various experiments to show that our feature leveling network successfully separates out the $k$-th level features without compromising performance.

A APPENDIX

A.1 ADDITIONAL EXPERIMENT

The CIFAR-10 dataset (Krizhevsky et al., 2014) consists of 32 × 32 RGB images of 10 different classes. We test our model's ability to extract abstract concepts. For comparison, we follow the experiments in the NIT paper and choose the classes cat and deer for binary classification. The resulting architecture shows that, for FCNN networks, the two chosen classes are mainly differentiated through their second-level features; none of the raw inputs are used for direct classification. This matches the expectation that RGB images of animals consist of relatively high-level features.

A.2 NUMBER OF PARAMETERS REQUIRED FOR THE REPORTED MODELS

Because we were unable to reproduce a result reported by the NIT paper on CIFAR-10, we used the architecture from the original NIT paper; as a result, we did not include its number of parameters in our table.

A.3 REVISITING GLMs FOR INTERPRETATION OF DEEP NEURAL NETWORKS

Consider training a linear model with a dataset $\{\mathcal{X}, \mathcal{Y}\}$, where $\mathcal{X}$ is the set of features and $\mathcal{Y}$ the corresponding set of labels. The goal is to learn a function $f(x)$ from $(x_i, y_i) \in \{\mathcal{X}, \mathcal{Y}\}$ subject to a criterion function $L_\theta(x_i, y_i)$ with parameter set $\theta$. In the classical setting of linear models, $\theta$ usually refers to a matrix $w$ such that
$$\hat{y} = f(x) = T(w^\top x + \beta), \tag{9}$$
where $\hat{y}$ is the predicted label for a sample with features $x$, and $T$ is drawn from a set of functions such as the logistic, softmax, and identity functions. GLMs are easy to interpret because the contribution of each individual dimension of $x$ to the decision output $y$ is given by its corresponding weight. Therefore, we hope to emulate the GLM's interpretability in a DNN setting by creating a method to efficiently back-trace the contributions of different features. We argue that our proposed architecture is similar to a GLM in that our final layer makes decisions based on the weights assigned to each level of input features. Our model is linear with respect to the various levels of features: given $K$ levels of features, the model makes decisions with $y = [w_1^\top l_1, w_2^\top l_2, \ldots, w_K^\top l_K]$, where each weight vector $w_k$ indicates the influence of that level. With this construction, we can easily interpret how each level of features contributes to decision making. This insight can help us understand whether a given task is more "low-level" or "high-level" and thus characterize the complexity of the task precisely.
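As a small illustration of this read-out, the sketch below splits a trained GLM layer's response into one additive term per feature level. It assumes a single-output GLM whose input is the concatenation [l_1, ..., l_K] with known per-level dimensions; the function name and layout are hypothetical, not taken from the paper's code.

```python
import torch

def per_level_contributions(glm: torch.nn.Linear, levels):
    """Decompose the GLM response into per-level terms w_k^T l_k (plus bias).

    `glm` is the final linear layer and `levels` the list [l_1, ..., l_K]
    whose concatenation was fed to it; returns one scalar per level per sample.
    """
    w = glm.weight[0]                                 # single-output GLM assumed
    contribs, start = [], 0
    for l_k in levels:
        end = start + l_k.shape[-1]
        contribs.append((l_k * w[start:end]).sum(dim=-1))
        start = end
    return contribs                                    # sum(contribs) + bias == GLM output
```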
A.4 THE LAST LAYER OF COMMON NEURAL NETWORKS IS A GLM LAYER

The "classical" DNN architecture consists of a set of hidden layers with non-linear activations and a final layer that aggregates the result through a sigmoid, softmax, or linear function. The final layer is in fact a GLM layer, since it has the same form and optimization objective.

A.5 OUR NOVELTY COMPARED TO RESNET AND SIMILAR ARCHITECTURES

Compared to residual networks, our model can explain features at different levels and their contributions separately, owing to the linear nature of the GLM. ResNet and DenseNet cannot isolate each level, as their skip features are further entangled by pooling, non-linear activations, and the following hidden layers. Unlike ResNet, which is fully connected to all features, we propose to learn which features to pass to the GLM from a probabilistic perspective. Specifically, we introduce L0 regularization for the purpose of effective feature leveling; ResNet and DenseNet do not perform such layer-wise regularization.

A.6 POWER TO EXTRACT FEATURE COMPLEXITY THROUGH PRUNING

To demonstrate that our network achieves effective pruning and can help practitioners determine the complexity of a given problem, we use Cal Housing as an example and train our models with 2–5 hidden layers, where each intermediate hidden layer has a 32–32 structure. To show that our model can find the optimal structure, we first run the baseline model (without gating) with 2–5 hidden layers separately. We observe an MSE of 0.2364 for 3 layers, 0.2618 for 4, and 0.4807 for 5; thus the 3-layer model is sufficient for accurate prediction. We then train our model with gate selection and observe that, when starting with 4 or more hidden layers, our model is reduced to a 3-layer model after training, which is indeed the best structure for the Cal Housing task as verified with the complete models. We therefore argue that our model can discover the optimal number of hidden layers for accurate housing price prediction, which further shows that our model can help architecture engineers decide on the optimal number of layers for a given task.

A.7 REPRODUCING EMPIRICAL RESULTS: GENERAL CONFIGURATION

All models are implemented in TensorFlow (Abadi et al., 2016), and hyperparameter configurations can be found in our public repository and in the supplemental code. A model name with a citation denotes that the result is taken from the original paper.
SENN's listed architecture is the prototyping network; we use similar architectures for the autoencoder parts. All SENN models are re-implemented with fully connected networks for comparison purposes.

A.8 DATASETS AND PREPROCESSING

MNIST contains 60000 training and 10000 test images of handwritten digits from 0 to 9; experimental results were evaluated on the allocated test set. CIFAR-10 consists of 10 classes of images, each with 10000 training and 2000 testing images; we used the allocated test set for reporting results. For MNIST and CIFAR-10, we rescaled the color channels by dividing by 255 so that pixel values lie in [0, 1]. For Cal Housing, we dropped all samples with any empty value entry and normalized all numerical values by mean and standard deviation. The IXOR dataset is generated with the script attached in the supplemental material under src/independent xor.

A.9 HYPERPARAMETERS

The only tunable hyperparameter in our model is λ, for which we usually consider values from 0.01 to 0.5. All λ values used to produce the reported results are in the model scripts of the attached folder. Generally, lower λ values are better for training on more complicated datasets such as CIFAR-10, to prevent too much gating at an early stage.

A.10 EXACT NUMBER OF TRAINING ITERATIONS

MNIST: 280000
CIFAR-10: 680000
California Housing: 988000

A.11 RESULTS OF OUR MODEL ON CONVOLUTIONAL NEURAL NETWORKS
1. What is the focus of the paper regarding neural network architecture and feature extraction?
2. What are the strengths of the proposed approach, particularly in terms of its ability to separate low-level and high-level features?
3. Do you have any concerns or questions about the interpretation of the k-th level features in the final GLM model?
4. How does the reviewer assess the clarity and organization of the paper's content?
5. Are there any suggestions for improving the interpretation of the learned features, such as comparing the approach to previous works like Alvarez-Melis and Jaakkola NIPS 2018?
Review
This paper proposes a neural network architecture that separates low-level features from high-level features. At the k-th hidden layer, 1) the k-th level features, defined as the set of features that require k−1 hidden layers to extract, are passed directly to the final GLM layer; 2) the remaining features are further processed in the subsequent layers. This separation is achieved with a gating mechanism. The model can be interpreted through the weights associated with each of those k-th level features in the final GLM layer. Experimental results on the MNIST classification dataset and the California Housing regression dataset demonstrate that the proposed approach can 1) achieve competitive performance (in terms of classification and regression) compared to the FCNN baseline, and 2) achieve better interpretability.

This paper is clearly written. The description of the model architecture is easy to follow, and the introduction of related works and background material is well organized.

My major concern with this work is how to interpret the k-th level features. Since the weights of those features in the final GLM model are used to indicate their importance, interpreting the meaning of the k-th level features seems necessary. In practice, how should one interpret the meaning of the higher-level features? For example, for the California Housing dataset, how should one interpret the meaning of the features learned at level 2? The interpretation of the MNIST classification examples seems difficult to understand. Compared to inspecting the raw pixels, would it be easier to interpret by learning a few prototypes, similar to the approach described in Alvarez-Melis and Jaakkola, NIPS 2018?
ICLR
Title Neural Implicit Representations for Physical Parameter Inference from a Single Video

Abstract Neural networks have recently been used to model the dynamics of diverse physical systems. While existing methods achieve impressive results, they are limited by their strong demand for training data and their weak generalization abilities. To overcome these limitations, in this work we propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) in order to obtain interpretable physical models directly from visual observations. Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video. (ii) The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic imagery. (iii) The embedded neural ODE has a known parametric form that allows for the identification of interpretable physical parameters, and (iv) long-term prediction in state space. (v) Furthermore, the photo-realistic rendering of novel scenes with modified physical parameters becomes possible.

Figure 1: Our method infers physical parameters directly from real-world videos, such as the shown pendulum motion. Separated by the red line, the right half of each image shows the input frame, and the left half shows our reconstruction based on physical parameters that we estimate from the input. We show 6 out of the 10 frames that were used for training. The proposed model can precisely recover the metric length of the pendulum from the monocular video (relative error to the true length is less than 2.5%). Best viewed on screen with magnification. Please also consider the supplementary video.

1 INTRODUCTION

The physics of many real-world phenomena can be described concisely and accurately using differential equations. However, such equations are usually formulated in terms of highly abstracted quantities that are typically not directly observable with commodity sensors such as cameras. For example, a pendulum is physically described by the deflection angle, the angular velocity, the damping coefficient, and the pendulum's length, but automatically extracting these physical parameters directly from video data is challenging. Due to the complex relationship between a physical process and images of the corresponding scene, measuring such quantities often necessitates a trained expert operating customised measuring equipment. While humans are able to infer (rough estimates of) physical quantities for many phenomena from a given video, physical understanding from videos remains an open problem in machine learning. Recently, the combination of deep learning and physics has become popular, particularly in the context of video prediction. While earlier works (Lutter et al., 2019; Greydanus et al., 2019; Cranmer et al., 2020; Zhong et al., 2020) require coordinate data, i.e., already abstracted physical quantities, more recent works directly use image data (Levine et al., 2020; Zhong & Leonard, 2020). A major downside is that all these approaches rely on massive amounts of training data, and, as we experimentally confirm in App. F, they exhibit poor generalization abilities.
In contrast, in our work we address this shortcoming by proposing a solution that extracts semantic physical parameters directly from a single video; see Figure 1. We thereby alleviate the need for large training data and facilitate interpretation, owing to the semantics of the inferred parameters in the corresponding physical equations. Additionally, the six previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations, which elegantly guarantee the conservation of energy but therefore cannot easily model dissipative systems, which are much more common in the real world (Galley, 2013). The proposed model effectively turns the camera into a physical measuring device with which we can observe quantities such as the length or the damping coefficient of a pendulum. To achieve the learning of physical models from a single video, we propose to utilise physics-based neural implicit representations in an analysis-by-synthesis manner, relying on neural ordinary differential equations to represent the abstract physics of visual scenes.

Overall, we summarize our main contributions as follows:
1. We present the first method that is able to identify physical parameters from a single video using neural implicit representations.
2. Our approach infers parameters of an underlying ODE-based physical model, which directly allows for interpretability and long-term predictions.
3. The unique combination of powerful neural implicit representations with rich physical models allows us to synthesize high-resolution and photo-realistic imagery. Moreover, it enables physical editing by rendering novel scenes with modified physical parameters.
4. Contrary to existing learning-based approaches that require large corpora of training data, we propose a per-scene model, so that only a single short video clip depicting the physical phenomenon is necessary.

2 RELATED WORK

The combination of machine learning and physics has been addressed across an extremely broad range of topics. For example, machine learning has been used to aid physics research (Bogojeski et al., 2020; Leclerc et al., 2020), and physics has been used within machine learning models, for example for automatic question answering from videos (Chen et al., 2021; Bear et al., 2021). In this work we focus specifically on extracting physical models from single videos, so in the following we discuss the related works that we consider most relevant in this context.

Physics in the context of learning. While neural networks have led to many remarkable results across diverse domains, the inference of physical principles, such as energy conservation, is still a major challenge and requires additional constraints. A general way to endow models with a physics-based prior is to use generalized energy functions. For example, Greydanus et al. (2019) and Toth et al. (2020) use a neural network to parameterize the Hamiltonian of a system, which yields a relation between the energy of the system and the change of the state. Hence, they are able to infer the dynamics of systems with conserved energy, such as a pendulum or a multi-body system. One disadvantage of using the Hamiltonian is that canonical coordinates must be used. To eliminate this constraint, other works use the Lagrangian to model the system's energy. Since this formalism is more complex, Lutter et al.
(2019) and Zhong & Leonard (2020) restrict the Lagrangian to the case of rigid-body dynamics to model systems with multiple degrees of freedom, such as a pole on a cart or a robotic arm. Cranmer et al. (2020) use a neural network to parameterize a general Lagrangian, which they use to infer the dynamics of a relativistic particle in a uniform potential. While able to model many relevant systems, the aforementioned energy-based approaches cannot easily be extended to dissipative systems, which are much more common in the real world (Galley, 2013). Furthermore, they do not allow for a semantic interpretation of individual learned system parameters. PhyDNet, introduced by Guen & Thome (2020), learns dynamics in the form of a general PDE in a latent space, which, like the aforementioned works, prohibits interpretation of the learned physical model. In contrast, in the context of incorporating physical phenomena into learning frameworks, there are also approaches that make the underlying dynamics explicit. For example, Jaques et al. (2020) unroll the Euler integration of the ordinary differential equation of bouncing balls, as well as of balls connected by a spring, to identify physical parameters such as the spring constant. Kandukuri et al. (2020) and de Avila Belbute-Peres et al. (2018) propose to use a linear complementarity problem to differentiably simulate rigid multi-body dynamics, which can also handle object interaction and friction. For our method, we likewise rely on the advantages of modelling the underlying physics explicitly in order to obtain interpretable parameter estimates.

Inferring physical properties from video. While many approaches work with trajectories in state space, some works operate directly on videos. In this case, the information about physical quantities is substantially more abstract, so uncovering non-linear dynamics from video data is a significantly more difficult problem. Traditionally, such inverse problems are often phrased as optimization problems, for example for deformable physics inference (Weiss et al., 2020), among many others. While these approaches can successfully estimate a wide range of relevant physical quantities from video data, they often require rich additional information, such as 3D information in the form of depth images combined with a 3D template mesh (Weiss et al., 2020), which may limit their practical applicability. More recently, several end-to-end learning approaches have been proposed. de Avila Belbute-Peres et al. (2018) use an encoder to extract the initial state of several objects from a combination of images, object masks, and flow frames. After propagating the physical state over time, they decode the state back into images to allow for end-to-end training. Jaques et al. (2020) and Kandukuri et al. (2020) use an encoder network to extract object positions from object masks in individual frames. After estimating initial velocities from the positions, they integrate the state over time and use a carefully crafted coordinate-consistent decoder, based on spatial transformers, to obtain the predicted images. Zhong & Leonard (2020) extend this idea to a variational autoencoder (VAE) architecture with a coordinate-aware encoder, which they use to infer the parameters of the latent distribution of generalized coordinates for each frame. Toth et al. (2020) use a VAE structure to predict the parameters of a posterior over the initial state from a sequence of videos.
All of these approaches require large amounts of data to train the complex encoder and decoder modules. In contrast, our approach does not rely on trainable encoder or decoder structures, but instead uses a non-trainable, fixed neural ODE solver in combination with a trainable neural implicit representation, and is thus able to infer physical models from a single video.

Implicit representations. Recently, neural implicit representations have gained popularity due to their theoretical elegance and their performance in novel view synthesis. The idea is to use a neural network to parametrize a function that maps a spatial location to a spatial feature. Geometric shapes, for example, can be represented using occupancy values (Mescheder et al., 2019; Chen & Zhang, 2019; Peng et al., 2020) or signed distance functions (Park et al., 2019; Gropp et al., 2020; Atzmon & Lipman, 2020). In multiview 3D surface reconstruction and novel view synthesis, implicit geometry representations, such as density or signed distance, are combined with implicit color fields to represent shape and appearance (Sitzmann et al., 2019; Mildenhall et al., 2020; Yariv et al., 2020; Niemeyer et al., 2020; Azinovic et al., 2021). To model dynamic scenes, several approaches parametrize a displacement field and model the scene in a reference configuration (Niemeyer et al., 2019; Park et al., 2021; Pumarola et al., 2021). Other approaches (Xian et al., 2021; Li et al., 2021; Du et al., 2021) include time as an input to the neural representation and regularize the network using constraints based on appearance, geometry, and pre-trained depth or flow networks; however, none of these methods uses physics-based constraints, e.g., by enforcing Newtonian motion. While the majority of works on implicit representations focuses on shape, Sitzmann et al. (2020) show the generality of implicit representations by representing images and audio signals. Our work contributes to the neural implicit representation literature by combining such representations with explicit physical models.

3 ESTIMATING PHYSICAL MODELS WITH NEURAL IMPLICIT REPRESENTATIONS

Our main goal is the estimation of physical parameters from a single video, where we specifically focus on the setting of a static background and dynamic objects that move according to some physical phenomenon. Accordingly, we model the dynamics of the objects using an ordinary differential equation (ODE). Our objective is to estimate the unknown physical parameters, as well as the initial conditions, of this ODE. To this end, we additionally learn a video generation model that is able to render a video depicting objects that follow a specific physical model depending on the respective physical parameters. For estimating these physical parameters directly from an input video, we use a photometric loss that requires the generated video to be similar to the input video.

3.1 MODELING THE DYNAMICS

For most of the dynamics that can be observed in nature, the temporal evolution of the state can be described by an ODE. For a pendulum, for example, the state variables are the angle of deflection and the angular velocity, and a two-dimensional first-order ODE describes the dynamics. In general, we write $\dot{z} = f(z, t; \theta)$ to describe the ODE¹, where $z \in \mathbb{R}^n$ denotes the state variable, $t \in \mathbb{R}$ denotes time, and $\theta \in \mathbb{R}^m$ are the unknown physical parameters.
Using the initial conditions $z_0 \in \mathbb{R}^n$ at the initial time $t_0$, we can write the solution of the ODE as
$$z(t; z_0, \theta) = z_0 + \int_{t_0}^{t} f(z(\tau), \tau; \theta)\, d\tau. \tag{1}$$
Note that the solution curve $z(t; z_0, \theta) \subset \mathbb{R}^n$ depends both on the unknown initial conditions $z_0$ and on the unknown physical parameters $\theta$. In practice, the solution of Eq. (1) is typically approximated by numerical integration. In our context of physical parameter estimation from videos, we build upon the recent work by Chen et al. (2018), who proposed an approach for computing gradients of the solution curve of an ODE with respect to its parameters. With that, it becomes possible to differentiate through the solution in Eq. (1), so that we can use gradient-based methods to estimate $z_0$ and $\theta$.

¹ W.l.o.g. we only consider first-order ODEs here, since it is always possible to reduce the order to one by introducing additional state variables.
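As a self-contained illustration of differentiating through Eq. (1), the sketch below fits the parameters and initial state of a damped pendulum to an observed angle trajectory using the `torchdiffeq` package that accompanies Chen et al. (2018). The ODE is a common damped-pendulum form; the paper's exact parametrization is given in its App. B.2 (not reproduced here), and in the full method the supervisory signal comes from the photometric loss of Sec. 3.3 rather than from directly observed angles, which are used here only to keep the example minimal.

```python
import torch
from torchdiffeq import odeint   # differentiable ODE solver (Chen et al., 2018)

class DampedPendulum(torch.nn.Module):
    """f(z, t; theta) for the state z = (angle, angular velocity)."""

    def __init__(self, length=0.5, damping=0.1):
        super().__init__()
        self.length = torch.nn.Parameter(torch.tensor(length))    # metres
        self.damping = torch.nn.Parameter(torch.tensor(damping))

    def forward(self, t, z):
        angle, vel = z[..., 0], z[..., 1]
        acc = -(9.81 / self.length) * torch.sin(angle) - self.damping * vel
        return torch.stack([vel, acc], dim=-1)

t = torch.linspace(0.0, 2.0, 20)
with torch.no_grad():                                    # synthetic "observed" angles
    target = odeint(DampedPendulum(0.27, 0.05), torch.tensor([0.4, 0.0]), t)[:, 0]

f = DampedPendulum()                                     # deliberately wrong initial guesses
z0 = torch.nn.Parameter(torch.tensor([0.3, 0.0]))        # unknown initial state
opt = torch.optim.Adam(list(f.parameters()) + [z0], lr=1e-2)
for _ in range(500):
    loss = ((odeint(f, z0, t)[:, 0] - target) ** 2).mean()   # Eq. (1), differentiable
    opt.zero_grad(); loss.backward(); opt.step()
print(float(f.length), float(f.damping))                 # should approach 0.27 and 0.05
```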
3.2 DIFFERENTIABLE RENDERING OF THE VIDEO FRAMES

To render the video frames, we draw inspiration from recent advances in neural implicit representations. We use a static representation to model the background, which we combine with an appearance and shape representation of the dynamic foreground objects. By composing the learned background with the dynamic foreground objects, whose poses are determined by the solution of the ODE encoding the physical phenomenon, we obtain a dynamic representation of the overall scene. This allows us to query color values on a pixel grid, so that we can render video frames in a differentiable manner. Fig. 2 shows an overview of the approach.

Representation of the background. The static background is modeled by a function $F(\cdot; \theta_{bg})$ that maps a 2D location $x$ to an appearance value $c \in \mathbb{R}^C$, where $C$ denotes the number of appearance channels (e.g., RGB colors). The function $F(\cdot; \theta_{bg})$ encodes the appearance of the background and is represented as a neural network with learnable parameters $\theta_{bg}$. To improve the ability of the neural network to learn high-frequency variations in appearance, we use Fourier features (Tancik et al., 2020), so that the input location $x \in \mathbb{R}^2$ is mapped to a higher-frequency vector $\gamma(x) \in \mathbb{R}^{4N_{Fourier}+2}$, where $N_{Fourier}$ is the number of frequencies used. The full representation of the background then reads $c_{bg}(x) = F(\gamma(x); \theta_{bg})$. For a more detailed discussion of the architecture, we refer to App. A.

Representation of dynamic objects. To compose the static background and the dynamically moving objects into the full scene, we draw inspiration from Ost et al. (2021), who use implicit representations for color and shape in a scene graph to decompose a dynamic scene into a background representation and dynamically moving local representations. A drawback of their work is that they do not use a physical model to constrain the dynamics, and therefore strong supervisory signals, such as trajectories and the dimensions of bounding boxes, are essential. In our case, each dynamic object is represented by a local neural implicit representation, which is then placed in the overall scene according to the time-dependent spatial transformation $T_t = T(z(t; z_0, \theta_{ode}), \theta_+)$. This transformation is parameterized by the unknown initial condition $z_0$, the physical parameters $\theta_{ode}$ of the ODE, and possibly additional parameters $\theta_+$. These parameters determine the transformation from the global coordinate system of the background to the local coordinate system. Similarly to the background, the appearance of each individual dynamic object is modelled by an implicit neural representation (in the local coordinate system). In contrast to the background, we augment the color output $c \in \mathbb{R}^C$ of the dynamic object representation with an additional opacity value $o \in [0, 1]$, which allows us to model objects of arbitrary shape. We write the representation of a dynamic object in the global coordinate system as $(c_{obj}(x), o(x)) = G(\gamma(x'); \theta_{obj})$, where $G(\cdot; \theta_{obj})$ is represented as a neural network with weights $\theta_{obj}$, $\gamma$ denotes the mapping to Fourier features, and $x' = T_t(x)$ is the local-coordinate representation of the (global) 2D location $x$.

Differentiable rendering. For rendering, we evaluate the composed scene appearance on a regular pixel grid, using the opacity value of the local object representation to blend the colors of the background and the dynamic objects. To obtain the final color, for all positions $x$ of the pixel grid we evaluate
$$c(x, t) = (1 - o(x))\, c_{bg}(x) + o(x)\, c_{obj}(x). \tag{2}$$
Note that, due to the time dependence of the transformation $T_t$, the color value at pixel $x$ is also time dependent, which allows us to render the frames of the sequence over time.

3.3 LOSS FUNCTION

We jointly optimize the parameters of the neural implicit representations, $\theta_{bg}$ and $\theta_{obj}$, and estimate the physical parameters $\theta_{ode}$, $z_0$, and $\theta_+$ of the dynamics and the transformation. To this end, we use a simple photometric loss defined over all pixel values,
$$L = \frac{1}{|I|\,|T|} \sum_{t \in T} \sum_{x \in I} d(I(x, t), c(x, t)), \tag{3}$$
where $d$ computes the discrepancy between its two inputs, $T$ is the set of all given time steps, $I$ is the set of all pixel coordinates at the current resolution (see next section), and $I(x, t)$ are the given images. To capture information on multiple scales, we employ an image pyramid scheme; more details can be found in App. C.
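The following is a minimal sketch of the rendering path of Eqs. (2)-(3). It assumes `background` is an MLP returning C colors, `obj` an MLP returning colors plus an opacity in [0, 1], and `transform` the map T_t into the object's local frame; all three names are placeholders, and the octave frequencies follow the common Fourier-feature recipe of Tancik et al. (2020) rather than values stated in the paper.

```python
import torch

def fourier_features(x, n_freq=8):
    """gamma(x): lift 2D locations with sin/cos at octave frequencies;
    the output dimension is 4 * n_freq + 2, as stated in Sec. 3.2."""
    freqs = (2.0 ** torch.arange(n_freq)) * torch.pi            # (n_freq,)
    proj = x[..., None, :] * freqs[:, None]                     # (..., n_freq, 2)
    return torch.cat([x, torch.sin(proj).flatten(-2), torch.cos(proj).flatten(-2)], dim=-1)

def render(x, t, background, obj, transform):
    """Eq. (2): alpha-blend the static background with the dynamic object,
    whose pose follows the ODE solution through `transform` (i.e. T_t)."""
    c_bg = background(fourier_features(x))                      # (..., C)
    c_obj, o = obj(fourier_features(transform(x, t)))           # (..., C), (..., 1)
    return (1.0 - o) * c_bg + o * c_obj

def photometric_loss(frames, times, grid, background, obj, transform):
    """Eq. (3): mean per-pixel discrepancy over all frames, with squared error as d."""
    losses = [((frames[i] - render(grid, t, background, obj, transform)) ** 2).mean()
              for i, t in enumerate(times)]
    return torch.stack(losses).mean()
```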
4 EXPERIMENTS

We use two challenging physical models to experimentally evaluate our proposed approach. To analyze our method and compare it to previous work, we first consider synthetically created data; afterwards, we show that our method achieves promising results on real-world data as well. For details about the ODEs describing the dynamics, additional implementation details, an ablation study, and additional results, we refer the reader to the Appendix. Although several learning-based approaches that infer physical models from image data have been proposed (de Avila Belbute-Peres et al., 2018; Jaques et al., 2020; Kandukuri et al., 2020; Zhong & Leonard, 2020; Toth et al., 2020), existing approaches are tailored towards settings with large training corpora and typically suffer from decreasing estimation accuracy in scarce-training-data regimes, or when out-of-distribution generalization is required (cf. App. F). In contrast, our proposed approach is able to predict physical parameters from a single short video clip. Due to the lack of existing baselines tailored towards estimation from a single video, we adapt the recent works of Jaques et al. (2020) and Zhong & Leonard (2020) as baseline methods.

4.1 TWO MASSES SPRING SYSTEM

We consider the example of two moving MNIST digits connected by an (invisible) spring on a CIFAR background, in a similar spirit to Jaques et al. (2020); see Fig. 3. Besides the initial positions and velocities, the spring constant $k$ and the equilibrium distance $l$ of the connecting spring need to be identified for the dynamics model. For a more detailed description of the model see App. B.1; a generic sketch of such spring dynamics is given at the end of this subsection.

The approach of Jaques et al. (2020) uses a learnable encoder and velocity estimator to obtain the positions and initial velocities of a known number of objects from the video frames. After integrating the known parametric model, they use a learnable coordinate-consistent decoder in combination with learned object masks and colors to render frames from the integrated trajectories. Using a photometric loss, they require 5000 sequences of different runs of the same two-masses spring system to train the model and identify the parameters. In order to compare their method to ours in the setting of parameter estimation from a single video, in addition to their model trained on the full dataset ('B: Full'), we also consider their model trained on individual sequences of the test dataset ('B: Overfit'). We fit our model to sequences from the test dataset, using two local representations and parametrizing the spatial transformation as shown in App. B.1. By using the maximum of both masks as the foreground mask, we enable the model to identify the object layering. We find that for training it is necessary to gradually build up the sequence of frames: we start with only two frames and add the respective next frame after 60 epochs. Also, the model appears to have a scale freedom in terms of the equilibrium length and the points where the spring is attached to the digits.² We therefore add an additional loss that keeps the spring attachments close to the centers of the bounding boxes of the digits in the first frame. We observe similar effects when overfitting the model of Jaques et al. (2020) to a single sequence; when training on the full dataset, the effect appears to be averaged out and is not observed.

Fig. 3 shows a qualitative comparison of our results to the baseline of Jaques et al. (2020), trained in the two settings explained above. We observe that for this sequence all approaches yield reasonable reconstructions of the training frames. For prediction, however, the overfitted model of Jaques et al. (2020) performs significantly worse, indicating that the physical model is poorly identified from a single video. The baseline trained on the full dataset yields results slightly worse than ours, and in both cases the parameters are identified correctly. The fact that we achieve comparable results while using significantly less data highlights the advantage of combining the explicit dynamics model with the implicit representation of the objects. Note that we chose sequence 6 since it yielded the best results for the baseline; more results can be found in App. E.1.

² Intuitively, if the motion is only in one direction, we can vary the equilibrium length and adjust the spring attachments without changing the observed motion. Similar effects are present in 2D motion.
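For reference, the sketch below gives a common first-order form of two-masses-spring dynamics f(z, t; θ) that could be plugged into the fitting loop shown after Sec. 3.1; unit masses and the default values of k and l are assumptions for illustration, and the paper's exact model is in its App. B.1, which is not reproduced here.

```python
import torch

def spring_dynamics(t, z, k=2.0, l=1.0):
    """State z = (p1, v1, p2, v2), each a 2D vector, flattened to shape (8,).

    A Hooke spring with constant k and equilibrium length l connects two
    unit masses; the force acts along the line between the two masses.
    """
    p1, v1, p2, v2 = z[0:2], z[2:4], z[4:6], z[6:8]
    d = p2 - p1
    dist = d.norm().clamp_min(1e-8)                 # avoid division by zero
    force = k * (dist - l) * d / dist               # force on mass 1 (toward mass 2 if stretched)
    return torch.cat([v1, force, v2, -force])       # (dp1, dv1, dp2, dv2)
```

With k and l wrapped in `torch.nn.Parameter`s, the same gradient-based estimation loop as in the pendulum sketch applies unchanged.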
The method by Zhong & Leonard (2020) uses a coordinate-aware encoder to obtain the distribution of the initial state from object masks. After sampling, the initial state is integrated using a learnable Lagrangian function parametrizing the dynamics of the system, and a coordinate-aware decoder is used to render frames from the trajectories. We train the model using only the first N frames of a single sequence as training data (with no external control input), effectively overfitting the model to each sequence.

[Footnote 2] Intuitively, if the motion is only in one direction, we can vary the equilibrium length and adjust the spring attachments without changing the observed motion. Similar effects are present in 2D motion.

Similar to the baseline, we assume no damping and a known pivot point A in the middle of the frame to train our model. Since this dataset does not include color image data, we only use the loss on the object mask, and train our model in this modified setup using the same frames as for the baseline. To evaluate the performance of each method in identifying the underlying dynamics, we compare the prediction of the unseen frames of the same sequence. Qualitative results are presented in Fig. 4. We observe that both methods fit the given training data very well; however, in the baseline the pendulum motion significantly slows down for unseen time steps, and thus it is unable to obtain accurate predictions for unseen data. We emphasize that this happens because the method requires significantly larger training datasets, and it therefore performs poorly in the single-video setting considered in this paper. In contrast, our method shows a significantly better performance, which highlights the strength of directly modelling physical phenomena to constrain the learnable dynamics in an analysis-by-synthesis manner. Due to aliasing effects that arise from the low resolution of the frames, our method does not give perfect predictions; however, if we use high-resolution images, we achieve nearly perfect reconstruction, as we show in Fig. 5 and Fig. 6. For a quantitative comparison and further experimental details see App. E.2.

High resolution videos. In contrast to the baseline, our approach is able to handle high-resolution videos with complex background and pendulum shapes and textures. In this case, our approach accurately estimates the parameters of the full pendulum model, as we show in Fig. 5. For this experiment we created several videos by simulating a pendulum with known parameters and then rendering the pendulum on top of an image. Qualitative results of fitting our model to the lake scene can be seen in Fig. 5. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. The renderings of other scenes are shown in App. E. As we show in Fig. 6 and Table 1, it is necessary that the frames in the training set cover a sufficient portion of the motion to enable a correct estimation of the physical parameters.

4.3 REAL PENDULUM VIDEO

We now show that our approach is even able to infer physical parameters from real-world data. We recorded the pendulum motion shown in Fig. 1. The pendulum is mounted almost frictionlessly, and due to its high weight we do not expect large air drag effects either. The video was recorded with a smartphone, which leads to noticeable real-world noise such as motion blur; however, the proposed method still manages to produce convincing results.
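Both the synthetic and real pendulum experiments rest on integrating the damped-pendulum ODE of App. B.2 in a way that is differentiable with respect to its parameters (Chen et al., 2018). The sketch below uses the torchdiffeq package and a log-parametrization for positivity; both are our own implementation choices, not necessarily the authors'.

import torch
from torchdiffeq import odeint  # differentiable ODE solvers (Chen et al., 2018)

class DampedPendulum(torch.nn.Module):
    # Right-hand side of Eq. (6) with state z = (phi, omega).
    def __init__(self, g=9.81):
        super().__init__()
        self.g = g
        self.log_l = torch.nn.Parameter(torch.zeros(()))  # length l > 0 via exp
        self.log_c = torch.nn.Parameter(torch.zeros(()))  # damping c > 0 via exp

    def forward(self, t, z):
        phi, omega = z[..., 0], z[..., 1]
        l, c = self.log_l.exp(), self.log_c.exp()
        dphi = omega
        domega = -(self.g / l) * torch.sin(phi) - c * omega * omega.abs()
        return torch.stack([dphi, domega], dim=-1)

ode = DampedPendulum()
z0 = torch.nn.Parameter(torch.tensor([0.5, 0.0]))  # learnable initial angle and velocity
t = torch.linspace(0.0, 2.0, 20)                   # time stamps of the 20 frames
phi = odeint(ode, z0, t)[:, 0]                     # deflection angles, differentiable w.r.t. l, c, z0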
The pseudo ground-truth segmentation masks are generated semi-manually using GrabCut (Rother et al., 2004) and exhibit significant noise that is also handled well by the proposed model.

[Figure: predicted deflection angle and trajectory error.]

We extract every third frame from the video such that there are 10 extracted frames per second, and use the first 10 frames for training. We use the full damped pendulum model to estimate the physical parameters of the pendulum motion. The damping is estimated as c = 4.7 · 10^-13, which matches our expectation for this low-friction setting. For the pendulum length we note from Eq. (6) that the estimated length l = 27.7 cm is a real-world quantity without scale ambiguity. Therefore, we can compare it to l_measured = 27.1 cm, which we obtained by measuring the length from the pivot point to the estimated center of gravity of the pendulum using a ruler. We would like to emphasize that this close correspondence shows that we are able to estimate metric scale from a pendulum motion in a monocular video.

5 CONCLUSION

In this work we presented a solution for learning a physical model from an image sequence that depicts a physical phenomenon. To this end, we proposed to combine neural implicit representations and neural ordinary differential equations in an analysis-by-synthesis fashion. Unlike existing learning-based approaches that require large training corpora, a single short video clip is sufficient for our approach. In contrast to prior works that use encoder-decoder architectures specifically tailored to 2D images, we built upon neural implicit representations, which have been shown to give impressive results for 3D scene reconstruction. Therefore, the extension of the proposed method to 3D is a promising direction for future work. We present diverse experiments in which the ODE parametrizes a rigid-body transformation between the background and the foreground objects, such as the pendulum motion. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamics.

The focus of this work is on learning a physical model of a phenomenon from a short video. Yet, the high fidelity of our model's renderings, together with the easy modifiability of the physical parameters, enables various computer graphics applications such as the artistic re-rendering of scenes, which we briefly demonstrate in the supplementary video. Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images. We believe that our work may serve as inspiration for follow-up works on physics-based machine learning using neural implicit representations.

Ethics statement. This work attempts to learn interpretable physical models from video clips of physical phenomena. Our contribution is largely theoretical, and we show experiments on synthetic data and limited real-world data. Nevertheless, as machine learning models achieve more human-like understanding of the real, physical world, it is paramount to ensure that they are deployed safely and according to strict ethical guidelines. While we think that the current state of our work will not disadvantage or advantage specific groups of people, we recommend a careful ethical evaluation of derivative works that aim to close the gap to human physical reasoning.
A potential positive impact of this work is that it can be beneficial to people with lower financial resources, as it overcomes the need for expensive experimental gear to infer physical parameters, e.g. in the context of physics education.

Reproducibility statement. To ensure the reproducibility of this work we give architecture and training details in App. A and C. Furthermore, we will release our code upon acceptance, so that all experiments and figures shown in this paper can be reproduced.

A MODEL ARCHITECTURE

We adopt the architecture used in Mildenhall et al. (2020) for the implicit representations; see Fig. 7 for the basic structure. For the Fourier features we use a logarithmic scaling. The i-th of the N_Fourier Fourier features is obtained as

γ_i(x) = (sin(2^i x), cos(2^i x)),  i = 0, ..., N_Fourier − 1, (4)

where sin(2^i x) for x ∈ R^2 means the element-wise application of the sine function. We also include the original x in the encoding γ(x).

B MODELS FOR THE DYNAMICS

B.1 TWO MASSES SPRING SYSTEM

The system is modeled as a two-body system where the dynamics of each object are described by Newton's second law of motion, i.e. F = m ẍ, where F is the force. Since only the ratio between force and mass can be identified without additional measurement, we fix m = 1, analogously to the work of Jaques et al. (2020). Using Hooke's law, we write the force applied to object i by object j as

F_{i,j} = -k \left( (p_i - p_j) - 2l \, \frac{p_i - p_j}{\|p_i - p_j\|} \right). (5)

Using the positions p_i(t; k, l) of the objects to parametrize the trajectories of the local coordinate systems, we can write the time-dependent 2D spatial transformation to the local coordinate system i as T_t^{(i)}(x) = x − p_i(t; k, l), where l and k are learnable parameters.

B.2 NONLINEAR DAMPED PENDULUM

A pendulum that is damped by air drag can be modelled as

\frac{d}{dt} \begin{bmatrix} \varphi \\ \omega \end{bmatrix} = \begin{bmatrix} \omega \\ -\frac{g}{l} \sin(\varphi) - c \, \omega \, |\omega| \end{bmatrix}, (6)

where ϕ ∈ R is the deflection angle, ω ∈ R is the angular velocity, g is the (known) gravitational acceleration, l > 0 is the (physical) length of the pendulum, and c > 0 is the damping constant. We use the solution curve ϕ(t; l, c) to parameterize the time-dependent 2D spatial transformation as T_t(x) = R(ϕ(t; l, c)) x + A, where R ∈ SO(2) is a rotation matrix and A ∈ R^2 is the pivot point of the pendulum. For the full model, the parameters l and c are learnable. For the sake of simplicity we assume that the gravitational acceleration g always points downwards in the global image coordinate system.

C TRAINING DETAILS

In the following we provide additional training details.

C.1 DISCREPANCY MEASURE FOR THE LOSS TERM

Unless stated otherwise, for our experiments we use C = 4 image channels, where the first three channels correspond to the RGB channels and the last channel represents a mask of the foreground object. For the real-world data, we obtained the object masks using a semi-manual approach as described in Sec. 4.3. For the experiments on the synthetically created high-resolution videos, we directly used the masks constructed during video creation. For the first three channels we define the discrepancy measure in terms of the mean square error as

d_rgb(x, y) = ‖x − y‖^2, (7)

and for the mask in the last channel we consider the binary cross entropy loss, i.e.

d_seg(x, y) = −[x log(y) + (1 − x) log(1 − y)]. (8)

With that, the overall discrepancy measure is given as

d(x, y) = d_rgb(x_{1:3}, y_{1:3}) + λ_seg d_seg(x_4, y_4). (9)
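A compact sketch of this four-channel discrepancy, written with PyTorch's built-in binary cross entropy (an implementation choice on our part, chosen for numerical stability):

import torch
import torch.nn.functional as F

def discrepancy(x, y, lambda_seg):
    # x: ground-truth pixels, y: rendered pixels, both of shape (N, 4);
    # channels 0-2 are RGB, channel 3 is the object mask (Eqs. (7)-(9)).
    d_rgb = ((x[:, :3] - y[:, :3]) ** 2).sum(dim=-1)                     # Eq. (7)
    d_seg = F.binary_cross_entropy(y[:, 3], x[:, 3], reduction="none")   # Eq. (8)
    return d_rgb + lambda_seg * d_seg                                    # Eq. (9)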
C.2 OPTIMIZATION

We train our model using the Adam optimizer (Kingma & Ba, 2015) with exponential learning rate decay, which reads

r(e) = r_0 · β^(e / n_decay), (10)

where r(e) is the learning rate depending on the epoch e, r_0 is the initial learning rate, β is the decay rate, and n_decay is the decay step size. One important aspect of the training is to use different learning rates for the parameters θ_bg and θ_obj of the implicit representations on the one hand, and the physical parameters θ_ode, z_0 and θ_+ on the other hand.

In order to estimate the initial parameters of the ODE and the transformation for the pendulum, we employ a heuristic that uses the information contained in the mask. To obtain an initial estimate for the pivot point A, we average all masks and use the pixel with the highest value. To obtain an estimate for the initial angle, we perform a principal component analysis (PCA) on the pixel locations covered by the mask and use the angle between the first component and the vertical direction. The velocity is always initialized as 0. We initialize the damping as c = 1, and the pendulum length as l = 2 m for the synthetic experiments and l = 0.4 m for the real-world experiment.

C.3 IMAGE PYRAMID

To capture information on multiple scales we employ an image pyramid scheme. Due to memory limitations, for large images we cannot evaluate all pixel values in one batch, and thus the classical approach that considers all stages of the image pyramid at once is not feasible in our setting. Therefore, during training, we sequentially traverse the image pyramid from the low-resolution levels towards the original high-resolution level. The idea is that the low-resolution stages reveal global information about the movement of the object, whereas the later high-resolution stages allow the use of finer details that improve the coarse estimates from the previous stages. To this end, we use a binomial kernel of size 5×5 with stride two, which we repeatedly apply N_pyr times to reduce the original resolution of the image. We start the training using the coarsest level, and then switch to the next finer level every n_pyr steps.

D ABLATION STUDY

To motivate the chosen loss functions, we report the results for the parameter estimation with different loss function configurations in Table 1. Beyond the influence on the quality of the parameter estimation, another motivation to use the color loss is that it enables learning the representation of the appearance of the background and the object in the implicit representation. This allows for photo-realistic rendering of unseen predictions, as well as the re-rendering of scenes with modified physical parameters, effectively allowing physical scene editing. For the mask loss, on the other hand, we have found that it makes the estimation process more robust to suboptimal initializations of the physical parameters.

E FURTHER EXPERIMENTAL DETAILS AND RESULTS

In the following we consider specific details for the different experiments.

E.1 TWO MASSES SPRING SYSTEM

Experimental details. We use both loss terms and set λ_seg = 0.01 to balance them. Additionally, we use an MSE loss to keep the center of the bounding boxes of the digits close to the origin of the local representations in the first frame. This fixes the scale problem related to the equilibrium length described in the main text. Moreover, we use another MSE loss term to keep the opacity value close to zero outside of (but close to) the visible area. We found this to be necessary, since otherwise artefacts might appear in the extrapolation when previously unseen parts of the mask appear in the visible area.
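The two learning-rate groups with exponential decay from App. C.2 map directly onto PyTorch parameter groups. A minimal sketch with our own naming; the Linear modules are mere stand-ins for the implicit MLPs, and the learning rates and decay constants are taken from the E.1 hyperparameters below.

import torch

bg_net = torch.nn.Linear(26, 4)    # stand-in: background MLP, 4*6+2 Fourier inputs
obj_net = torch.nn.Linear(34, 5)   # stand-in: object MLP, 4*8+2 inputs, color + opacity
z0 = torch.nn.Parameter(torch.zeros(4))            # initial positions/velocities
k = torch.nn.Parameter(torch.ones(()))             # spring constant
l = torch.nn.Parameter(torch.ones(()))             # equilibrium distance

# One parameter group for the implicit networks, one for the physical parameters.
optimizer = torch.optim.Adam([
    {"params": list(bg_net.parameters()) + list(obj_net.parameters()), "lr": 1e-3},
    {"params": [z0, k, l], "lr": 1e-2},            # physical parameters, larger rate
])

# Eq. (10): r(e) = r0 * beta ** (e / n_decay), with one schedule per group.
def decay(beta, n_decay):
    return lambda e: beta ** (e / n_decay)

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=[decay(0.99954, 50), decay(0.95, 100)])
# Call scheduler.step() once per epoch after the optimizer updates.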
For the background we use an implicit representation with N_Fourier = 6 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer number N_skip = 4. For the local object representation we use N_Fourier = 8 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer number N_skip = 4. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.01 for the physical parameters. We set β_MLP = 0.99954, n_decay,MLP = 50, β_param = 0.95 and n_decay,param = 100. For the image pyramid we use N_pyr = 1 stage and step up the pyramid every n_pyr = 200 epochs. We train for 1500 epochs, where one epoch is completed when all pixels at the current resolution have been considered.

Additional results. In Fig. 8 and Fig. 9 we present additional results for sequence 0 and sequence 1 of the test dataset. We see that for both sequences the overfitted baseline is not able to produce a reasonable extrapolation of the data, and it even produces artifacts in the reconstruction part of the sequence. One reason for this is that the model is unable to identify the physical parameters correctly, as can be seen from the large relative errors. Our model, on the other hand, is able to estimate the parameters with a high accuracy that is even slightly better than the baseline trained on the full training dataset, which again shows the strength of our approach, considering that we use a single video as input.

E.2 COMPARISON WITH THE LAGRANGIAN VARIATIONAL AUTOENCODER

Experimental details. The data used in this experiment does not include color image data; therefore we do not use d_rgb and set λ_seg = 1. Since the predicted masks are obtained only from the local representation, we do not use an implicit representation for the background in this example. For the local representation we use N_Fourier = 4 Fourier features, N_FC = 6 fully connected layers of width W_FC = 64, and an input skip to layer number N_skip = 3. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.01 for the physical parameters. We set β_MLP = 0.9954, n_decay,MLP = 10, β_param = 0.995 and n_decay,param = 50. For the image pyramid we use N_pyr = 2 stages and step up the pyramid every n_pyr = 75 epochs. We train for 1500 epochs, where one epoch is completed when all pixels at the current resolution have been considered.

Quantitative comparison. To quantitatively compare the temporal prediction ability of our approach with the baseline, we follow the procedure by Zhong & Leonard (2020) and report the average mean squared error (MSE) between the predicted and the ground-truth masks for the frames of the full sequence, which we denote as pixel MSE for consistency with the previous work. The results for randomly chosen sequences of the dataset are presented in Fig. 10 (temporal prediction ability). We can observe that the predictive power of both methods is limited when only a few frames are available to infer the underlying dynamics. However, with an increasing number of frames, our method becomes able to reconstruct the physics more consistently, while the baseline does not noticeably benefit from more training frames. We believe that this is because the baseline method overfits to the given frames, whereas our method infers actual physical parameters.
E.3 EXPERIMENTS WITH HIGH RESOLUTION VIDEOS AND THE REAL WORLD VIDEO

Experimental details. We use the same architecture for the high-resolution synthetic and the real-world video sequences. We use both loss terms and set λ_seg = 0.03 to balance them. For the background we use an implicit representation with N_Fourier = 10 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer number N_skip = 4. For the local object representation we use N_Fourier = 8 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer number N_skip = 4. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.05 for the physical parameters. We set β_MLP = 0.99954, n_decay,MLP = 10, β_param = 0.95 and n_decay,param = 100. For the image pyramid we use N_pyr = 5 stages and step up the pyramid every n_pyr = 200 epochs. We train for 1500 epochs, where one epoch is completed when all pixels at the current resolution have been considered.

Additional results. In the following we present additional rendering results. Fig. 11 and Fig. 12 show additional reconstruction and prediction results for additional synthetic high-resolution scenes. To create the synthetic scenes we took the background images from https://pixabay.com/photos/lake-mountains-nature-outdoors-6627781/ (Lake), https://pixabay.com/photos/city-street-architecture-business-4667143/ (City) and https://pixabay.com/photos/apples-fruits-ripe-red-apples-6073599/ (Apple). Fig. 13 allows for a more detailed comparison of the results for the real pendulum video. We show the images and masks used for training on the real pendulum video in Fig. 14. Please also see our supplementary video for additional results on this data.

F GENERALIZATION OF THE LAGRANGIAN VARIATIONAL AUTOENCODER

One drawback of learning-based approaches for visual estimation of physical models is the poor generalization to data that deviates from the training data distribution. We confirm this for the fully (pre-)trained model of Zhong & Leonard (2020). While the pixel MSE averaged over the full test set is 1.83 · 10^-3, the error increases to 1.22 · 10^-2 when we shift the frames of the test dataset by merely 1 pixel in each direction. This corresponds to the case of input videos where the pivot point of the pendulum is not in the center of the image, which is different from the training data. This effect is visualized in Fig. 15, which shows the output of the model for sequence 2 of the test dataset with zero control input, both in the original version and in the shifted version. We observe that the small shift of only one pixel in each direction leads to results that are significantly off, and not even the first frame is predicted correctly. While Zhong & Leonard (2020) propose to use a coordinate-aware encoder based on spatial transformers, this introduces additional complexity to the model. In contrast, our approach does not suffer from such issues.
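The shift experiment of App. F can be expressed in a few lines; the probe below is our own minimal formulation (torch.roll wraps around instead of padding, which is negligible for a 1-pixel displacement, and `predict` stands for the model's sequence prediction).

import torch

def shifted_pixel_mse(predict, frames, dx=1, dy=1):
    # Shift every frame of a test sequence by (dx, dy) pixels, emulating a
    # pendulum pivot that is off-center w.r.t. the training data, and compare
    # the model's predicted masks against the shifted ground truth.
    shifted = torch.roll(frames, shifts=(dy, dx), dims=(-2, -1))  # (T, H, W)
    return ((predict(shifted) - shifted) ** 2).mean()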
1. What is the focus of the paper regarding video prediction and physical parameters?
2. What are the strengths of the proposed approach, particularly in its novelty and innovation?
3. What are the weaknesses of the paper, especially regarding its assumptions and experimental limitations?
4. Do you have concerns regarding the applicability of the method in real-world scenarios?
5. Are there any questions or suggestions you have for improving the paper's content or research direction?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a model for learning physical parameters from video. The model takes in a video with a foreground object and a background, and predicts the motion of the object in the video. The experiments are done on a set of videos of damped pendulums.

Review
Strength:
- The proposed method deals with video prediction in a physics-aware fashion by incorporating implicit physics models, which seems novel and innovative.
- The direction that this paper is heading towards is interesting.
- The presentation of the paper is clear and easy to follow.

Weakness:
- The whole framework is built upon the assumption that the video can be (near) perfectly decomposed into foreground objects and background, which is a toy assumption that does not hold for complex real video data.
- This paper assumes that the underlying physical dynamics (in this case, a pendulum) are known, which is an unreasonable assumption. Any other dynamics present in the video cannot be modeled.
- The experiments are very weak: 1) only one physical dynamics model, the pendulum, is shown; 2) for the pendulum, only one real video is evaluated; 3) the other experiments are done on synthetically generated data, which is also very weak. What if there is more than one pendulum in the video? What about viewing the pendulum from another viewpoint, such that the motion pattern is not a perfect "swing"? These cases are not shown in the paper at all.
ICLR
Title: Neural Implicit Representations for Physical Parameter Inference from a Single Video

Abstract
Neural networks have recently been used to model the dynamics of diverse physical systems. While existing methods achieve impressive results, they are limited by their strong demand for training data and their weak generalization abilities. To overcome these limitations, in this work we propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) in order to obtain interpretable physical models directly from visual observations. Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video. (ii) The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic imagery. (iii) The embedded neural ODE has a known parametric form that allows for the identification of interpretable physical parameters, and (iv) long-term prediction in state space. (v) Furthermore, the photo-realistic rendering of novel scenes with modified physical parameters becomes possible.

Figure 1: Our method infers physical parameters directly from real-world videos, like the shown pendulum motion. Separated by the red line, the right half of each image shows the input frame, and the left half shows our reconstruction based on physical parameters that we estimate from the input. We show 6 out of 10 frames that were used for training. The proposed model can precisely recover the metric length of the pendulum from the monocular video (relative error to the true length is less than 2.5%). Best viewed on screen with magnification. Please also consider the supplementary video.

1 INTRODUCTION

The physics of many real-world phenomena can be described concisely and accurately using differential equations. However, such equations are usually formulated in terms of highly abstracted quantities that are typically not directly observable using commodity sensors, such as cameras. For example, a pendulum is physically described by the deflection angle, the angular velocity, the damping coefficient, and the pendulum's length, but automatically extracting those physical parameters directly from video data is challenging. Thus, due to the complex relationship between the physical process and images of respective scenes, measuring such quantities often necessitates a trained expert operating customised measuring equipment.

While for many physical phenomena humans are able to infer (a rough estimation of) physical quantities from a given video, physical understanding from videos is an open problem in machine learning. Recently, the combination of deep learning and physics has become popular, particularly in the context of video prediction. While earlier works (Lutter et al., 2019; Greydanus et al., 2019; Cranmer et al., 2020; Zhong et al., 2020) require coordinate data, i.e. already abstracted physical quantities, more recent works directly use image data (Levine et al., 2020; Zhong & Leonard, 2020). A major downside is that all these approaches rely on massive amounts of training data, and, as we experimentally confirm in App. F, they exhibit poor generalization abilities.
In contrast, in our work we address this shortcoming by proposing a solution that extracts semantic physical parameters directly from a single video, see Figure 1. Therefore, we alleviate the need for large data and furthermore facilitate interpretation due to the semantics of the inferred parameters in respective physical equations. Additionally, the six previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations, which elegantly guarantee the conservation of energy, but can therefore not easily model dissipative systems that are much more common in the real world (Galley, 2013). The proposed model effectively transforms the camera into a physical measuring device with which we can observe quantities such as the length or the damping coefficient of a pendulum.

To achieve the learning of physical models from a single video, we propose to utilise physics-based neural implicit representations in an analysis-by-synthesis manner, where the latter relies on neural ordinary differential equations for representing abstract physics of visual scenes. Overall, we summarize our main contributions as follows:

1. We present the first method that is able to identify physical parameters from a single video using neural implicit representations.
2. Our approach infers parameters of an underlying ODE-based physical model that directly allows for interpretability and long-term predictions.
3. The unique combination of powerful neural implicit representations with rich physical models allows for the synthesis of high-resolution and photo-realistic imagery. Moreover, it enables physical editing by rendering novel scenes with modified physical parameters.
4. Contrary to existing learning-based approaches that require large corpora of training data, we propose a per-scene model, so that only a single short video clip that depicts the physical phenomenon is necessary.

2 RELATED WORK

The combination of machine learning and physics has been addressed across an extremely broad range of topics. For example, machine learning was used to aid physics research (Bogojeski et al., 2020; Leclerc et al., 2020), or physics was used within machine learning models, such as for automatic question answering from videos (Chen et al., 2021; Bear et al., 2021). In this work we focus specifically on extracting physical models from single videos, so in the following we discuss the related works that we consider most relevant in this context.

Physics in the context of learning. While neural networks have led to many remarkable results across diverse domains, the inference of physical principles, such as energy conservation, is still a major challenge and requires additional constraints. A general way to endow models with a physics-based prior is to use generalized energy functions. For example, Greydanus et al. (2019) and Toth et al. (2020) use a neural network to parameterize the Hamiltonian of a system, which yields a relation between the energy of the system and the change of the state. Hence, they are able to infer the dynamics of systems with conserved energy, such as a pendulum or a multi-body system. One disadvantage of using the Hamiltonian is that canonical coordinates need to be used. To eliminate this constraint, other works use the Lagrangian to model the system's energy. Since this formalism is more complex,
Lutter et al. (2019) and Zhong & Leonard (2020) restrict the Lagrangian to the case of rigid-body dynamics to model systems with multiple degrees of freedom, such as a pole on a cart, or a robotic arm. Cranmer et al. (2020) use a neural network to parameterize a general Lagrangian, which they use to infer the dynamics of a relativistic particle in a uniform potential. While being able to model many relevant systems, the aforementioned energy-based approaches cannot easily be extended to dissipative systems that are much more common in the real world (Galley, 2013). Furthermore, they do not allow for a semantic interpretation of individual learned system parameters. PhyDNet, introduced by Guen & Thome (2020), learns dynamics in the form of a general PDE in a latent space, which, like the aforementioned works, prohibits interpretation of the learned physical model.

In contrast, in the context of incorporating physical phenomena into learning frameworks, there are also approaches that make the underlying dynamics explicit. For example, Jaques et al. (2020) unroll the Euler integration of the ordinary differential equation of bouncing balls, as well as balls connected by a spring, to identify physical parameters like the spring constant. Kandukuri et al. (2020) and de Avila Belbute-Peres et al. (2018) propose to use a linear complementarity problem to differentiably simulate rigid multi-body dynamics that can also handle object interaction and friction. For our method, we also rely on the advantages of modelling the underlying physics explicitly in order to obtain interpretable parameter estimates.

Inferring physical properties from video. While many approaches work with trajectories in state space, there are also some works that operate directly on videos. In this case, the information about physical quantities is substantially more abstract, so that uncovering non-linear dynamics from video data is a significantly more difficult problem. Traditionally, such inverse problems are often phrased in terms of optimization problems, for example for deformable physics inference (Weiss et al., 2020), among many more. While respective approaches can successfully estimate a wide range of relevant physical quantities from video data, they often require rich additional information, such as 3D information in the form of depth images in combination with a 3D template mesh (Weiss et al., 2020), which may limit their practical applicability.

More recently, several end-to-end learning approaches have been proposed. de Avila Belbute-Peres et al. (2018) use an encoder to extract the initial state of several objects from the combination of images, object masks and flow frames. After propagating the physical state over time, they decode the state back into images to allow for end-to-end training. Jaques et al. (2020) and Kandukuri et al. (2020) use an encoder network to extract object positions from object masks for individual frames. After estimating initial velocities from the positions, they integrate the state over time and use a carefully crafted coordinate-consistent decoder, which is based on spatial transformers, to obtain predicted images. Zhong & Leonard (2020) extend this idea to their variational autoencoder (VAE) architecture to obtain a coordinate-aware encoder, which they use to infer parameters of the latent distribution of generalized coordinates for each frame. Toth et al. (2020) use a VAE structure to predict the parameters of a posterior over the initial state from a sequence of videos.
All of these approaches require large amounts of data to train the complex encoder and decoder modules. In contrast, our approach does not rely on trainable encoder or decoder structures, but instead uses a non-trainable fixed neural ODE solver in combination with a trainable neural implicit representation, and is thus able to infer physical models from a single video.

Implicit representations. Recently, neural implicit representations have gained popularity due to their theoretical elegance and performance in novel view synthesis. The idea is to use a neural network to parametrize a function that maps a spatial location to a spatial feature, for example to represent geometric shapes using occupancy values (Mescheder et al., 2019; Chen & Zhang, 2019; Peng et al., 2020) or signed distance functions (Park et al., 2019; Gropp et al., 2020; Atzmon & Lipman, 2020). In the area of multiview 3D surface reconstruction as well as novel view synthesis, implicit geometry representations, such as density or signed distance, are combined with implicit color fields to represent shape and appearance (Sitzmann et al., 2019; Mildenhall et al., 2020; Yariv et al., 2020; Niemeyer et al., 2020; Azinovic et al., 2021). To model dynamic scenes, there have been several approaches that parametrize a displacement field and model the scene in a reference configuration (Niemeyer et al., 2019; Park et al., 2021; Pumarola et al., 2021). On the other hand, several approaches (Xian et al., 2021; Li et al., 2021; Du et al., 2021) include time as an input to the neural representation and regularize the network using constraints based on appearance, geometry, and pre-trained depth or flow networks; however, none of these methods uses physics-based constraints, e.g. by enforcing Newtonian motion. While the majority of works on implicit representations focuses on shape, Sitzmann et al. (2020) show the generality of implicit representations by representing images and audio signals. Our work contributes to the neural implicit representation literature by combining such representations with explicit physical models.

3 ESTIMATING PHYSICAL MODELS WITH NEURAL IMPLICIT REPRESENTATIONS

Our main goal is the estimation of physical parameters from a single video, where we specifically focus on the setting of a static background and dynamic objects that are moving according to some physical phenomenon. With that, we model the dynamics of the objects using an ordinary differential equation (ODE). Our objective is now to estimate the unknown physical parameters, as well as the initial conditions, of this ODE. Hence, we additionally learn a video generation model that is able to render a video that depicts objects which follow a specific physical model depending on respective physical parameters. For estimating these physical parameters directly from an input video, we utilise a photometric loss that imposes that the generated video is similar to the input video.

3.1 MODELING THE DYNAMICS

For most of the dynamics that can be observed in nature, the temporal evolution of the state can be described by an ODE. For example, for a pendulum the state variables are the angle of deflection and the angular velocity, and a two-dimensional first-order ODE can be used to describe the dynamics. In general, we write ż = f(z, t; θ) to describe the ODE,[1] where z ∈ R^n denotes the state variable, t ∈ R denotes time, and θ ∈ R^m are the unknown physical parameters.
Using the initial conditions z_0 ∈ R^n at the initial time t_0, we can write the solution of the ODE as

z(t; z_0, θ) = z_0 + \int_{t_0}^{t} f(z(τ), τ; θ) \, dτ. (1)

Note that the solution curve z(t; z_0, θ) ⊂ R^n depends both on the unknown initial conditions z_0 and on the unknown physical parameters θ. In practice, the solution to Eq. (1) is typically approximated by numeric integration. In our context of physical parameter estimation from videos, we build upon the recent work by Chen et al. (2018), who proposed an approach to compute gradients of the solution curve of an ODE with respect to its parameters. With that, it becomes possible to differentiate through the solution in Eq. (1), and therefore we can use gradient-based methods to estimate z_0 and θ.

3.2 DIFFERENTIABLE RENDERING OF THE VIDEO FRAMES

To render the video frames, we draw inspiration from the recent advances in neural implicit representations. To this end, we use a static representation to model the background, which we combine with an appearance and shape representation of dynamic foreground objects. By composing the learned background with the dynamic foreground objects, whose poses are determined by the solution of the ODE encoding the physical phenomenon, we obtain a dynamic representation of the overall scene. Doing so allows us to query the color values on a pixel grid, so that we are able to render video frames in a differentiable manner. Fig. 2 shows an overview of the approach.

Representation of background. The static background is modeled by a function F(·; θ_bg) that maps a 2D location x to an appearance value c ∈ R^C, where C denotes the number of appearance channels (e.g. RGB colors). The function F(·; θ_bg) encodes the appearance of the background and is represented as a neural network with learnable parameters θ_bg. To improve the ability of the neural network to learn high-frequency variations in appearance, we use Fourier features (Tancik et al., 2020), so that the input location x ∈ R^2 is mapped to a higher-frequency vector γ(x) ∈ R^{4 N_Fourier + 2}, where N_Fourier is the number of frequencies used. The full representation of the background then reads c_bg(x) = F(γ(x); θ_bg). For a more detailed discussion of the architecture, we refer to App. A.

Representation of dynamic objects. To compose the static background and the dynamically moving objects into the full scene, we draw inspiration from Ost et al. (2021), who use implicit representations to represent color and shape in a scene graph to decompose a dynamic scene into a background representation and dynamically moving local representations. A drawback of their work is that they do not use a physical model to constrain the dynamics, and therefore strong supervisory signals like the trajectories and the dimensions of the bounding boxes are essential. In our case, each dynamic object is represented in terms of a local neural implicit representation, which is then placed in the overall scene based on the time-dependent spatial transformation T_t = T(z(t; z_0, θ_ode), θ_+). This transformation is parameterized by the unknown initial condition z_0, the physical parameters θ_ode of the ODE, and possibly additional parameters θ_+. As such, these parameters determine the transformation from the global coordinate system of the background to the local coordinate system.

[Footnote 1] W.l.o.g. we only consider first-order ODEs here, since it is always possible to reduce the order to one by introducing additional state variables.
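As an illustration of the logarithmic Fourier mapping γ described above, here is a minimal sketch with our own function name; for 2D input it produces the stated 4 N_Fourier + 2 output dimensions.

import torch

def fourier_features(x, n_freq):
    # Logarithmic Fourier mapping: x has shape (N, 2); the result has shape
    # (N, 4 * n_freq + 2), since the original x is included in the encoding.
    feats = [x]
    for i in range(n_freq):
        feats += [torch.sin(2.0 ** i * x), torch.cos(2.0 ** i * x)]
    return torch.cat(feats, dim=-1)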
Input: Video Sequence Similarly as the background, the appearance of each individual dynamic object is modelled in terms of an implicit neural representation (in the local coordinate system). In contrast to the background, we augment the color output c ∈ RC of the dynamic object representation with an additional opacity value o ∈ [0, 1], which allows us to model objects with arbitrary shape. We write the representation of a dynamic object in the global coordinate system as (cobj (x) , o (x)) = G(γ (x′) ; θobj), where G(·; θobj) is represented as a neural network with weights θobj, γ denotes the mapping to Fourier features, and x′ = Tt(x) is the local coordinate representation of the (global) 2D location x. Differentiable rendering. For rendering we evaluate the composed scene appearance at a regular pixel grid, where we use the opacity value of the local object representation to blend the color of the background and the dynamic objects. To obtain the final color, for all positions x of the pixel grid we evaluate the equation c(x, t) = (1− o(x)) cbg(x) + o(x)cobj(x). (2) Note that due to the time dependence of the transformation Tt, the color value for pixel x is also time dependent, which allows us to render the frames of the sequence over time. 3.3 LOSS FUNCTION We jointly optimize for the parameters of the neural implicit representations θbg and θobj and estimate the physical parameters θode, z0 and θ+ of the dynamics and the transformation. To this end, we use a simple photometric loss defined over all the pixel values, which reads L = 1 |I| |T | ∑ t∈T ∑ x∈I d(I (x, t) , c (x, t)), (3) where d computes the discrepancy between its two inputs, T is the set of all given time steps, I is the set of all pixel coordinates at the current resolution (see next section) and I (x, t) are the given images. To capture information on multiple scales we employ an image pyramid scheme. More details can be found in App. C. 4 EXPERIMENTS We use two challenging physical models to experimentally evaluate our proposed approach. To analyze our method and to compare to previous work, we first consider synthetically created data. Afterwards, we show that our method achives promising results also on real-world data. For details about the ODEs describing the dynamics, additional implementation details, an ablation study, as well as additional results we refer the reader to the Appendix. Although several learning-based approaches that infer physical models from image data have been proposed (de Avila Belbute-Peres et al., 2018; Jaques et al., 2020; Kandukuri et al., 2020; Zhong & Leonard, 2020; Toth et al., 2020), existing approaches are particularly tailored towards settings with large training corpora. However, these methods typically suffer from a decreasing estimation accuracy in scarce training data regimes, or if out of distribution generalization is required (cf. App. F). In contrast, our proposed approach is able to predict physical parameters from a single short video clip. Due to the lack of existing baselines tailored towards estimation from a single video, we adapt the recent work of Jaques et al. (2020) and Zhong & Leonard (2020) to act as baseline methods. 4.1 TWO MASSES SPRING SYSTEM We consider the example of two moving MNIST digits connected by an (invisible) spring on a CIFAR background, in a similar spirit to Jaques et al. (2020), see Fig. 3. 
Besides the initial positions and velocities, the spring constant k and the equilibrium distance l of the connecting spring need to be identified for the dynamics model. For a more detailed description of the model see App. B.1. The approach of Jaques et al. (2020) uses a learnable encoder and velocity estimator to obtain positions and initial velocities of a known number of objects from the video frames. After integrating the known parametric model, they use a learnable coordinate-consistent decoder in combination with learned object masks and colors to render frames from the integrated trajectories. Using a photometric loss they require 5000 sequences of different runs of the same two masses spring system to train the model and identify the parameters. In order to compare their method to our work in the set- ting of parameter estimation from single video, in addition to their model trained on the full dataset (‘B: Full’), we also consider their model trained for an individual sequences of the test dataset (‘B: Overfit’). We fit our model to sequences from the test dataset, where we use two local representations and parametrize the spatial transformation as shown in App. B.1. By using the maximum of both masks as foreground mask, we enable the model to identify the object layering. We find that for training it is necessary to gradually build up the sequence of frames over training. We start with only two frames and add the respective next frame after 60 epochs. Also, the model appears to have a scale freedom in terms of the equilibrium length and the points where the spring is attached to the digits.2 We therefore add an additional loss to keep the spring attachment close to the center of the bounding box of the digits in the first frame. We observe similar effects when overfitting the model of Jaques et al. (2020) to a single sequence. When training on the full dataset, the effect seems to be averaged out and is not observed. Fig. 3 shows a qualitative comparison of our results to the baseline of Jaques et al. (2020), where the latter is trained in the two settings explained above. We observe, that for this sequence all approaches yield reasonable results for the reconstruction of the training frames. However, for prediction the overfitted model of Jaques et al. (2020) performs significantly worse, indicating that the physical model is poorly identified from a single video. The baseline trained on the full dataset yields results that are slightly worse than our results. We see that in both cases the parameters are identified correctly. The fact that we achieve comparable results while using significantly less data highlights the advantage of combining the explicit dynamics model with the implicit representation for the objects. Note that we chose sequence 6 since it yielded the best results for the baseline. More results can be found in app. E.1. 4.2 NONLINEAR DAMPED PENDULUM We use synthetically created videos of a nonlinear damped pendulum to compare our method to the previous work of Zhong & Leonard (2020) and also to show the ability of our approach to handle high resolution videos. The equations describing the pendulum dynamics can be found in App. B.2. Comparison to Lagrangian Variational Autoencoder. We use the dataset of Zhong & Leonard (2020) containing several sequences of a simple pendulum (each comprising 20 frames), which was created by the OpenAI Gym simulator (Brockman et al., 2016). 
The method by Zhong & Leonard (2020) uses a coordinate aware encoder to obtain the distribution of the initial state from object masks. After sampling, the initial state is integrated using a learnable Lagrangian function parametrizing the dynamics of the system and a coordinate aware decoder is used to render frames from the trajectories. We train the model using only the first N frames of a single sequence as the training data (with no external control input), effectively overfitting the model to each sequence. 2Intuitively, if the motion is only in one direction we can vary the equilibrium length and adjust the spring attachments without changing the observed motion. Similar effects are present in a 2D motion. Similar to the baseline, we assume no damping and a known pivot pointA in the middle of the frame to train our model. Since this dataset does not include image data, we only use the loss on the object mask, and train our model in this modified setup using the same frames as for the baseline. To evaluate the performance of each method in identifying the underlying dynamics, we compare the prediction of the unseen frames of the same sequence. Qualitative results are presented in Fig. 4. We can observe that both methods fit the given training data very well, however, in the baseline the pendulum motion significantly slows down for unseen time steps and thus it is unable to obtain accurate predictions for unseen data. We emphasize that this happens because the method requires significantly larger training datasets, so that it performs poorly in the single-video setting considered in this paper. In contrast, our method shows a significantly better performance, which highlights the strength of directly modelling physical phenomena to constrain the learnable dynamics in an analysis-by-synthesis manner. Due to aliasing effects that arise from the low resolution of the frames, our method does not give perfect predictions, however, if we use high-resolution images for our method we achieve nearly perfect reconstruction as we show in Fig. 5 and Fig. 6. For a quantitative comparison and further experimental details see App. E.2. High resolution videos. In contrast to the baseline, our approach is able to handle highresolution videos with complex background and pendulum shapes and textures. In this case, our approach accurately the parameters for the full pendulum model as we show in Fig. 5. For this experiment we created several videos by simulating a pendulum with known parameters and then rendering the pendulum on top of an image. Qualitative results of fitting our model to the lake scene can be seen in Fig. 5. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. The renderings of other scenes are shown in App. E. As we show in Fig. 6 and Table 1, it is necessary that the frames in the training set cover a sufficient portion of the motion to enable a correct estimation of the physical parameters. 4.3 REAL PENDULUM VIDEO We now show that our approach is even able to infer physical parameters from real world data. We recorded the pendulum motion shown in Fig. 1. The pendulum is mounted almost frictionless and due to its high weight we do not expect large air drag effects either. The video was recorded with a smartphone, which leads to noticeable real-world noise such as motion blur, however, the proposed method still manages to produce convincing results. 
The pseudo groundtruth segmentation masks are generated semi-manually by using GrabCut (Rother et al., 2004) and exhibit significant noise Predicted Deflection Angle Trajectory Error that is also handled well by the proposed model. We extract every third frame from the video s.t. there are 10 extracted frames per second and use the first 10 frames for training. We use the the full damped pendulum model to estimate the physical parameters of the pendulum motion. The damping is estimated as c = 4.7 · 10−13, which matches our expectation for this low friction setting. For the pendulum length we note from Eq. (6) that the estimated length l = 27.7 cm is a real world quantity without scale ambiguity. Therefore, we can compare it to lmeasured = 27.1 cm which we obtained by measuring the length from the pivot point to the estimated center of gravity of the pendulum using a ruler. We would like to emphazise, that the very good correspondence shows, that we are able to estimate scale in a monocular video from a pendulum motion. 5 CONCLUSION In this work we presented a solution for learning a physical model from an image sequence that depicts some physical phenomenon. To this end, we proposed to combine neural implicit representations and neural ordinary differential equations in an analysis-by-synthesis fashion. Unlike existing learning-based approaches that require large training corpora, a single short video clip is sufficient for our approach. In contrast to prior works that use encoder-decoder architectures specifically tailored to 2D images, we built upon neural implicit representations that have been shown to give impressive results for 3D scene reconstruction. Therefore, the extension of the proposed method to 3D is a promising direction for future work. We present diverse experiments in which the ODE parametrizes a rigid-body transformation between the background and the foreground objects, such as the pendulum motion. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamics. The focus of this work is on learning a physical model of a phenomenon from a short video. Yet, the high fidelity of our model’s renderings, together with the easy modifiability of the physical parameters, enables various computer graphics applications such as the artistic re-rendering of scenes, which we briefly demonstrate in the supplementary video. Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images. We believe that our work may serve as inspiration for follow-up works on physics-based machine learning using neural implicit representations. Ethics statement. This work attempts to learn interpretable physical models from video clips of physical phenomena. Our contribution is largely theoretical and we show experiments on synthetic data and limited real-world data. Nevertheless, as machine learning models achieve more humanlike understanding of the real, physical world, it is paramount to ensure that they are deployed safely and according to strict ethical guidelines. While we think that the current state of our work will not disadvantage or advantage specific groups of people, we recommend a careful ethical evaluation of derivative works that aim to close the gap to human physical reasoning. 
A potential positive impact of this work is that it can be beneficial to people with lower financial resources, as it overcomes the need for expensive experimental gear to infer physical parameters, e.g. in the context of physics education. Reproducibility statement. To ensure the reproducibility of this work we give architecture and training details in App. A and C. Furthermore, we will release our code upon acceptance, so that all experiments and figures shown in this paper can be reproduced. A MODEL ARCHITECTURE We adopt the architecture used in Mildenhall et al. (2020) for the implicit representations, see Fig. 7 for the basic structure. For the Fourier features we use a logarithmic scaling. The i-th of the NFourier Fourier features is obtained as γi(x) = (sin(2 ix), cos(2ix)) i = 0, . . . NFourier − 1, (4) where sin(2ix) for x ∈ R2 means the element wise application of the sine function. We also include the original x in the encoding γ(x). B MODELS FOR THE DYNAMICS B.1 TWO MASSES SPRING SYSTEM The system is modeled as two-body system where the dynamic of each object is described by Newton’s second law of motion, i.e. F = mẍ, where F is the force. Since only the ratio between force and mass can be identified without additional measurement, we fix m = 1, analogously to the work of Jaques et al. (2020). Using Hooke’s law, we write the force applied to object i by object j as Fi,j = −k ( (pi − pj)− 2l pi − pj ‖pi − pj‖ ) . (5) Using the position pi(t; k, l) of the objects to parametrize the trajectory of the local coordinate systems, we can write the time-dependent 2D spatial transformation to the local coordinate system i as T (i)t (x) = x− pi(t; k, l), where l and k are learnable parameters. B.2 NONLINEAR DAMPED PENDULUM A pendulum that is damped by air drag can be modelled as ˙[ϕ ω ] = [ ω − gl sin (ϕ)− cω |ω| ] , (6) where ϕ ∈ R is the deflection angle, ω ∈ R is the angular velocity, g is the (known) gravitational acceleration, l > 0 is the (physical) length of the pendulum, and c > 0 is the damping constant. We use the solution curve ϕ (t; l, c) to parameterize the time-dependent 2D spatial transformation as Tt (x) = R (ϕ (t; l, c))x + A, where R ∈ SO (2) is a rotation matrix and A ∈ R2 is the pivot point of the pendulum. For the full model, the parameters l and c are learnable. For the sake of simplicity we assume that the gravitational acceleration g always points downwards in the global image coordinate system. C TRAINING DETAILS In the following we provide additional training details. C.1 DISCREPANCY MEASURE FOR THE LOSS TERM Unless stated otherwise, for our experiments we use C = 4 image channels, where the three first channels correspond to the RGB channels, and the last channel represents a mask of the foreground object. For the real world data, we obtained the objects masks using a semi-manual approach as described in Sec. 4.3. For the experiments on the synthetically created high resolution videos we used the masks constructed for the video creation for the experiments directly. For the first three channels we define the discrepancy measure in terms of the mean square error as drgb(x, y) = ‖x− y‖2, (7) and for the mask in the last channel we consider the binary cross entropy loss, i.e. dseg(x, y) = [x log (y) + (1− x) log (1− y)] . (8) With that, the overall discrepancy measure is given as d(x, y) = drgb(x1:3, y1:3) + λsegdseg(x4, y4). 
C.2 OPTIMIZATION

We train our model using the Adam optimizer (Kingma & Ba, 2015) with exponential learning rate decay, which reads

$$r(e) = r_0 \cdot \beta^{e / n_{\text{decay}}}, \tag{10}$$

where r(e) is the learning rate at epoch e, r_0 is the initial learning rate, β is the decay rate, and n_decay is the decay step size. One important aspect of the training is to use different learning rates for the parameters θ_bg and θ_obj of the implicit representations on the one hand, and the physical parameters θ_ode, z_0 and θ_+ on the other hand.

In order to estimate the initial parameters of the ODE and the transformation for the pendulum, we employ a heuristic that uses the information contained in the mask. To obtain an initial estimate for the pivot point A, we average all masks and use the pixel with the highest value. To obtain an estimate for the initial angle, we perform a principal component analysis (PCA) on the pixel locations covered by the mask and use the angle between the first component and the vertical direction. The velocity is always initialized as 0. We initialize the damping as c = 1 and the pendulum length as l = 2 m for the synthetic experiments and l = 0.4 m for the real-world experiment.

C.3 IMAGE PYRAMID

To capture information on multiple scales we employ an image pyramid scheme. Due to memory limitations, for large images we cannot evaluate all pixel values in one batch, and thus the classical approach that considers all stages of the image pyramid at once is not feasible in our setting. Therefore, during training, we sequentially traverse the image pyramid from the low-resolution levels towards the original high-resolution level. The idea is that the low-resolution stages reveal global information about the movement of the object, whereas the later high-resolution stages provide the finer details that refine the coarse estimates from the previous stages. To this end, we use a binomial kernel of size 5×5 with stride two, which we repeatedly apply N_pyr times to reduce the original resolution of the image. We start the training at the coarsest level, and then switch to the next finer level every n_pyr steps.

D ABLATION STUDY

To motivate the chosen loss functions, we report the results for the parameter estimation with different loss function configurations in Table 1. Beyond its influence on the quality of the parameter estimation, another motivation to use the color loss is that it enables learning the appearance of the background and the object in the implicit representation. This allows for photo-realistic rendering of unseen predictions, as well as the re-rendering of scenes with modified physical parameters, effectively allowing physical scene editing. For the mask loss, on the other hand, we have found that it makes the estimation process more robust to suboptimal initializations of the physical parameters.

E FURTHER EXPERIMENTAL DETAILS AND RESULTS

In the following we give specific details for the different experiments.

E.1 TWO MASSES SPRING SYSTEM

Experimental details. We use both loss terms and set λ_seg = 0.01 to balance them. Additionally, we use an MSE loss to keep the centers of the bounding boxes of the digits close to the origins of the local representations in the first frame. This fixes the scale problem related to the equilibrium length described in the main text. Moreover, we use another MSE loss term to keep the opacity value close to zero outside of (but close to) the visible area. We found this to be necessary, since otherwise artefacts might appear in the extrapolation when previously unseen parts of the mask enter the visible area.

For the background we use an implicit representation with N_Fourier = 6 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer N_skip = 4. For the local object representations we use N_Fourier = 8 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer N_skip = 4. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.01 for the physical parameters. We set β_MLP = 0.99954, n_decay,MLP = 50, β_param = 0.95 and n_decay,param = 100. For the image pyramid we use N_pyr = 1 stage and step up the pyramid every n_pyr = 200 epochs. We train for 1500 epochs, where one epoch is completed when all pixels at the current resolution have been considered.
Additional results. In Fig. 8 and Fig. 9 we present additional results for sequence 0 and sequence 1 of the test dataset. We see that, for both sequences, the overfitted baseline is not able to produce a reasonable extrapolation of the data and even produces artifacts in the reconstruction part of the sequence. One reason for this is that the model is unable to identify the physical parameters correctly, as can be seen from the large relative errors. Our model, on the other hand, estimates the parameters with an accuracy that is even slightly better than that of the baseline trained on the full training dataset, which again shows the strength of our approach, considering that we use only a single video as input.

E.2 COMPARISON WITH THE LAGRANGIAN VARIATIONAL AUTOENCODER

Experimental details. The data used in this experiment does not include color image data; therefore we do not use d_rgb and set λ_seg = 1. Since the predicted masks are obtained only from the local representation, we do not use an implicit representation for the background in this example. For the local representation we use N_Fourier = 4 Fourier features, N_FC = 6 fully connected layers of width W_FC = 64, and an input skip to layer N_skip = 3. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.01 for the physical parameters. We set β_MLP = 0.9954, n_decay,MLP = 10, β_param = 0.995 and n_decay,param = 50. For the image pyramid we use N_pyr = 2 stages and step up the pyramid every n_pyr = 75 epochs. We train for 1500 epochs, where one epoch is completed when all pixels at the current resolution have been considered.

Quantitative comparison. To quantitatively compare the temporal prediction ability of our approach with the baseline, we follow the procedure of Zhong & Leonard (2020) and report the average mean squared error (MSE) between the predicted and the ground-truth masks over the frames of the full sequence, which we denote as pixel MSE for consistency with the previous work. The results for randomly chosen sequences of the dataset are presented in Fig. 10 (temporal prediction ability). We can observe that the predictive power of both methods is limited when only a few frames are available to infer the underlying dynamics. However, with an increasing number of frames, our method becomes able to reconstruct the physics more consistently, while the baseline does not noticeably benefit from more training frames. We believe that this is because the baseline method overfits to the given frames, whereas our method infers actual physical parameters.
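A minimal sketch of this metric is given below (our illustration; fit_model and render_masks are placeholders for the training and rendering code, which are not part of the metric itself):

```python
# Sketch of the pixel-MSE metric: mean squared error between predicted and
# ground-truth masks, averaged over all frames of the full sequence.
import torch

def pixel_mse(pred_masks: torch.Tensor, gt_masks: torch.Tensor) -> float:
    """pred_masks, gt_masks: (T, H, W) mask tensors with values in [0, 1]."""
    return ((pred_masks - gt_masks) ** 2).mean().item()

# Hypothetical protocol: train on the first n frames, then score the full sequence.
# for n in range(2, 21):
#     model = fit_model(frames[:n])
#     print(n, pixel_mse(render_masks(model, times), gt_masks))
```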
E.3 EXPERIMENTS WITH HIGH-RESOLUTION VIDEOS AND THE REAL-WORLD VIDEO

Experimental details. We use the same architecture for the high-resolution synthetic and the real-world video sequences. We use both loss terms and set λ_seg = 0.03 to balance them. For the background we use an implicit representation with N_Fourier = 10 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer N_skip = 4. For the local object representation we use N_Fourier = 8 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128, and an input skip to layer N_skip = 4. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.05 for the physical parameters. We set β_MLP = 0.99954, n_decay,MLP = 10, β_param = 0.95 and n_decay,param = 100. For the image pyramid we use N_pyr = 5 stages and step up the pyramid every n_pyr = 200 epochs. We train for 1500 epochs, where one epoch is completed when all pixels at the current resolution have been considered.

Additional results. In the following we present additional rendering results. Fig. 11 and Fig. 12 show reconstruction and prediction results for additional synthetic high-resolution scenes. To create the synthetic scenes we took the background images from https://pixabay.com/photos/lake-mountains-nature-outdoors-6627781/ (Lake), https://pixabay.com/photos/city-street-architecture-business-4667143/ (City) and https://pixabay.com/photos/apples-fruits-ripe-red-apples-6073599/ (Apple). Fig. 13 allows for a more detailed comparison of the results on the real pendulum video. We show the images and masks used for training on the real pendulum video in Fig. 14. Please also see our supplementary video for additional results on this data.

F GENERALIZATION OF THE LAGRANGIAN VARIATIONAL AUTOENCODER

One drawback of learning-based approaches for the visual estimation of physical models is their poor generalization to data that deviates from the training data distribution. We confirm this for the fully (pre-)trained model of Zhong & Leonard (2020). While the pixel MSE averaged over the full test set is 1.83 · 10^{-3}, the error increases to 1.22 · 10^{-2} when we shift the frames of the test dataset by just 1 pixel in each direction. This corresponds to the case of input videos where the pivot point of the pendulum is not in the center of the image, which differs from the training data. This effect is visualized in Fig. 15, which shows the output of the model for sequence 2 of the test dataset with zero control input, both in the original version and in the shifted version. We observe that the small shift of only one pixel in each direction leads to results that are significantly off, and not even the first frame is predicted correctly. While Zhong & Leonard (2020) propose to use a coordinate-aware encoder based on spatial transformers, this introduces additional complexity to the model. In contrast, our approach does not suffer from such issues.
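This check is simple to express; a hedged sketch follows (predict is a placeholder for the pretrained baseline, and a circular shift stands in for whatever padding was actually used at the image border):

```python
# Sketch of the generalization check from App. F: shift every test frame by
# one pixel in each direction and compare the resulting pixel MSE.
import torch

def shift_frames(frames: torch.Tensor, dy: int = 1, dx: int = 1) -> torch.Tensor:
    """frames: (T, C, H, W); torch.roll applies a circular one-pixel shift."""
    return torch.roll(frames, shifts=(dy, dx), dims=(-2, -1))

# Hypothetical evaluation, comparing against correspondingly shifted ground truth:
# mse_orig    = ((predict(frames) - gt) ** 2).mean()
# mse_shifted = ((predict(shift_frames(frames)) - shift_frames(gt)) ** 2).mean()
```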
1. What is the main contribution of the paper regarding inferring physical parameters from videos?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of novelty and empirical results?
3. How does the reviewer assess the clarity, quality, and relevance of the paper's content?
4. Are there any concerns regarding the domain specificity and limitations of the method?
5. Do you have any suggestions for improving the evaluation or exploring downstream applications of the physics learned by the approach?
Summary Of The Paper

This paper proposes to infer, from a single video, the underlying physical parameters of a system. To do this, a NeuralODE is combined with a neural radiance field to fit the given video. The physical parameters learned by the NeuralODE may then be reused to simulate the video under novel dynamics.

Review

The paper studies the interesting problem of inferring the underlying physics of a scene given a video. The paper is well written and the details are clear to follow. However, I have several major concerns about the approach, both in terms of novelty and the evaluated empirical results.

The underlying setting specified in this paper -- learning physics from a single video -- is, in my opinion, more of a limitation than a strength. To really learn the underlying physics of objects, more than a single video is needed. For example, if I were only to see a block fall once, there is only so much physics information I could gain. Instead, I think the much more interesting question is how we may learn the physics of objects more generally from a dataset of different videos.

The underlying novelty of the project is also rather limited. There are a variety of works in which NeRF has been fit to different dynamic scenes and videos. This paper appears to build largely on top of [1], but swaps in a NeuralODE to represent the propagation of dynamics, which has also been done before [3].

The synthetic pendulum results seem very toy, with the pendulum rendered on a fake background. In general, the evaluation in the paper is insufficient -- only pendulum scenes are considered, and among those only a single real-world video. The underlying approach is also very domain specific: the ground-truth ODE equations for the pendulum are given. Furthermore, utilizing a NeuralODE seems limiting when modeling physical scenes, as many physical scenes are not energy-conserving.

The paper does not show any downstream applications of the physics that is learned by the approach, nor is the approach shown to generalize in any way to a new dataset. I would like to see a comparison where a non-continuous (discrete-time) neural network is used to predict the underlying dynamics.

The paper is also missing references to several additional works on implicit representations for modeling dynamic scenes which are relevant [2-4]. [3], for example, also utilizes a NeuralODE to parameterize the underlying dynamics of a scene.

[1] Ost et al., Neural Scene Graphs for Dynamic Scenes. CVPR 2021.
[2] Xian et al., Space-time Neural Irradiance Fields for Free-Viewpoint Video. CVPR 2021.
[3] Du et al., Neural Radiance Flow for 4D View Synthesis and Video Processing. ICCV 2021.
[4] Li et al., Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes. CVPR 2021.
1. What is the focus and contribution of the paper on physical model inference?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to handle different types of dynamics?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or limitations regarding the use of neural implicit representations for rendering video sequences?
5. Can the method be applied to other scenarios beyond pendulums, and what would be the challenges in doing so?
Summary Of The Paper

This paper proposes a novel method for inferring a physical model from a single video. Their approach estimates the physical parameters and initial conditions of an ODE that describes the dynamics of the object, using neural implicit representations to render a video sequence based on the physical parameters. The authors perform experiments using simulated and real-world pendulum videos and compare to a Lagrangian VAE baseline.

Review

The paper fails to provide sufficient empirical support for the advantages of their approach. In particular, I feel like critical baselines are missing (e.g. Jaques et al. 2020) and pendulums are the only dynamics tested.
ICLR
Title
Neural Implicit Representations for Physical Parameter Inference from a Single Video

Abstract
Neural networks have recently been used to model the dynamics of diverse physical systems. While existing methods achieve impressive results, they are limited by their strong demand for training data and their weak generalization abilities. To overcome these limitations, in this work we propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) in order to obtain interpretable physical models directly from visual observations. Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video. (ii) The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic imagery. (iii) The embedded neural ODE has a known parametric form that allows for the identification of interpretable physical parameters, and (iv) long-term prediction in state space. (v) Furthermore, the photo-realistic rendering of novel scenes with modified physical parameters becomes possible.

Figure 1 (panels labeled "synth | real", annotated with the pendulum length): Our method infers physical parameters directly from real-world videos, like the shown pendulum motion. Separated by the red line, the right half of each image shows the input frame, and the left half shows our reconstruction based on physical parameters that we estimate from the input. We show 6 out of 10 frames that were used for training. The proposed model can precisely recover the metric length of the pendulum from the monocular video (relative error to the true length is less than 2.5%). Best viewed on screen with magnification. Please also consider the supplementary video.

1 INTRODUCTION

The physics of many real-world phenomena can be described concisely and accurately using differential equations. However, such equations are usually formulated in terms of highly abstracted quantities that are typically not directly observable using commodity sensors, such as cameras. For example, a pendulum is physically described by the deflection angle, the angular velocity, the damping coefficient, and the pendulum’s length, but automatically extracting those physical parameters directly from video data is challenging. Thus, due to the complex relationship between the physical process and images of respective scenes, measuring such quantities often necessitates a trained expert operating customised measuring equipment. While for many physical phenomena humans are able to infer (a rough estimate of) physical quantities from a given video, physical understanding from videos is an open problem in machine learning. Recently, the combination of deep learning and physics has become popular, particularly in the context of video prediction. While earlier works (Lutter et al., 2019; Greydanus et al., 2019; Cranmer et al., 2020; Zhong et al., 2020) require coordinate data, i.e. already abstracted physical quantities, more recent works directly use image data (Levine et al., 2020; Zhong & Leonard, 2020). A major downside is that all these approaches rely on massive amounts of training data, and, as we experimentally confirm in App. F, they exhibit poor generalization abilities.
In contrast, in our work we address this shortcoming by proposing a solution that extracts semantic physical parameters directly from a single video, see Figure 1. Thereby, we alleviate the need for large training data and furthermore facilitate interpretation due to the semantics of the inferred parameters in the respective physical equations. Additionally, the six previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations, which elegantly guarantee the conservation of energy, but can therefore not easily model dissipative systems that are much more common in the real world (Galley, 2013). The proposed model effectively transforms the camera into a physical measuring device with which we can observe quantities such as the length or the damping coefficient of a pendulum. To achieve the learning of physical models from a single video, we propose to utilise physics-based neural implicit representations in an analysis-by-synthesis manner, where the latter relies on neural ordinary differential equations for representing the abstract physics of visual scenes. Overall, we summarize our main contributions as follows:
1. We present the first method that is able to identify physical parameters from a single video using neural implicit representations.
2. Our approach infers parameters of an underlying ODE-based physical model that directly allows for interpretability and long-term predictions.
3. The unique combination of powerful neural implicit representations with rich physical models allows us to synthesize high-resolution and photo-realistic imagery. Moreover, it enables physical editing by rendering novel scenes with modified physical parameters.
4. Contrary to existing learning-based approaches that require large corpora of training data, we propose a per-scene model, so that only a single short video clip that depicts the physical phenomenon is necessary.

2 RELATED WORK

The combination of machine learning and physics has been addressed across an extremely broad range of topics. For example, machine learning was used to aid physics research (Bogojeski et al., 2020; Leclerc et al., 2020), or physics was used within machine learning models, such as for automatic question answering from videos (Chen et al., 2021; Bear et al., 2021). In this work we focus specifically on extracting physical models from single videos, so in the following we discuss the related works that we consider most relevant in this context.

Physics in the context of learning. While neural networks have led to many remarkable results across diverse domains, the inference of physical principles, such as energy conservation, is still a major challenge and requires additional constraints. A general way to endow models with a physics-based prior is to use generalized energy functions. For example, Greydanus et al. (2019) and Toth et al. (2020) use a neural network to parameterize the Hamiltonian of a system, which yields a relation between the energy of the system and the change of the state. Hence, they are able to infer the dynamics of systems with conserved energy, such as a pendulum or a multi-body system. One disadvantage of using the Hamiltonian is that canonical coordinates need to be used. To eliminate this constraint, other works use the Lagrangian to model the system’s energy. Since this formalism is more complex, Lutter et al.
(2019) and Zhong & Leonard (2020) restrict the Lagrangian to the case of rigid-body dynamics to model systems with multiple degrees of freedom, such as a pole on a cart, or a robotic arm. Cranmer et al. (2020) use a neural network to parameterize a general Lagrangian, which they use to infer the dynamics of a relativistic particle in a uniform potential. While being able to model many relevant systems, the aforementioned energy-based approaches cannot easily be extended to dissipative systems that are much more common in the real world (Galley, 2013). Furthermore, they do not allow for a semantic interpretation of individual learned system parameters. PhyDNet, introduced by Guen & Thome (2020), learns dynamics in the form of a general PDE in a latent space, which, like the aforementioned works, prohibits interpretation of the learned physical model. In contrast, in the context of incorporating physical phenomena into learning frameworks, there are also approaches that make the underlying dynamics explicit. For example, Jaques et al. (2020) unroll the Euler integration of the ordinary differential equation of bouncing balls, as well as balls connected by a spring, to identify physical parameters like the spring constant. Kandukuri et al. (2020) and de Avila Belbute-Peres et al. (2018) propose to use a linear complementarity problem to differentiably simulate rigid multi-body dynamics that can also handle object interaction and friction. For our method, we also rely on the advantages of modelling the underlying physics explicitly in order to obtain interpretable parameter estimates.

Inferring physical properties from video. While many approaches work with trajectories in state space, there are also some works that operate directly on videos. In this case, the information about physical quantities is substantially more abstract, so that uncovering non-linear dynamics from video data is a significantly more difficult problem. Traditionally, such inverse problems are often phrased in terms of optimization problems, for example for deformable physics inference (Weiss et al., 2020), among many others. While such approaches can successfully estimate a wide range of relevant physical quantities from video data, they often require rich additional information, such as 3D information in the form of depth images in combination with a 3D template mesh (Weiss et al., 2020), which may limit their practical applicability. More recently, several end-to-end learning approaches have been proposed. de Avila Belbute-Peres et al. (2018) use an encoder to extract the initial state of several objects from the combination of images, object masks and flow frames. After propagating the physical state over time, they decode the state back into images to allow for end-to-end training. Jaques et al. (2020) and Kandukuri et al. (2020) use an encoder network to extract object positions from object masks for individual frames. After estimating initial velocities from the positions, they integrate the state over time and use a carefully crafted coordinate-consistent decoder, which is based on spatial transformers, to obtain predicted images. Zhong & Leonard (2020) extend this idea to their variational autoencoder (VAE) architecture to obtain a coordinate-aware encoder which they use to infer parameters of the latent distribution of generalized coordinates for each frame. Toth et al. (2020) use a VAE structure to predict the parameters of a posterior over the initial state from a sequence of videos.
All of these approaches require large amounts of data to train the complex encoder and decoder modules. In contrast, our approach does not rely on trainable encoder or decoder structures, but instead uses a non-trainable, fixed neural ODE solver in combination with a trainable neural implicit representation, and is thus able to infer physical models from a single video.

Implicit representations. Recently, neural implicit representations have gained popularity due to their theoretical elegance and their performance in novel view synthesis. The idea is to use a neural network to parametrize a function that maps a spatial location to a spatial feature, for example to represent geometric shapes using occupancy values (Mescheder et al., 2019; Chen & Zhang, 2019; Peng et al., 2020) or signed distance functions (Park et al., 2019; Gropp et al., 2020; Atzmon & Lipman, 2020). In the area of multiview 3D surface reconstruction as well as novel view synthesis, implicit geometry representations, such as density or signed distance, are combined with implicit color fields to represent shape and appearance (Sitzmann et al., 2019; Mildenhall et al., 2020; Yariv et al., 2020; Niemeyer et al., 2020; Azinovic et al., 2021). To model dynamic scenes, there have been several approaches that parametrize a displacement field and model the scene in a reference configuration (Niemeyer et al., 2019; Park et al., 2021; Pumarola et al., 2021). On the other hand, several approaches (Xian et al., 2021; Li et al., 2021; Du et al., 2021) include the time as an input to the neural representation and regularize the network using constraints based on appearance, geometry, and pre-trained depth or flow networks. However, none of these methods uses physics-based constraints, e.g. by enforcing Newtonian motion. While the majority of works on implicit representations focuses on shape, Sitzmann et al. (2020) show the generality of implicit representations by representing images and audio signals. Our work contributes to the neural implicit representation literature by combining such representations with explicit physical models.

3 ESTIMATING PHYSICAL MODELS WITH NEURAL IMPLICIT REPRESENTATIONS

Our main goal is the estimation of physical parameters from a single video, where we specifically focus on the setting of a static background and dynamic objects that move according to some physical phenomenon. We model the dynamics of the objects using an ordinary differential equation (ODE). Our objective is then to estimate the unknown physical parameters, as well as the initial conditions, of this ODE. To this end, we additionally learn a video generation model that is able to render a video depicting objects which follow a specific physical model for the respective physical parameters. For estimating these physical parameters directly from an input video, we utilise a photometric loss that imposes that the generated video is similar to the input video.

3.1 MODELING THE DYNAMICS

For most of the dynamics that can be observed in nature, the temporal evolution of the state can be described by an ODE. For example, for a pendulum the state variables are the angle of deflection and the angular velocity, and a two-dimensional first-order ODE can be used to describe the dynamics. In general, we write ż = f(z, t; θ) to describe the ODE¹, where z ∈ R^n denotes the state variable, t ∈ R denotes time and θ ∈ R^m are the unknown physical parameters.
Using the initial conditions z_0 ∈ R^n at the initial time t_0, we can write the solution of the ODE as

z(t; z_0, θ) = z_0 + ∫_{t_0}^{t} f(z(τ), τ; θ) dτ.   (1)

Note that the solution curve z(t; z_0, θ) ⊂ R^n depends both on the unknown initial conditions z_0 and on the unknown physical parameters θ. In practice, the solution to Eq. (1) is typically approximated by numeric integration. In our context of physical parameter estimation from videos, we build upon the recent work by Chen et al. (2018), who proposed an approach to compute gradients of the solution curve of an ODE with respect to its parameters. With that, it becomes possible to differentiate through the solution in Eq. (1), and therefore we can use gradient-based methods to estimate z_0 and θ.

3.2 DIFFERENTIABLE RENDERING OF THE VIDEO FRAMES

To render the video frames, we draw inspiration from the recent advances in neural implicit representations. To this end, we use a static representation to model the background, which we combine with an appearance and shape representation of dynamic foreground objects. By composing the learned background with the dynamic foreground objects, whose poses are determined by the solution of the ODE encoding the physical phenomenon, we obtain a dynamic representation of the overall scene. Doing so allows us to query the color values on a pixel grid, so that we are able to render video frames in a differentiable manner. Fig. 2 shows an overview of the approach.

Representation of background. The static background is modeled by a function F(·; θ_bg) that maps a 2D location x to an appearance value c ∈ R^C, where C denotes the number of appearance channels (e.g. RGB colors). The function F(·; θ_bg) encodes the appearance of the background and is represented as a neural network with learnable parameters θ_bg. To improve the ability of the neural network to learn high-frequency variations in appearance, we use Fourier features (Tancik et al., 2020), so that the input location x ∈ R^2 is mapped to a higher-frequency vector γ(x) ∈ R^{4·N_Fourier + 2}, where N_Fourier is the number of frequencies used. The full representation of the background then reads c_bg(x) = F(γ(x); θ_bg). For a more detailed discussion of the architecture, we refer to App. A.

Representation of dynamic objects. To compose the static background and the dynamically moving objects into the full scene, we draw inspiration from Ost et al. (2021), who use implicit representations of color and shape in a scene graph to decompose a dynamic scene into a background representation and dynamically moving local representations. A drawback of their work is that they do not use a physical model to constrain the dynamics, and therefore strong supervisory signals like the trajectories and the dimensions of the bounding boxes are essential. In our case, each dynamic object is represented in terms of a local neural implicit representation, which is then placed in the overall scene based on the time-dependent spatial transformation T_t = T(z(t; z_0, θ_ode), θ_+). This transformation is parameterized by the unknown initial condition z_0, the physical parameters θ_ode of the ODE, and possibly additional parameters θ_+. As such, these parameters determine the transformation from the global coordinate system of the background to the local coordinate system.

¹ W.l.o.g. we only consider first-order ODEs here, since it is always possible to reduce the order to one by introducing additional state variables.
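As a concrete illustration of how one can differentiate through Eq. (1), the following is a minimal sketch using the torchdiffeq package, the reference implementation of Chen et al. (2018). The paper does not state which solver or parameterization it uses; the dynamics below follow the damped pendulum of Eq. (6) in App. B.2, and the exp-based positivity constraint on l and c is our own assumption.

```python
import torch
from torchdiffeq import odeint  # reference implementation of Chen et al. (2018)

class DampedPendulum(torch.nn.Module):
    """Damped pendulum dynamics f(z, t; theta), cf. Eq. (6) in App. B.2."""

    def __init__(self, g=9.81):
        super().__init__()
        self.g = g
        # Learnable physical parameters; exp() keeps l, c > 0 (our choice,
        # the paper does not specify its parameterization).
        self.log_l = torch.nn.Parameter(torch.tensor(0.0))
        self.log_c = torch.nn.Parameter(torch.tensor(0.0))

    def forward(self, t, z):
        phi, omega = z[..., 0], z[..., 1]
        l, c = self.log_l.exp(), self.log_c.exp()
        dphi = omega
        domega = -(self.g / l) * torch.sin(phi) - c * omega * omega.abs()
        return torch.stack([dphi, domega], dim=-1)

f = DampedPendulum()
# The initial state z0 = (phi0, omega0) is itself a learnable parameter.
z0 = torch.nn.Parameter(torch.tensor([0.5, 0.0]))
t = torch.linspace(0.0, 2.0, steps=20)  # one time stamp per frame
z = odeint(f, z0, t)  # (20, 2) solution curve, differentiable w.r.t. l, c, z0
```

Gradients of a photometric loss on the rendered frames can then flow through z back into the physical parameters and the initial condition.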
Similarly to the background, the appearance of each individual dynamic object is modelled in terms of an implicit neural representation (in the local coordinate system). In contrast to the background, we augment the color output c ∈ R^C of the dynamic object representation with an additional opacity value o ∈ [0, 1], which allows us to model objects of arbitrary shape. We write the representation of a dynamic object in the global coordinate system as (c_obj(x), o(x)) = G(γ(x′); θ_obj), where G(·; θ_obj) is represented as a neural network with weights θ_obj, γ denotes the mapping to Fourier features, and x′ = T_t(x) is the local coordinate representation of the (global) 2D location x.

Differentiable rendering. For rendering, we evaluate the composed scene appearance at a regular pixel grid, where we use the opacity value of the local object representation to blend the colors of the background and the dynamic objects. To obtain the final color, for all positions x of the pixel grid we evaluate the equation

c(x, t) = (1 − o(x)) · c_bg(x) + o(x) · c_obj(x).   (2)

Note that due to the time dependence of the transformation T_t, the color value for pixel x is also time dependent, which allows us to render the frames of the sequence over time.

3.3 LOSS FUNCTION

We jointly optimize for the parameters of the neural implicit representations θ_bg and θ_obj and estimate the physical parameters θ_ode, z_0 and θ_+ of the dynamics and the transformation. To this end, we use a simple photometric loss defined over all the pixel values, which reads

L = (1 / (|I| · |T|)) Σ_{t ∈ T} Σ_{x ∈ I} d(I(x, t), c(x, t)),   (3)

where d computes the discrepancy between its two inputs, T is the set of all given time steps, I is the set of all pixel coordinates at the current resolution (see next section) and I(x, t) are the given images. To capture information on multiple scales we employ an image pyramid scheme. More details can be found in App. C.

4 EXPERIMENTS

We use two challenging physical models to experimentally evaluate our proposed approach. To analyze our method and to compare to previous work, we first consider synthetically created data. Afterwards, we show that our method achieves promising results also on real-world data. For details about the ODEs describing the dynamics, additional implementation details, an ablation study, as well as additional results, we refer the reader to the Appendix. Although several learning-based approaches that infer physical models from image data have been proposed (de Avila Belbute-Peres et al., 2018; Jaques et al., 2020; Kandukuri et al., 2020; Zhong & Leonard, 2020; Toth et al., 2020), existing approaches are particularly tailored towards settings with large training corpora. However, these methods typically suffer from decreasing estimation accuracy in scarce training data regimes, or if out-of-distribution generalization is required (cf. App. F). In contrast, our proposed approach is able to predict physical parameters from a single short video clip. Due to the lack of existing baselines tailored towards estimation from a single video, we adapt the recent work of Jaques et al. (2020) and Zhong & Leonard (2020) to act as baseline methods.

4.1 TWO MASSES SPRING SYSTEM

We consider the example of two moving MNIST digits connected by an (invisible) spring on a CIFAR background, in a similar spirit to Jaques et al. (2020), see Fig. 3.
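Returning to the rendering model of Secs. 3.2 and 3.3: the sketch below illustrates the compositing of Eq. (2) and the loss of Eq. (3). The callables `background`, `obj`, and `transform` are hypothetical placeholders for the two implicit representations and the ODE-driven transformation T_t; the discrepancy d is the squared error of Eq. (7) in App. C.1, and the mask term of Eq. (8) would be added analogously.

```python
import torch

def render(x, t, background, obj, transform):
    """Composite the frame color at pixel locations x for time t, cf. Eq. (2).

    Assumed interfaces: background(x) -> (N, C) colors;
    obj(x_local) -> ((N, C) colors, (N, 1) opacity);
    transform(x, t) -> local coordinates T_t(x).
    """
    c_bg = background(x)
    c_obj, o = obj(transform(x, t))
    return (1.0 - o) * c_bg + o * c_obj

def photometric_loss(frames, pixels, times, background, obj, transform):
    """Mean discrepancy over all pixels and time steps, cf. Eq. (3)."""
    loss = torch.tensor(0.0)
    for t, frame in zip(times, frames):
        c = render(pixels, t, background, obj, transform)
        loss = loss + ((frame - c) ** 2).mean()  # d = squared error
    return loss / len(times)
```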
Besides the initial positions and velocities, the spring constant k and the equilibrium distance l of the connecting spring need to be identified for the dynamics model. For a more detailed description of the model see App. B.1. The approach of Jaques et al. (2020) uses a learnable encoder and velocity estimator to obtain the positions and initial velocities of a known number of objects from the video frames. After integrating the known parametric model, they use a learnable coordinate-consistent decoder in combination with learned object masks and colors to render frames from the integrated trajectories. Using a photometric loss, they require 5000 sequences of different runs of the same two masses spring system to train the model and identify the parameters. In order to compare their method to our work in the setting of parameter estimation from a single video, in addition to their model trained on the full dataset (‘B: Full’), we also consider their model trained on an individual sequence of the test dataset (‘B: Overfit’).

We fit our model to sequences from the test dataset, where we use two local representations and parametrize the spatial transformation as shown in App. B.1. By using the maximum of both masks as the foreground mask, we enable the model to identify the object layering. We find that it is necessary to gradually build up the sequence of frames over the course of training: we start with only two frames and add the respective next frame after 60 epochs. Also, the model appears to have a scale freedom in terms of the equilibrium length and the points where the spring is attached to the digits.² We therefore add an additional loss to keep the spring attachment close to the center of the bounding box of the digits in the first frame. We observe similar effects when overfitting the model of Jaques et al. (2020) to a single sequence. When training on the full dataset, the effect seems to be averaged out and is not observed.

Fig. 3 shows a qualitative comparison of our results to the baseline of Jaques et al. (2020), where the latter is trained in the two settings explained above. We observe that for this sequence all approaches yield reasonable results for the reconstruction of the training frames. However, for prediction the overfitted model of Jaques et al. (2020) performs significantly worse, indicating that the physical model is poorly identified from a single video. The baseline trained on the full dataset yields results that are slightly worse than our results. We see that in both cases the parameters are identified correctly. The fact that we achieve comparable results while using significantly less data highlights the advantage of combining the explicit dynamics model with the implicit representation for the objects. Note that we chose sequence 6 since it yielded the best results for the baseline. More results can be found in App. E.1.

4.2 NONLINEAR DAMPED PENDULUM

We use synthetically created videos of a nonlinear damped pendulum to compare our method to the previous work of Zhong & Leonard (2020) and also to show the ability of our approach to handle high-resolution videos. The equations describing the pendulum dynamics can be found in App. B.2.

Comparison to Lagrangian Variational Autoencoder. We use the dataset of Zhong & Leonard (2020) containing several sequences of a simple pendulum (each comprising 20 frames), which was created with the OpenAI Gym simulator (Brockman et al., 2016).
The method by Zhong & Leonard (2020) uses a coordinate-aware encoder to obtain the distribution of the initial state from object masks. After sampling, the initial state is integrated using a learnable Lagrangian function parametrizing the dynamics of the system, and a coordinate-aware decoder is used to render frames from the trajectories. We train the model using only the first N frames of a single sequence as the training data (with no external control input), effectively overfitting the model to each sequence.

² Intuitively, if the motion is only in one direction, we can vary the equilibrium length and adjust the spring attachments without changing the observed motion. Similar effects are present in 2D motion.

Similar to the baseline, we assume no damping and a known pivot point A in the middle of the frame to train our model. Since this dataset does not include image data, we only use the loss on the object mask, and train our model in this modified setup using the same frames as for the baseline. To evaluate the performance of each method in identifying the underlying dynamics, we compare the predictions of the unseen frames of the same sequence. Qualitative results are presented in Fig. 4. We can observe that both methods fit the given training data very well; however, in the baseline the pendulum motion significantly slows down for unseen time steps, and thus it is unable to obtain accurate predictions for unseen data. We emphasize that this happens because the method requires significantly larger training datasets, so that it performs poorly in the single-video setting considered in this paper. In contrast, our method shows a significantly better performance, which highlights the strength of directly modelling physical phenomena to constrain the learnable dynamics in an analysis-by-synthesis manner. Due to aliasing effects that arise from the low resolution of the frames, our method does not give perfect predictions; however, if we use high-resolution images for our method we achieve nearly perfect reconstruction, as we show in Fig. 5 and Fig. 6. For a quantitative comparison and further experimental details see App. E.2.

High-resolution videos. In contrast to the baseline, our approach is able to handle high-resolution videos with complex background and pendulum shapes and textures. In this case, our approach accurately estimates the parameters of the full pendulum model, as we show in Fig. 5. For this experiment we created several videos by simulating a pendulum with known parameters and then rendering the pendulum on top of an image. Qualitative results of fitting our model to the lake scene can be seen in Fig. 5. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. The renderings of other scenes are shown in App. E. As we show in Fig. 6 and Table 1, it is necessary that the frames in the training set cover a sufficient portion of the motion to enable a correct estimation of the physical parameters.

4.3 REAL PENDULUM VIDEO

We now show that our approach is even able to infer physical parameters from real-world data. We recorded the pendulum motion shown in Fig. 1. The pendulum is mounted almost frictionless, and due to its high weight we do not expect large air drag effects either. The video was recorded with a smartphone, which leads to noticeable real-world noise such as motion blur; however, the proposed method still manages to produce convincing results.
The pseudo ground-truth segmentation masks are generated semi-manually using GrabCut (Rother et al., 2004) and exhibit significant noise that is also handled well by the proposed model. We extract every third frame from the video such that there are 10 extracted frames per second, and use the first 10 frames for training. We use the full damped pendulum model to estimate the physical parameters of the pendulum motion. The damping is estimated as c = 4.7 · 10^−13, which matches our expectation for this low-friction setting. For the pendulum length, we note from Eq. (6) that the estimated length l = 27.7 cm is a real-world quantity without scale ambiguity. Therefore, we can compare it to l_measured = 27.1 cm, which we obtained by measuring the length from the pivot point to the estimated center of gravity of the pendulum using a ruler. We would like to emphasize that this very good correspondence shows that we are able to estimate scale in a monocular video from a pendulum motion.

5 CONCLUSION

In this work we presented a solution for learning a physical model from an image sequence that depicts some physical phenomenon. To this end, we proposed to combine neural implicit representations and neural ordinary differential equations in an analysis-by-synthesis fashion. Unlike existing learning-based approaches that require large training corpora, a single short video clip is sufficient for our approach. In contrast to prior works that use encoder-decoder architectures specifically tailored to 2D images, we built upon neural implicit representations that have been shown to give impressive results for 3D scene reconstruction. Therefore, the extension of the proposed method to 3D is a promising direction for future work. We present diverse experiments in which the ODE parametrizes a rigid-body transformation between the background and the foreground objects, such as the pendulum motion. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamics. The focus of this work is on learning a physical model of a phenomenon from a short video. Yet, the high fidelity of our model’s renderings, together with the easy modifiability of the physical parameters, enables various computer graphics applications such as the artistic re-rendering of scenes, which we briefly demonstrate in the supplementary video. Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images. We believe that our work may serve as inspiration for follow-up works on physics-based machine learning using neural implicit representations.

Ethics statement. This work attempts to learn interpretable physical models from video clips of physical phenomena. Our contribution is largely theoretical and we show experiments on synthetic data and limited real-world data. Nevertheless, as machine learning models achieve more human-like understanding of the real, physical world, it is paramount to ensure that they are deployed safely and according to strict ethical guidelines. While we think that the current state of our work will not disadvantage or advantage specific groups of people, we recommend a careful ethical evaluation of derivative works that aim to close the gap to human physical reasoning.
A potential positive impact of this work is that it can be beneficial to people with lower financial resources, as it overcomes the need for expensive experimental gear to infer physical parameters, e.g. in the context of physics education.

Reproducibility statement. To ensure the reproducibility of this work we give architecture and training details in App. A and C. Furthermore, we will release our code upon acceptance, so that all experiments and figures shown in this paper can be reproduced.

A MODEL ARCHITECTURE

We adopt the architecture used in Mildenhall et al. (2020) for the implicit representations, see Fig. 7 for the basic structure. For the Fourier features we use a logarithmic scaling. The i-th of the N_Fourier Fourier features is obtained as

γ_i(x) = (sin(2^i x), cos(2^i x)),   i = 0, ..., N_Fourier − 1,   (4)

where sin(2^i x) for x ∈ R^2 means the element-wise application of the sine function. We also include the original x in the encoding γ(x).

B MODELS FOR THE DYNAMICS

B.1 TWO MASSES SPRING SYSTEM

The system is modeled as a two-body system where the dynamics of each object are described by Newton’s second law of motion, i.e. F = m·ẍ, where F is the force. Since only the ratio between force and mass can be identified without additional measurements, we fix m = 1, analogously to the work of Jaques et al. (2020). Using Hooke’s law, we write the force applied to object i by object j as

F_{i,j} = −k ((p_i − p_j) − 2l · (p_i − p_j) / ‖p_i − p_j‖).   (5)

Using the positions p_i(t; k, l) of the objects to parametrize the trajectories of the local coordinate systems, we can write the time-dependent 2D spatial transformation to the local coordinate system i as T_t^{(i)}(x) = x − p_i(t; k, l), where l and k are learnable parameters.

B.2 NONLINEAR DAMPED PENDULUM

A pendulum that is damped by air drag can be modelled as

d/dt [φ, ω]^T = [ω, −(g/l)·sin(φ) − c·ω·|ω|]^T,   (6)

where φ ∈ R is the deflection angle, ω ∈ R is the angular velocity, g is the (known) gravitational acceleration, l > 0 is the (physical) length of the pendulum, and c > 0 is the damping constant. We use the solution curve φ(t; l, c) to parameterize the time-dependent 2D spatial transformation as T_t(x) = R(φ(t; l, c))·x + A, where R ∈ SO(2) is a rotation matrix and A ∈ R^2 is the pivot point of the pendulum. For the full model, the parameters l and c are learnable. For the sake of simplicity we assume that the gravitational acceleration g always points downwards in the global image coordinate system.

C TRAINING DETAILS

In the following we provide additional training details.

C.1 DISCREPANCY MEASURE FOR THE LOSS TERM

Unless stated otherwise, for our experiments we use C = 4 image channels, where the first three channels correspond to the RGB channels and the last channel represents a mask of the foreground object. For the real-world data, we obtained the object masks using a semi-manual approach as described in Sec. 4.3. For the experiments on the synthetically created high-resolution videos, we directly used the masks constructed during video creation. For the first three channels we define the discrepancy measure in terms of the mean squared error as

d_rgb(x, y) = ‖x − y‖²,   (7)

and for the mask in the last channel we consider the binary cross-entropy loss, i.e.

d_seg(x, y) = −[x·log(y) + (1 − x)·log(1 − y)].   (8)

With that, the overall discrepancy measure is given as

d(x, y) = d_rgb(x_{1:3}, y_{1:3}) + λ_seg · d_seg(x_4, y_4).   (9)
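For concreteness, the following is a minimal sketch of the encoding in Eq. (4) and the NeRF-style MLP with a single input skip described in App. A. The exact layer composition (activation choice, where the output head attaches) is our reading of the description rather than a verified reimplementation.

```python
import torch

def fourier_features(x, n_freq):
    """Logarithmic Fourier encoding of Eq. (4); the raw x is included as
    well, giving 4 * n_freq + 2 dimensions for x in R^2."""
    feats = [x]
    for i in range(n_freq):
        feats += [torch.sin(2.0 ** i * x), torch.cos(2.0 ** i * x)]
    return torch.cat(feats, dim=-1)

class ImplicitMLP(torch.nn.Module):
    """Fully connected network with one skip connection to layer `skip`,
    applied to an already-encoded input."""

    def __init__(self, in_dim, out_dim, n_layers=8, width=128, skip=4):
        super().__init__()
        self.skip = skip
        layers = []
        for i in range(n_layers):
            d_in = in_dim if i == 0 else width
            if i == skip:
                d_in += in_dim  # the encoded input is concatenated again here
            layers.append(torch.nn.Linear(d_in, width))
        self.layers = torch.nn.ModuleList(layers)
        self.head = torch.nn.Linear(width, out_dim)

    def forward(self, x):
        h = x
        for i, layer in enumerate(self.layers):
            if i == self.skip:
                h = torch.cat([h, x], dim=-1)
            h = torch.relu(layer(h))
        return self.head(h)

# E.g., a background network with N_Fourier = 6 and C = 4 output channels:
background = ImplicitMLP(in_dim=4 * 6 + 2, out_dim=4)
```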
C.2 OPTIMIZATION

We train our model using the Adam optimizer (Kingma & Ba, 2015) with exponential learning rate decay, which reads

r(e) = r_0 · β^{e / n_decay},   (10)

where r(e) is the learning rate at epoch e, r_0 is the initial learning rate, β is the decay rate and n_decay is the decay step size. One important aspect of the training is to use different learning rates for the parameters θ_bg and θ_obj of the implicit representations on the one hand, and the physical parameters θ_ode, z_0 and θ_+ on the other hand. In order to estimate the initial parameters of the ODE and the transformation for the pendulum, we employ a heuristic that uses the information contained in the mask (see the sketch at the end of this appendix). To obtain an initial estimate for the pivot point A, we average all masks and use the pixel with the highest value. To obtain an estimate for the initial angle, we perform a principal component analysis (PCA) on the pixel locations covered by the mask and use the angle between the first component and the vertical direction. The velocity is always initialized as 0. We initialize the damping as c = 1 and the pendulum length as l = 2 m for the synthetic experiments and l = 0.4 m for the real-world experiment.

C.3 IMAGE PYRAMID

To capture information on multiple scales we employ an image pyramid scheme. Due to memory limitations, for large images we cannot evaluate all pixel values in one batch, and thus the classical approach that considers all stages of the image pyramid at once is not feasible in our setting. Therefore, during training, we sequentially traverse the image pyramid from the low-resolution levels towards the original high-resolution level. The idea is that the low-resolution stages reveal global information about the movement of the object, whereas the later high-resolution stages allow the use of finer details that improve the coarse estimates from the previous stages. To this end, we use a binomial kernel of size 5×5 with stride two, which we repeatedly apply N_pyr times to reduce the original resolution of the image. We start the training using the coarsest level, and then switch to the next finer level every n_pyr steps.

D ABLATION STUDY

To motivate the chosen loss functions, we report the results for the parameter estimation with different loss function configurations in Table 1. Beyond the influence on the quality of the parameter estimation, another motivation to use the color loss is that it enables learning the appearance of the background and the object in the implicit representation. This allows for photo-realistic rendering of unseen predictions, as well as the re-rendering of scenes with modified physical parameters, effectively allowing physical scene editing. For the mask loss, on the other hand, we have found that it makes the estimation process more robust to suboptimal initializations of the physical parameters.

E FURTHER EXPERIMENTAL DETAILS AND RESULTS

In the following we consider specific details for the different experiments.

E.1 TWO MASSES SPRING SYSTEM

Experimental details. We use both loss terms and set λ_seg = 0.01 to balance them. Additionally, we use an MSE loss to keep the centers of the bounding boxes of the digits close to the origins of the local representations in the first frame. This fixes the scale problem related to the equilibrium length described in the main text. Moreover, we use another MSE loss term to keep the opacity value close to zero outside of (but close to) the visible area.
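Returning to the initialization heuristic of App. C.2, the following is a minimal sketch of one possible implementation. The function name, the 0.5 mask threshold, and the angle sign convention are our assumptions; note that the sign of a PCA eigenvector is inherently ambiguous, so the initial angle is only determined up to its sign.

```python
import numpy as np

def init_pendulum_params(masks):
    """Heuristic initialization from masks of shape (T, H, W) in [0, 1]."""
    # Pivot estimate: the pixel with the highest value in the averaged mask.
    mean_mask = masks.mean(axis=0)
    pivot = np.unravel_index(np.argmax(mean_mask), mean_mask.shape)  # (row, col)

    # Angle estimate: PCA on the pixel locations covered by the first mask.
    ys, xs = np.nonzero(masks[0] > 0.5)  # threshold is an assumption
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    principal = eigvecs[:, np.argmax(eigvals)]  # first principal component

    # Angle between the first component and the image's vertical (downward)
    # axis; the sign depends on the chosen coordinate conventions.
    phi0 = np.arctan2(principal[0], principal[1])
    return pivot, phi0
```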
1. What is the main contribution of the paper regarding physical parameter inference?
2. How effective is the proposed method compared to baseline methods?
3. What are the strengths and weaknesses of the paper regarding its clarity, results, and adaptability to other situations?
4. How does the reviewer assess the significance of the work in terms of enabling synthesis of realistic new visual content?
5. What additional information or explanations should be included in the paper to enhance its understanding and value?
Summary Of The Paper

The paper proposes a method which is capable of obtaining physical models of motion using only visual observations of the motion from a single video (no training data). This is done by combining a differentiable ODE solver with a coordinate-based network which reconstructs the video frames based on the ODE solution. Since this system is fully differentiable, the synthesized video loss can be used to infer the physical parameters and initial condition of the physical model along with the coordinate-based network parameters which represent the video. The work demonstrates that the physical parameters learned using this method are more accurate than those of baseline methods, and can be used to make long-term predictions of future frames of the video more accurately.

Review

I am not especially familiar with the literature or work in physical parameter inference and the state of the art and practices in the field. Thus my comments focus more on the usage of implicit neural representations, or coordinate-based networks, in this application.

In my opinion, the strengths of the paper are as follows:
- It seems like the qualitative results significantly outperform the baseline method. Figure 3 shows that the proposed method generalizes to unseen frames significantly better than the baseline method. Figure 4 shows that this is especially the case when more training frames can be used to better infer physical parameters.
- The paper is written very clearly (minus a few pieces of information I could not find; see weaknesses), and is overall enjoyable to read. I feel like I learned something about inferring physical parameters from only visual observations.
- I feel like this is an important problem, and it could especially enable synthesis of realistic new visual content.

In my opinion, the weaknesses of the paper are as follows:
- It seems like the results are very limited in terms of the types of situations they can be applied to. Only results with a pendulum overlaid over a synthetic background are shown, with one single real pendulum example. Is this formulation easy to extend to other physical models? If so, showing this would make the paper appear much stronger, as this could be a more general tool for learning dynamics from only visual data.
- Some clarity of explanation seems to be missing:
  - I don't see any explanation of the "baseline method" which is compared to. I think much more detail needs to be given to this in the paper to understand if the method is providing a meaningful contribution or not. Similarly, why are no other inference methods that are mentioned in the related work compared to? It is hard to tell the magnitude of the contribution without this.
  - I think that the methods section on "representation of dynamic objects" could contain more information which makes it clearer how the method from Ost et al. (2021) is adapted to this application. What does the dynamic object coordinate-based network model, pixel colors on a coordinate domain? I understand they are warped in the overall scene conditioned on the physical parameters and initial conditions, but it is not immediately clear how this is translated into an image which can then be composited onto the background.
  - I think it is not immediately clear why a coordinate-based representation of the scene is really the useful tool here. Is it because rendering from the coordinate-based network is differentiable, allowing gradients to propagate back through the ODE solver and physical parameters? Because if not, I could see a potential baseline foregoing the neural representation of the video as a whole and optimizing physical parameters based on some deterministic rendering algorithm which warps pixels. For example, the background representation network seems unnecessary since it is completely static; why not just have a static image instead of a network to represent it? It would help the paper to have some explanation of this, and to potentially ablate some of these contributions.
ICLR
Title Neural Implicit Representations for Physical Parameter Inference from a Single Video Abstract Neural networks have recently been used to model the dynamics of diverse physical systems. While existing methods achieve impressive results, they are limited by their strong demand for training data and their weak generalization abilities. To overcome these limitations, in this work we propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) in order to obtain interpretable physical models directly from visual observations. Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video (ii) The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic imagery. (iii) The embedded neural ODE has a known parametric form that allows for the identification of interpretable physical parameters, and (iv) long-term prediction in state space. (v) Furthermore, the photo-realistic rendering of novel scenes with modified physical parameters becomes possible. synth | real synth | real synth | real synth | real synth | real synth | real pendulum length Figure 1: Our method infers physical parameters directly from real-world videos, like the shown pendulum motion. Separated by the red line, the right half of each image shows the input frame, and the left half shows our reconstruction based on physical parameters that we estimate from the input. We show 6 out of 10 frames that were used for training. The proposed model can precisely recover the metric length of the pendulum from the monocular video (relative error to true length is less than 2.5%). Best viewed on screen with magnification. Please also consider the supplementary video. 1 INTRODUCTION The physics of many real-world phenomena can be described concisely and accurately using differential equations. However, such equations are usually formulated in terms of highly abstracted quantities that are typically not directly observable using commodity sensors, such as cameras. For example, a pendulum is physically described by the deflection angle, the angular velocity, the damping coefficient, and the pendulum’s length, but automatically extracting those physical parameters directly from video data is challenging. Thus, due to the complex relationship between the physical process and images of respective scenes, measuring such quantities often necessitates a trained expert operating customised measuring equipment. While for many physical phenomena humans are able to infer (a rough estimation of) physical quantities from a given video, physical understanding from videos is an open problem in machine learning. Recently, the combination of deep learning and physics has become popular, particularly in the context of video prediction. While earlier works (Lutter et al., 2019; Greydanus et al., 2019; Cranmer et al., 2020; Zhong et al., 2020) require coordinate data, i.e. already abstracted physical quantities, more recent works directly use image data (Levine et al., 2020; Zhong & Leonard, 2020). A major downside is that all these approaches rely on massive amounts of training data, and, as we experimentally confirm in App. F, they exhibit poor generalization abilities. 
In contrast, in our work we address this shortcoming by proposing a solution that extracts semantic physical parameters directly from a single video, see Figure 1. Therefore, we alleviate the need for large data and furthermore facilitate interpretation due to the semantics of the inferred parameters in respective physical equations. Additionally, the six previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations, which elegantly guarantee the conservation of energy, but can therefore not easily model dissipative systems that are much more common in the real world (Galley, 2013). The proposed model effectively transforms the camera into a physical measuring device with which we can observe quantities such as the length or the damping coefficient of a pendulum. To achieve the learning of physical models from a single video, we propose to utilise physicsbased neural implicit representations in an analysis-by-synthesis manner, where the latter relies on neural ordinary differential equations for representing abstract physics of visual scenes. Overall, we summarize our main contributions as follows: 1. We present the first method that is able to identify physical parameters from a single video using neural implicit representations. 2. Our approach infers parameters of an underlying ODE-based physical model that directly allows for interpretability and long-term predictions. 3. The unique combination of powerful neural implicit representations with rich physical models allows to synthesize high-resolution and photo-realistic imagery. Moreover, it enables physical editing by rendering novel scenes with modified physical parameters. 4. Contrary to existing learning-based approaches that require large corpora of training data, we propose a per-scene model, so that only a single short video clip that depicts the physical phenomenon is necessary. 2 RELATED WORK The combination of machine learning and physics has been addressed across an extremely broad range of topics. For example, machine learning was used to aid physics research (Bogojeski et al., 2020; Leclerc et al., 2020), or physics was used within machine learning models, such as for automatic question answering from videos (Chen et al., 2021; Bear et al., 2021). In this work we focus specficially on extracting physical models from single videos, so that in the following we discuss related works that we consider most relevant in this context. Physics in the context of learning. While neural networks have led to many remarkable results across diverse domains, the inference of physical principles, such as energy conservation, is still a major challenge and requires additional constraints. A general way to endow models with a physicsbased prior is to use generalized energy functions. For example, Greydanus et al. (2019) and Toth et al. (2020) use a neural network to parameterize the Hamiltonian of a system, which yields a relation between the energy of the system and the change of the state. Hence, they are able to infer the dynamics of systems with conserved energy, such as a pendulum or a multi-body system. One disadvantage of using the Hamiltionian is that canonical coordinates need to be used. To eliminate this constraint, other works use the Lagrangian to model the system’s energy. Since this formalism is more complex, Lutter et al. 
(2019) and Zhong & Leonard (2020) restrict the Lagrangian to the case of rigid-body dynamics to model systems with multiple degrees of freedom, such as a pole on a cart, or a robotic arm. Cranmer et al. (2020) use a neural network to parameterize a general Lagrangian, which they use to infer the dynamics of a relativistic particle in a uniform potential. While being able to model many relevant systems, the aforementioned energy-based approaches cannot easily be extended to dissipative systems that are much more common in the real world (Galley, 2013). Furthermore, they do not allow for a semantic interpretation of individual learned system parameters. PhyDNet, introduced by Guen & Thome (2020), learns dynamics in the form of a general PDE in a latent space, which, like the aforementioned works, prohibits interpretation of the learned physical model. In contrast, in the context of incorporating physical phenomena into learning frameworks, there are also approaches that make the underlying dynamics explicit. For example, Jaques et al. (2020) unroll the Euler integration of the ordinary differential equation of bouncing balls, as well as balls connected by a spring, to identify the physical parameters like the spring constant. Kandukuri et al. (2020) and de Avila Belbute-Peres et al. (2018) propose to use a linear complementarity problem to differentiably simulate rigid multi-body dynamics that can also handle object interaction and friction. For our method, we also rely on the advantages of modelling the underlying physics explicitly in order to obtain interpretable parameter estimates. Inferring physical properties from video. While many approaches work with trajectories in state space, there are also some works that operate directly on videos. In this case, the information about physical quantities is substantially more abstract, so that uncovering non-linear dynamics from video data is a significantly more difficult problem. Traditionally, such inverse problems are often phrased in terms of optimization problems, for example for deformable physics inference (Weiss et al., 2020), among many more. While respective approaches can successfully estimate a wide range of relevant physical quantities from video data, they often require rich additional information, such as 3D information in the form of depth images in combination with a 3D template mesh (Weiss et al., 2020), which may limit their practical applicability. More recently, several end-to-end learning approaches have been proposed. de Avila Belbute-Peres et al. (2018) use an encoder to extract the initial state of several objects from the combination of images, object masks and flow frames. After propagating the physical state over time, they decode the state back into images to allow for end-to-end training. Jaques et al. (2020) and Kandukuri et al. (2020) use an encoder network to extract object positions from object masks for individual frames. After estimating initial velocities from the positions they integrate the state over time and use a carefully crafted coordinate-consistent decoder, which is based on spatial transformers, to obtain predicted images. Zhong & Leonard (2020) extend this idea to their variational autoencoder (VAE) architecture to obtain a coordinate-aware encoder which they use to infer parameters of the latent distribution of generalized coordinates for each frame. Toth et al. (2020) use a VAE structure to predict the parameters of a posterior over the initial state from a sequence of videos. 
All of these approaches require large amounts of data to train the complex encoder and decoder modules. In contrast, our approach does not rely on trainable encoder or decoder structures, but instead uses a non-trainable fixed neural ODE solver in combination with a trainable neural implicit representation, and is thus able to infer physical models from a single video. Implicit representations. Recently, neural implicit representations have gained popularity due to their theoretical elegance and performance in novel view synthesis. The idea is to use a neural network to parametrize a function that maps a spatial location to a spatial feature. For example, geometric shapes have been represented using occupancy values (Mescheder et al., 2019; Chen & Zhang, 2019; Peng et al., 2020) or signed distance functions (Park et al., 2019; Gropp et al., 2020; Atzmon & Lipman, 2020). In the area of multiview 3D surface reconstruction as well as novel view synthesis, implicit geometry representations, such as density or signed distance, are combined with implicit color fields to represent shape and appearance (Sitzmann et al., 2019; Mildenhall et al., 2020; Yariv et al., 2020; Niemeyer et al., 2020; Azinovic et al., 2021). To model dynamic scenes, there have been several approaches that parametrize a displacement field and model the scene in a reference configuration (Niemeyer et al., 2019; Park et al., 2021; Pumarola et al., 2021). On the other hand, several approaches (Xian et al., 2021; Li et al., 2021; Du et al., 2021) include time as an input to the neural representation and regularize the network using constraints based on appearance, geometry, and pre-trained depth or flow networks – however, none of these methods uses physics-based constraints, e.g. by enforcing Newtonian motion. While the majority of works on implicit representations focuses on shape, Sitzmann et al. (2020) show the generality of implicit representations by representing images and audio signals. Our work contributes to the neural implicit representation literature by combining such representations with explicit physical models. 3 ESTIMATING PHYSICAL MODELS WITH NEURAL IMPLICIT REPRESENTATIONS Our main goal is the estimation of physical parameters from a single video, where we specifically focus on the setting of a static background and dynamic objects that are moving according to some physical phenomenon. With that, we model the dynamics of the objects using an ordinary differential equation (ODE). Our objective is now to estimate the unknown physical parameters, as well as the initial conditions, of this ODE. Hence, we additionally learn a video generation model that is able to render a video that depicts objects which follow a specific physical model depending on respective physical parameters. For estimating these physical parameters directly from an input video, we utilise a photometric loss that imposes that the generated video is similar to the input video. 3.1 MODELING THE DYNAMICS For most of the dynamics that can be observed in nature, the temporal evolution of the state can be described by an ODE. For example, for a pendulum the state variables are the angle of deflection and the angular velocity, and a two-dimensional first-order ODE can be used to describe the dynamics. In general, we write ż = f(z, t; θ) to describe the ODE¹, where z ∈ Rn denotes the state variable, t ∈ R denotes time and θ ∈ Rm are the unknown physical parameters.
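As an illustration of this abstract form, a minimal PyTorch sketch of a parametric right-hand side f(z, t; θ) with learnable physical parameters could look as follows. The pendulum-style state layout and the log-parameterization that keeps the parameters positive are our assumptions for illustration, not the paper's released code:

```python
import torch
import torch.nn as nn

class ParametricODE(nn.Module):
    """Right-hand side f(z, t; theta) of a first-order ODE with learnable
    physical parameters theta. Here z = (phi, omega) is a damped-pendulum
    state, purely as an illustrative choice of f."""

    def __init__(self, g: float = 9.81):
        super().__init__()
        self.g = g                                  # known gravitational acceleration
        self.log_l = nn.Parameter(torch.zeros(()))  # log-length, keeps l > 0
        self.log_c = nn.Parameter(torch.zeros(()))  # log-damping, keeps c > 0

    def forward(self, t: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        phi, omega = z[..., 0], z[..., 1]
        l, c = self.log_l.exp(), self.log_c.exp()
        dphi = omega
        domega = -(self.g / l) * torch.sin(phi) - c * omega * omega.abs()
        return torch.stack([dphi, domega], dim=-1)
```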
Using the initial conditions z0 ∈ Rn at the initial time t0, we can write the solution of the ODE as
$$z(t; z_0, \theta) = z_0 + \int_{t_0}^{t} f(z(\tau), \tau; \theta)\, d\tau. \quad (1)$$
Note that the solution curve z(t; z0, θ) ⊂ Rn depends both on the unknown initial conditions z0, as well as on the unknown physical parameters θ. In practice, the solution to Eq. (1) is typically approximated by numeric integration. In our context of physical parameter estimation from videos, we build upon the recent work by Chen et al. (2018), who proposed an approach to compute gradients of the solution curve of an ODE with respect to its parameters. With that, it becomes possible to differentiate through the solution in Eq. (1) and therefore we can use gradient-based methods to estimate z0 and θ. 3.2 DIFFERENTIABLE RENDERING OF THE VIDEO FRAMES To render the video frames, we draw inspiration from the recent advances in neural implicit representations. To this end, we use a static representation to model the background, which we combine with an appearance and shape representation of dynamic foreground objects. By composing the learned background with the dynamic foreground objects, whose poses are determined by the solution of the ODE encoding the physical phenomenon, we obtain a dynamic representation of the overall scene. Doing so allows us to query the color values on a pixel grid, so that we are able to render video frames in a differentiable manner. Fig. 2 shows an overview of the approach. Representation of background. The static background is modeled by a function F(·; θbg) that maps a 2D location x to an appearance value c ∈ RC, where C denotes the number of appearance channels (e.g. RGB colors). The function F(·; θbg) encodes the appearance of the background and is represented as a neural network with learnable parameters θbg. To improve the ability of the neural network to learn high-frequency variations in appearance, we use Fourier features (Tancik et al., 2020), so that the input location x ∈ R2 is mapped to a higher-frequency vector γ(x) ∈ R^(4N_Fourier+2), where N_Fourier is the number of frequencies used. The full representation of the background then reads cbg(x) = F(γ(x); θbg). For a more detailed discussion of the architecture, we refer to App. A. Representation of dynamic objects. To compose the static background and the dynamically moving objects into the full scene, we draw inspiration from Ost et al. (2021) who use implicit representations to represent color and shape in a scene graph to decompose a dynamic scene into a background representation and dynamically moving local representations. A drawback of their work is that they do not use a physical model to constrain the dynamics, and therefore strong supervisory signals like the trajectories and the dimensions of the bounding boxes are essential. In our case, each dynamic object is represented in terms of a local neural implicit representation, which is then placed in the overall scene based on the time-dependent spatial transformation Tt = T(z(t; z0, θode), θ+). This transformation is parameterized by the unknown initial condition z0, the physical parameters θode of the ODE, and possibly additional parameters θ+. As such, these parameters determine the transformation from the global coordinate system of the background to the local coordinate system. ¹W.l.o.g. we only consider first-order ODEs here, since it is always possible to reduce the order to one by introducing additional state variables.
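In code, Eq. (1) and the gradient computation of Chen et al. (2018) are available through the torchdiffeq package. The following sketch integrates the ParametricODE module from above and back-propagates into both z0 and θ; the solver choice, time grid, and dummy loss are arbitrary illustrative assumptions:

```python
import torch
from torchdiffeq import odeint  # Chen et al. (2018); pip install torchdiffeq

ode = ParametricODE()  # learnable theta = (l, c), from the sketch above
z0 = torch.nn.Parameter(torch.tensor([0.5, 0.0]))  # learnable initial state (phi0, omega0)
t = torch.linspace(0.0, 2.0, steps=20)             # time stamps of the video frames

# z has shape (len(t), 2); row i is z(t_i; z0, theta) from Eq. (1).
z = odeint(ode, z0, t, method="dopri5")

# Any scalar loss on the trajectory back-propagates into z0 and theta.
loss = z.pow(2).mean()
loss.backward()
print(z0.grad, ode.log_l.grad, ode.log_c.grad)
```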
Similarly to the background, the appearance of each individual dynamic object is modelled in terms of an implicit neural representation (in the local coordinate system). In contrast to the background, we augment the color output c ∈ RC of the dynamic object representation with an additional opacity value o ∈ [0, 1], which allows us to model objects with arbitrary shape. We write the representation of a dynamic object in the global coordinate system as (cobj(x), o(x)) = G(γ(x′); θobj), where G(·; θobj) is represented as a neural network with weights θobj, γ denotes the mapping to Fourier features, and x′ = Tt(x) is the local coordinate representation of the (global) 2D location x. Differentiable rendering. For rendering we evaluate the composed scene appearance at a regular pixel grid, where we use the opacity value of the local object representation to blend the color of the background and the dynamic objects. To obtain the final color, for all positions x of the pixel grid we evaluate the equation
$$c(x, t) = (1 - o(x))\, c_{bg}(x) + o(x)\, c_{obj}(x). \quad (2)$$
Note that due to the time dependence of the transformation Tt, the color value for pixel x is also time dependent, which allows us to render the frames of the sequence over time. 3.3 LOSS FUNCTION We jointly optimize for the parameters of the neural implicit representations θbg and θobj and estimate the physical parameters θode, z0 and θ+ of the dynamics and the transformation. To this end, we use a simple photometric loss defined over all the pixel values, which reads
$$\mathcal{L} = \frac{1}{|I|\,|T|} \sum_{t \in T} \sum_{x \in I} d\big(I(x, t),\, c(x, t)\big), \quad (3)$$
where d computes the discrepancy between its two inputs, T is the set of all given time steps, I is the set of all pixel coordinates at the current resolution (see next section) and I(x, t) are the given images. To capture information on multiple scales we employ an image pyramid scheme. More details can be found in App. C. 4 EXPERIMENTS We use two challenging physical models to experimentally evaluate our proposed approach. To analyze our method and to compare to previous work, we first consider synthetically created data. Afterwards, we show that our method achieves promising results also on real-world data. For details about the ODEs describing the dynamics, additional implementation details, an ablation study, as well as additional results we refer the reader to the Appendix. Although several learning-based approaches that infer physical models from image data have been proposed (de Avila Belbute-Peres et al., 2018; Jaques et al., 2020; Kandukuri et al., 2020; Zhong & Leonard, 2020; Toth et al., 2020), existing approaches are particularly tailored towards settings with large training corpora. However, these methods typically suffer from a decreasing estimation accuracy in scarce training data regimes, or if out-of-distribution generalization is required (cf. App. F). In contrast, our proposed approach is able to predict physical parameters from a single short video clip. Due to the lack of existing baselines tailored towards estimation from a single video, we adapt the recent work of Jaques et al. (2020) and Zhong & Leonard (2020) to act as baseline methods. 4.1 TWO MASSES SPRING SYSTEM We consider the example of two moving MNIST digits connected by an (invisible) spring on a CIFAR background, in a similar spirit to Jaques et al. (2020), see Fig. 3.
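Before detailing this experiment, a compact sketch of the rendering and loss computation of Eqs. (2)–(3) may be helpful. This is our illustration under assumed tensor shapes, with d instantiated as the squared error; it is not the authors' released code:

```python
import torch

def render_frame(c_bg: torch.Tensor, c_obj: torch.Tensor, o: torch.Tensor) -> torch.Tensor:
    """Eq. (2): blend background and object colors with the object opacity.
    c_bg, c_obj: (H, W, C) colors on the pixel grid; o: (H, W, 1) opacity in [0, 1]."""
    return (1.0 - o) * c_bg + o * c_obj

def photometric_loss(frames: torch.Tensor, rendered: torch.Tensor) -> torch.Tensor:
    """Eq. (3) with d chosen as the squared error; both inputs: (T, H, W, C)."""
    return ((frames - rendered) ** 2).mean()
```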
Besides the initial positions and velocities, the spring constant k and the equilibrium distance l of the connecting spring need to be identified for the dynamics model. For a more detailed description of the model see App. B.1. The approach of Jaques et al. (2020) uses a learnable encoder and velocity estimator to obtain positions and initial velocities of a known number of objects from the video frames. After integrating the known parametric model, they use a learnable coordinate-consistent decoder in combination with learned object masks and colors to render frames from the integrated trajectories. Using a photometric loss they require 5000 sequences of different runs of the same two masses spring system to train the model and identify the parameters. In order to compare their method to our work in the setting of parameter estimation from a single video, in addition to their model trained on the full dataset (‘B: Full’), we also consider their model trained on an individual sequence of the test dataset (‘B: Overfit’). We fit our model to sequences from the test dataset, where we use two local representations and parametrize the spatial transformation as shown in App. B.1. By using the maximum of both masks as foreground mask, we enable the model to identify the object layering. We find that it is necessary to gradually build up the sequence of frames over the course of training. We start with only two frames and add the respective next frame after 60 epochs. Also, the model appears to have a scale freedom in terms of the equilibrium length and the points where the spring is attached to the digits.² We therefore add an additional loss to keep the spring attachment close to the center of the bounding box of the digits in the first frame. We observe similar effects when overfitting the model of Jaques et al. (2020) to a single sequence. When training on the full dataset, the effect seems to be averaged out and is not observed. Fig. 3 shows a qualitative comparison of our results to the baseline of Jaques et al. (2020), where the latter is trained in the two settings explained above. We observe that for this sequence all approaches yield reasonable results for the reconstruction of the training frames. However, for prediction the overfitted model of Jaques et al. (2020) performs significantly worse, indicating that the physical model is poorly identified from a single video. The baseline trained on the full dataset yields results that are slightly worse than our results. We see that in both cases the parameters are identified correctly. The fact that we achieve comparable results while using significantly less data highlights the advantage of combining the explicit dynamics model with the implicit representation for the objects. Note that we chose sequence 6 since it yielded the best results for the baseline. More results can be found in App. E.1. 4.2 NONLINEAR DAMPED PENDULUM We use synthetically created videos of a nonlinear damped pendulum to compare our method to the previous work of Zhong & Leonard (2020) and also to show the ability of our approach to handle high-resolution videos. The equations describing the pendulum dynamics can be found in App. B.2. Comparison to Lagrangian Variational Autoencoder. We use the dataset of Zhong & Leonard (2020) containing several sequences of a simple pendulum (each comprising 20 frames), which was created by the OpenAI Gym simulator (Brockman et al., 2016).
The method by Zhong & Leonard (2020) uses a coordinate-aware encoder to obtain the distribution of the initial state from object masks. After sampling, the initial state is integrated using a learnable Lagrangian function parametrizing the dynamics of the system, and a coordinate-aware decoder is used to render frames from the trajectories. We train the model using only the first N frames of a single sequence as the training data (with no external control input), effectively overfitting the model to each sequence. ²Intuitively, if the motion is only in one direction we can vary the equilibrium length and adjust the spring attachments without changing the observed motion. Similar effects are present in a 2D motion. Similar to the baseline, we assume no damping and a known pivot point A in the middle of the frame to train our model. Since this dataset does not include image data, we only use the loss on the object mask, and train our model in this modified setup using the same frames as for the baseline. To evaluate the performance of each method in identifying the underlying dynamics, we compare the prediction of the unseen frames of the same sequence. Qualitative results are presented in Fig. 4. We can observe that both methods fit the given training data very well; however, in the baseline the pendulum motion significantly slows down for unseen time steps and thus it is unable to obtain accurate predictions for unseen data. We emphasize that this happens because the method requires significantly larger training datasets, so that it performs poorly in the single-video setting considered in this paper. In contrast, our method shows a significantly better performance, which highlights the strength of directly modelling physical phenomena to constrain the learnable dynamics in an analysis-by-synthesis manner. Due to aliasing effects that arise from the low resolution of the frames, our method does not give perfect predictions; however, if we use high-resolution images for our method we achieve nearly perfect reconstruction, as we show in Fig. 5 and Fig. 6. For a quantitative comparison and further experimental details see App. E.2. High resolution videos. In contrast to the baseline, our approach is able to handle high-resolution videos with complex background and pendulum shapes and textures. In this case, our approach accurately estimates the parameters for the full pendulum model, as we show in Fig. 5. For this experiment we created several videos by simulating a pendulum with known parameters and then rendering the pendulum on top of an image. Qualitative results of fitting our model to the lake scene can be seen in Fig. 5. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. The renderings of other scenes are shown in App. E. As we show in Fig. 6 and Table 1, it is necessary that the frames in the training set cover a sufficient portion of the motion to enable a correct estimation of the physical parameters. 4.3 REAL PENDULUM VIDEO We now show that our approach is even able to infer physical parameters from real-world data. We recorded the pendulum motion shown in Fig. 1. The pendulum is mounted almost frictionlessly, and due to its high weight we do not expect large air drag effects either. The video was recorded with a smartphone, which leads to noticeable real-world noise such as motion blur; however, the proposed method still manages to produce convincing results.
The pseudo ground-truth segmentation masks are generated semi-manually by using GrabCut (Rother et al., 2004) and exhibit significant noise that is also handled well by the proposed model. We extract every third frame from the video such that there are 10 extracted frames per second, and use the first 10 frames for training. We use the full damped pendulum model to estimate the physical parameters of the pendulum motion. The damping is estimated as c = 4.7 · 10⁻¹³, which matches our expectation for this low-friction setting. For the pendulum length we note from Eq. (6) that the estimated length l = 27.7 cm is a real-world quantity without scale ambiguity. Therefore, we can compare it to l_measured = 27.1 cm, which we obtained by measuring the length from the pivot point to the estimated center of gravity of the pendulum using a ruler. We would like to emphasize that this very good correspondence shows that we are able to estimate metric scale in a monocular video from a pendulum motion. 5 CONCLUSION In this work we presented a solution for learning a physical model from an image sequence that depicts some physical phenomenon. To this end, we proposed to combine neural implicit representations and neural ordinary differential equations in an analysis-by-synthesis fashion. Unlike existing learning-based approaches that require large training corpora, a single short video clip is sufficient for our approach. In contrast to prior works that use encoder-decoder architectures specifically tailored to 2D images, we built upon neural implicit representations that have been shown to give impressive results for 3D scene reconstruction. Therefore, the extension of the proposed method to 3D is a promising direction for future work. We present diverse experiments in which the ODE parametrizes a rigid-body transformation between the background and the foreground objects, such as the pendulum motion. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamics. The focus of this work is on learning a physical model of a phenomenon from a short video. Yet, the high fidelity of our model’s renderings, together with the easy modifiability of the physical parameters, enables various computer graphics applications such as the artistic re-rendering of scenes, which we briefly demonstrate in the supplementary video. Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images. We believe that our work may serve as inspiration for follow-up works on physics-based machine learning using neural implicit representations. Ethics statement. This work attempts to learn interpretable physical models from video clips of physical phenomena. Our contribution is largely theoretical and we show experiments on synthetic data and limited real-world data. Nevertheless, as machine learning models achieve more human-like understanding of the real, physical world, it is paramount to ensure that they are deployed safely and according to strict ethical guidelines. While we think that the current state of our work will not disadvantage or advantage specific groups of people, we recommend a careful ethical evaluation of derivative works that aim to close the gap to human physical reasoning.
A potential positive impact of this work is that it can be beneficial to people with lower financial resources, as it overcomes the need for expensive experimental gear to infer physical parameters, e.g. in the context of physics education. Reproducibility statement. To ensure the reproducibility of this work we give architecture and training details in App. A and C. Furthermore, we will release our code upon acceptance, so that all experiments and figures shown in this paper can be reproduced. A MODEL ARCHITECTURE We adopt the architecture used in Mildenhall et al. (2020) for the implicit representations, see Fig. 7 for the basic structure. For the Fourier features we use a logarithmic scaling. The i-th of the N_Fourier Fourier features is obtained as
$$\gamma_i(x) = \big(\sin(2^i x), \cos(2^i x)\big), \quad i = 0, \ldots, N_{Fourier} - 1, \quad (4)$$
where sin(2ⁱx) for x ∈ R2 means the element-wise application of the sine function. We also include the original x in the encoding γ(x). B MODELS FOR THE DYNAMICS B.1 TWO MASSES SPRING SYSTEM The system is modeled as a two-body system where the dynamics of each object is described by Newton’s second law of motion, i.e. F = mẍ, where F is the force. Since only the ratio between force and mass can be identified without additional measurement, we fix m = 1, analogously to the work of Jaques et al. (2020). Using Hooke’s law, we write the force applied to object i by object j as
$$F_{i,j} = -k \left( (p_i - p_j) - 2l\, \frac{p_i - p_j}{\|p_i - p_j\|} \right). \quad (5)$$
Using the position p_i(t; k, l) of the objects to parametrize the trajectory of the local coordinate systems, we can write the time-dependent 2D spatial transformation to the local coordinate system i as T⁽ⁱ⁾ₜ(x) = x − p_i(t; k, l), where l and k are learnable parameters. B.2 NONLINEAR DAMPED PENDULUM A pendulum that is damped by air drag can be modelled as
$$\frac{d}{dt}\begin{bmatrix} \varphi \\ \omega \end{bmatrix} = \begin{bmatrix} \omega \\ -\frac{g}{l}\sin(\varphi) - c\,\omega\,|\omega| \end{bmatrix}, \quad (6)$$
where φ ∈ R is the deflection angle, ω ∈ R is the angular velocity, g is the (known) gravitational acceleration, l > 0 is the (physical) length of the pendulum, and c > 0 is the damping constant. We use the solution curve φ(t; l, c) to parameterize the time-dependent 2D spatial transformation as Tt(x) = R(φ(t; l, c)) x + A, where R ∈ SO(2) is a rotation matrix and A ∈ R2 is the pivot point of the pendulum. For the full model, the parameters l and c are learnable. For the sake of simplicity we assume that the gravitational acceleration g always points downwards in the global image coordinate system. C TRAINING DETAILS In the following we provide additional training details. C.1 DISCREPANCY MEASURE FOR THE LOSS TERM Unless stated otherwise, for our experiments we use C = 4 image channels, where the first three channels correspond to the RGB channels, and the last channel represents a mask of the foreground object. For the real-world data, we obtained the object masks using a semi-manual approach as described in Sec. 4.3. For the experiments on the synthetically created high-resolution videos we used the masks constructed for the video creation directly. For the first three channels we define the discrepancy measure in terms of the mean square error as
$$d_{rgb}(x, y) = \|x - y\|^2, \quad (7)$$
and for the mask in the last channel we consider the binary cross-entropy loss, i.e.
$$d_{seg}(x, y) = -\big[x \log(y) + (1 - x) \log(1 - y)\big]. \quad (8)$$
With that, the overall discrepancy measure is given as
$$d(x, y) = d_{rgb}(x_{1:3}, y_{1:3}) + \lambda_{seg}\, d_{seg}(x_4, y_4). \quad (9)$$
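A direct transcription of Eqs. (7)–(9) into code might look as follows. This is a sketch under the assumption that channels are ordered (R, G, B, mask); the λ_seg default is one of the values reported in App. E, and the clamping for numerical stability is our addition:

```python
import torch
import torch.nn.functional as F

def discrepancy(x: torch.Tensor, y: torch.Tensor, lambda_seg: float = 0.03) -> torch.Tensor:
    """Combined measure of Eq. (9): MSE on the RGB channels (Eq. (7)) plus a
    weighted binary cross-entropy on the mask channel (Eq. (8)).
    x: target with channels (R, G, B, mask); y: prediction, same layout."""
    d_rgb = ((x[..., :3] - y[..., :3]) ** 2).sum(dim=-1).mean()
    d_seg = F.binary_cross_entropy(y[..., 3].clamp(1e-6, 1 - 1e-6), x[..., 3])
    return d_rgb + lambda_seg * d_seg
```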
C.2 OPTIMIZATION We train our model using the Adam optimizer (Kingma & Ba, 2015) with exponential learning rate decay, which reads
$$r(e) = r_0 \cdot \beta^{\,e / n_{decay}}, \quad (10)$$
where r(e) is the learning rate depending on the epoch e, r_0 is the initial learning rate, β is the decay rate and n_decay is the decay step size. One important aspect of the training is to use different learning rates for the parameters θbg and θobj of the implicit representations on the one hand and the physical parameters θode, z0 and θ+ on the other hand. In order to estimate the initial parameters of the ODE and the transformation for the pendulum we employ a heuristic that uses the information contained in the mask. To obtain an initial estimate for the pivot point A we average all masks and use the pixel with the highest value. To obtain an estimate for the initial angle, we perform a principal component analysis (PCA) on the pixel locations covered by the mask and use the angle between the first component and the vertical direction. The velocity is always initialized as 0. We initialize the damping as c = 1 and the pendulum length as l = 2 m for the synthetic experiments and l = 0.4 m for the real-world experiment. C.3 IMAGE PYRAMID To capture information on multiple scales we employ an image pyramid scheme. Due to memory limitations, for large images we cannot evaluate all pixel values in one batch, and thus the classical approach that considers all stages of the image pyramid at once is not feasible in our setting. Therefore, during training, we sequentially traverse the image pyramid from the low-resolution levels towards the original high-resolution level. The idea is that the low-resolution stages reveal global information about the movement of the object, whereas the later high-resolution stages allow us to use finer details that improve the coarse estimates from the previous stages. To this end, we use a binomial kernel of size 5×5 with stride two, which we repeatedly apply N_pyr times to reduce the original resolution of the image. We start the training using the coarsest level, and then switch to the next finer level every n_pyr steps. D ABLATION STUDY To motivate the chosen loss functions, we report the results for the parameter estimation with different loss function configurations in Table 1. Beyond the influence on the quality of the parameter estimation, another motivation to use the color loss is that it enables learning the representation of the appearance of the background and the object in the implicit representation. This allows for photo-realistic rendering of unseen predictions, as well as the re-rendering of scenes with modified physical parameters, effectively allowing physical scene editing. For the mask loss, on the other hand, we have found that it makes the estimation process more robust to suboptimal initializations of the physical parameters. E FURTHER EXPERIMENTAL DETAILS AND RESULTS In the following we consider specific details for the different experiments. E.1 TWO MASSES SPRING SYSTEM Experimental details. We use both loss terms and set λseg = 0.01 to balance them. Additionally, we use an MSE loss to keep the center of the bounding boxes of the digits close to the origin of the local representations in the first frame. This fixes the scale problem related to the equilibrium length described in the main text. Moreover, we use another MSE loss term to keep the opacity value close to zero outside of (but close to) the visible area.
We found this to be necessary, since otherwise artefacts might appear in the extrapolation when previously unseen parts of the mask appear in the visible area. For the background we use an implicit representation with N_Fourier = 6 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128 and an input skip to layer number N_skip = 4. For the local object representation we use N_Fourier = 8 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128 and an input skip to layer number N_skip = 4. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.01 for the physical parameters. We set β_MLP = 0.99954, n_decay,MLP = 50, β_param = 0.95 and n_decay,param = 100. For the image pyramid we use N_pyr = 1 stage and step up the pyramid every n_pyr = 200 epochs. We train for 1500 epochs, where one epoch is completed when all the pixels in the current resolution have been considered. Additional results. In Fig. 8 and Fig. 9 we present additional results for sequence 0 and sequence 1 of the test dataset. We see that for both sequences the overfitted baseline is not able to produce a reasonable extrapolation of the data and even produces artifacts for the reconstruction part of the sequence. One reason for this is that the model is unable to identify the physical parameters correctly, as can be seen from the large relative errors. Our model, on the other hand, is able to estimate the parameters with high accuracy that is even slightly better than the baseline trained on the full training dataset, which again shows the strength of our approach, considering that we use a single video as input. E.2 COMPARISON WITH THE LAGRANGIAN VARIATIONAL AUTOENCODER Experimental details. The data used in this experiment does not include image data, therefore we do not use d_rgb and set λseg = 1. Since the predicted masks are obtained only from the local representation, we do not use an implicit representation for the background in this example. For the local representation we use N_Fourier = 4 Fourier features, N_FC = 6 fully connected layers of width W_FC = 64 and an input skip to layer number N_skip = 3. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.01 for the physical parameters. We set β_MLP = 0.9954, n_decay,MLP = 10, β_param = 0.995 and n_decay,param = 50. For the image pyramid we use N_pyr = 2 stages and step up the pyramid every n_pyr = 75 epochs. We train for 1500 epochs, where one epoch is completed when all the pixels in the current resolution have been considered. Quantitative comparison. To quantitatively compare the temporal prediction ability of our approach with the baseline, we follow the procedure by Zhong & Leonard (2020) and report the average mean squared error (MSE) between the predicted and the ground truth mask for the frames of the full sequence, which we denote as pixel MSE for consistency with the previous work. The results for randomly chosen sequences of the dataset are presented in Fig. 10. We can observe that the predictive power for both methods is limited when only a few frames are available to infer the underlying dynamics. However, with an increasing number of frames, our method becomes able to reconstruct the physics more consistently, while the baseline does not noticeably benefit from more training frames.
We believe that this is because the baseline method overfits to the given frames, whereas our method infers actual physical parameters. E.3 EXPERIMENTS WITH HIGH RESOLUTION VIDEOS AND THE REAL WORLD VIDEO Experimental details. We use the same architecture for the high-resolution synthetic and the real-world video sequences. We use both loss terms and set λseg = 0.03 to balance them. For the background we use an implicit representation with N_Fourier = 10 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128 and an input skip to layer number N_skip = 4. For the local object representation we use N_Fourier = 8 Fourier features, N_FC = 8 fully connected layers of width W_FC = 128 and an input skip to layer number N_skip = 4. We use an initial learning rate of r_MLP,0 = 0.001 for the parameters of the implicit representations and r_param,0 = 0.05 for the physical parameters. We set β_MLP = 0.99954, n_decay,MLP = 10, β_param = 0.95 and n_decay,param = 100. For the image pyramid we use N_pyr = 5 stages and step up the pyramid every n_pyr = 200 epochs. We train for 1500 epochs, where one epoch is completed when all the pixels in the current resolution have been considered. Additional results. In the following we present additional rendering results. Fig. 11 and Fig. 12 show additional reconstruction and prediction results for additional synthetic high-resolution scenes. To create the synthetic scenes we took the background images from https://pixabay.com/photos/lake-mountains-nature-outdoors-6627781/ (Lake), https://pixabay.com/photos/city-street-architecture-business-4667143/ (City) and https://pixabay.com/photos/apples-fruits-ripe-red-apples-6073599/ (Apple). Fig. 13 allows for a more detailed comparison of the results for the real pendulum video. We show the images and masks used for training on the real pendulum video in Fig. 14. Please also see our supplementary video for additional results on this data. F GENERALIZATION OF THE LAGRANGIAN VARIATIONAL AUTOENCODER One drawback of learning-based approaches for visual estimation of physical models is the poor generalization to data that deviates from the training data distribution. We confirm this for the fully (pre-)trained model of Zhong & Leonard (2020). While the pixel MSE averaged over the full test set is 1.83 · 10⁻³, the error increases to 1.22 · 10⁻² when we shift the frames of the test data set by as much as 1 pixel in each direction. This corresponds to the case of input videos where the pivot point of the pendulum is not in the center of the image, which is different from the training data. This effect is visualized in Fig. 15, which shows the output of the model for sequence 2 of the test data set with zero control input, both in the original version and in the shifted version. We observe that the small shift of only one pixel in each direction leads to results that are significantly off, and not even the first frame is predicted correctly. While Zhong & Leonard (2020) propose to use a coordinate-aware encoder based on spatial transformers, this introduces additional complexity to the model. In contrast, our approach does not suffer from such issues.
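The initialization heuristic of App. C.2 and the pixel MSE of App. E.2 can be sketched as follows. This is our reading of the text, not the released code; the mask layout, the 0.5 threshold, and the use of torch.pca_lowrank are assumptions:

```python
import torch

def init_pendulum_state(masks: torch.Tensor):
    """Heuristic from App. C.2: estimate the pivot point A as the argmax of the
    time-averaged mask, and the initial angle from a PCA of the first mask.
    masks: (T, H, W) foreground masks with values in [0, 1]."""
    mean_mask = masks.mean(dim=0)
    h, w = mean_mask.shape
    idx = int(mean_mask.flatten().argmax())
    pivot = torch.tensor([idx // w, idx % w], dtype=torch.float32)  # (row, col)

    # PCA on the pixel locations covered by the first mask; the initial angle
    # is the angle between the first principal component and the vertical axis.
    ys, xs = torch.nonzero(masks[0] > 0.5, as_tuple=True)
    pts = torch.stack([ys.float(), xs.float()], dim=1)
    _, _, v = torch.pca_lowrank(pts, q=2)   # centers the points internally
    first = v[:, 0]                          # first principal direction (dy, dx)
    phi0 = torch.atan2(first[1], first[0])   # angle relative to the vertical image axis
    return pivot, phi0

def pixel_mse(pred_masks: torch.Tensor, gt_masks: torch.Tensor) -> torch.Tensor:
    """App. E.2 metric: average per-pixel MSE between predicted and ground-truth
    masks over all frames of a sequence; both inputs: (T, H, W)."""
    return ((pred_masks - gt_masks) ** 2).mean()
```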
1. What is the focus and contribution of the paper regarding video representation and object motions? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and technical completeness? 3. What are the weaknesses of the paper, especially regarding experimentation and validation? 4. Do you have any suggestions for improving the experimental design or adding more baselines for comparison? 5. Are there any potential limitations or areas for further exploration in the proposed method, such as applicability to various physical models or motion states?
Summary Of The Paper Review
Summary Of The Paper Summary This paper represents a video as a canonical implicit representation and a set of object motions that are described with an ordinary differential equation. It renders the representation into images and optimizes its parameters by minimizing the difference between rendered images and target images. This work utilizes an image pyramid for efficient training. It performs experiments on a nonlinear damped pendulum in both synthetic and real-world settings. Contributions This paper represents nonlinear object motions with an ordinary differential equation, which enables them to recover interpretable physical parameters from videos. It performs several experiments to validate the proposed approach. Review Strengths The proposed approach is novel. Although there are methods that represent a dynamic object as a canonical object and object motions, this work innovatively models object motions as an ordinary differential equation, which can produce interpretable physical parameters. The paper is well-written. I can understand the approach easily. The technical details are complete. Weaknesses The validation experiments are not sufficient. The paper says that it can learn physical models from videos. However, there are only experiments on a nonlinear damped pendulum model. It is not clear whether the proposed approach can work on other physical models. The comparison experiments only compare the proposed approach with one baseline. I am not sure if there are other methods in the setting, but it would be better to construct some baselines to validate the effectiveness of the proposed components. The ablation studies are not sufficient. There are some ablation studies that could be conducted, such as on the image pyramid scheme, validating the decomposition of foreground and background, and the accuracy of the approach for captured objects in different motion states.
ICLR
Title Variational Information Pursuit for Interpretable Predictions Abstract There is a growing interest in the machine learning community in developing predictive algorithms that are interpretable by design. To this end, recent work proposes to sequentially ask interpretable queries about data until a high confidence prediction can be made based on the answers obtained (the history). To promote short query-answer chains, a greedy procedure called Information Pursuit (IP) is used, which adaptively chooses queries in order of information gain. Generative models are employed to learn the distribution of query-answers and labels, which is in turn used to estimate the most informative query. However, learning and inference with a full generative model of the data is often intractable for complex tasks. In this work, we propose Variational Information Pursuit (V-IP), a variational characterization of IP which bypasses the need to learn generative models. V-IP is based on finding a query selection strategy and a classifier that minimize the expected cross-entropy between true and predicted labels. We prove that the IP strategy is the optimal solution to this problem. Therefore, instead of learning generative models, we can use our optimal strategy to directly pick the most informative query given any history. We then develop a practical algorithm by defining a finite-dimensional parameterization of our strategy and classifier using deep networks and train them end-to-end using our objective. Empirically, V-IP is 10-100x faster than IP on different Vision and NLP tasks with competitive performance. Moreover, V-IP finds much shorter query chains when compared to reinforcement learning which is typically used in sequential-decision-making problems. Finally, we demonstrate the utility of V-IP on challenging tasks like medical diagnosis where the performance is far superior to the generative modeling approach. 1 INTRODUCTION Suppose a doctor diagnoses a patient with a particular disease. One would want to know not only the disease but also an evidential explanation of the diagnosis in terms of clinical test results, physiological data, or symptoms experienced by the patient. For practical applications, machine learning methods require an emphasis not only on metrics such as generalization and scalability but also on criteria such as interpretability and transparency. With the advent of deep learning methods over traditionally interpretable methods such as decision trees or logistic regression, the ability to perform complex tasks such as large-scale image classification now often implies a sacrifice in interpretability. However, interpretability is important in unveiling potential biases for users with different backgrounds (Yu, 2018) or gaining users’ trust. Most of the prominent work in machine learning that addresses this question of interpretability is based on post hoc analysis of a trained deep network’s decisions (Simonyan et al., 2013; Ribeiro et al., 2016; Shrikumar et al., 2017; Zeiler & Fergus, 2014; Selvaraju et al., 2017; Smilkov et al., 2017; Chattopadhyay et al., 2019; Lundberg & Lee, 2017). These methods typically assign importance scores to different features used in a model’s decision by measuring the sensitivity of the model output to these features. However, explanations in terms of importance scores of raw features might not always be as desirable as a description of the reasoning process behind a model’s decision. 
Moreover, there are rarely any guarantees for the reliability of these post hoc explanations to faithfully represent the model’s decision-making process (Koh et al., 2020). Consequently, post hoc interpretability has been widely criticized (Adebayo et al., 2018; Kindermans et al., 2019; Rudin, 2019; Slack et al., 2020; Shah et al., 2021; Yang & Kim, 2019) and there is a need to shift towards ML algorithms that are interpretable by design. An interesting framework for making interpretable predictions was recently introduced by Chattopadhyay et al. (2022). The authors propose the concept of an interpretable query set Q, a set of user-defined and task-specific functions q : X → A, which map a data point in X to an answer in A, each having a clear interpretation to the end-user. For instance, a plausible query set for identifying bird species might involve querying beak shape, head colour, and other visual attributes of birds. Given a query set, their method sequentially asks queries about X until the answers obtained are sufficient for predicting the label/hypothesis Y with high confidence. Notably, as the final prediction is solely a function of this sequence of query-answer pairs, these pairs provide a complete explanation for the prediction. Figure 1 illustrates the framework on a bird classification task. To obtain short explanations (short query-answer chains), the authors propose to use a greedy procedure called Information Pursuit (IP), which was first introduced in Geman & Jedynak (1996). Given any input xobs, IP sequentially chooses the query which has the largest mutual information about the label/hypothesis Y given the history of query-answers obtained so far. To compute this mutual information criterion, a generative model is first trained to learn the joint distribution between all query-answers q(X) and Y; in particular, Variational Autoencoders (VAEs) (Kingma & Welling, 2013) are employed. This learnt VAE is then used to estimate the mutual information terms via Markov Chain Monte Carlo (MCMC) sampling. Unfortunately, the computational costs of MCMC sampling coupled with the challenges of learning accurate generative models that enable fast inference limit the application of this framework to simple tasks. As an example, classifying MNIST digits using 3×3 overlapping patches as queries¹ with this approach would take weeks! In this paper, we question the need to learn a full generative model between all query-answers q(X) and Y given that at each iteration IP is only interested in finding the most informative query given the history. More specifically, we present a variational characterization of IP which is based on the observation that, given any history, the query q∗, whose answer minimizes the KL-divergence between the label distribution P(Y | X) and the posterior P(Y | q∗(X), history), will be the most informative query as required by IP. As a result, we propose to minimize this KL-divergence term in expectation (over randomization of histories) by optimizing over querier functions, which pick a query from Q given history, parameterized by deep networks. The optimal querier would then learn to directly pick the most informative query given any history, thus bypassing the need for explicitly computing mutual information using generative models.
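This observation can be made precise with a short computation (a sketch of the argument; the formal statement appears as Proposition 1 in §3.2). For any history s, the sufficiency of Q(X) gives
$$\mathbb{E}_{X \mid s}\Big[ D_{KL}\big( P(Y \mid X) \,\|\, P(Y \mid q(X), s) \big) \Big] = H\big(Y \mid q(X), s\big) - H\big(Y \mid X, s\big),$$
and since H(Y | X, s) does not depend on the choice of q, minimizing this expected KL-divergence over q ∈ Q is equivalent to maximizing I(q(X); Y | s) = H(Y | s) − H(Y | q(X), s).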
Through extensive experiments, we show that the proposed method is not only faster (since MCMC sampling methods are no longer needed for inference), but also achieves competitive performance when compared with the generative modeling approach and also outperforms other state-of-the-art sequential-decision-making methods. Paper Contributions. (1) We present a variational characterization of IP, termed Variational-IP or V-IP, and show that the solution to the V-IP objective is exactly the IP strategy. (2) We present a practical algorithm for optimizing this objective using deep networks. (3) Empirically, we show that V-IP achieves competitive performance with the generative modelling approach on various computer vision and NLP tasks with a much faster inference time. (4) Finally, we also compare our approach to Reinforcement Learning (RL) approaches used in sequential decision-making areas like Hard Attention (Mnih et al., 2014) and Symptom Checking (Peng et al., 2018), where the objective is to learn a policy which adaptively chooses a fixed number of queries, one at a time, such that an accurate prediction can be made. In all experiments, V-IP is superior to RL methods. ¹Each patch query asks about the pixel intensities observed in that patch for xobs. 2 RELATED WORK Interpretability in Machine Learning. These works can be broadly classified into two main categories: (i) post-hoc interpretability, and (ii) algorithms that are interpretable by design. A large number of papers in this area are devoted to post-hoc interpretability. However, as stated in the Introduction, the reliability of these methods has recently been called into question (Adebayo et al., 2018; Yang & Kim, 2019; Kindermans et al., 2019; Shah et al., 2021; Slack et al., 2020; Rudin, 2019; Koh et al., 2020; Subramanya et al., 2019). Consequently, recent works have focused on developing ML algorithms that are interpretable by design. Several of these works aim at learning deep networks via regularization such that they can be approximated by a decision tree (Wu et al., 2021) or locally by a linear network (Bohle et al., 2021; Alvarez Melis & Jaakkola, 2018). However, the framework of Chattopadhyay et al. (2022) produces predictions that are completely explained by interpretable query-chains and is not merely an approximation to an interpretable model like decision trees. Another line of work tries to learn latent semantic concepts or prototypes from data and subsequently base the final prediction on these learnt concepts (Sarkar et al., 2022; Nauta et al., 2021; Donnelly et al., 2022; Li et al., 2018; Yeh et al., 2020). However, there is no guarantee that these learnt concepts would be interpretable to the user or align with the user’s requirements. In sharp contrast, allowing the user to define an interpretable query set in (Chattopadhyay et al., 2022) guarantees by construction that the resulting query-chain explanations would be interpretable and useful. Sequential Decision-Making. An alternative approach to learning short query-chains is to use methods for sequential decision learning. These algorithms can be used for making interpretable decisions by sequentially deciding “what to query next?” in order to predict Y as quickly as possible. Mnih et al. (2014) introduced a reinforcement-learning (RL) algorithm to sequentially observe an image through glimpses (small patches) and predict the label, and called their approach Hard Attention.
Rangrej & Clark (2021) introduced a probabilistic model for Hard Attention which is similar to the IP algorithm. More specifically, they propose to learn a partial-VAE (Ma et al., 2018) to directly learn the distribution of images given partially observed pixels. This VAE is then used to select glimpses in order of information gain, as in IP. In another work, Peng et al. (2018) introduced an RL-based framework to sequentially query patient symptoms for fast diagnosis. In §4, we compare V-IP with prior works in this area and show that in almost all cases our method requires a smaller number of queries to achieve the same level of accuracy. We conjecture that the superiority of V-IP over RL-based methods is because the V-IP optimization is not plagued by sparse rewards over long trajectories, for example, a positive reward for correct prediction after a large number of symptom queries as in Peng et al. (2018). Instead, Deep V-IP can be abstractly thought of as, given history s, choosing a query q (the action) and receiving DKL(P(Y | x) || P(Y | q(x), s)) as an immediate reward. A more rigorous comparison of the two approaches would be an interesting future work. 3 METHODS 3.1 G-IP: INFORMATION PURSUIT VIA GENERATIVE MODELS AND MCMC Let X : Ω → X and Y : Ω → Y be random variables representing the input data and corresponding labels/output. We use capital letters for random variables and small letters for their realizations. Ω is the underlying sample space on which all random variables are defined. Let P(Y | X) denote the ground truth conditional distribution of Y given data X. Let Q be a set of task-specific, user-defined, interpretable functions of data, q : X → A, where q(x) ∈ A is the answer to query q ∈ Q evaluated at x ∈ X. We assume that Q is sufficient for solving the task, i.e., we assume that
$$\forall (x, y) \in X \times Y \quad P(y \mid x) = P\big(y \mid \{x' \in X : q(x') = q(x)\ \forall q \in Q\}\big). \quad (1)$$
In other words Q(X) := {q(X) : q ∈ Q} is a sufficient statistic for Y. The Information Pursuit (IP) algorithm (Geman & Jedynak, 1996) proceeds as follows; given a data-point xobs, a sequence of most informative queries is selected as
$$q_1 = IP(\emptyset) = \arg\max_{q \in Q} I(q(X); Y); \qquad q_{k+1} = IP\big(\{q_i, q_i(x^{obs})\}_{1:k}\big) = \arg\max_{q \in Q} I\big(q(X); Y \mid q_{1:k}(x^{obs})\big). \quad (2)$$
Here q_{k+1} ∈ Q refers to the new query selected by IP at step k+1, based on the history (denoted as q_{1:k}(xobs))², and q_{k+1}(xobs) indicates the corresponding answer. The algorithm terminates after L queries, where L depends on the data point xobs, if all remaining queries are nearly uninformative, that is, ∀q ∈ Q, I(q(X); Y | q_{1:L}(xobs)) ≈ 0. The symbol I denotes mutual information. Evidently equation 2 requires estimating the query with maximum mutual information with Y based on history. One approach to carrying out IP is by first learning the distribution P(Q(X), Y) from data using generative models and then using MCMC sampling to estimate the mutual information terms³. However, learning generative models for distributions with high-dimensional support is challenging, and performing multiple iterations of MCMC sampling can likewise be computationally demanding. To address this challenge, in the next subsection we propose a variational characterization of IP that completely bypasses the need to learn and sample from complex generative models. 3.2 V-IP: A VARIATIONAL CHARACTERIZATION OF INFORMATION PURSUIT We begin this section by describing our variational characterization of IP.
The proposed approach is motivated by the fact that generative models are only a means to an end; what we need is the function, that we call the querier, that maps the histories observed, {qi, qi(xobs)}1:k, to the most informative next query q_{k+1} ∈ Q. It turns out that this most informative query is exactly the query q∗ whose answer will minimize the KL divergence between the conditional label distribution P(Y | X) and the posterior P(Y | q∗(X), {qi(xobs)}1:k). Based on this insight, is it possible to define an optimization problem to directly learn this querier function? This requires a few ingredients: • First, we need to learn a querier that, given any possible history one might encounter during IP, chooses the next most informative query. One possible strategy for this is to minimize the KL divergence objective in expectation over random histories of query-answer pairs. • The posterior P(Y | q∗(X), {qi(xobs)}1:k) depends on the data distribution and is typically unknown. Thus, we need to estimate this using probabilistic classifiers. A possible solution is to jointly optimize this expected KL divergence over both querier and classifier functions. This leads to the following variational characterization of IP which will allow us to avoid generative models. Let K(x) be the set of all finite-length query-answer pairs of the form ({q1, q1(x)}, ..., {qm, qm(x)}), generated using queries from Q evaluated on any x ∈ X. We then define K̄ := ∪x∈X K(x) and denote elements of K̄ as “histories”. We define a classifier f : K̄ → PY as a function which maps arbitrary query-answer sequences to a distribution over Y. We define a querier g : K̄ → Q as a function which maps arbitrary query-answer sequences to a query q ∈ Q. The variational objective for IP is given by the following functional optimization problem,
$$\min_{f, g}\; \mathbb{E}_{X, S}\Big[ D_{KL}\big( P(Y \mid X) \,\|\, \hat{P}(Y \mid q(X), S) \big) \Big] \quad \text{where} \quad q := g(S) \in Q, \quad \hat{P}(Y \mid q(X), S) := f(\{q, q(X)\} \cup S), \qquad \text{(V-IP)}$$
and the minimum is taken over all possible mappings f (classifier) and g (querier). Here, S is a random set of query-answer pairs taking values in K̄⁴. Given S = s and X = xobs, the querier g chooses a query q ∈ Q, evaluates it on xobs and passes the pair {q, q(xobs)} to the classifier. The classifier f then makes a prediction based on s appended with this additional pair {q, q(xobs)}. ²Conditioning on q_{1:k}(xobs) is to be understood as conditioning on the event {x′ ∈ X | {qi, qi(xobs)}1:k = {qi, qi(x′)}1:k}. ³Since mutual information requires computing expectation over density ratios, which is still intractable despite having learnt a generative model. ⁴Throughout this paper whenever we condition on S = s, we mean conditioning on the event of all data x′ ∈ X which share the same answers to queries as in s. Let (f∗, g∗) be an optimal solution to V-IP. The querier g∗ will be the IP strategy with the requirement that the distribution of S, denoted as PS, in V-IP is chosen such that the histories observed while carrying out IP must have positive probability mass under PS. Thus, given a data-point xobs,
$$q_1 = g^*(\emptyset) = \arg\max_{q \in Q} I(q(X); Y); \qquad q_{k+1} = g^*\big(\{q_i, q_i(x^{obs})\}_{1:k}\big) = \arg\max_{q \in Q} I\big(q(X); Y \mid q_{1:k}(x^{obs})\big). \quad (3)$$
The above sequential procedure is illustrated in Figure 2. As before {qi, qi(xobs)}1:k is referred to as the history observed after k queries and is a realization of S. This is formalized in the following proposition whose proof can be found in Appendix A. Proposition 1. Let (f∗, g∗) be an optimal solution to V-IP.
This is formalized in the following proposition, whose proof can be found in Appendix A.

Proposition 1. Let (f∗, g∗) be an optimal solution to V-IP. For any realization S = s such that P(S = s) > 0, define the optimization problem:

max_{P̃∈P_Y, q∈Q}  I(q(X); Y | s) − E_{X|s}[ D_KL( P(Y | q(X), s) ∥ P̃(Y | q(X), s) ) ].   (4)

Then there exists an optimal solution (P̃∗_s, q∗_s) to the above objective such that q∗_s = g∗(s) and P̃∗_s = f∗({q∗_s, q∗_s(X)} ∪ s).

Thus, at the optima, the KL divergence term in equation 4 is 0 and g∗ picks the most informative query for any given subset of query-answer pairs S = s, as presented in equation 3. Theoretical guarantees aside, solving the optimization problem defined in V-IP is challenging, since functional optimization over all possible classifier and querier mappings is intractable. In the following subsection, we present a practical algorithm for approximately solving this V-IP objective.

3.3 V-IP WITH DEEP NETWORKS

Instead of optimizing f and g over the intractable function space, we parameterize them using deep networks with weights θ and η respectively. Our practical version of V-IP, termed Deep V-IP, is as follows:

min_{θ,η} E_{X,S}[ D_KL( P(Y | X) ∥ P_θ(Y | q_η(X), S) ) ]
where q_η := g_η(S),  P_θ(Y | q_η(X), S) := f_θ({q_η, q_η(X)} ∪ S).   (Deep V-IP)

Note that all we have done is replace the arbitrary functions f and g in V-IP by deep networks parameterized by θ and η. To find good solutions there are two key constraints. First, the architectures for the classifier f_θ and the querier g_η need to be expressive enough to learn over an exponential (in |Q|) number of possible realizations of S. Second, we need to choose a sampling distribution P_S for S. Notice that, for any reasonably-sized Q, there will be an exponentially large number of possible realizations of S. Ideally, we would like to choose a P_S that assigns positive mass only to histories observed during the exact IP procedure; however, this is a "chicken-and-egg" dilemma. We now briefly discuss the architectures used for optimizing the Deep V-IP objective, followed by an exposition on the sampling distribution for S.

Architectures. The architectures for both the querier and classifier networks (described in more detail in Appendix C) are chosen in such a way that they can operate on query-answer sequences of arbitrary length. There can be several choices for this. In this paper, we primarily use masking, where the deep networks operate on fixed-size inputs (the answers to all queries q ∈ Q evaluated on input x) with the unobserved query-answers masked out. We also experiment with set-based deep architectures proposed in Ma et al. (2018). We show by ablation studies in Appendix E that the masking-based architecture performs better. In practice, q_η = argmax(g_η(S)), where g_η(S) ∈ R^{|Q|} is the output of the querier network, which assigns a score to every query in Q, and argmax computes a 1-hot indicator of the max element index of its input vector. To ensure differentiability through argmax we use the straight-through softmax estimator (Paulus et al., 2020), which is described in detail in Appendix D. Finally, P_θ(Y | q_η(X), S) is the output of Softmax applied to the last layer of f_θ.
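To make the masking-based parameterization concrete, the following is a minimal PyTorch-style sketch of one forward step of Deep V-IP. The names (`querier`, `classifier`, and the inline straight-through trick) are illustrative placeholders rather than the exact released implementation; the straight-through estimator itself is detailed in Appendix D.

```python
import torch
import torch.nn.functional as F

def vip_forward(querier, classifier, answers, mask, tau=1.0):
    """One Deep V-IP step with masking-based architectures.

    answers: (B, |Q|) tensor of all query answers Q(x) for each input x.
    mask:    (B, |Q|) binary tensor; 1 where a query belongs to the history S.
    """
    history = answers * mask                    # unobserved answers are zeroed out
    scores = querier(history)                   # (B, |Q|) score for every query
    # Straight-through softmax: hard 1-hot forward pass, soft gradients backward.
    soft = F.softmax(scores / tau, dim=-1)
    hard = F.one_hot(soft.argmax(dim=-1), scores.shape[-1]).float()
    q_onehot = hard + soft - soft.detach()      # q_eta = argmax(g_eta(S))
    new_mask = torch.clamp(mask + q_onehot, max=1.0)
    logits = classifier(answers * new_mask)     # defines P_theta(Y | q_eta(X), S)
    return logits, new_mask
```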
Sampling of S. Choosing a sampling distribution that only has positive mass on histories observed during exact IP is the "chicken-and-egg" dilemma mentioned above. A simple alternative is to consider a distribution that assigns positive mass to all possible sequences of query-answer pairs from K̄. This, however, would lead to slow convergence, since the deep networks now have to learn to choose the most informative query given a large number of query-answer pair subsets that would never be observed for any x^obs ∈ X if one could do exact IP. To remedy this, we choose to adaptively bias our sampling distribution towards realizations of S one would observe if they carried out equation 3 using the current estimate of the querier in place of g∗. More concretely, we optimize Deep V-IP by sequentially biasing the sampling distribution as follows (a code sketch is given after this list):

1. Initial Random Sampling: We choose an initial distribution P⁰_S which ensures all elements of K̄ have positive mass. We first sample X ∼ P_Data. Then we sample k ∼ Uniform{0, 1, ..., |Q|} as the number of queries. Subsequently, k queries from Q are selected for X uniformly at random.

2. Subsequent Biased Sampling: The distribution P^{j+1}_S is obtained by using the solution querier g_{η_j} to V-IP with P^j_S as the sampling distribution. In particular, we first sample X ∼ P_Data and k ∼ Uniform{0, 1, ..., |Q|} as before. Subsequently, we find the first k query-answer pairs for this sampled X using equation 3 with g_{η_j} as our querier.

Notice that the empty set ∅, corresponding to the empty history, has positive probability under any P^j_S, and hence the querier would eventually learn to pick the most informative first query. Subsequent sequential optimization would aim at choosing the most informative second query, and so on, assuming our architectures are expressive enough. In practice, we optimize with random sampling of S using stochastic gradients for numerous epochs. We then take the solution g_{η_0} and fine-tune it with biased sampling strategies, each time optimizing using a single batch and consequently changing the sampling strategy according to the updated querier. Refer to Appendix E for ablation studies on the effectiveness of the biased sampling strategy for S.
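The two sampling stages can be sketched as follows. Both helpers are illustrative assumptions rather than the released implementation; for simplicity, the biased sampler below draws one k per batch and masks out already-asked queries, which we assume rather than take from the paper.

```python
import torch

def sample_history_random(answers, num_queries):
    """Initial Random Sampling: k ~ Uniform{0,...,|Q|}, then k random queries."""
    B = answers.shape[0]
    mask = torch.zeros(B, num_queries)
    for i in range(B):
        k = torch.randint(0, num_queries + 1, (1,)).item()
        idx = torch.randperm(num_queries)[:k]
        mask[i, idx] = 1.0
    return mask  # encodes the history S = {(q, q(x)) : mask_q = 1}

def sample_history_biased(answers, num_queries, querier):
    """Subsequent Biased Sampling: the first k queries are chosen greedily by
    the current querier, mimicking the histories exact IP would produce."""
    B = answers.shape[0]
    mask = torch.zeros(B, num_queries)
    k = torch.randint(0, num_queries + 1, (1,)).item()
    with torch.no_grad():
        for _ in range(k):
            scores = querier(answers * mask)                 # scores given history
            scores = scores.masked_fill(mask.bool(), -1e9)   # don't repeat queries
            q = scores.argmax(dim=-1)                        # most informative next query
            mask[torch.arange(B), q] = 1.0
    return mask
```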
Stopping Criterion. There are two possible choices. (i) Fixed budget: following prior work in sequential active testing (Ma et al., 2018; Rangrej & Clark, 2021), we stop asking queries after a fixed number of iterations. (ii) Variable query-lengths: different data-points might need different numbers of queries to make confident predictions. For supervised learning tasks where Y is "almost" a deterministic function of X, that is, max_Y P(Y | X) ≈ 1, for any given x^obs ∈ X we terminate after L steps if max_Y P(Y | q_{1:L}(x^obs)) ≥ 1 − ϵ, where ϵ is a hyperparameter. This is termed the "MAP criterion". For tasks where Y is more ambiguous and not a deterministic function of X, we choose to terminate once the posterior is "stable" for a pre-defined number of steps. This stability is measured by the difference between two consecutive posterior entropies,

H(Y | q_{1:k}(x^obs)) − H(Y | q_{1:k+1}(x^obs)) ≤ ϵ.

This criterion, termed the "stability criterion", is an unbiased estimate of the mutual-information-based stopping criterion used in Chattopadhyay et al. (2022).

Qualitative differences between Generative-IP and Variational-IP. We will refer to the generative approach for carrying out IP, as described in Chattopadhyay et al. (2022), as Generative-IP or G-IP. The difference between G-IP and V-IP is similar in spirit to that of generative versus discriminative modelling in classification problems (Ng & Jordan, 2001). We conjecture that, when the data distribution agrees with the modelling assumptions made by the generative model (for example, conditional independence of query answers given Y) and the dataset size is "small", then G-IP would obtain better results than V-IP, since there are not enough datapoints for learning competitive querier and classifier networks. We thus expect the gains of V-IP to be most evident on datasets where learning a good generative model is difficult.

4 EXPERIMENTS

In this section, through extensive experiments, we evaluate the effectiveness of the proposed method. We describe the query set used for each dataset in Table 1, with more details in Appendix C. The choice of query sets for each dataset was made to make our approach comparable with prior work. We also complement the results presented here with more examples in the Appendix. Code is available at https://github.com/ryanchankh/VariationalInformationPursuit.

4.1 INTERPRETABLE PREDICTIONS USING V-IP

Basing predictions on an interpretable query set allows us to reason about the predictions in terms of the queries, which are compositions of elementary words, symbols or patterns. We will illustrate this by analyzing the query-answer chains uncovered by V-IP for different datasets.

Figure 3a illustrates the decision-making process for V-IP on an image of a dog from the CIFAR-10 dataset. A priori, the model's belief is almost uniform over all the classes (second row, first column). The first query probes a patch near the centre of the image and observes the snout. Visually it looks similar to the left face of a cat, justifying the shift in the model's belief to the label "cat" with some mass on the label "dog". Subsequent queries are aimed at distinguishing between these two possibilities. Finally, the model becomes more than 99% confident that it is a "dog" once it spots the left ear.

Figure 3b shows the query-chain for a "genital herpes" diagnosis of a synthetic patient from the SymCAT-200 dataset. The y-axis shows the query asked at each iteration, with green indicating a "Yes" answer and red indicating a "No". Each row shows the model's current belief in the patient's disease. We begin with an initial symptom, "0: itching of skin", provided by the patient. The subsequent queries ask about different conditions of the skin. The diseases shown are the top-10 most probable out of 200. All of these diseases have skin-related symptoms. After discovering the patient has painful urination (query 11), V-IP zooms into two possibilities, "Balanitis" and "Genital herpes". The subsequent queries rule out symptoms typically observed in patients with "Balanitis", resulting in an 80% confidence in the herpes diagnosis.

For our final example, we elucidate the results for a bird image from the CUB-200 dataset in Figure 3c. The colour scheme for the y-axis is the same as in Figure 3b, with the exception that, unlike the patient case, we do not bootstrap V-IP with an initial positive attribute of the bird. Instead, the first query about bill shape is the most informative query about the label Y before any answer is observed. This is indicated with the grey "0: Init". All the top-10 most probable bird species in this context are seabirds, and have very similar visual characteristics to the true species, "Laysan Albatross". After 14 queries, V-IP figures out the true class with more than 99% confidence. Thus, in all three case studies, we see that V-IP makes transparent decisions, interpretable in terms of queries specified by the user.
A limitation of this framework, however, is finding a good query set that is interpretable and allows for highly accurate predictions with short explanations (short query-answer chains). We discuss this further in Appendix G.

4.2 QUANTITATIVE COMPARISON WITH PRIOR WORK

Baselines. We compare V-IP primarily to the generative modelling approach for IP, namely G-IP. We also compare to Reinforcement Learning (RL) methods prevalent in other areas of sequential decision-making like Hard Attention (Mnih et al., 2014; Rangrej & Clark, 2021) or Symptom Checking (Peng et al., 2018), which can be adapted for our purposes. In particular, we compare with the RAM (Mnih et al., 2014) and RAM+ (Li et al., 2017) algorithms. In both methods, a policy is learnt using deep networks to select queries based on previous query-answers for a fixed number of iterations such that the expected cumulative reward is maximized. A classifier network is also trained simultaneously with the policy network to make accurate predictions. In RAM, this reward is just the negative cross-entropy loss between true and predicted labels at the last step. In RAM+, this reward is the cumulative sum of the negative cross-entropy loss at each step. We also compare our method with the "Random" strategy, where successive queries are chosen randomly, independent of the history observed so far. The predictions given history are still made using the V-IP classifier.

We first show results on the simple datasets used in Chattopadhyay et al. (2022), comparing with our own implementation of G-IP, RAM and RAM+. V-IP is competitive with G-IP in terms of performance but far more efficient in terms of inference speed. On all datasets, V-IP outperforms the RL-based methods. Subsequently, we present results on more complex tasks like RGB image classification. On these datasets, V-IP achieves higher accuracy given a fixed budget of queries compared with prior work.

Comparisons on Simple Tasks. Concise explanations are always preferred due to their simplicity. In Figure 4 we plot the trade-off between accuracy and explanation length obtained by various methods. V-IP is competitive with G-IP and obtains far shorter explanations than the RL-based methods for the same test accuracy. This trade-off is quantified using the Area Under the Curve (AUC) metric in Table 2. Notice that on the HuffingtonNews dataset the RL methods struggle to perform better than even Random. This is potentially due to the fact that in these RL methods the classifier is trained jointly with the policy, which likely affects its performance when the action-space is large (|Q| = 1000). On the other hand, the Random strategy learns its classifier by training on random sequences of query-answer pairs. This agrees with the findings in Rangrej & Clark (2021). While V-IP performs competitively with the generative approach, the biggest gain is in terms of the computational cost of inference. Once trained, inference in V-IP, that is, computing the most informative query, is akin to a forward pass through a deep network and is potentially O(1)⁵. On the other hand, the per-iteration cost in G-IP is O(N + |Q|m), where N is the number of MCMC iterations employed and m is the cardinality of the space q(X) × Y. As an example, on the same GPU server, G-IP takes about 47 seconds per iteration on MNIST whereas V-IP requires just 0.11s, an approximately 400× speedup! Note that the inference cost is the same for V-IP and the RL methods, since all of them train a querier/policy function to choose the next query.
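For completeness, test-time inference with a trained querier/classifier pair reduces to the following loop; we use the MAP stopping criterion from §3.3, assume a single data-point (batch size 1), and reuse the illustrative masked-input conventions from the earlier sketches.

```python
import torch

@torch.no_grad()
def vip_inference(querier, classifier, answers, eps=0.05, max_steps=None):
    """Sequentially ask queries until the MAP criterion is met.

    answers: (1, |Q|) tensor of all query answers for the observed data-point.
    Returns the final posterior over labels and the query-answer explanation.
    """
    num_queries = answers.shape[-1]
    max_steps = max_steps or num_queries
    mask = torch.zeros_like(answers)             # empty history
    chain = []                                   # the explanation: (query, answer) pairs
    for _ in range(max_steps):
        scores = querier(answers * mask)
        scores = scores.masked_fill(mask.bool(), -1e9)
        q = scores.argmax(dim=-1).item()         # most informative next query
        mask[..., q] = 1.0
        chain.append((q, answers[..., q].item()))
        posterior = classifier(answers * mask).softmax(dim=-1)
        if posterior.max().item() >= 1 - eps:    # MAP stopping criterion
            break
    return posterior, chain
```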
Comparisons on Complex Tasks. We now move on to more complex datasets where the gains of V-IP are more evident. First, we consider the task of natural image classification and show results on the CIFAR-{10, 100} datasets. For G-IP on these datasets, we refer to the Probabilistic Hard-Attention model introduced in Rangrej & Clark (2021), which proposes to learn a partial-VAE model (Ma et al., 2018) for images and then do inference using this model to compute the most informative query. Figures 5a & 5b show the accuracy vs. number of queries curves for different methods. V-IP clearly outperforms all baselines on both datasets. Next, we consider the task of medical diagnosis by querying the symptoms of the patient. We show results on the popular SymCAT dataset along with comparisons with prior work in Figure 5c. The plot clearly shows that V-IP achieves a much higher accuracy given a fixed budget of queries. For G-IP on this task, we refer to the BSODA framework introduced in He et al. (2022), which is again based on partial-VAEs. REFUEL (Peng et al., 2018) is a state-of-the-art RL-based method for this task, akin to the RAM technique used in the Hard-Attention literature. The classification accuracies for these different methods on all medical datasets are summarized in Table 3. Numbers for baselines are taken from Nesterov et al. (2022), since we used their released versions of these datasets⁶. As conjectured in §3.3, MuZhi and Dxy are small-scale datasets with about 500 training samples; thus approaches based on generative models, like BSODA, are able to perform slightly better than V-IP.

5 CONCLUSION

IP was recently used to construct interpretable predictions by composing interpretable queries from a user-defined query set. The framework, however, required generative models, which limited its application to simple tasks. Here, we have introduced a variational characterization of IP which does away with generative models and directly optimizes a KL-divergence-based objective to find the most informative query, as required by IP, in each iteration. Through qualitative and quantitative experiments we show the effectiveness of the proposed method.

⁵For simplicity we consider unit cost for any operation that was computed in a batch concurrently on a GPU.
⁶https://github.com/SympCheck/NeuralSymptomChecker

ACKNOWLEDGMENTS

This research was supported by the Army Research Office under the Multidisciplinary University Research Initiative contract W911NF-17-1-0304, the NSF grant 2031985 and by Simons Foundation Mathematical and Scientific Foundations of Deep Learning (MoDL) grant 135615. Moreover, the authors acknowledge support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE2139757. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

A PROOF OF PROPOSITION 1

Before proceeding to the proof we will prove the following lemma.

Lemma 1. Let Q be a user-defined query set and P_Y be the set of all possible distributions on Y. Then, for any realization S = s, the following holds:

min_{P̃∈P_Y, q∈Q} E_{X|s}[ D_KL( P(Y | X) ∥ P̃(Y | q(X), s) ) ]
≡ max_{P̃∈P_Y, q∈Q} [ I(q(X); Y | s) − E_{X|s}[ D_KL( P(Y | q(X), s) ∥ P̃(Y | q(X), s) ) ] ].   (5)

Proof. Using information-theoretic properties of the KL-divergence, we have the following chain of equalities.
min_{P̃∈P_Y, q∈Q} E_{X|s}[ D_KL( P(Y | X) ∥ P̃(Y | q(X), s) ) ]
= min_{P̃∈P_Y, q∈Q} E_{X|s}[ D_KL( P(Y | X, q(X), s) ∥ P̃(Y | q(X), s) ) ]
= min_{P̃∈P_Y, q∈Q} E_{X|s}[ Σ_Y P(Y | X, q(X), s) log( P(Y | X, q(X), s) / P̃(Y | q(X), s) ) ]
= min_{P̃∈P_Y, q∈Q} E_{X|s}[ Σ_Y P(Y | X, q(X), s) log( P(X, Y | q(X), s) / ( P̃(Y | q(X), s) P(X | q(X), s) ) ) ]
= min_{P̃∈P_Y, q∈Q} E_{X,Y|s}[ log( P(X, Y | q(X), s) / ( P(Y | q(X), s) P(X | q(X), s) ) ) ] + E_{X,Y|s}[ log( P(Y | q(X), s) / P̃(Y | q(X), s) ) ]
= min_{P̃∈P_Y, q∈Q} I(X; Y | q(X), s) + E_{X|s}[ D_KL( P(Y | q(X), s) ∥ P̃(Y | q(X), s) ) ]   (6)

In the first equality, assuming P(X = x, S = s) > 0⁷, we used the fact that given any X = x, the label Y is independent of any query answer q(X) = q(x) and of the event {S = s}. Thus, P(Y | X = x) = P(Y | X = x, q(X) = q(x), S = s). In the fourth equality we multiplied the term inside the log by the identity P(Y | q(X), s) / P(Y | q(X), s), where P(Y | q(X), s) represents the true posterior of Y given the query answer q(X) and S = s. Now observe that for any fixed S = s and any q ∈ Q,

I(X, q(X); Y | s) = I(X; Y | s) + I(q(X); Y | X, s) = I(X; Y | s).   (7)

The second equality is obtained by using the fact that q(X) is a function of X. Decomposing I(X, q(X); Y | s) another way,

I(X, q(X); Y | s) = I(q(X); Y | s) + I(X; Y | q(X), s).   (8)

From equation 7 and equation 8 we conclude that

min_{q∈Q} I(Y; X | q(X), s) ≡ min_{q∈Q} −I(q(X); Y | s).

Substituting the right-hand side of this result into equation 6, we obtain the desired result.

⁷For any x′ ∈ X, if P(X = x′, S = s) = 0, then x′ does not contribute to the expectation in the first equation, so we need not consider this case.

Proof of Proposition 1. Restating the objective from equation V-IP,

min_{f,g} E_{X,S}[ D_KL( P(Y | X) ∥ P̂(Y | q(X), S) ) ]
where q := g(S) ∈ Q,  P̂(Y | q(X), S) := f({q, q(X)} ∪ S).

Now, for any realization S = s such that P(S = s) > 0, we have

min_{P̃∈P_Y, q∈Q} E_{X|s}[ D_KL( P(Y | X) ∥ P̃(Y | q(X), s) ) ]
= E_{X|s}[ D_KL( P(Y | X) ∥ P̃∗_s(Y | q∗_s(X), s) ) ]
= E_{X|s}[ D_KL( P(Y | X) ∥ P̂(Y | q̃(X), s) ) ] + E_{X|s}[ Σ_Y P(Y | X) log( P̂(Y | q̃(X), s) / P̃∗_s(Y | q∗_s(X), s) ) ]
= E_{X|s}[ D_KL( P(Y | X) ∥ P̂(Y | q̃(X), s) ) ] − E_{X|s}[ Σ_Y P(Y | X) log( P̃∗_s(Y | q∗_s(X), s) / P̂(Y | q̃(X), s) ) ]
= E_{X|s}[ D_KL( P(Y | X) ∥ P̂(Y | q̃(X), s) ) − D_KL( P̃∗_s(Y | q∗_s(X), s) ∥ P̂(Y | q̃(X), s) ) ]
≤ E_{X|s}[ D_KL( P(Y | X) ∥ P̂(Y | q̃(X), s) ) ]   (9)

In the first equality we used the definition of (P̃∗_s, q∗_s) as the solution to the minimization problem. In the second equality, q̃ = g(s) for any querier g and P̂(Y | q̃(X), s) = f({q̃, q̃(X)} ∪ s) for any classifier f. In the fourth equality we appealed to Lemma 1 to conclude that P̃∗_s(Y | q∗_s(X), s) = P(Y | q∗_s(X), s), the true posterior over Y given answer q∗_s(X) and history s. In the final step we used the non-negativity of the KL-divergence to discard the second term. Since the inequality in equation 9 holds for all S = s and all mappings f and g, we conclude that q∗_s = g∗(s) and P̃∗_s = f∗({q∗_s, q∗_s(X)} ∪ s) for any given S = s. Equation 4 in the proposition is then proved by using Lemma 1 to characterize q∗_s and P̃∗_s.

B TRAINING PROCEDURE

Consider a mini-batch of N samples {(x_i, y_i)}_{i=1}^N from a training set. In the Deep V-IP objective, the KL-divergence is mathematically equivalent to the cross-entropy loss. The mini-batch estimate of this loss can be expressed as:

min_{θ,η} −(1/N) Σ_{i=1}^{N} y_i log ŷ_i
subject to  q_η = argmax(g_η(s_i)),  ŷ_i = f_θ(s_i ∪ {q_η, q_η(x_i)}),   (10)

where y_i is the ground-truth label corresponding to input x_i, and s_i is obtained by sampling from P^j_S as defined in §3.3. We optimize the above objective using Stochastic Gradient Descent (or its variants).
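A compact sketch of one training step for objective 10 is shown below, reusing the illustrative `vip_forward` and `sample_history_random` helpers from §3.3; details such as the temperature schedule and the biased-sampling stage are omitted, and the helper names are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def train_step(querier, classifier, optimizer, x_batch, y_batch, answer_fn,
               num_queries, tau):
    """One SGD step on the mini-batch cross-entropy estimate of Deep V-IP."""
    answers = answer_fn(x_batch)                         # all query answers Q(x_i)
    mask = sample_history_random(answers, num_queries)   # s_i ~ P_S
    logits, _ = vip_forward(querier, classifier, answers, mask, tau=tau)
    loss = F.cross_entropy(logits, y_batch)              # -1/N sum_i y_i log y_hat_i
    optimizer.zero_grad()
    loss.backward()                                      # gradients flow through the
    optimizer.step()                                     # straight-through estimator
    return loss.item()
```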
To optimize objective 10, for every sample x_i in the batch, the sampled history s_i is fed into the querier network g_η, which outputs a score for every query q ∈ Q. The argmax(·) operator (see Appendix D regarding its differentiability) converts these scores into a |Q|-dimensional one-hot vector, with the non-zero entry at the location of the max. We then append this argmax query q_η and its answer, (q_η, q_η(x_i)), to s_i. The updated history s_i ∪ {q_η, q_η(x_i)} is then fed into the classifier f_θ to obtain a softmax distribution over the labels, denoted as ŷ_i.

C EXPERIMENT DETAILS

All of our experiments are implemented in Python using PyTorch (Paszke et al., 2019) version 1.12. Moreover, all training is done on one computing node with a 64-core 2.10GHz Intel(R) Xeon(R) Gold 6130 CPU, 8 NVIDIA GeForce RTX 2080 GPUs (each with 10GB memory) and 377GB of RAM.

General Optimization Scheme. The following optimization scheme is used in all experiments for both Initial Random Sampling and Subsequent Biased Sampling, unless stated otherwise. We minimize the Deep V-IP objective using Adam (Kingma & Ba, 2014) as our optimizer, with learning rate lr=1e-4, betas=(0.9, 0.999), weight_decay=0 and amsgrad=True (Reddi et al., 2019). We also use a Cosine Annealing learning rate scheduler (Loshchilov & Hutter, 2016) with T_max=50. We train our networks f_θ and g_η for 500 epochs using batch size 128. In both sampling stages, we linearly anneal the temperature parameter τ in our straight-through softmax estimator from 1.0 to 0.2 over the 500 epochs.

Training Details for RAM and RAM+. To train the RL policy and classification network we use the popular PPO algorithm (Schulman et al., 2017) with entropy regularization (regularization parameter 0.01). We used an initial learning rate of 3e-5 and clip value 0.2. For a fair comparison, the architectures for the policy⁸ and classification networks are kept the same for RAM, RAM+ and V-IP.

C.1 SPECIES CLASSIFICATION ON CUB.

Dataset and Query Set. Caltech-UCSD Birds-200-2011 (CUB-200) (Wah et al., 2011) is a dataset of 200 bird species, containing 11,788 images with 312 annotated binary features for different visual attributes of birds, such as the color or shape of the wing or the head of the bird. We construct the query set Q using these attributes. For example, given an image of a Blue Jay, a possible query might be "is the back-colour blue?" with the answer "Yes". The query set construction and data preprocessing steps are the same as in Chattopadhyay et al. (2022). In the original dataset, the annotated attributes for each image are noisy, often containing imprecise descriptions of the bird. Hence, if a certain attribute is true/false for over 50% of the samples in a given category, that attribute is set to true/false for all samples in the same category. We also train a CNN, called the concept network, to answer each query using the training-set annotations. This concept network provides answers for training all the methods compared in §4, namely RAM, RAM+, G-IP and V-IP. Last but not least, our query set Q consists of 312 queries that ask whether each binary attribute is present (answer 1) or absent (answer −1).

Architecture and Training. A diagram of the architectures is shown in Figure 6. Both f_θ and g_η have the same fully-connected network architecture (except the last linear layer), but they do not share any parameters with each other. We initialize each architecture randomly, and train using the optimization scheme mentioned at the beginning of this section. The input history is a |Q|-dimensional vector with unobserved answers masked out with zeros.

Updating the History. Let the history of query-answer pairs observed after k steps be denoted as S_k. S_k is represented by a masked vector of dimension 312, and q_{k+1} = argmax(g_η(S_k))⁹ is a one-hot vector of the same dimension, denoting the next query. For a given observation x^obs, we update S_k using q_{k+1} as follows:
• We obtain the query-answer by performing a point-wise multiplication, that is, q_{k+1}(x^obs) = q_{k+1} ⊙ x^obs.
• We update the history to S_{k+1} by adding this query-answer q_{k+1}(x^obs) to S_k.
The entire process can be denoted by S_{k+1} = S_k + q_{k+1} ⊙ x^obs (a code sketch is given below).

⁸This term is from the RL community. In our context, the policy network is exactly the querier network.
⁹Recall g_η is our querier function parameterized by a deep network with weights η.
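As a sketch, this masked-vector update is a single line of tensor arithmetic (the names here are illustrative):

```python
import torch

def update_history_cub(S_k, q_onehot, x_obs):
    """CUB history update: S_{k+1} = S_k + q_{k+1} * x_obs.

    S_k:      (312,) masked vector of observed attribute answers (0 = unobserved).
    q_onehot: (312,) one-hot vector selecting the next query.
    x_obs:    (312,) all attribute answers (+1 present / -1 absent).
    """
    return S_k + q_onehot * x_obs
```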
C.2 TOPIC IDENTIFICATION ON THE HUFFINGTON POST NEWS CATEGORY DATASET.

Dataset and Query Set. The Huffington Post News Category dataset (HuffingtonNews) is a natural language dataset containing news "headlines" and their short descriptions (continuations of the headline) extracted from Huffington Post news published between 2012 and 2018. We follow the same data-processing procedure as Chattopadhyay et al. (2022): each data point is an extended headline formed by concatenating the headline with its short description. Moreover, we remove redundant categories, including semantically ambiguous and HuffPost-specific categories such as "Impact" and "Worldpost". We also remove categories with a small number of articles, along with semantically equivalent category names, such as "Arts & Culture" versus "Culture & Art". After processing, there are a total of 10 categories in the dataset. In addition, only the top 1,000 words are kept according to their tf-idf scores (Lavin, 2019), with semantically redundant words removed. For more details, please refer to Chattopadhyay et al. (2022). The query set Q contains binary questions of whether one of the 1,000 words exists in the headline. The query answer is 1 if the word in question is present and −1 if absent.

Architecture and Training. A diagram of the architectures is shown in Figure 7. Both the classifier f_θ and the querier g_η share the same architecture except for the last layer; however, they do not share any parameters. The inputs to f_θ and g_η are masked vectors, with masked values set to 0. To optimize the Deep V-IP objective, we randomly initialize f_θ and g_η, and train using the Adam optimizer and Cosine Annealing learning rate scheduler, with the settings mentioned at the beginning of this section. During Subsequent Biased Sampling, we train for 100 epochs using Stochastic Gradient Descent (SGD) instead of Adam, with learning rate lr=1e-4 and momentum=0.9, and a Cosine Annealing learning rate scheduler with T_max=100. The input history is a |Q|-dimensional vector with unobserved answers masked out with zeros.

Updating the History. The method of updating the history is equivalent to that for CUB, as mentioned in §C.1. The history is now a masked vector of dimension 1000, since there are 1000 queries in our query set for this dataset.

C.3 IMAGE CLASSIFICATION ON MNIST, KMNIST AND FASHION-MNIST.

Each setting mentioned in the following is the same for MNIST, KMNIST and Fashion-MNIST unless stated otherwise.
Dataset and Query Set. MNIST (LeCun et al., 1998), KMNIST (Clanuwat et al., 2018) and Fashion-MNIST (Xiao et al., 2017b) are gray-scale image datasets, each containing 60,000 training images and 10,000 testing images of size 28 × 28. We follow Chattopadhyay et al. (2022) for the data pre-processing procedure and the design of the query sets for all three datasets. Each gray-scale image is converted into a binary image. For MNIST and KMNIST, we round values greater than or equal to 0.5 up to 1 and values below 0.5 down to −1. For Fashion-MNIST, we round values greater than or equal to 0.1 up to 1 and values below 0.1 down to −1. Each query set Q contains all overlapping 3 × 3 patches over the 28 × 28 pixel space, resulting in 676 queries. Each query answer indicates the 9 pixel intensities at the queried patch. The inputs to the classifier f_θ and the querier g_η are masked images, with masked pixels zeroed out if they are not part of the current history.

Updating the History. The method for updating the history is equivalent to that for CUB, as mentioned in §C.1, with some differences that we describe next. For a given observation x^obs, we update S_k using q_{k+1} as follows:
• We reshape q_{k+1} from a vector of dimension 676 (the number of queries) to a 2D grid of dimension 26 × 26, denoted by q̂_{k+1}.
• q̂_{k+1} is then converted to a binary matrix with 1s at the locations corresponding to the queried 3 × 3 patch and 0s everywhere else, via a convolution with a kernel of all 1s of size 3 × 3, stride 1 and padding 2.
• We then obtain the query-answer by performing a Hadamard product of the convolved output (from the previous step) with x^obs. This results in a masked image, q̂_{k+1}(x^obs), with the queried patch revealed and 0 everywhere else.
• Finally, we update the history to S_{k+1} by adding this query-answer to S_k. To account for pixels observed (unmasked) in q̂_{k+1}(x^obs) that overlap with the history S_k, we clip the values in S_k to lie between −1 and 1.
The entire process can be summarized as (see the sketch below)

S_{k+1} = Clip( S_k + (Conv2D(q̂_{k+1}) ⊙ x^obs), minval = −1, maxval = 1 ).

Architecture and Training. Refer to Figure 8 for a diagram of the architecture for the classifier f_θ and the querier g_η. Every Conv is a 2D convolution with a 3 × 3 kernel, stride 1 and padding 1. Moreover, every MaxPool is a 2D max-pooling operator with a 2 × 2 kernel. We initialize each architecture randomly, and train our networks using Adam as our optimizer and a Cosine Annealing learning rate scheduler, with the same settings mentioned at the beginning of this section.
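A minimal sketch of this patch-based update, assuming the shape conventions above (the function name and tensor layouts are illustrative):

```python
import torch
import torch.nn.functional as F

def update_history_mnist(S_k, q_onehot, x_obs):
    """Patch-query history update for 28x28 binary images.

    S_k:      (1, 1, 28, 28) current masked image (values in {-1, 0, 1}).
    q_onehot: (676,) one-hot vector over the 26x26 grid of 3x3 patch queries.
    x_obs:    (1, 1, 28, 28) the full binarized image.
    """
    q_grid = q_onehot.view(1, 1, 26, 26)
    ones = torch.ones(1, 1, 3, 3)
    # Convolving the one-hot grid with an all-ones 3x3 kernel (padding 2)
    # yields a 28x28 binary mask that is 1 exactly on the queried patch.
    patch_mask = F.conv2d(q_grid, ones, stride=1, padding=2)
    # Reveal the queried patch and clip overlaps with previously seen pixels.
    return torch.clamp(S_k + patch_mask * x_obs, min=-1.0, max=1.0)
```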
C.4 MEDICAL DIAGNOSIS ON SYMCAT, MUZHI AND DXY

Dataset and Query Set. MuZhi (Wei et al., 2018) and Dxy (Xu et al., 2019) are two real-world medical datasets containing symptoms and diseases extracted from Chinese healthcare websites (https://muzhi.baidu.com/ and https://dxy.com/), where doctors provide online help based on patients' self-reported symptoms and online conversations. We follow the same data-processing procedure as He et al. (2022). MuZhi has 66 symptoms (features) and 4 diseases (classes): children's bronchitis, children's functional dyspepsia, infantile diarrhea infection, and upper respiratory infection. Dxy contains 41 symptoms for 5 diseases: allergic rhinitis, upper respiratory infection, pneumonia, children's hand-foot-mouth disease, and pediatric diarrhea. Last but not least, our query set Q consists of queries that ask about the presence of each symptom for the patient; the query-answer is either 1 for "Yes", 0 for "No", or −1 for "Can't Say".

SymCAT is a synthetic medical dataset generated from a symptom-checking website called SymCAT, introduced by Peng et al. (2018). The dataset has three versions: SymCAT-200 contains 328 symptoms and 200 diseases, SymCAT-300 contains 349 symptoms and 300 diseases, and SymCAT-400 contains 355 symptoms and 400 diseases. We used the publicly available version of this dataset provided by Nesterov et al. (2022) at https://github.com/SympCheck/NeuralSymptomChecker. Our query set Q consists of queries that ask about the presence/absence of each symptom for the patient; the query-answer is either 1 for "Yes" or 0 for "No".

Architecture and Training. A diagram of the architecture is shown in Figure 9. We used the set architecture proposed in Ma et al. (2018). We made this choice for a fair comparison with He et al. (2022), which also used the same architecture for their partial-VAEs. The input to the network is a concatenation of the query-answers {q(x_j) : q ∈ Q}, trainable positional embeddings e_j (red blocks), and bias terms b_j (blue blocks). Each positional embedding is also multiplied by the query-answer. After the first linear layer, the intermediate embedding is multiplied by a query-answer mask derived from the history, in which each dimension has a value of 1 for queries selected in the history and 0 otherwise. To optimize our objective, we randomly initialize f_θ and g_η and train using the same optimizer and learning-rate scheduler settings as mentioned at the beginning of the section. However, we train our algorithms for only 200 epochs, and linearly anneal the straight-through softmax estimator's temperature τ from 1.0 to 0.2 over the first 50 epochs.

Updating the History. Let the history of query-answer pairs observed after k steps be denoted S_k. Since we used set architectures as our querier and classifier networks for these datasets, as proposed in Ma et al. (2018), S_k is represented as a set consisting of embeddings of the query-answer pairs observed so far. The next query, q_{k+1} = argmax(g_η(S_k)), is a one-hot vector of dimension equal to the size of the query set used. For a given observation x^obs, we update S_k using q_{k+1} as follows (a sketch follows this list):
• Let M be a matrix of size |Q| × d, where every row corresponds to a query-answer embedding evaluated at x^obs and d is the size of the embeddings used for representing the query-answers. We obtain the answer embedding corresponding to the selected query q_{k+1} by performing a matrix-vector product, that is, q_{k+1}(x^obs) = q_{k+1}^T M.
• We update the history to S_{k+1} by concatenating q_{k+1}(x^obs) to S_k.
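A minimal sketch of this set-based update, under the shape assumptions stated in the list above:

```python
import torch

def update_history_set(S_k, q_onehot, M):
    """Set-architecture history update for the medical datasets.

    S_k:      (k, d) stack of embeddings of the query-answer pairs seen so far.
    q_onehot: (|Q|,) one-hot vector selecting the next query.
    M:        (|Q|, d) matrix whose rows embed every query-answer for x_obs.
    """
    answer_embedding = q_onehot @ M                      # q_{k+1}^T M, shape (d,)
    return torch.cat([S_k, answer_embedding.unsqueeze(0)], dim=0)
```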
C.5 IMAGE CLASSIFICATION ON CIFAR-10 AND CIFAR-100

Dataset and Query Set. CIFAR-{10, 100} (Krizhevsky et al., 2009) are natural-image datasets that contain {10, 100} different classes of objects. They contain 50,000 training images and 10,000 testing images. Each RGB image is of size 32 × 32. We design our query set Q following Rangrej & Clark (2021), consisting of all 8 × 8 overlapping patches with stride 4. This results in a query set of size |Q| = 49. The inputs to the classifier f_θ and the querier g_η are full-sized 3 × 32 × 32 masked images, with masked pixels zeroed out if they are not part of the current history.

Architecture and Training. For CIFAR-10, the architectures used for the classifier f_θ and querier g_η are both Deep Layer Aggregation networks (DLA) (Yu et al., 2018). f_θ and g_η do not share any parameters with each other. An out-of-the-box implementation was used and can be found at https://github.com/kuangliu/pytorch-cifar/blob/master/models/dla.py. For CIFAR-100, the architectures used for the classifier f_θ and querier g_η are both DenseNet-169 (Huang et al., 2017). Again, f_θ and g_η do not share any parameters with each other. An out-of-the-box implementation was used and can be found at https://github.com/kuangliu/pytorch-cifar/blob/master/models/densenet.py. The only change made to the architectures is the last layer, whose dimensions depend on the number of classes or the size of the query set Q. During training, we follow standard data-processing techniques as in He et al. (2016). For CIFAR-10, we set batch size 128 for both the initial and subsequent sampling stages. For CIFAR-100, we set batch size 64 for both stages. For both CIFAR-10 and CIFAR-100, we randomly initialize f_θ and g_η, and train them for 500 epochs during Initial Random Sampling using the Adam optimizer and Cosine Annealing learning rate scheduler, with the settings mentioned above. During Subsequent Biased Sampling, we optimize using Stochastic Gradient Descent (SGD) with learning rate lr=0.01, momentum=0.9 and a Cosine Annealing learning rate scheduler with T_max=50, for 100 epochs.

Updating the History. The method of updating the history is similar to that for MNIST, KMNIST, and Fashion-MNIST, as mentioned in §C.3. For a given observation x^obs, we update the history S_k using q_{k+1} as follows (see the sketch after this list):
• We reshape q_{k+1} from a vector of dimension 49 (the number of queries) to a 2D grid of dimension 7 × 7, denoted by q̂_{k+1}.
• q̂_{k+1} is then converted to a binary matrix with 1s at the locations corresponding to the queried 8 × 8 patch and 0s everywhere else, via a 2D transposed convolution with a kernel of all 1s of size 8 × 8, stride 4 and no padding.
• We then obtain the query-answer by performing a Hadamard product of the convolved output (from the previous step) with x^obs. This results in a masked image, q̂_{k+1}(x^obs), with the queried patch revealed and 0 everywhere else.
• Finally, we update the history to S_{k+1} by adding this query-answer to S_k. Any pixel (i, j) that is observed (unmasked) in q̂_{k+1}(x^obs) and is also observed in the history S_k would be counted twice; in that case we keep the old value.
The entire process can be summarized as

S′_{k+1} = S_k + TransposedConv2D(q̂_{k+1}) ⊙ x^obs;
for all pixels (i, j):  S_{k+1}[i, j] = S_k[i, j]  if S′_{k+1}[i, j] = 2 S_k[i, j],  and  S_{k+1}[i, j] = S′_{k+1}[i, j]  otherwise.   (11)
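A minimal sketch of equation 11, under the shape assumptions above (names are illustrative):

```python
import torch
import torch.nn.functional as F

def update_history_cifar(S_k, q_onehot, x_obs):
    """Patch-query history update for 3x32x32 images (8x8 patches, stride 4).

    S_k:      (1, 3, 32, 32) current masked image.
    q_onehot: (49,) one-hot vector over the 7x7 grid of patch queries.
    x_obs:    (1, 3, 32, 32) the full image.
    """
    q_grid = q_onehot.view(1, 1, 7, 7)
    ones = torch.ones(1, 1, 8, 8)
    # The transposed convolution places an 8x8 block of 1s at the queried location.
    patch_mask = F.conv_transpose2d(q_grid, ones, stride=4)   # (1, 1, 32, 32)
    S_next = S_k + patch_mask * x_obs
    # Pixels already revealed in S_k are counted twice (S_next = 2 * S_k there);
    # equation 11 keeps the old value at those locations.
    return torch.where(S_next == 2 * S_k, S_k, S_next)
```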
D STRAIGHT-THROUGH SOFTMAX ESTIMATOR

As mentioned in §3.3 of the main text, we employ the straight-through softmax gradient estimator for differentiating through the argmax operation. We now describe this estimator in detail. Consider the following optimization problem:

min_{θ∈R^d} f(argmax(θ)).   (12)

Let Z := argmax(θ). We assume f is differentiable in its input. The straight-through softmax estimator of the gradient of f w.r.t. θ is defined as

∇^{ST}_θ f := (∂f/∂Z) (d softmax_τ(θ) / dθ),

where

softmax_τ(θ) := [ e^{θ_1/τ} / Σ_{i=1}^{d} e^{θ_i/τ},  e^{θ_2/τ} / Σ_{i=1}^{d} e^{θ_i/τ},  ...,  e^{θ_d/τ} / Σ_{i=1}^{d} e^{θ_i/τ} ]

and τ is the temperature parameter. Notice that lim_{τ→0} softmax_τ(θ) = argmax(θ). Thus, we replace the gradient of the argmax operation, which is either 0 almost everywhere or does not exist, with a surrogate biased estimate. Equation 12 can then be optimized using the straight-through estimator by iteratively carrying out the update

θ = θ − η ∇^{ST}_θ f,

where η is the learning rate. In our experiments we start with τ = 1.0 and linearly anneal it down to 0.2 over the course of training.
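In PyTorch, this estimator can be implemented with the standard "hard forward, soft backward" trick, as in the following sketch (consistent with the forward-pass sketch in §3.3):

```python
import torch

def straight_through_argmax(scores, tau):
    """Hard argmax in the forward pass; softmax_tau gradients in the backward pass.

    scores: (B, d) unnormalized query scores theta.
    Returns a (B, d) one-hot tensor through which gradients flow as if it
    were softmax(scores / tau).
    """
    soft = torch.softmax(scores / tau, dim=-1)
    index = soft.argmax(dim=-1, keepdim=True)
    hard = torch.zeros_like(soft).scatter_(-1, index, 1.0)
    # Forward value equals `hard`; backward gradient equals that of `soft`.
    return hard + soft - soft.detach()
```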
E ABLATION STUDIES

In §3.3 we discussed two possible architectures for operating on histories of arbitrary sequence length: the set-based architectures (as in Figure 9) and the fixed-size-input masking-based architectures (as in Figure 6). In Figure 10, we compare the two architectures on the same task of bird-species identification using the same query set (see Table 1). We see that the fixed-size-input masking-based architecture performs better in terms of the average number of queries needed to reach the same performance. Based on this observation, we use the latter architecture in all our experiments except on the medical datasets, where we used the set-based architecture for a fair comparison with the BSODA method, which uses a similar set-based architecture for its partial-VAEs.

In §3.3 we also discussed that a sampling distribution P_S that assigns positive mass to every element in K̄ would make learning a good querier function challenging, since the network has to learn over an exponentially (in the size of Q) large number of histories. Instead, we proposed to sequentially bias the sampling according to our current estimate of the querier function. We validate the usefulness of this biased sampling strategy in Figure 11. In most datasets we observe that biasing the sampling distribution for S helps learn better querying strategies (in terms of accuracy vs. explanation-length trade-offs). Notice that in most datasets, biased sampling without the initial random sampling ultimately ends up learning a slightly better strategy. However, since biased sampling requires multiple forward passes through our querier network to generate histories, it is much slower than the initial random sampling (IRS) scheme for P_S. Thus, when computational resources are not a concern, one could do away with the initial random sampling; but under a computational budget, random sampling allows for finding a quick solution which can then be finetuned using biased sampling. This finetuning will potentially require fewer epochs to reach good performance than training with biased sampling from scratch.

F EXTENDED RESULTS

In Figure 12, we show the trade-off between accuracy and explanation length (average number of queries) on the KMNIST and Fashion-MNIST datasets as the stopping criterion ϵ is changed. For both these datasets, the "MAP criterion" is used as the stopping criterion. V-IP performs better than the RL-based RAM and RAM+ on both datasets. V-IP is competitive with G-IP, eventually surpassing it in accuracy for longer query-answer chains. In addition, Table 4 shows extended results for AUC values for test accuracy versus explanation-length curves for different datasets. A simplified table is shown in Table 2.

G COMPARING V-IP WITH BLACK-BOX DEEP NETWORKS

An important aspect of the framework introduced by Chattopadhyay et al. (2022) is that the end-user defines queries that are interpretable to them. Given this set of queries, Q, V-IP learns to efficiently compose them into concise explanations for model predictions (in terms of query-answer chains). This begs the question: how much do we lose in terms of performance by constructing an interpretable Q? In Table 5, we report results studying this question. "Acc. w/ V-IP given ϵ" is the test accuracy obtained by V-IP after termination using our stopping criterion. "Acc. w/ V-IP given Q(X)" is the accuracy the classifier network (trained jointly with the querier network using the Deep V-IP objective) obtains when seeing all the query answers Q(X). In all datasets, V-IP learns to predict with short explanations (average number of queries) and a test accuracy in proximity to what can be achieved if all answers were observed (col. 6). "Acc. w/ Black-Box given Q(X)" reports the accuracy a black-box deep network obtains by training on feature vectors comprised of all query answers Q(X) using the standard cross-entropy loss. Columns 6 and 7 show that training a classifier network with the Deep V-IP objective results in only a minor loss in performance compared to training using the standard supervised-classification cross-entropy loss. "Acc. w/ Black-Box given X" reports test accuracies obtained by training black-box deep networks on the whole input X to produce a single output (the classification label). Comparing these values with the accuracies reported in col. 6, we see that basing predictions on an interpretable query set almost always results in a drop in accuracy. This is expected, since interpretability can be seen as a constraint on learning. For example, there is a drop of about 15% for the HuffingtonNews dataset, since our queries concern the presence/absence of words, which completely ignores the linguistic structure present in documents. Similarly, for the binary image classification tasks (MNIST, KMNIST and Fashion-MNIST) the queries are binary patches, which can be easily interpreted as edges, foregrounds and backgrounds. This binarization, however, results in a drop in performance, especially on Fashion-MNIST, where it is harder to distinguish between some classes like coat and shirt without grayscale information.

H ADDITIONAL QUERY-ANSWER CHAINS

We show additional trajectories for different tasks and datasets: CUB-200 (Figure 13), MNIST (Figure 14), Fashion-MNIST (Figure 15), KMNIST (Figure 16), HuffingtonNews (Figure 18), CIFAR-10 (Figure 19) and CIFAR-100 (Figure 20). In every figure, we see that the correct predictions are explained by an interpretable sequence of query-answer pairs. The evolution of the posterior P(Y | q_{1:k}(x^obs)), as more and more queries are asked, gives insights into the model's decision-making process, as we see its belief shift among the possible labels for x^obs. This adds an additional layer of transparency to the model.

¹⁰Number taken from Chattopadhyay et al. (2022), where the authors fine-tuned a BERT Large Uncased Transformer model to classify documents.
¹¹Number taken from Koh et al. (2020), which trained a CNN on raw images of birds from the CUB-200 dataset.
¹²Number reported from Hu et al. (2018).
¹³Number reported from Clanuwat et al. (2018).
¹⁴Number reported from Xiao et al. (2017a).
¹⁵Trained a Deep Layer Aggregation model (Yu et al., 2018) to classify CIFAR-10 images from scratch.
¹⁶Number reported from Huang et al. (2017).
1. What is the main contribution of the paper regarding interpretability in neural networks? 2. What are the strengths and weaknesses of the proposed method, particularly in comparison to previous works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any suggestions or ideas for future improvements or extensions of the proposed method?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a method called V-IP (Variational Information Pursuit) that makes a multi-step prediction to improve interpretability instead of a one-pass prediction as other neural nets do. In each step, only a small set of features (i.e., the "query set" as called in the paper) is revealed, and the goal is to make a prediction using the minimum number of steps, i.e., parts of the feature set. It can derive interpretability because the subsets of features causing a big increase in the prediction of the ground-truth class between steps can be seen as important rationales for why the model makes such a prediction. Previously, most methods resorted to generative models to model the distributions between labels and subsets of features, picking which parts of the features can maximally predict the target via MCMC sampling methods (the baseline called G-IP). Others have proposed using reinforcement learning to sequentially select the feature sets that predict the correct target. The proposed method, V-IP, instead learns to greedily choose the subsets of features that maximize the downstream classifier's prediction of the target y in each step, as measured by the KL divergence, which can be seen as the mutual information of the currently selected features at each step. Note V-IP can be seen as an RL method with an immediate reward and the decay factor gamma set to 0. In the classification part, V-IP experiments with set-based and mask-based classifiers and finds the mask-based ones perform better. On a wide variety of datasets including images, medical diagnoses, and text data, V-IP outperforms recent G-IP-related methods and RL-based methods.

Strengths And Weaknesses
Strengths
• The writing is clear and easy to follow.
• The examples of interpretability shown in Fig. 3 are interesting to see.
• I like the careful details of biased sampling and the comparison of set-based vs. mask-based classifiers.
• The experiments seem thorough, but some more experiments could be helpful. See below.

Weaknesses
• IMHO, the novelty may be a little low since this method can be seen as an RL method with an immediate reward of improving the classifier's predictions. Other inventions like initial random sampling and subsequent biased sampling are new in my opinion.
• G-IP seems to perform quite similarly to, or sometimes better than, V-IP (Fig. 4) on larger datasets like CUB-200 with large query size 312. This seems contradictory to what the authors state: "We thus expect the gains of V-IP to be most evident on large-scale datasets". An ablation study that goes from a small to a large number of examples in a dataset may help verify what the authors claim.
• There are no consistent baselines across the datasets, which may make comparisons difficult. If it's easy to do, can the authors also run G-IP on the datasets in Table 3 and Figure 5 (CIFAR-10/CIFAR-100) to see if the proposed method V-IP is indeed better than G-IP? Or the authors can comment on why such comparisons are not easy, e.g., due to code inaccessibility.
• I would love to see a more complete ablation study than Supp. E (Fig. 11) that compares another version, <1> no initial random sampling, and puts a subset of results into a small table in the main text if possible.
• The metrics reported in the experimental section are mostly borrowed from other papers. Although it's great to directly compare to the SOTA numbers, there may be subtle differences in the implementation leading to such results.
For example, can the authors please confirm if the baselines BSODA and REFUEL in Table 3 and Figure 5 use the same classifier architectures and training strategies as V-IP, with the only difference lying in the strategy for selecting the features? If not, such numbers should not be directly compared, or some ablation studies are needed, e.g., using BSODA's classifiers instead of the V-IP classifiers.

Clarity, Quality, Novelty And Reproducibility
Clarity
I enjoy reading the paper; it's quite smooth and clear. One little thing: maybe the authors can illustrate that those queries are in fact the features in a dataset in Fig. 1. Although I understand the queries are more general, it left me confused about whether an oracle is needed to get the answers to these questions or whether it needs to be learned. I realize that the questions in Fig. 1 are in fact just feature values in the dataset in the experiment sections, so no oracle is needed.

Quality
The experiments are overall good. One important thing: can the authors please add standard deviations to Tables 2 and 3 and Figures 4 and 5 to understand the significance of each method?

Originality
IMHO, I think the work has slightly lower originality since these methods are similar to RL-based methods. It reminds me of an earlier work [1] that also uses RL to do per-instance feature selection. Can the authors please comment on the relation? Can the authors comment on in what scenarios the proposed greedy approach (gamma = 0) works better, and in what scenarios the RL-based approaches (gamma > 0) can be better? [1] seems to show that the RL-based approaches perform better.

Thoughts (may not be important)
Can this method be further improved by combining it with a generative approach such as a partial VAE? For example, one way to improve V-IP is that after each feature selection one does an imputation for the rest of the unselected features and uses all the features as input to the classifier. I think it would help in the image space, where there is a high degree of correlation. Do you think this approach would improve the accuracy, and also the interpretability?

[1] INVASE: Instance-wise Variable Selection using Neural Networks: https://openreview.net/forum?id=BJg_roAcK7
ICLR
Title Variational Information Pursuit for Interpretable Predictions Abstract There is a growing interest in the machine learning community in developing predictive algorithms that are interpretable by design. To this end, recent work proposes to sequentially ask interpretable queries about data until a high confidence prediction can be made based on the answers obtained (the history). To promote short query-answer chains, a greedy procedure called Information Pursuit (IP) is used, which adaptively chooses queries in order of information gain. Generative models are employed to learn the distribution of query-answers and labels, which is in turn used to estimate the most informative query. However, learning and inference with a full generative model of the data is often intractable for complex tasks. In this work, we propose Variational Information Pursuit (V-IP), a variational characterization of IP which bypasses the need to learn generative models. V-IP is based on finding a query selection strategy and a classifier that minimize the expected cross-entropy between true and predicted labels. We prove that the IP strategy is the optimal solution to this problem. Therefore, instead of learning generative models, we can use our optimal strategy to directly pick the most informative query given any history. We then develop a practical algorithm by defining a finite-dimensional parameterization of our strategy and classifier using deep networks and train them end-to-end using our objective. Empirically, V-IP is 10-100x faster than IP on different Vision and NLP tasks with competitive performance. Moreover, V-IP finds much shorter query chains when compared to reinforcement learning which is typically used in sequential-decision-making problems. Finally, we demonstrate the utility of V-IP on challenging tasks like medical diagnosis where the performance is far superior to the generative modeling approach. 1 INTRODUCTION Suppose a doctor diagnoses a patient with a particular disease. One would want to know not only the disease but also an evidential explanation of the diagnosis in terms of clinical test results, physiological data, or symptoms experienced by the patient. For practical applications, machine learning methods require an emphasis not only on metrics such as generalization and scalability but also on criteria such as interpretability and transparency. With the advent of deep learning methods over traditionally interpretable methods such as decision trees or logistic regression, the ability to perform complex tasks such as large-scale image classification now often implies a sacrifice in interpretability. However, interpretability is important in unveiling potential biases for users with different backgrounds (Yu, 2018) or gaining users’ trust. Most of the prominent work in machine learning that addresses this question of interpretability is based on post hoc analysis of a trained deep network’s decisions (Simonyan et al., 2013; Ribeiro et al., 2016; Shrikumar et al., 2017; Zeiler & Fergus, 2014; Selvaraju et al., 2017; Smilkov et al., 2017; Chattopadhyay et al., 2019; Lundberg & Lee, 2017). These methods typically assign importance scores to different features used in a model’s decision by measuring the sensitivity of the model output to these features. However, explanations in terms of importance scores of raw features might not always be as desirable as a description of the reasoning process behind a model’s decision. 
Moreover, there are rarely any guarantees that these post hoc explanations faithfully represent the model's decision-making process (Koh et al., 2020). Consequently, post hoc interpretability has been widely criticized (Adebayo et al., 2018; Kindermans et al., 2019; Rudin, 2019; Slack et al., 2020; Shah et al., 2021; Yang & Kim, 2019) and there is a need to shift towards ML algorithms that are interpretable by design. An interesting framework for making interpretable predictions was recently introduced by Chattopadhyay et al. (2022). The authors propose the concept of an interpretable query set Q, a set of user-defined and task-specific functions q : X → A, which map a data point in X to an answer in A, each having a clear interpretation to the end-user. For instance, a plausible query set for identifying bird species might involve querying beak shape, head colour, and other visual attributes of birds. Given a query set, their method sequentially asks queries about X until the answers obtained are sufficient for predicting the label/hypothesis Y with high confidence. Notably, as the final prediction is solely a function of this sequence of query-answer pairs, these pairs provide a complete explanation for the prediction. Figure 1 illustrates the framework on a bird classification task. To obtain short explanations (short query-answer chains), the authors propose to use a greedy procedure called Information Pursuit (IP), which was first introduced in Geman & Jedynak (1996). Given any input x^obs, IP sequentially chooses the query which has the largest mutual information about the label/hypothesis Y given the history of query-answers obtained so far. To compute this mutual information criterion, a generative model is first trained to learn the joint distribution between all query-answers q(X) and Y; in particular, Variational Autoencoders (VAEs) (Kingma & Welling, 2013) are employed. This learnt VAE is then used to construct estimates of mutual information via Markov Chain Monte Carlo (MCMC) sampling. Unfortunately, the computational cost of MCMC sampling, coupled with the challenges of learning accurate generative models that enable fast inference, limits the application of this framework to simple tasks. As an example, classifying MNIST digits using 3 × 3 overlapping patches as queries¹ with this approach would take weeks! In this paper, we question the need to learn a full generative model between all query-answers q(X) and Y, given that at each iteration IP is only interested in finding the most informative query given the history. More specifically, we present a variational characterization of IP which is based on the observation that, given any history, the query q∗ whose answer minimizes the KL-divergence between the label distribution P(Y | X) and the posterior P(Y | q∗(X), history) will be the most informative query, as required by IP. As a result, we propose to minimize this KL-divergence term in expectation (over randomization of histories) by optimizing over querier functions, which pick a query from Q given history, parameterized by deep networks. The optimal querier would then learn to directly pick the most informative query given any history, thus bypassing the need for explicitly computing mutual information using generative models.
Through extensive experiments, we show that the proposed method is not only faster (since MCMC sampling methods are no longer needed for inference), but also achieves competitive performance when compared with the generative modeling approach and outperforms other state-of-the-art sequential-decision-making methods. Paper Contributions. (1) We present a variational characterization of IP, termed Variational-IP or V-IP, and show that the solution to the V-IP objective is exactly the IP strategy. (2) We present a practical algorithm for optimizing this objective using deep networks. (3) Empirically, we show that V-IP achieves competitive performance with the generative modelling approach on various computer vision and NLP tasks with a much faster inference time. (4) Finally, we also compare our approach to Reinforcement Learning (RL) approaches used in sequential-decision-making areas like Hard Attention (Mnih et al., 2014) and Symptom Checking (Peng et al., 2018), where the objective is to learn a policy which adaptively chooses a fixed number of queries, one at a time, such that an accurate prediction can be made. In all experiments, V-IP is superior to RL methods. 1Each patch query asks about the pixel intensities observed in that patch for xobs. 2 RELATED WORK Interpretability in Machine Learning. These works can be broadly classified into two main categories: (i) post-hoc interpretability, and (ii) algorithms that are interpretable by design. A large number of papers in this area are devoted to post-hoc interpretability. However, as stated in the Introduction, the reliability of these methods has recently been called into question (Adebayo et al., 2018; Yang & Kim, 2019; Kindermans et al., 2019; Shah et al., 2021; Slack et al., 2020; Rudin, 2019; Koh et al., 2020; Subramanya et al., 2019). Consequently, recent works have focused on developing ML algorithms that are interpretable by design. Several of these works aim at learning deep networks via regularization such that they can be approximated by a decision tree (Wu et al., 2021) or locally by a linear network (Bohle et al., 2021; Alvarez Melis & Jaakkola, 2018). However, the framework of Chattopadhyay et al. (2022) produces predictions that are completely explained by interpretable query-chains and is not merely an approximation to an interpretable model like a decision tree. Another line of work tries to learn latent semantic concepts or prototypes from data and subsequently base the final prediction on these learnt concepts (Sarkar et al., 2022; Nauta et al., 2021; Donnelly et al., 2022; Li et al., 2018; Yeh et al., 2020). However, there is no guarantee that these learnt concepts will be interpretable to the user or align with the user's requirements. In sharp contrast, allowing the user to define an interpretable query set in Chattopadhyay et al. (2022) guarantees by construction that the resulting query-chain explanations are interpretable and useful. Sequential Decision-Making. An alternative approach to learning short query-chains is to use methods for sequential decision learning. These algorithms can be used for making interpretable decisions by sequentially deciding "what to query next?" in order to predict Y as quickly as possible. Mnih et al. (2014) introduced a reinforcement-learning (RL) algorithm, called Hard Attention, to sequentially observe an image through glimpses (small patches) and predict the label.
Rangrej & Clark (2021) introduced a probabilistic model for Hard Attention which is similar to the IP algorithm. More specifically, they propose to learn a partial-VAE (Ma et al., 2018) to directly learn the distribution of images given partially observed pixels. This VAE is then used to select glimpses in order of information gain, as in IP. In another work, Peng et al. (2018) introduced an RL-based framework to sequentially query patient symptoms for fast diagnosis. In §4, we compare V-IP with prior works in this area and show that in almost all cases our method requires a smaller number of queries to achieve the same level of accuracy. We conjecture that the superiority of V-IP over RL-based methods is because the V-IP optimization is not plagued by sparse rewards over long trajectories, for example, a positive reward for a correct prediction after a large number of symptom queries as in Peng et al. (2018). Instead, Deep V-IP can be abstractly thought of as, given a history s, choosing a query q (the action) and receiving DKL(P(Y | x) || P(Y | q(x), s)) as an immediate reward. A more rigorous comparison of the two approaches would be interesting future work. 3 METHODS 3.1 G-IP: INFORMATION PURSUIT VIA GENERATIVE MODELS AND MCMC Let X : Ω → X and Y : Ω → Y be random variables representing the input data and corresponding labels/output. We use capital letters for random variables and small letters for their realizations. Ω is the underlying sample space on which all random variables are defined. Let P(Y | X) denote the ground truth conditional distribution of Y given data X. Let Q be a set of task-specific, user-defined, interpretable functions of data, q : X → A, where q(x) ∈ A is the answer to query q ∈ Q evaluated at x ∈ X. We assume that Q is sufficient for solving the task, i.e., we assume that

∀(x, y) ∈ X × Y: P(y | x) = P(y | {x′ ∈ X : q(x′) = q(x) ∀q ∈ Q}). (1)

In other words, Q(X) := {q(X) : q ∈ Q} is a sufficient statistic for Y. The Information Pursuit (IP) algorithm (Geman & Jedynak, 1996) proceeds as follows: given a data-point xobs, a sequence of most informative queries is selected as

q1 = IP(∅) = argmax_{q∈Q} I(q(X); Y);
qk+1 = IP({qi, qi(xobs)}1:k) = argmax_{q∈Q} I(q(X); Y | q1:k(xobs)). (2)

Here qk+1 ∈ Q refers to the new query selected by IP at step k+1, based on the history (denoted as q1:k(xobs))2, and qk+1(xobs) indicates the corresponding answer. The algorithm terminates after L queries, where L depends on the data point xobs, if all remaining queries are nearly uninformative, that is, ∀q ∈ Q, I(q(X); Y | q1:L(xobs)) ≈ 0. The symbol I denotes mutual information. Evidently, equation 2 requires estimating the query with maximum mutual information with Y based on the history. One approach to carrying out IP is by first learning the distribution P(Q(X), Y) from data using generative models and then using MCMC sampling to estimate the mutual information terms3. However, learning generative models for distributions with high-dimensional support is challenging, and performing multiple iterations of MCMC sampling can likewise be computationally demanding. To address this challenge, in the next subsection we propose a variational characterization of IP that completely bypasses the need to learn and sample from complex generative models.
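To make this procedure concrete, the following is a minimal sketch of the G-IP loop just described; here `estimate_MI` is a hypothetical stand-in for the VAE-plus-MCMC mutual-information estimator and is not part of the original algorithm's specification.

```python
# A minimal sketch of the G-IP loop (Sec. 3.1). `estimate_MI(q, history)` is a
# hypothetical stand-in for the VAE + MCMC estimate of I(q(X); Y | history).
def g_ip(x_obs, queries, estimate_MI, eps=1e-3, max_steps=20):
    history = []  # list of (query, answer) pairs observed so far
    for _ in range(max_steps):
        # Score every remaining query by its conditional mutual information.
        remaining = [q for q in queries if q not in (h[0] for h in history)]
        scores = {q: estimate_MI(q, history) for q in remaining}
        q_next, best = max(scores.items(), key=lambda kv: kv[1])
        if best < eps:  # all remaining queries are nearly uninformative
            break
        history.append((q_next, q_next(x_obs)))  # ask the query on x_obs
    return history
```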
3.2 V-IP: A VARIATIONAL CHARACTERIZATION OF INFORMATION PURSUIT We begin this section by describing our variational characterization of IP. The proposed approach is motivated by the fact that generative models are only a means to an end; what we need is the function, which we call the querier, that maps the histories observed, {qi, qi(xobs)}1:k, to the most informative next query qk+1 ∈ Q. It turns out that this most informative query is exactly the query q∗ whose answer minimizes the KL divergence between the conditional label distribution P(Y | X) and the posterior P(Y | q∗(X), {qi(xobs)}1:k). Based on this insight, is it possible to define an optimization problem to directly learn this querier function? This requires a few ingredients:
• First, we need to learn a querier that, given any possible history one might encounter during IP, chooses the next most informative query. One possible strategy is to minimize the KL divergence objective in expectation over random histories of query-answer pairs.
• Second, the posterior P(Y | q∗(X), {qi(xobs)}1:k) depends on the data distribution and is typically unknown. Thus, we need to estimate it using probabilistic classifiers. A possible solution is to jointly optimize this expected KL divergence over both querier and classifier functions.
This leads to the following variational characterization of IP, which allows us to avoid generative models. Let K(x) be the set of all finite-length query-answer pairs of the form ({q1, q1(x)}, ..., {qm, qm(x)}), generated using queries from Q evaluated on any x ∈ X. We then define K̄ := ∪x∈X K(x) and denote elements of K̄ as "histories". We define a classifier f : K̄ → PY as a function which maps arbitrary query-answer sequences to a distribution over Y. We define a querier g : K̄ → Q as a function which maps arbitrary query-answer sequences to a query q ∈ Q. The variational objective for IP is given by the following functional optimization problem,

min_{f,g} E_{X,S} [ DKL( P(Y | X) ∥ P̂(Y | q(X), S) ) ]
where q := g(S) ∈ Q and P̂(Y | q(X), S) := f({q, q(X)} ∪ S), (V-IP)

and the minimum is taken over all possible mappings f (classifier) and g (querier). Here, S is a random set of query-answer pairs taking values in K̄4. Given S = s and X = xobs, the querier g chooses a query q ∈ Q, evaluates it on xobs and passes the pair {q, q(xobs)} to the classifier. The classifier f then makes a prediction based on s appended with this additional pair {q, q(xobs)}.
2Conditioning on q1:k(xobs) is to be understood as conditioning on the event {x′ ∈ X | {qi, qi(xobs)}1:k = {qi, qi(x′)}1:k}.
3Since mutual information requires computing an expectation over density ratios, which is still intractable despite having learnt a generative model.
4Throughout this paper, whenever we condition on S = s, we mean conditioning on the event of all data x′ ∈ X which share the same answers to the queries in s.
Let (f∗, g∗) be an optimal solution to V-IP. The querier g∗ will be the IP strategy, provided the distribution of S in V-IP, denoted PS, is chosen such that the histories observed while carrying out IP have positive probability mass under PS. Thus, given a data-point xobs,

q1 = g∗(∅) = argmax_{q∈Q} I(q(X); Y);
qk+1 = g∗({qi, qi(xobs)}1:k) = argmax_{q∈Q} I(q(X); Y | q1:k(xobs)). (3)

The above sequential procedure is illustrated in Figure 2. As before, {qi, qi(xobs)}1:k is referred to as the history observed after k queries and is a realization of S. This is formalized in the following proposition, whose proof can be found in Appendix A. Proposition 1. Let (f∗, g∗) be an optimal solution to V-IP.
For any realization S = s such that P(S = s) > 0, define the optimization problem:

max_{P̃∈PY, q∈Q} I(q(X); Y | s) − E_{X|s}[ DKL( P(Y | q(X), s) ∥ P̃(Y | q(X), s) ) ]. (4)

Then there exists an optimal solution (P̃∗s, q∗s) to the above objective such that q∗s = g∗(s) and P̃∗s = f∗({q∗s, q∗s(X)} ∪ s). Thus, at the optimum, the KL divergence term in equation 4 is 0 and g∗ picks the most informative query for any given subset of query-answer pairs S = s, as presented in equation 3. Theoretical guarantees aside, solving the optimization problem defined in V-IP is challenging since functional optimization over all possible classifier and querier mappings is intractable. In the following subsection, we present a practical algorithm for approximately solving the V-IP objective. 3.3 V-IP WITH DEEP NETWORKS Instead of optimizing f and g over the intractable function space, we parameterize them using deep networks with weights θ and η respectively. Our practical version of V-IP, termed Deep V-IP, is as follows,

min_{θ,η} E_{X,S} [ DKL( P(Y | X) ∥ Pθ(Y | qη(X), S) ) ]
where qη := gη(S) and Pθ(Y | qη(X), S) := fθ({qη, qη(X)} ∪ S). (Deep V-IP)

Note that all we have done is replace the arbitrary functions f and g in V-IP by deep networks parameterized by θ and η. To find good solutions there are two key constraints. First, the architectures for the classifier fθ and the querier gη need to be expressive enough to learn over an exponential (in |Q|) number of possible realizations of S. Second, we need to choose a sampling distribution PS for S. Notice that, for any reasonably sized Q, there will be an exponentially large number of possible realizations of S. Ideally, we would like to choose a PS that assigns positive mass only to histories observed during the exact IP procedure; however, this is like the "chicken-and-egg" dilemma. We now briefly discuss the architectures used for optimizing the Deep V-IP objective, followed by an exposition of the sampling distribution for S. Architectures. The architectures for both the querier and classifier networks (described in more detail in Appendix C) are chosen in such a way that they can operate on query-answer sequences of arbitrary length. There can be several choices for this. In this paper, we primarily use masking, where the deep networks operate on fixed-size inputs (the answers to all queries q ∈ Q evaluated on input x) with the unobserved query-answers masked out. We also experiment with the set-based deep architectures proposed in Ma et al. (2018). We show through ablation studies in Appendix E that the masking-based architecture performs better. In practice, qη = argmax(gη(S)), where gη(S) ∈ R|Q| is the output of the querier network, which assigns a score to every query in Q, and argmax computes a one-hot indicator of the max element index of its input vector. To ensure differentiability through argmax we use the straight-through softmax estimator (Paulus et al., 2020), which is described in detail in Appendix D. Finally, Pθ(Y | qη(X), S) is the output of a Softmax applied to the last layer of fθ. Sampling of S. Choosing a sampling distribution that only has positive mass on histories observed during exact IP is like the "chicken-and-egg" dilemma. A simple alternative is to consider a distribution that assigns positive mass to all possible sequences of query-answer pairs from K̄.
This, however, would lead to slow convergence since the deep networks now have to learn to choose the most informative query given a large number of query-answer pair subsets that would never be observed for any xobs ∈ X if one could do exact IP. To remedy this, we choose to adaptively bias our sampling distribution towards realizations of S one would observe if they carried out equation 3 using the current estimate for the querier in place of g∗. More concretely, we optimize Deep V-IP by sequentially biasing the sampling distribution as follows: 1. Initial Random Sampling: We choose an initial distribution P0S which ensures all elements of K̄ have positive mass. We first sample X ∼ PData. Then we sample k ∼ Uniform{0, 1, ..., |Q|} as the number of queries. Subsequently, k queries from Q are selected for X uniformly at random. 2. Subsequent Biased Sampling: The distribution Pj+1S is obtained by using the querier gηj that solves V-IP with PjS as the sampling distribution. In particular, we first sample X ∼ PData and k ∼ Uniform{0, 1, ..., |Q|} as before. Subsequently, we find the first k query-answer pairs for this sampled X using equation 3 with gηj as our querier. Notice that the empty set ∅, corresponding to the empty history, has positive probability under any PjS and hence the querier eventually learns to pick the most informative first query. Subsequent sequential optimization then aims at choosing the most informative second query, and so on, assuming our architectures are expressive enough. In practice, we optimize with random sampling of S using stochastic gradients for numerous epochs. We then take the solution gη0 and fine-tune it with biased sampling strategies, each time optimizing using a single batch and consequently changing the sampling strategy according to the updated querier. Refer to Appendix E for ablation studies on the effectiveness of the biased sampling strategy for S. Stopping Criterion. There are two possible choices: (i) Fixed budget: Following prior work in sequential active testing (Ma et al., 2018; Rangrej & Clark, 2021), we stop asking queries after a fixed number of iterations. (ii) Variable query-lengths: Different data-points might need a different number of queries to make confident predictions. For supervised learning tasks where Y is "almost" a deterministic function of X, that is, maxY P(Y | X) ≈ 1, for any given xobs ∈ X we terminate after L steps if maxY P(Y | q1:L(xobs)) ≥ 1 − ϵ, where ϵ is a hyperparameter. This is termed the "MAP criterion". For tasks where Y is more ambiguous and not a deterministic function of X, we choose to terminate once the posterior is "stable" for a pre-defined number of steps. This stability is measured by the difference between two consecutive posterior entropies, H(Y | q1:k(xobs)) − H(Y | q1:k+1(xobs)) ≤ ϵ. This criterion, termed the "stability criterion", is an unbiased estimate of the mutual-information-based stopping criterion used in Chattopadhyay et al. (2022). Qualitative differences between Generative-IP and Variational-IP. We will refer to the generative approach for carrying out IP described in Chattopadhyay et al. (2022) as Generative-IP or G-IP. The difference between G-IP and V-IP is similar in spirit to that of generative versus discriminative modelling in classification problems (Ng & Jordan, 2001). We conjecture that, when the data distribution agrees with the modelling assumptions made by the generative model (for example, conditional independence of query answers given Y) and the dataset size is "small," then G-IP would obtain better results than V-IP, since there are not enough datapoints for learning competitive querier and classifier networks. We thus expect the gains of V-IP to be most evident on datasets where learning a good generative model is difficult.
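Before turning to experiments, the following is a minimal sketch of one Deep V-IP training step under the masking-based parameterization; the names (`f_theta`, `g_eta`, `answers`, `mask`) are illustrative, and the straight-through trick shown is the simplified form detailed in Appendix D.

```python
import torch
import torch.nn.functional as F

# One Deep V-IP training step (a simplified sketch). `g_eta` scores all |Q|
# queries given a masked history; `f_theta` classifies from a masked history.
# `answers` is the (batch, |Q|) matrix of all query answers Q(x); `mask` is a
# 0/1 matrix encoding the sampled history S.
def vip_step(f_theta, g_eta, answers, mask, labels, tau=1.0):
    scores = g_eta(answers * mask)                       # (batch, |Q|)
    soft = F.softmax(scores / tau, dim=-1)
    hard = F.one_hot(soft.argmax(-1), scores.size(-1)).float()
    q_next = hard + soft - soft.detach()                 # straight-through argmax
    new_mask = torch.clamp(mask + q_next, 0, 1)          # reveal the chosen answer
    logits = f_theta(answers * new_mask)
    return F.cross_entropy(logits, labels)               # equals the KL objective up to a constant
```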
4 EXPERIMENTS In this section, through extensive experiments, we evaluate the effectiveness of the proposed method. We describe the query set used for each dataset in Table 1, with more details in Appendix C. The choice of query sets for each dataset was made to make our approach comparable with prior work. We also complement the results presented here with more examples in the Appendix. Code is available at https://github.com/ryanchankh/VariationalInformationPursuit. 4.1 INTERPRETABLE PREDICTIONS USING V-IP Basing predictions on an interpretable query set allows us to reason about the predictions in terms of the queries, which are compositions of elementary words, symbols or patterns. We illustrate this by analyzing the query-answer chains uncovered by V-IP for different datasets. Figure 3a illustrates the decision-making process of V-IP on an image of a dog from the CIFAR-10 dataset. A priori, the model's belief is almost uniform over all the classes (second row, first column). The first query probes a patch near the centre of the image and observes the snout. Visually it looks similar to the left face of a cat, justifying the shift in the model's belief to the label "cat" with some mass on the label "dog". Subsequent queries are aimed at distinguishing between these two possibilities. Finally, the model becomes more than 99% confident that it is a "dog" once it spots the left ear. Figure 3b shows the query-chain for a "genital herpes" diagnosis of a synthetic patient from the SymCAT-200 dataset. The y-axis shows the query asked at each iteration, with green indicating a "Yes" answer and red indicating a "No". Each row shows the model's current belief about the patient's disease. We begin with an initial symptom, "0: itching of skin", provided by the patient. The subsequent queries ask about different conditions of the skin. The diseases shown are the top-10 most probable out of 200. All of these diseases have skin-related symptoms. After discovering the patient has painful urination (query 11), V-IP zooms in on two possibilities, "Balanitis" and "Genital herpes". The subsequent queries rule out symptoms typically observed in patients with "Balanitis", resulting in an 80% confidence in the herpes diagnosis. For our final example, we elucidate the results for a bird image from the CUB-200 dataset in Figure 3c. The colour scheme for the y-axis is the same as in Figure 3b, with the exception that, unlike the patient case, we do not bootstrap V-IP with an initial positive attribute of the bird. Instead, the first query, about bill shape, is the most informative query about the label Y before any answer is observed. This is indicated with the grey "0:Init". All the top-10 most probable bird species in this context are seabirds, and have very similar visual characteristics to the true species, "Laysan Albatross". After 14 queries V-IP figures out the true class with more than 99% confidence. Thus, in all three case studies, we see that V-IP makes transparent decisions, interpretable in terms of queries specified by the user.
A limitation of this framework, however, is finding a good query set that is interpretable and allows for highly accurate predictions with short explanations (short query-answer chains). We discuss this further in Appendix §G. 4.2 QUANTITATIVE COMPARISON WITH PRIOR WORK Baselines. We compare V-IP primarily to the generative modelling approach for IP, namely G-IP. We also compare to Reinforcement Learning methods prevalent in other areas of sequential decision-making like Hard Attention (Mnih et al., 2014; Rangrej & Clark, 2021) or Symptom Checking (Peng et al., 2018), which can be adapted for our purposes. In particular, we compare with the RAM (Mnih et al., 2014) and RAM+ (Li et al., 2017) algorithms. In both methods, a policy is learnt using deep networks to select queries based on previous query-answers for a fixed number of iterations such that the expected cumulative reward is maximized. A classifier network is also trained simultaneously with the policy network to make accurate predictions. In RAM, this reward is just the negative cross-entropy loss between true and predicted labels at the last step. In RAM+, this reward is the cumulative sum of the negative cross-entropy loss at each step. We also compare our method with the "Random" strategy, where successive queries are chosen randomly, independent of the history observed so far. The predictions given history are still made using the V-IP classifier. We first show results on the simple datasets used in Chattopadhyay et al. (2022), comparing with our own implementation of G-IP, RAM and RAM+. V-IP is competitive with G-IP in terms of performance but far more efficient in terms of speed of inference. In all datasets, V-IP outperforms the RL-based methods. Subsequently, we present results on more complex tasks like RGB image classification. On these datasets, V-IP achieves a higher accuracy given a fixed budget of queries compared with prior work. Comparisons on Simple Tasks. Concise explanations are always preferred due to their simplicity. In Figure 4 we plot the trade-off between accuracy and explanation length obtained by various methods. V-IP is competitive with G-IP and obtains far shorter explanations than the RL-based methods for the same test accuracy. This trade-off is quantified using the Area Under the Curve (AUC) metric in Table 2. Notice that on the HuffingtonNews dataset the RL methods struggle to perform better than even Random. This is potentially due to the fact that in these RL methods, the classifier is trained jointly with the policy, which likely hurts its performance when the action space is large (|Q| = 1000). On the other hand, the random strategy learns its classifier by training on random sequences of query-answer pairs. This agrees with findings in Rangrej & Clark (2021). While V-IP performs competitively with the generative approach, the biggest gain is in terms of the computational cost of inference. Once trained, inference in V-IP, that is, computing the most informative query, amounts to a forward pass through a deep network and is potentially O(1)5. On the other hand, the per-iteration cost in G-IP is O(N + |Q|m), where N is the number of MCMC iterations employed and m is the cardinality of the space q(X) × Y. As an example, on the same GPU server, G-IP takes about 47 seconds per iteration on MNIST whereas V-IP requires just 0.11s, an approximately 400× speedup! Note that the inference cost is the same for V-IP and the RL methods since all of them train a querier/policy function to choose the next query.
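To make the inference-time procedure concrete, a minimal sketch of the V-IP query loop with the MAP stopping criterion is given below; it assumes a trained classifier `f_theta` (outputting logits) and querier `g_eta` operating on masked answer vectors, and all names are illustrative.

```python
import torch

# Inference with a trained V-IP model (sketch): repeatedly reveal the query
# chosen by the querier until the MAP stopping criterion fires. `answers` is
# the (|Q|,) vector of all query answers for x_obs.
@torch.no_grad()
def vip_infer(f_theta, g_eta, answers, eps=0.05, max_steps=50):
    mask = torch.zeros_like(answers)
    for _ in range(max_steps):
        q = g_eta((answers * mask).unsqueeze(0)).argmax()    # most informative query
        mask[q] = 1.0                                        # reveal its answer
        probs = f_theta((answers * mask).unsqueeze(0)).softmax(-1)
        if probs.max() >= 1 - eps:                           # MAP criterion
            break
    return probs.argmax().item(), mask
```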
Comparisons on Complex Tasks. We now move on to more complex datasets where the gains of V-IP are more evident. First, we consider the task of natural image classification and show results on the CIFAR-{10, 100} datasets. For G-IP on these datasets, we refer to the Probabilistic Hard-Attn model introduced in Rangrej & Clark (2021), which proposes to learn a partial-VAE model (Ma et al., 2018) for images and then do inference using this model to compute the most informative query. Figures 5a & b show the accuracy vs. number of queries curves for the different methods. V-IP clearly outperforms all baselines on both datasets. Next, we consider the task of medical diagnosis by querying symptoms of the patient. We show results on the popular SymCAT dataset along with comparisons with prior work in Figure 5c. The plot clearly shows that V-IP achieves a much higher accuracy given a fixed budget of queries. For G-IP on this task, we refer to the BSODA framework introduced in He et al. (2022), which is again based on partial-VAEs. REFUEL (Peng et al., 2018) is a state-of-the-art RL-based method for this task, akin to the RAM technique used in the Hard-Attention literature. The classification accuracies of these different methods on all medical datasets are summarized in Table 3. Numbers for baselines are taken from Nesterov et al. (2022) since we used their released versions of these datasets6. As conjectured in §3.3, MuZhi and Dxy are small-scale datasets with about 500 training samples, thus approaches based on generative models, like BSODA, are able to perform slightly better than V-IP. 5 CONCLUSION IP was recently used to construct interpretable predictions by composing interpretable queries from a user-defined query set. The framework, however, required generative models, which limited its application to simple tasks. Here, we have introduced a variational characterization of IP which does away with generative models and directly optimizes a KL-divergence-based objective to find the most informative query, as required by IP, in each iteration. Through qualitative and quantitative experiments we show the effectiveness of the proposed method. 5For simplicity we consider unit cost for any operation that was computed in a batch concurrently on a GPU. 6https://github.com/SympCheck/NeuralSymptomChecker ACKNOWLEDGMENTS This research was supported by the Army Research Office under the Multidisciplinary University Research Initiative contract W911NF-17-1-0304, the NSF grant 2031985 and by the Simons Foundation Mathematical and Scientific Foundations of Deep Learning (MoDL) grant 135615. Moreover, the authors acknowledge support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE2139757. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. A PROOF OF PROPOSITION 1 Before proceeding to the proof we will prove the following lemma. Lemma 1. Let Q be a user-defined query set and PY be the set of all possible distributions on Y. Then, for any realization S = s, the following holds true:

min_{P̃∈PY, q∈Q} E_{X|s}[ DKL( P(Y | X) || P̃(Y | q(X), s) ) ]
≡ max_{P̃∈PY, q∈Q} [ I(q(X); Y | s) − E_{X|s}[ DKL( P(Y | q(X), s) || P̃(Y | q(X), s) ) ] ]. (5)

Proof. Using information-theoretic properties of the KL-divergence we have the following set of equalities.
min_{P̃∈PY, q∈Q} E_{X|s}[ DKL( P(Y | X) || P̃(Y | q(X), s) ) ]
= min_{P̃∈PY, q∈Q} E_{X|s}[ DKL( P(Y | X, q(X), s) || P̃(Y | q(X), s) ) ]
= min_{P̃∈PY, q∈Q} E_{X|s}[ Σ_Y P(Y | X, q(X), s) log ( P(Y | X, q(X), s) / P̃(Y | q(X), s) ) ]
= min_{P̃∈PY, q∈Q} E_{X|s}[ Σ_Y P(Y | X, q(X), s) log ( P(X, Y | q(X), s) / ( P̃(Y | q(X), s) P(X | q(X), s) ) ) ]
= min_{P̃∈PY, q∈Q} E_{X,Y|s}[ log ( P(X, Y | q(X), s) / ( P(Y | q(X), s) P(X | q(X), s) ) ) ] + E_{X,Y|s}[ log ( P(Y | q(X), s) / P̃(Y | q(X), s) ) ]
= min_{P̃∈PY, q∈Q} I(X; Y | q(X), s) + E_{X|s}[ DKL( P(Y | q(X), s) || P̃(Y | q(X), s) ) ] (6)

In the first equality, assuming P(X = x, S = s) > 0,7 we used the fact that given any X = x, the label Y is independent of any query answer q(X) = q(x) and of the event {S = s}. Thus, P(Y | X = x) = P(Y | X = x, q(X) = q(x), S = s). In the fourth equality we multiplied the term inside the log by the identity P(Y | q(X), s) / P(Y | q(X), s), where P(Y | q(X), s) represents the true posterior of Y given the query answer q(X) and S = s. Now observe that for any fixed S = s and any q ∈ Q,

I(X, q(X); Y | s) = I(X; Y | s) + I(q(X); Y | X, s) = I(X; Y | s). (7)

The second equality is obtained by using the fact that q(X) is a function of X. Decomposing I(X, q(X); Y | s) another way,

I(X, q(X); Y | s) = I(q(X); Y | s) + I(X; Y | q(X), s). (8)

From equation 7 and equation 8 we conclude that

min_{q∈Q} I(Y; X | q(X), s) ≡ min_{q∈Q} −I(q(X); Y | s).

Substituting the right-hand side of the above result in equation 6, we obtain the desired result.
7For any x′ ∈ X, if P(X = x′, S = s) = 0, then x′ would not contribute to the expectation in the first equation, so we need not consider this case.
Proof of Proposition 1. Restating the objective from equation V-IP,

min_{f,g} E_{X,S}[ DKL( P(Y | X) ∥ P̂(Y | q(X), S) ) ]
where q := g(S) ∈ Q and P̂(Y | q(X), S) := f({q, q(X)} ∪ S).

Now, for any realization S = s such that P(S = s) > 0, we have

min_{P̃∈PY, q∈Q} E_{X|s}[ DKL( P(Y | X) || P̃(Y | q(X), s) ) ]
= E_{X|s}[ DKL( P(Y | X) || P̃∗s(Y | q∗s(X), s) ) ]
= E_{X|s}[ DKL( P(Y | X) || P̂(Y | q̃(X), s) ) ] + E_{X|s}[ Σ_Y P(Y | X) log ( P̂(Y | q̃(X), s) / P̃∗s(Y | q∗s(X), s) ) ]
= E_{X|s}[ DKL( P(Y | X) || P̂(Y | q̃(X), s) ) ] − E_{X|s}[ Σ_Y P(Y | X) log ( P̃∗s(Y | q∗s(X), s) / P̂(Y | q̃(X), s) ) ]
= E_{X|s}[ DKL( P(Y | X) || P̂(Y | q̃(X), s) ) − DKL( P̃∗s(Y | q∗s(X), s) || P̂(Y | q̃(X), s) ) ]
≤ E_{X|s}[ DKL( P(Y | X) || P̂(Y | q̃(X), s) ) ] (9)

In the first equality we used the definition of (P̃∗s, q∗s) as the solution to the minimization problem on the left-hand side. In the second equality, q̃ = g(s) for any querier g and P̂(Y | q̃(X), s) = f({q̃, q̃(X)} ∪ s) for any classifier f. In the fourth equality we appealed to Lemma 1 to conclude that P̃∗s(Y | q∗s(X), s) = P(Y | q∗s(X), s), the true posterior over Y given the answer q∗s(X) and history s. In the final step we used the non-negativity of the KL-divergence to drop the second term. Since the inequality in equation 9 holds for all S = s and all mappings f and g, we conclude that q∗s = g∗(s) and P̃∗s = f∗({q∗s, q∗s(X)} ∪ s) for any given S = s. Equation 4 in the proposition is then proved by using Lemma 1 to characterize q∗s and P̃∗s. B TRAINING PROCEDURE Consider a mini-batch of N samples {(xi, yi)}Ni=1 from the training set. In the Deep V-IP objective, the KL-divergence is mathematically equivalent to the cross-entropy loss. The mini-batch estimate of this loss can be expressed as:

min_{θ,η} −(1/N) ΣNi=1 yi log ŷi
subject to qη = argmax(gη(si)), ŷi = fθ(si ∪ {qη, qη(xi)}), (10)

where yi is the ground truth label corresponding to input xi.
si is obtained by sampling from PjS as defined in §3.3. We optimize the above objective using Stochastic Gradient Descent (or its variants). To optimize objective 10, for every sample xi in the batch, the sampled history si is fed into the querier network gη, which outputs a score for every query q ∈ Q. The argmax(.) operator (see Appendix D regarding its differentiability) converts these scores into a |Q|-dimensional one-hot vector, with the non-zero entry at the location of the max. We then append this argmax query qη and its answer, (qη, qη(xi)), to si. The updated history si ∪ (qη, qη(xi)) is then fed into the classifier fθ to obtain a softmax distribution over the labels, denoted ŷi. C EXPERIMENT DETAILS All of our experiments are implemented in Python using PyTorch (Paszke et al., 2019) version 1.12. Moreover, all training is done on one computing node with a 64-core 2.10GHz Intel(R) Xeon(R) Gold 6130 CPU, 8 NVIDIA GeForce RTX 2080 GPUs (each with 10GB memory) and 377GB of RAM. General Optimization Scheme. The following optimization scheme is used in all experiments for both Initial Random Sampling and Subsequent Biased Sampling, unless stated otherwise. We minimize the Deep V-IP objective using Adam (Kingma & Ba, 2014) as our optimizer, with learning rate lr=1e-4, betas=(0.9, 0.999), weight decay=0 and amsgrad=True (Reddi et al., 2019). We also use a Cosine Annealing learning rate scheduler (Loshchilov & Hutter, 2016) with T max=50. We train our networks fθ and gη for 500 epochs using batch size 128. In both sampling stages, we linearly anneal the temperature parameter τ in our straight-through softmax estimator from 1.0 to 0.2 over the 500 epochs. Training Details for RAM and RAM+. To train the RL policy and classification networks we use the popular PPO algorithm (Schulman et al., 2017) with entropy regularization (0.01 regularization parameter). We used an initial learning rate of 3e-5 and clip value 0.2. For a fair comparison, the architectures for the policy8 and classification networks are kept the same for RAM, RAM+ and V-IP. C.1 SPECIES CLASSIFICATION ON CUB. Dataset and Query Set. Caltech-UCSD Birds-200-2011 (CUB-200) (Wah et al., 2011) is a dataset of 200 bird species, containing 11,788 images with 312 annotated binary features for different visual attributes of birds, such as the color or shape of the wing or the head of the bird. We construct the query set Q using these attributes. For example, given an image of a Blue Jay, a possible query might be "is the back-colour blue?" with the answer "Yes". The query set construction and data preprocessing steps are the same as in Chattopadhyay et al. (2022): in the original dataset, the per-image attribute annotations are noisy, often containing an imprecise description of the bird. Hence, if a certain attribute is true/false for over 50% of the samples in a given category, then that attribute is set to true/false for all samples in that category. We also train a CNN, called the concept network, to answer each query using the training set annotations. This concept network provides answers for training all the methods compared in §4, namely RAM, RAM+, G-IP and V-IP. Last but not least, our query set Q consists of 312 queries, one per binary attribute, whose answer is 1 if the attribute is present and −1 if absent. Architecture and Training. A diagram of the architectures is shown in Figure 6. Both fθ and gη have the same fully-connected network architecture (except for the last linear layer), but they do not share any parameters with each other.
We initialize each architecture randomly, and train using the optimization scheme mentioned at the beginning of this section. The input history is a |Q|-dimensional vector with unobserved answers masked out with zeros. Updating the History. Let the history of query-answer pairs observed after k steps be denoted Sk. Sk is represented by a masked vector of dimension 312, and qk+1 = argmax(gη(Sk))9 is a one-hot vector of the same dimension, denoting the next query. For a given observation xobs, we update Sk using qk+1 as follows: • We obtain the query-answer by performing a point-wise multiplication, that is, qk+1(xobs) = qk+1 ⊙ xobs. • We update the history to Sk+1 by adding this query-answer qk+1(xobs) to Sk. The entire process can be denoted by Sk+1 = Sk + qk+1 ⊙ xobs. 8This term is from the RL community. In our context, the policy network is exactly the querier network. 9Recall gη is our querier function parameterized by a deep network with weights η. C.2 TOPIC IDENTIFICATION ON THE HUFFINGTON POST NEWS CATEGORY DATASET. Dataset and Query Set. The Huffington Post News Category dataset (HuffingtonNews) is a natural language dataset containing news "headlines" and their short descriptions (continuations of the headline) extracted from Huffington Post news published between 2012 and 2018. We follow the same data-processing procedure as Chattopadhyay et al. (2022): each data point is an extended headline formed by concatenating the headline with its short description. Moreover, we remove redundant categories, including semantically ambiguous and HuffPost-specific ones such as "Impact" and "Worldpost." We also remove categories with a small number of articles, and merge semantically equivalent category names, such as "Arts & Culture" versus "Culture & Art." After processing, there is a total of 10 categories in the dataset. In addition, only the top-1,000 words according to their tf-idf scores (Lavin, 2019) are kept, with semantically redundant words removed. For more details, please refer to Chattopadhyay et al. (2022). The query set Q contains binary questions about whether one of the 1000 words exists in the headline. The query answer is 1 if the word in question is present and −1 if absent. Architecture and Training. A diagram of the architectures is shown in Figure 7. Both the classifier fθ and the querier gη share the same architecture except for the last layer; however, they do not share any parameters. The inputs to fθ and gη are masked vectors, with masked values set to 0. To optimize the Deep V-IP objective, we randomly initialize fθ and gη, and train using the Adam optimizer and Cosine Annealing learning rate scheduler, with the settings mentioned at the beginning of the section. During Subsequent Biased Sampling, we train for 100 epochs using Stochastic Gradient Descent (SGD) instead of Adam, with learning rate lr=1e-4 and momentum=0.9, and a Cosine Annealing learning rate scheduler with T max=100. The input history is a |Q|-dimensional vector with unobserved answers masked out with zeros. Updating the History. The method of updating the history is equivalent to that for CUB, as described in §C.1. The history is now a masked vector of dimension 1000, since there are 1000 queries in our query set for this dataset.
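For concreteness, a minimal sketch of this masked-history update (shared by CUB and HuffingtonNews) is given below; tensor names are illustrative.

```python
import torch

# History update for attribute/word queries (CUB and HuffingtonNews, sketch).
# `history` is the masked answer vector S_k, `q_onehot` the querier's one-hot
# output, and `x` the full answer vector for x_obs. Unobserved entries are 0,
# so revealing a query reduces to S_{k+1} = S_k + q ⊙ x.
def update_history(history: torch.Tensor, q_onehot: torch.Tensor,
                   x: torch.Tensor) -> torch.Tensor:
    return history + q_onehot * x
```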
C.3 IMAGE CLASSIFICATION ON MNIST, KMNIST AND FASHION-MNIST. Each setting mentioned in the following is the same for MNIST, KMNIST and Fashion-MNIST unless stated otherwise. Dataset and Query Set. MNIST (LeCun et al., 1998), KMNIST (Clanuwat et al., 2018) and Fashion-MNIST (Xiao et al., 2017b) are gray-scale image datasets, each containing 60,000 training images and 10,000 testing images of size 28 × 28. We follow Chattopadhyay et al. (2022) for the data pre-processing procedure and the design of the query sets for all three datasets. Each gray-scale image is converted into a binary image. For MNIST and KMNIST, we round values greater than or equal to 0.5 up to 1 and values below 0.5 down to −1. For Fashion-MNIST, we round values greater than or equal to 0.1 up to 1 and values below 0.1 down to −1. Each query set Q contains all overlapping 3 × 3 patches over the 28 × 28 pixel space, resulting in 676 queries. Each query answer indicates the 9 pixel intensities at the queried patch. The inputs to the classifier fθ and the querier gη are masked images, with pixels zeroed out if they are not part of the current history. Updating the History. The method for updating the history is equivalent to that for CUB as described in §C.1, with some differences which we describe next. For a given observation xobs, we update Sk using qk+1 as follows: • We reshape qk+1 from a vector of dimension 676 (the number of queries) to a 2D grid of dimension 26 × 26, denoted q̂k+1. • q̂k+1 is then converted to a binary matrix, with 1s at the locations corresponding to the queried 3 × 3 patch and 0s everywhere else, via a convolution with a kernel of all 1s of size 3 × 3, stride 1 and padding 2. • We then obtain the query-answer by taking the Hadamard product of the convolved output (from the previous step) with xobs. This results in a masked image, q̂k+1(xobs), with the queried patch revealed and 0 everywhere else. • Finally, we update the history to Sk+1 by adding this query-answer to Sk. To account for pixels observed (unmasked) in q̂k+1(xobs) that overlap with the history Sk, we clip the values of the updated history to lie between −1 and 1. The entire process can be summarized as Sk+1 = Clip(Sk + (Conv2D(q̂k+1) ⊙ xobs), minval = −1, maxval = 1). Architecture and Training. Refer to Figure 8 for a diagram of the architecture of the classifier fθ and the querier gη. Every Conv is a 2D convolution with a 3 × 3 kernel, stride 1 and padding 1. Moreover, every MaxPool is a 2D max pooling operator with a 2 × 2 kernel. We initialize each architecture randomly, and train our networks using Adam as our optimizer and a Cosine Annealing learning rate scheduler, with the same settings mentioned at the beginning of this section. C.4 MEDICAL DIAGNOSIS ON SYMCAT, MUZHI AND DXY Dataset and Query Set. MuZhi (Wei et al., 2018) and Dxy (Xu et al., 2019) are two real-world medical datasets containing symptoms and diseases extracted from Chinese healthcare websites (https://muzhi.baidu.com/ and https://dxy.com/), where doctors provide online help based on patients' self-reported symptoms and online conversations. We follow the same data processing procedure as He et al. (2022). MuZhi has 66 symptoms (features) and 4 diseases (classes): children's bronchitis, children's functional dyspepsia, infantile diarrhea infection, and upper respiratory infection. The Dxy dataset contains 41 symptoms for 5 diseases: allergic rhinitis, upper respiratory infection, pneumonia, children's hand-foot-mouth disease, and pediatric diarrhea.
Last but not least, our query set Q consists of queries that ask about the presence of each symptom for the patient; the query-answer is either 1 for Yes, 0 for No, or −1 for Can't Say. SymCAT is a synthetic medical dataset generated from a symptom-checking website called SymCAT, introduced by Peng et al. (2018). The dataset has three versions: SymCAT-200 contains 328 symptoms and 200 diseases, SymCAT-300 contains 349 symptoms and 300 classes, and SymCAT-400 contains 355 symptoms and 400 classes. We used the publicly available version of this dataset provided by Nesterov et al. (2022) at https://github.com/SympCheck/NeuralSymptomChecker. Our query set Q consists of queries that ask about the presence/absence of each symptom for the patient; the query-answer is either 1 for Yes or 0 for No. Architecture and Training. A diagram of the architecture is shown in Figure 9. We used the set architecture proposed in Ma et al. (2018). We made this choice for a fair comparison with He et al. (2022), which also used the same architecture for its partial-VAEs. The input to the network is a concatenation of the query-answers {q(xj) : q ∈ Q}, trainable positional embeddings ej (red blocks), and bias terms bj (blue blocks). Each positional embedding is also multiplied by the query-answer. After the first linear layer, the intermediate embedding is multiplied by a query-answer mask derived from the history, in which each dimension has value 1 for queries selected in the history and 0 otherwise. To optimize our objective, we randomly initialize fθ and gη and train using the same optimizer and learning rate scheduler settings as mentioned at the beginning of the section. However, we train our algorithms for only 200 epochs, and linearly anneal the straight-through softmax estimator's temperature τ from 1.0 to 0.2 over the first 50 epochs. Updating the History. Let the history of query-answer pairs observed after k steps be denoted Sk. Since we used set architectures for our querier and classifier networks on these datasets, as proposed in Ma et al. (2018), Sk is represented as a set consisting of embeddings of the query-answer pairs observed so far. The next query, qk+1 = argmax(gη(Sk)), is a one-hot vector of dimension equal to the size of the query set used. For a given observation xobs, we update Sk using qk+1 as follows: • Let M be a matrix of size |Q| × d, where every row corresponds to a query-answer evaluated at xobs and d is the size of the embeddings used for representing the query-answers. We obtain the answer corresponding to the selected query qk+1 by performing a matrix-vector product, that is, qk+1(xobs) = qTk+1 M. • We update the history to Sk+1 by concatenating qk+1(xobs) to Sk. C.5 IMAGE CLASSIFICATION ON CIFAR-10 AND CIFAR-100 Dataset and Query Set. CIFAR-{10,100} (Krizhevsky et al., 2009) are natural image datasets containing {10, 100} different classes of objects. They contain 50,000 training images and 10,000 testing images. Each RGB image is of size 32 × 32. We design our query set Q following Rangrej & Clark (2021), consisting of all 8 × 8 overlapping patches with stride 4. This results in a query set size |Q| of 49. The inputs to the classifier fθ and the querier gη are full-sized 3 × 32 × 32 masked images, with pixels zeroed out if they are not part of the current history. Architecture and Training.
For CIFAR-10, the architectures used for the classifier fθ and the querier gη are both Deep Layer Aggregation networks (DLA) (Yu et al., 2018). fθ and gη do not share any parameters with each other. An out-of-the-box implementation was used and can be found here: https://github.com/kuangliu/pytorch-cifar/blob/master/models/dla.py. For CIFAR-100, the architectures used for the classifier fθ and the querier gη are both DenseNet-169 (Huang et al., 2017). fθ and gη do not share any parameters with each other. An out-of-the-box implementation was used and can be found here: https://github.com/kuangliu/pytorch-cifar/blob/master/models/densenet.py. The only change made to the architectures is the last layer, whose dimensions depend on the number of classes or the size of the query set Q. During training, we follow standard data processing techniques as in He et al. (2016). In CIFAR-10, we set batch size 128 for both the initial and subsequent sampling stages. In CIFAR-100, we set batch size 64 for both the initial and subsequent sampling stages. For both CIFAR-10 and CIFAR-100, we randomly initialize fθ and gη, and train them for 500 epochs during Initial Random Sampling using the Adam optimizer and Cosine Annealing learning rate scheduler, with the settings mentioned above. During Subsequent Biased Sampling, we optimize using Stochastic Gradient Descent (SGD), with learning rate lr=0.01 and momentum=0.9 and a Cosine Annealing learning rate scheduler with T max=50, for 100 epochs. Updating the History. The method of updating the history is similar to that for MNIST, KMNIST, and Fashion-MNIST, as described in §C.3. For a given observation xobs, we update the history Sk using qk+1 as follows: • We reshape qk+1 from a vector of dimension 49 (the number of queries) to a 2D grid of dimension 7 × 7, denoted q̂k+1. • q̂k+1 is then converted to a binary matrix, with 1s at the locations corresponding to the queried 8 × 8 patch and 0s everywhere else, via a 2D transposed convolution with a kernel of all 1s of size 8 × 8, stride 4 and no padding. • We then obtain the query-answer by taking the Hadamard product of the convolved output (from the previous step) with xobs. This results in a masked image, q̂k+1(xobs), with the queried patch revealed and 0 everywhere else. • Finally, we update the history to Sk+1 by adding this query-answer to Sk. Any pixel (i, j) that is observed (unmasked) in q̂k+1(xobs) and is also observed in the history Sk is handled by keeping the old value whenever the update would double it. The entire process can be summarized as

S′k+1 = Sk + TransposedConv2D(q̂k+1) ⊙ xobs,
Sk+1[i, j] = Sk[i, j] if S′k+1[i, j] = 2 Sk[i, j], and Sk+1[i, j] = S′k+1[i, j] otherwise, for all pixels (i, j). (11)

D STRAIGHT-THROUGH SOFTMAX ESTIMATOR As mentioned in §3.3 in the main text, we employ the straight-through softmax gradient estimator for differentiating through the argmax operation. We now describe this estimator in detail. Consider the following optimization problem,

min_{θ∈Rd} f(argmax(θ)). (12)

Let Z := argmax(θ). We assume f is differentiable in its input. The straight-through softmax estimator of the gradient of f w.r.t. θ is defined as

∇STθ f := (∂f/∂Z) (d softmaxτ(θ)/dθ),

where softmaxτ(θ) := [ e^{θ1/τ}/Σdi=1 e^{θi/τ}, e^{θ2/τ}/Σdi=1 e^{θi/τ}, ..., e^{θd/τ}/Σdi=1 e^{θi/τ} ] and τ is the temperature parameter. Notice that limτ→0 softmaxτ(θ) = argmax(θ). Thus, we replace the gradient of the argmax operation, which is either 0 almost everywhere or does not exist, with a surrogate biased estimate. Equation 12 can then be optimized using the straight-through estimator by iteratively carrying out the update θ = θ − α ∇STθ f, where α is the learning rate. In our experiments we start with τ = 1.0 and linearly anneal it down to 0.2 over the course of training.
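A minimal PyTorch-style sketch of this estimator is given below; the forward pass returns the hard one-hot argmax while gradients flow through softmaxτ, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

# Straight-through softmax estimator (Appendix D, sketch): the value of the
# returned tensor equals the one-hot argmax, while its gradient w.r.t. theta
# is that of softmax(theta / tau).
def straight_through_argmax(theta: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    soft = F.softmax(theta / tau, dim=-1)
    hard = F.one_hot(soft.argmax(dim=-1), theta.size(-1)).float()
    return hard + soft - soft.detach()
```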
E ABLATION STUDIES In §3.3 we discussed two possible architectures for operating on histories of arbitrary sequence length: the set-based architectures (as in Figure 9) and the fixed-size-input masking-based architectures (as in Figure 6). In Figure 10, we compare the two architectures on the same task of bird species identification using the same query set (see Table 1). We see that the fixed-size-input masking-based architecture performs better in terms of the average number of queries needed to reach the same performance. Based on this observation, we use the latter architecture in all our experiments except on the medical datasets, where we used the set-based architecture for a fair comparison with the BSODA method, which uses a similar set-based architecture for its partial-VAEs. In §3.3 we also discussed that a sampling distribution PS that assigns positive mass to every element in K̄ would make learning a good querier function challenging, since the network has to learn over an exponentially (in the size of Q) large number of histories. Instead, we proposed to sequentially bias the sampling according to the current estimate of the querier function. We validate the usefulness of this biased sampling strategy in Figure 11. In most datasets we observe that biasing the sampling distribution for S helps learn better querying strategies (in terms of accuracy vs. explanation-length trade-offs). Notice that in most datasets, biased sampling without the initial random sampling ultimately ends up learning a slightly better strategy (in terms of accuracy vs. explanation-length trade-offs). However, since biased sampling requires multiple forward passes through our querier network to generate histories, it is much slower than the initial random sampling (IRS) scheme for PS. Thus, when computational resources are not a concern one could do away with the initial random sampling, but under a computational budget, the random sampling allows for finding a quick solution which can then be finetuned using biased sampling. This finetuning will potentially require fewer epochs to reach good performance than training using biased sampling from scratch. F EXTENDED RESULTS In Figure 12, we show the trade-off between accuracy and explanation length (avg. number of queries) on the KMNIST and Fashion-MNIST datasets as the stopping criterion ϵ is changed. In both these datasets, the "MAP criterion" is used as the stopping criterion. V-IP performs better than the RL-based RAM and RAM+ on both these datasets. V-IP is competitive with G-IP, eventually surpassing it in terms of accuracy for longer query-answer chains. In addition, Table 4 shows extended results for AUC values for the test accuracy versus explanation length curves on different datasets. A simplified table is shown in Table 2. G COMPARING V-IP WITH BLACK-BOX DEEP NETWORKS An important aspect of the framework introduced by Chattopadhyay et al. (2022) is that the end-user defines queries that are interpretable to them. Given this set of queries, Q, V-IP learns to efficiently compose them into concise explanations for model predictions (in terms of query-answer chains).
This begs the question: how much do we lose in terms of performance by constructing an interpretable Q? In Table 5, we report results studying this question. "Acc. w/ V-IP given ϵ" is the test accuracy obtained by V-IP after termination using our stopping criterion. "Acc. w/ V-IP given Q(X)" is the accuracy the classifier network (trained jointly with the querier network using the Deep V-IP objective) obtains when seeing all the query answers Q(X). In all datasets, V-IP learns to predict with short explanations (avg. number of queries) and a test accuracy close to what can be achieved if all answers were observed (col 6). "Acc. w/ Black-Box given Q(X)" reports the accuracy a black-box deep network obtains by training on feature vectors comprised of all query answers Q(X) using the standard cross-entropy loss. Columns 6 and 7 show that training a classifier network with the Deep V-IP objective results in only a minor loss in performance compared to training using the standard supervised classification cross-entropy loss. "Acc. w/ Black-Box given X" reports test accuracies obtained by training black-box deep networks on the whole input X to produce a single output (the classification label). Comparing these values with the accuracies reported in col 6, we see that basing predictions on an interpretable query set almost always results in a drop in accuracy. This is expected since interpretability can be seen as a constraint on learning. For example, there is a drop of about 15% for the HuffingtonNews dataset, since our queries are about the presence/absence of words, which completely ignores the linguistic structure present in documents. Similarly, for the binary image classification tasks (MNIST, KMNIST and Fashion-MNIST) the queries are binary patches which can be easily interpreted as edges, foregrounds and backgrounds. This binarization however results in a drop in performance, especially in Fashion-MNIST, where it is harder to distinguish between some classes, like coat and shirt, without grayscale information. H ADDITIONAL QUERY-ANSWER CHAINS We show additional trajectories for different tasks and datasets: CUB-200 (Figure 13), MNIST (Figure 14), Fashion-MNIST (Figure 15), KMNIST (Figure 16), HuffingtonNews (Figure 18), CIFAR-10 (Figure 19) and CIFAR-100 (Figure 20). In every figure, we see that the correct predictions are explained by an interpretable sequence of query-answer pairs. The evolution of the posterior P(Y | q1:k(xobs)), as more and more queries are asked, gives insight into the model's decision-making process as we see its belief shift among the possible labels for xobs. This adds an additional layer of transparency to the model. 10Number taken from Chattopadhyay et al. (2022), where the authors fine-tuned a BERT Large Uncased Transformer model to classify documents. 11Number taken from Koh et al. (2020), which trained a CNN on raw images of birds from the CUB-200 dataset. 12Number reported from Hu et al. (2018). 13Number reported from Clanuwat et al. (2018). 14Number reported from Xiao et al. (2017a). 15Trained a Deep Layer Aggregation model (Yu et al., 2018) to classify CIFAR-10 images from scratch. 16Number reported from Huang et al. (2017).
1. What is the main contribution of the paper, and how does it differ from previous works?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its theoretical development and quantitative performance?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the experimental results and their interpretation?
5. How does the reviewer evaluate the significance and impact of the paper on the field of interpretable classification models?
Summary Of The Paper
In this paper the authors propose Variational Information Pursuit, an interpretable-by-design classification model and estimation scheme. The idea is motivated by the generative variant, which they denote G-IP. The authors present a complete framework with model definitions, an explanation of the training scheme, and a proof that the loss does what it is supposed to do. Finally, they show through experiments how the proposed method performs against baselines.

Strengths And Weaknesses
Positives:
- I find the paper very interesting and clearly written. The idea is neat and does seem to work in practice.
- I feel that in terms of quantitative performance it would be better to emphasize the speed of inference much more, perhaps even with some figures, so that the message is really hammered home that V-IP is much faster than G-IP.
- The theoretical development is also a plus.

Negatives:
- I do not understand why in (V-IP) the authors need to formulate their objective as a constrained optimization task. Why not directly optimize f, g?
- The authors state in the Introduction that "We empirically demonstrate the efficacy of the proposed method over generative modelling on various computer vision and NLP tasks", but in the case of MNIST, V-IP is clearly not the winner. I think the authors should modify these statements to reflect the results more accurately.
- Something that is sometimes hard to do is to plot standard deviations in the figures and report them in the results tables. In the RL literature this is standard practice: you need to repeat the experiments several times to obtain those estimates. Please consider adding them wherever possible.
- A clear limitation of the present paper is the need for a good query set Q that allows for interpretation. If one constructs the set Q essentially at random, is it going to give useful information to the practitioner? On the other hand, it is quite clear that if Q is very limited, the proposed method would not work well against non-interpretable models. Some discussion of this would be good to have.

Clarity, Quality, Novelty And Reproducibility
Variational Information Pursuit for Interpretable Predictions:
- Section 2: when you write "in almost all cases our method performs better", it would be good to be precise about what is meant by "better".
- Section 3: you say that "the algorithm terminates after L queries if ...". L is capitalized here, so it seems to indicate a user-defined parameter. Would it be better to say that the algorithm terminates if arg max I is 0? Please clarify.
- About the stopping criterion based on differences of entropies: "which is difficult to compute without explicit generative modeling of the query-answer distribution". Could you clarify this last part: if it is hard to estimate, then how do you do it?
- Section 4.2: please explain RAM and RAM+ at least in a few words, perhaps in a similar way to how G-IP was explained in earlier sections.
- In Fig. 4, why does RAM performance go down on the Huffington dataset?
- In Fig. 4, I would like to see the baseline performance of some reasonable non-interpretable model that does not use queries but just takes the whole input as is and produces one output. It would be important to see how much we lose by trying to be interpretable.
Minors: Please select either V-IP or VIP and use it consistently.
ICLR
Title Variational Information Pursuit for Interpretable Predictions Abstract There is a growing interest in the machine learning community in developing predictive algorithms that are interpretable by design. To this end, recent work proposes to sequentially ask interpretable queries about data until a high confidence prediction can be made based on the answers obtained (the history). To promote short query-answer chains, a greedy procedure called Information Pursuit (IP) is used, which adaptively chooses queries in order of information gain. Generative models are employed to learn the distribution of query-answers and labels, which is in turn used to estimate the most informative query. However, learning and inference with a full generative model of the data is often intractable for complex tasks. In this work, we propose Variational Information Pursuit (V-IP), a variational characterization of IP which bypasses the need to learn generative models. V-IP is based on finding a query selection strategy and a classifier that minimize the expected cross-entropy between true and predicted labels. We prove that the IP strategy is the optimal solution to this problem. Therefore, instead of learning generative models, we can use our optimal strategy to directly pick the most informative query given any history. We then develop a practical algorithm by defining a finite-dimensional parameterization of our strategy and classifier using deep networks and train them end-to-end using our objective. Empirically, V-IP is 10-100x faster than IP on different Vision and NLP tasks with competitive performance. Moreover, V-IP finds much shorter query chains when compared to reinforcement learning which is typically used in sequential-decision-making problems. Finally, we demonstrate the utility of V-IP on challenging tasks like medical diagnosis where the performance is far superior to the generative modeling approach. 1 INTRODUCTION Suppose a doctor diagnoses a patient with a particular disease. One would want to know not only the disease but also an evidential explanation of the diagnosis in terms of clinical test results, physiological data, or symptoms experienced by the patient. For practical applications, machine learning methods require an emphasis not only on metrics such as generalization and scalability but also on criteria such as interpretability and transparency. With the advent of deep learning methods over traditionally interpretable methods such as decision trees or logistic regression, the ability to perform complex tasks such as large-scale image classification now often implies a sacrifice in interpretability. However, interpretability is important in unveiling potential biases for users with different backgrounds (Yu, 2018) or gaining users’ trust. Most of the prominent work in machine learning that addresses this question of interpretability is based on post hoc analysis of a trained deep network’s decisions (Simonyan et al., 2013; Ribeiro et al., 2016; Shrikumar et al., 2017; Zeiler & Fergus, 2014; Selvaraju et al., 2017; Smilkov et al., 2017; Chattopadhyay et al., 2019; Lundberg & Lee, 2017). These methods typically assign importance scores to different features used in a model’s decision by measuring the sensitivity of the model output to these features. However, explanations in terms of importance scores of raw features might not always be as desirable as a description of the reasoning process behind a model’s decision. 
Moreover, there are rarely any guarantees that these post hoc explanations faithfully represent the model's decision-making process (Koh et al., 2020). Consequently, post hoc interpretability has been widely criticized (Adebayo et al., 2018; Kindermans et al., 2019; Rudin, 2019; Slack et al., 2020; Shah et al., 2021; Yang & Kim, 2019), and there is a need to shift towards ML algorithms that are interpretable by design.

An interesting framework for making interpretable predictions was recently introduced by Chattopadhyay et al. (2022). The authors propose the concept of an interpretable query set Q, a set of user-defined and task-specific functions q : X → A, which map a data point in X to an answer in A, each having a clear interpretation to the end-user. For instance, a plausible query set for identifying bird species might involve querying beak shape, head colour, and other visual attributes of birds. Given a query set, their method sequentially asks queries about X until the answers obtained are sufficient for predicting the label/hypothesis Y with high confidence. Notably, as the final prediction is solely a function of this sequence of query-answer pairs, these pairs provide a complete explanation for the prediction. Figure 1 illustrates the framework on a bird classification task. To obtain short explanations (short query-answer chains), the authors propose to use a greedy procedure called Information Pursuit (IP), first introduced in Geman & Jedynak (1996). Given any input x^obs, IP sequentially chooses the query which has the largest mutual information with the label/hypothesis Y given the history of query-answers obtained so far. To compute this mutual information criterion, a generative model is first trained to learn the joint distribution between all query-answers q(X) and Y; in particular, Variational Autoencoders (VAEs) (Kingma & Welling, 2013) are employed. This learnt VAE is then used to estimate the mutual information terms via Markov Chain Monte Carlo (MCMC) sampling. Unfortunately, the computational costs of MCMC sampling, coupled with the challenges of learning accurate generative models that enable fast inference, limit the application of this framework to simple tasks. As an example, classifying MNIST digits using 3 × 3 overlapping patches as queries (footnote 1) with this approach would take weeks!

In this paper, we question the need to learn a full generative model between all query-answers q(X) and Y, given that at each iteration IP is only interested in finding the most informative query given the history. More specifically, we present a variational characterization of IP based on the observation that, given any history, the query q*, whose answer minimizes the KL divergence between the label distribution P(Y | X) and the posterior P(Y | q*(X), history), is exactly the most informative query as required by IP. As a result, we propose to minimize this KL-divergence term in expectation (over randomized histories) by optimizing over querier functions (functions that pick a query from Q given a history), parameterized by deep networks. The optimal querier then learns to directly pick the most informative query given any history, thus bypassing the need to explicitly compute mutual information using generative models.
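For concreteness, the greedy selection rule that IP implements (and that V-IP will later learn to imitate directly) can be sketched in a few lines of Python. This is a minimal illustration, not the released implementation: `queries` stands for the user-defined set Q (each element a callable mapping an input to an answer), and `estimate_mi(q, history)` is a hypothetical stand-in for whatever estimator of I(q(X); Y | history) is available, e.g. the VAE-plus-MCMC estimate used by the generative approach.

```python
# A minimal sketch of the greedy IP loop; names are hypothetical, not the
# paper's implementation.

def information_pursuit(x_obs, queries, estimate_mi, eps=1e-3, max_steps=100):
    history = []  # query-answer pairs {q_i, q_i(x_obs)} observed so far
    for _ in range(max_steps):
        asked = [q for q, _ in history]
        candidates = [q for q in queries if q not in asked]
        if not candidates:
            break
        # Estimate I(q(X); Y | history) for every remaining query.
        gains = [estimate_mi(q, history) for q in candidates]
        best = max(range(len(candidates)), key=lambda i: gains[i])
        if gains[best] < eps:  # all remaining queries are nearly uninformative
            break
        q_next = candidates[best]
        history.append((q_next, q_next(x_obs)))  # ask q_next, record the answer
    return history
```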
Through extensive experiments, we show that the proposed method is not only faster (since MCMC sampling methods are no longer needed for inference), but also achieves competitive performance when compared with the generative modeling approach, and also outperforms other state-of-the-art sequential-decision-making methods.

Paper Contributions. (1) We present a variational characterization of IP, termed Variational-IP or V-IP, and show that the solution to the V-IP objective is exactly the IP strategy. (2) We present a practical algorithm for optimizing this objective using deep networks. (3) Empirically, we show that V-IP achieves competitive performance with the generative modelling approach on various computer vision and NLP tasks with a much faster inference time. (4) Finally, we also compare our approach to Reinforcement Learning (RL) approaches used in sequential-decision-making areas like Hard Attention (Mnih et al., 2014) and Symptom Checking (Peng et al., 2018), where the objective is to learn a policy which adaptively chooses a fixed number of queries, one at a time, such that an accurate prediction can be made. In all experiments, V-IP is superior to the RL methods.

[Footnote 1: Each patch query asks about the pixel intensities observed in that patch for x^obs.]

2 RELATED WORK

Interpretability in Machine Learning. These works can be broadly classified into two main categories: (i) post hoc interpretability, and (ii) algorithms that are interpretable by design. A large number of papers in this area are devoted to post hoc interpretability. However, as stated in the Introduction, the reliability of these methods has recently been called into question (Adebayo et al., 2018; Yang & Kim, 2019; Kindermans et al., 2019; Shah et al., 2021; Slack et al., 2020; Rudin, 2019; Koh et al., 2020; Subramanya et al., 2019). Consequently, recent works have focused on developing ML algorithms that are interpretable by design. Several of these works aim at learning deep networks via regularization such that they can be approximated by a decision tree (Wu et al., 2021) or locally by a linear network (Bohle et al., 2021; Alvarez Melis & Jaakkola, 2018). However, the framework of Chattopadhyay et al. (2022) produces predictions that are completely explained by interpretable query chains and is not merely an approximation to an interpretable model like a decision tree. Another line of work tries to learn latent semantic concepts or prototypes from data and subsequently base the final prediction on these learnt concepts (Sarkar et al., 2022; Nauta et al., 2021; Donnelly et al., 2022; Li et al., 2018; Yeh et al., 2020). However, there is no guarantee that these learnt concepts are interpretable to the user or align with the user's requirements. In sharp contrast, allowing the user to define an interpretable query set in Chattopadhyay et al. (2022) guarantees by construction that the resulting query-chain explanations are interpretable and useful.

Sequential Decision-Making. An alternative approach to learning short query chains is to use methods for sequential decision learning. These algorithms can be used for making interpretable decisions by sequentially deciding "what to query next?" in order to predict Y as quickly as possible. Mnih et al. (2014) introduced a reinforcement-learning (RL) algorithm to sequentially observe an image through glimpses (small patches) and predict the label, and called their approach Hard Attention.
Rangrej & Clark (2021) introduced a probabilistic model for Hard Attention which is similar to the IP algorithm. More specifically, they propose to learn a partial-VAE (Ma et al., 2018) to directly learn the distribution of images given partially observed pixels. This VAE is then used to select glimpses in order of information gain, as in IP. In another work, Peng et al. (2018) introduced an RL-based framework to sequentially query patient symptoms for fast diagnosis. In §4, we compare V-IP with prior works in this area and show that in almost all cases our method requires a smaller number of queries to achieve the same level of accuracy. We conjecture that the superiority of V-IP over RL-based methods is because the V-IP optimization is not plagued by sparse rewards over long trajectories, for example, a positive reward for a correct prediction after a large number of symptom queries as in Peng et al. (2018). Instead, Deep V-IP can be abstractly thought of as, given a history $s$, choosing a query $q$ (the action) and receiving $D_{KL}(P(Y \mid x) \,\|\, P(Y \mid q(x), s))$ as an immediate reward. A more rigorous comparison of the two approaches would be interesting future work.

3 METHODS

3.1 G-IP: INFORMATION PURSUIT VIA GENERATIVE MODELS AND MCMC

Let $X : \Omega \to \mathcal{X}$ and $Y : \Omega \to \mathcal{Y}$ be random variables representing the input data and the corresponding labels/output. We use capital letters for random variables and small letters for their realizations. $\Omega$ is the underlying sample space on which all random variables are defined. Let $P(Y \mid X)$ denote the ground-truth conditional distribution of $Y$ given data $X$. Let $Q$ be a set of task-specific, user-defined, interpretable functions of data, $q : \mathcal{X} \to \mathcal{A}$, where $q(x) \in \mathcal{A}$ is the answer to query $q \in Q$ evaluated at $x \in \mathcal{X}$. We assume that $Q$ is sufficient for solving the task, i.e., we assume that

$$\forall (x, y) \in \mathcal{X} \times \mathcal{Y}: \quad P(y \mid x) = P\big(y \mid \{x' \in \mathcal{X} : q(x') = q(x)\ \forall q \in Q\}\big). \tag{1}$$

In other words, $Q(X) := \{q(X) : q \in Q\}$ is a sufficient statistic for $Y$. The Information Pursuit (IP) algorithm (Geman & Jedynak, 1996) proceeds as follows: given a data point $x^{obs}$, a sequence of most informative queries is selected as

$$q_1 = \mathrm{IP}(\emptyset) = \arg\max_{q \in Q} I(q(X); Y); \qquad q_{k+1} = \mathrm{IP}\big(\{q_i, q_i(x^{obs})\}_{1:k}\big) = \arg\max_{q \in Q} I\big(q(X); Y \mid q_{1:k}(x^{obs})\big). \tag{2}$$

Here $q_{k+1} \in Q$ refers to the new query selected by IP at step $k+1$, based on the history (denoted $q_{1:k}(x^{obs})$; see footnote 2), and $q_{k+1}(x^{obs})$ indicates the corresponding answer. The algorithm terminates after $L$ queries, where $L$ depends on the data point $x^{obs}$, if all remaining queries are nearly uninformative, that is, $\forall q \in Q:\ I(q(X); Y \mid q_{1:L}) \approx 0$. The symbol $I$ denotes mutual information. Evidently, equation 2 requires estimating the query with maximum mutual information with $Y$ given the history. One approach to carrying out IP is to first learn the distribution $P(Q(X), Y)$ from data using generative models and then use MCMC sampling to estimate the mutual information terms (see footnote 3). However, learning generative models for distributions with high-dimensional support is challenging, and performing multiple iterations of MCMC sampling can likewise be computationally demanding. To address this challenge, in the next subsection we propose a variational characterization of IP that completely bypasses the need to learn and sample from complex generative models.

3.2 V-IP: A VARIATIONAL CHARACTERIZATION OF INFORMATION PURSUIT

We begin this section by describing our variational characterization of IP.
The proposed approach is motivated by the fact that generative models are only a means to an end; what we need is the function, which we call the querier, that maps the observed history $\{q_i, q_i(x^{obs})\}_{1:k}$ to the most informative next query $q_{k+1} \in Q$. It turns out that this most informative query is exactly the query $q^*$ whose answer minimizes the KL divergence between the conditional label distribution $P(Y \mid X)$ and the posterior $P(Y \mid q^*(X), \{q_i(x^{obs})\}_{1:k})$. Based on this insight, is it possible to define an optimization problem to directly learn this querier function? This requires a few ingredients:

- First, we need to learn a querier that, given any possible history one might encounter during IP, chooses the next most informative query. One possible strategy for this is to minimize the KL-divergence objective in expectation over random histories of query-answer pairs.
- The posterior $P(Y \mid q^*(X), \{q_i(x^{obs})\}_{1:k})$ depends on the data distribution and is typically unknown. Thus, we need to estimate it using probabilistic classifiers. A possible solution is to jointly optimize this expected KL divergence over both querier and classifier functions.

This leads to the following variational characterization of IP, which allows us to avoid generative models. Let $K(x)$ be the set of all finite-length sequences of query-answer pairs of the form $(\{q_1, q_1(x)\}, \dots, \{q_m, q_m(x)\})$, generated using queries from $Q$ evaluated on any $x \in \mathcal{X}$. We then define $\bar{K} := \cup_{x \in \mathcal{X}} K(x)$ and call the elements of $\bar{K}$ "histories". We define a classifier $f : \bar{K} \to \mathcal{P}_Y$ as a function which maps arbitrary query-answer sequences to a distribution over $Y$, and a querier $g : \bar{K} \to Q$ as a function which maps arbitrary query-answer sequences to a query $q \in Q$. The variational objective for IP is given by the following functional optimization problem,

$$\min_{f, g}\ \mathbb{E}_{X, S}\Big[ D_{KL}\big( P(Y \mid X) \,\|\, \hat{P}(Y \mid q(X), S) \big) \Big], \quad \text{where } q := g(S) \in Q, \quad \hat{P}(Y \mid q(X), S) := f(\{q, q(X)\} \cup S), \tag{V-IP}$$

and the minimum is taken over all possible mappings $f$ (classifier) and $g$ (querier). Here, $S$ is a random set of query-answer pairs taking values in $\bar{K}$ (see footnote 4). Given $S = s$ and $X = x^{obs}$, the querier $g$ chooses a query $q \in Q$, evaluates it on $x^{obs}$ and passes the pair $\{q, q(x^{obs})\}$ to the classifier. The classifier $f$ then makes a prediction based on $s$ appended with this additional pair $\{q, q(x^{obs})\}$.

[Footnote 2: Conditioning on $q_{1:k}(x^{obs})$ is to be understood as conditioning on the event $\{x' \in \mathcal{X} \mid \{q_i, q_i(x^{obs})\}_{1:k} = \{q_i, q_i(x')\}_{1:k}\}$.]
[Footnote 3: Mutual information requires computing an expectation over density ratios, which is still intractable despite having learnt a generative model.]
[Footnote 4: Throughout this paper, whenever we condition on $S = s$ we mean conditioning on the event of all data $x' \in \mathcal{X}$ which share the same answers to the queries in $s$.]

Let $(f^*, g^*)$ be an optimal solution to V-IP. The querier $g^*$ will be the IP strategy, with the requirement that the distribution of $S$ in V-IP, denoted $P_S$, is chosen such that the histories observed while carrying out IP have positive probability mass under $P_S$. Thus, given a data point $x^{obs}$,

$$q_1 = g^*(\emptyset) = \arg\max_{q \in Q} I(q(X); Y); \qquad q_{k+1} = g^*\big(\{q_i, q_i(x^{obs})\}_{1:k}\big) = \arg\max_{q \in Q} I\big(q(X); Y \mid q_{1:k}(x^{obs})\big). \tag{3}$$

The above sequential procedure is illustrated in Figure 2. As before, $\{q_i, q_i(x^{obs})\}_{1:k}$ is referred to as the history observed after $k$ queries and is a realization of $S$. This is formalized in the following proposition, whose proof can be found in Appendix A.

Proposition 1. Let $(f^*, g^*)$ be an optimal solution to V-IP.
For any realization $S = s$ such that $P(S = s) > 0$, define the optimization problem:

$$\max_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q}\ I(q(X); Y \mid s) - \mathbb{E}_{X \mid s}\Big[ D_{KL}\big( P(Y \mid q(X), s) \,\|\, \tilde{P}(Y \mid q(X), s) \big) \Big]. \tag{4}$$

Then there exists an optimal solution $(\tilde{P}^*_s, q^*_s)$ to the above objective such that $q^*_s = g^*(s)$ and $\tilde{P}^*_s = f^*(\{q^*_s, q^*_s(X)\} \cup s)$.

Thus, at the optimum, the KL-divergence term in equation 4 is 0 and $g^*$ picks the most informative query for any given subset of query-answer pairs $S = s$, as presented in equation 3. Theoretical guarantees aside, solving the optimization problem defined in V-IP is challenging, since functional optimization over all possible classifier and querier mappings is intractable. In the following subsection, we present a practical algorithm for approximately solving this V-IP objective.

3.3 V-IP WITH DEEP NETWORKS

Instead of optimizing $f$ and $g$ over the intractable function space, we parameterize them using deep networks with weights $\theta$ and $\eta$ respectively. Our practical version of V-IP, termed Deep V-IP, is as follows,

$$\min_{\theta, \eta}\ \mathbb{E}_{X, S}\Big[ D_{KL}\big( P(Y \mid X) \,\|\, P_\theta(Y \mid q_\eta(X), S) \big) \Big], \quad \text{where } q_\eta := g_\eta(S), \quad P_\theta(Y \mid q_\eta(X), S) := f_\theta(\{q_\eta, q_\eta(X)\} \cup S). \tag{Deep V-IP}$$

Note that all we have done is replace the arbitrary functions $f$ and $g$ in V-IP by deep networks parameterized by $\theta$ and $\eta$. To find good solutions there are two key constraints. First, the architectures for the classifier $f_\theta$ and the querier $g_\eta$ need to be expressive enough to learn over an exponential (in $|Q|$) number of possible realizations of $S$. Second, we need to choose a sampling distribution $P_S$ for $S$. Notice that for any reasonably sized $Q$, there is an exponentially large number of possible realizations of $S$. Ideally, we would like to choose a $P_S$ that assigns positive mass only to histories observed during the exact IP procedure; however, this is a "chicken-and-egg" dilemma. We now briefly discuss the architectures used for optimizing the Deep V-IP objective, followed by an exposition on the sampling distribution for $S$.

Architectures. The architectures for both the querier and classifier networks (described in more detail in Appendix C) are chosen such that they can operate on query-answer sequences of arbitrary length. There can be several choices for this. In this paper, we primarily use masking, where the deep networks operate on fixed-size inputs (the answers to all queries $q \in Q$ evaluated on input $x$) with the unobserved query-answers masked out. We also experiment with the set-based deep architectures proposed in Ma et al. (2018). We show by ablation studies in Appendix E that the masking-based architecture performs better. In practice, $q_\eta = \mathrm{argmax}(g_\eta(S))$, where $g_\eta(S) \in \mathbb{R}^{|Q|}$ is the output of the querier network, which assigns a score to every query in $Q$, and argmax computes a one-hot indicator of the max element index of its input vector. To ensure differentiability through argmax we use the straight-through softmax estimator (Paulus et al., 2020), which is described in detail in Appendix D. Finally, $P_\theta(Y \mid q_\eta(X), S)$ is the output of a softmax applied to the last layer of $f_\theta$.

Sampling of S. Choosing a sampling distribution that only has positive mass on histories observed during exact IP is a "chicken-and-egg" dilemma. A simple alternative is to consider a distribution that assigns positive mass to all possible sequences of query-answer pairs from $\bar{K}$.
This, however, would lead to slow convergence, since the deep networks now have to learn to choose the most informative query given a large number of query-answer-pair subsets that would never be observed for any $x^{obs} \in \mathcal{X}$ if one could do exact IP. To remedy this, we choose to adaptively bias our sampling distribution towards realizations of $S$ one would observe if one carried out equation 3 using the current estimate of the querier in place of $g^*$. More concretely, we optimize Deep V-IP by sequentially biasing the sampling distribution as follows:

1. Initial Random Sampling: We choose an initial distribution $P_S^0$ which ensures all elements of $\bar{K}$ have positive mass. We first sample $X \sim P_{Data}$. Then we sample $k \sim \mathrm{Uniform}\{0, 1, \dots, |Q|\}$ as the number of queries. Subsequently, $k$ queries from $Q$ are selected for $X$ uniformly at random.
2. Subsequent Biased Sampling: The distribution $P_S^{j+1}$ is obtained by using the querier $g_{\eta_j}$ that solves V-IP with $P_S^j$ as the sampling distribution. In particular, we first sample $X \sim P_{Data}$ and $k \sim \mathrm{Uniform}\{0, 1, \dots, |Q|\}$ as before. Subsequently, we find the first $k$ query-answer pairs for this sampled $X$ using equation 3 with $g_{\eta_j}$ as our querier.

Notice that the empty set $\emptyset$, corresponding to the empty history, has positive probability under any $P_S^j$, and hence the querier would eventually learn to pick the most informative first query. Subsequent sequential optimization would then aim at choosing the most informative second query, and so on, assuming our architectures are expressive enough. In practice, we optimize with random sampling of $S$ using stochastic gradients for numerous epochs. We then take the solution $g_{\eta_0}$ and fine-tune it with biased sampling strategies, each time optimizing using a single batch and consequently changing the sampling strategy according to the updated querier. Refer to Appendix E for ablation studies on the effectiveness of the biased sampling strategy for $S$.

Stopping Criterion. There are two possible choices: (i) Fixed budget: following prior work in sequential active testing (Ma et al., 2018; Rangrej & Clark, 2021), we stop asking queries after a fixed number of iterations. (ii) Variable query-lengths: different data points might need different numbers of queries to make confident predictions. For supervised learning tasks where $Y$ is "almost" a deterministic function of $X$, that is, $\max_Y P(Y \mid X) \approx 1$ for any given $x^{obs} \in \mathcal{X}$, we terminate after $L$ steps if $\max_Y P(Y \mid q_{1:L}(x^{obs})) \geq 1 - \epsilon$, where $\epsilon$ is a hyperparameter. This is termed the "MAP criterion". For tasks where $Y$ is more ambiguous and not a deterministic function of $X$, we choose to terminate once the posterior is "stable" for a pre-defined number of steps. This stability is measured by the difference between two consecutive posterior entropies: we stop when $H(Y \mid q_{1:k}(x^{obs})) - H(Y \mid q_{1:k+1}(x^{obs})) \leq \epsilon$. This criterion, termed the "stability criterion", is an unbiased estimate of the mutual-information-based stopping criterion used in Chattopadhyay et al. (2022).

Qualitative differences between Generative-IP and Variational-IP. We will refer to the generative approach to carrying out IP described in Chattopadhyay et al. (2022) as Generative-IP or G-IP. The difference between G-IP and V-IP is similar in spirit to that between generative and discriminative modelling in classification problems (Ng & Jordan, 2001).
We conjecture that, when the data distribution agrees with the modelling assumptions made by the generative model (for example, conditional independence of query answers given $Y$) and the dataset size is "small", G-IP would obtain better results than V-IP, since there are not enough data points for learning competitive querier and classifier networks. We thus expect the gains of V-IP to be most evident on datasets where learning a good generative model is difficult.

4 EXPERIMENTS

In this section, through extensive experiments, we evaluate the effectiveness of the proposed method. We describe the query set used for each dataset in Table 1, with more details in Appendix C. The choice of query sets for each dataset was made to make our approach comparable with prior work. We also complement the results presented here with more examples in the Appendix. Code is available at https://github.com/ryanchankh/VariationalInformationPursuit.

4.1 INTERPRETABLE PREDICTIONS USING V-IP

Basing predictions on an interpretable query set allows us to reason about the predictions in terms of the queries, which are compositions of elementary words, symbols or patterns. We will illustrate this by analyzing the query-answer chains uncovered by V-IP for different datasets.

Figure 3a illustrates the decision-making process of V-IP on an image of a dog from the CIFAR-10 dataset. A priori, the model's belief is almost uniform over all the classes (second row, first column). The first query probes a patch near the centre of the image and observes the snout. Visually it looks similar to the left face of a cat, justifying the shift in the model's belief to the label "cat", with some mass on the label "dog". Subsequent queries are aimed at distinguishing between these two possibilities. Finally, the model becomes more than 99% confident that it is a "dog" once it spots the left ear.

Figure 3b shows the query chain for a "genital herpes" diagnosis of a synthetic patient from the SymCAT-200 dataset. The y-axis shows the query asked at each iteration, with green indicating a "Yes" answer and red indicating a "No". Each row shows the model's current belief about the patient's disease. We begin with an initial symptom, "0: itching of skin", provided by the patient. The subsequent queries ask about different conditions of the skin. The diseases shown are the top-10 most probable out of 200; all of them have skin-related symptoms. After discovering that the patient has painful urination (query 11), V-IP zooms in on two possibilities, "Balanitis" and "Genital herpes". The subsequent queries rule out symptoms typically observed in patients with "Balanitis", resulting in an 80% confidence in the herpes diagnosis.

For our final example, we elucidate the results for a bird image from the CUB-200 dataset in Figure 3c. The colour scheme for the y-axis is the same as in Figure 3b, with the exception that, unlike the patient case, we do not bootstrap V-IP with an initial positive attribute of the bird. Instead, the first query about bill shape is the most informative query about the label $Y$ before any answer is observed. This is indicated with the grey "0: Init". All the top-10 most probable bird species in this context are seabirds and have very similar visual characteristics to the true species, "Laysan Albatross". After 14 queries, V-IP figures out the true class with more than 99% confidence.

Thus, in all three case studies, we see that V-IP makes transparent decisions, interpretable in terms of queries specified by the user. A minimal sketch of the inference loop that produces such query-answer chains is given below.
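The following is a hypothetical sketch of such an inference roll-out, assuming a trained querier $g_\eta$ and classifier $f_\theta$ that follow the masking convention of §3.3; `reveal` is a task-specific helper (not from the paper's codebase) that unmasks the answer of the selected query in the history, as detailed per-dataset in Appendix C.

```python
import torch

@torch.no_grad()
def vip_rollout(x_obs, querier, classifier, reveal, eps=0.05, max_queries=20):
    """Roll out trained V-IP networks on one input to produce a query-answer
    chain and the evolving posterior, stopping with the MAP criterion."""
    history = torch.zeros_like(x_obs)   # all answers start masked out
    chain, posteriors = [], []
    for _ in range(max_queries):
        # The querier scores every query in Q; the argmax is the next query.
        q_idx = querier(history.unsqueeze(0)).argmax(dim=-1).item()
        history = reveal(history, q_idx, x_obs)   # unmask the chosen answer
        posterior = classifier(history.unsqueeze(0)).softmax(dim=-1).squeeze(0)
        chain.append(q_idx)
        posteriors.append(posterior)
        if posterior.max() >= 1.0 - eps:          # MAP stopping criterion
            break
    return chain, posteriors
```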
A limitation of this framework, however, is finding a good query set that is interpretable and allows for highly accurate predictions with short explanations (short query-answer chains). We discuss this further in Appendix G.

4.2 QUANTITATIVE COMPARISON WITH PRIOR WORK

Baselines. We compare V-IP primarily to the generative modelling approach for IP, namely G-IP. We also compare to Reinforcement-Learning methods prevalent in other areas of sequential decision-making, like Hard Attention (Mnih et al., 2014; Rangrej & Clark, 2021) or Symptom Checking (Peng et al., 2018), which can be adapted for our purposes. In particular, we compare with the RAM (Mnih et al., 2014) and RAM+ (Li et al., 2017) algorithms. In both methods, a policy is learnt using deep networks to select queries based on previous query-answers for a fixed number of iterations, such that the expected cumulative reward is maximized. A classifier network is also trained simultaneously with the policy network to make accurate predictions. In RAM, this reward is just the negative cross-entropy loss between true and predicted labels at the last step. In RAM+, this reward is the cumulative sum of the negative cross-entropy loss at each step. We also compare our method with the "Random" strategy, where successive queries are chosen randomly, independently of the history observed so far. The predictions given a history are still made using the V-IP classifier.

We first show results on the simple datasets used in Chattopadhyay et al. (2022), comparing with our own implementations of G-IP, RAM and RAM+. V-IP is competitive with G-IP in terms of performance but far more efficient in terms of speed of inference. In all datasets, V-IP outperforms the RL-based methods. Subsequently, we present results on more complex tasks like RGB image classification. On these datasets, V-IP achieves a higher accuracy given a fixed budget of queries compared with prior work.

Comparisons on Simple Tasks. Concise explanations are always preferred due to their simplicity. In Figure 4 we plot the trade-off between accuracy and explanation length obtained by various methods. V-IP is competitive with G-IP and requires far shorter explanations than the RL-based methods to obtain the same test accuracy. This trade-off is quantified using the Area Under the Curve (AUC) metric in Table 2. Notice that on the HuffingtonNews dataset the RL methods struggle to perform better than even Random. This is potentially due to the fact that in these RL methods the classifier is trained jointly with the policy, which likely affects its performance when the action space is large ($|Q| = 1000$). On the other hand, the Random strategy learns its classifier by training on random sequences of query-answer pairs. This agrees with findings in Rangrej & Clark (2021). While V-IP performs competitively with the generative approach, the biggest gain is in terms of the computational cost of inference. Once trained, inference in V-IP (that is, computing the most informative query) is akin to a forward pass through a deep network and is potentially O(1) (see footnote 5). On the other hand, the per-iteration cost in G-IP is O(N + |Q|m), where N is the number of MCMC iterations employed and m is the cardinality of the space q(X) × Y. As an example, on the same GPU server, G-IP takes about 47 seconds per iteration on MNIST whereas V-IP requires just 0.11s, an approximately 400× speedup! Note that the inference cost is the same for V-IP and the RL methods, since all of them train a querier/policy function to choose the next query.
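As a side note on the AUC metric used in Table 2 to quantify the accuracy-vs-explanation-length trade-off: one plausible way to compute it is as the normalized area under the test-accuracy vs. number-of-queries curve. The exact normalization used in the paper may differ, so the sketch below should be read as an assumption.

```python
import numpy as np

def accuracy_auc(num_queries, accuracies):
    """Normalized area under the test-accuracy vs. number-of-queries curve.

    `num_queries` and `accuracies` are matching 1-D arrays, e.g. test accuracy
    measured after 1, 2, ..., k queries. Values near 1 mean high accuracy is
    reached with few queries."""
    x = np.asarray(num_queries, dtype=float)
    y = np.asarray(accuracies, dtype=float)
    # Trapezoid rule, then normalize by the query range covered.
    area = np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1]))
    return area / (x[-1] - x[0])

# Example: accuracy measured after each of the first 5 queries.
print(accuracy_auc([1, 2, 3, 4, 5], [0.30, 0.55, 0.70, 0.80, 0.85]))
```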
Comparisons on Complex Tasks. We now move on to more complex datasets where the gains of V-IP are more evident. First, we consider the task of natural image classification and show results on the CIFAR-{10, 100} datasets. For G-IP on these datasets, we refer to the Probabilistic Hard-Attention model introduced in Rangrej & Clark (2021), which proposes to learn a partial-VAE model (Ma et al., 2018) for images and then do inference with this model to compute the most informative query. Figures 5a & b show the accuracy vs. number-of-queries curves for different methods. V-IP clearly outperforms all baselines on both datasets. Next, we consider the task of medical diagnosis by querying symptoms of the patient. We show results on the popular SymCAT dataset along with comparisons with prior work in Figure 5c. The plot clearly shows that V-IP achieves a much higher accuracy given a fixed budget of queries. For G-IP on this task, we refer to the BSODA framework introduced in He et al. (2022), which is again based on partial-VAEs. REFUEL (Peng et al., 2018) is a state-of-the-art RL-based method for this task, akin to the RAM technique used in the Hard-Attention literature. The classification accuracies of these different methods on all medical datasets are summarized in Table 3. Numbers for baselines are taken from Nesterov et al. (2022), since we used their released versions of these datasets (see footnote 6). As conjectured in §3.3, MuZhi and Dxy are small-scale datasets with about 500 training samples; thus approaches based on generative models, like BSODA, are able to perform slightly better than V-IP.

5 CONCLUSION

IP was recently used to construct interpretable predictions by composing interpretable queries from a user-defined query set. The framework, however, required generative models, which limited its application to simple tasks. Here, we have introduced a variational characterization of IP which does away with generative models and directly optimizes a KL-divergence-based objective to find the most informative query, as required by IP, in each iteration. Through qualitative and quantitative experiments we show the effectiveness of the proposed method.

[Footnote 5: For simplicity, we consider unit cost for any operation that is computed in a batch concurrently on a GPU.]
[Footnote 6: https://github.com/SympCheck/NeuralSymptomChecker]

ACKNOWLEDGMENTS

This research was supported by the Army Research Office under the Multidisciplinary University Research Initiative contract W911NF-17-1-0304, the NSF grant 2031985 and by the Simons Foundation Mathematical and Scientific Foundations of Deep Learning (MoDL) grant 135615. Moreover, the authors acknowledge support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE2139757. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

A PROOF OF PROPOSITION 1

Before proceeding to the proof, we will prove the following lemma.

Lemma 1. Let $Q$ be a user-defined query set and $\mathcal{P}_Y$ be the set of all possible distributions on $Y$. Then, for any realization $S = s$, the following holds:

$$\min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X|s}\big[ D_{KL}\big(P(Y \mid X) \,\|\, \tilde{P}(Y \mid q(X), s)\big) \big] \;\equiv\; \max_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \Big[ I(q(X); Y \mid s) - \mathbb{E}_{X|s}\big[ D_{KL}\big(P(Y \mid q(X), s) \,\|\, \tilde{P}(Y \mid q(X), s)\big) \big] \Big]. \tag{5}$$

Proof. Using information-theoretic properties of the KL divergence, we have the following chain of equalities.
$$\begin{aligned}
\min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X) \,\|\, \tilde{P}(Y \mid q(X), s)) \big]
&= \min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X, q(X), s) \,\|\, \tilde{P}(Y \mid q(X), s)) \big] \\
&= \min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X|s}\Big[ \sum_Y P(Y \mid X, q(X), s) \log \frac{P(Y \mid X, q(X), s)}{\tilde{P}(Y \mid q(X), s)} \Big] \\
&= \min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X|s}\Big[ \sum_Y P(Y \mid X, q(X), s) \log \frac{P(X, Y \mid q(X), s)}{\tilde{P}(Y \mid q(X), s)\, P(X \mid q(X), s)} \Big] \\
&= \min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X,Y|s}\Big[ \log \frac{P(X, Y \mid q(X), s)}{P(Y \mid q(X), s)\, P(X \mid q(X), s)} \Big] + \mathbb{E}_{X,Y|s}\Big[ \log \frac{P(Y \mid q(X), s)}{\tilde{P}(Y \mid q(X), s)} \Big] \\
&= \min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} I(X; Y \mid q(X), s) + \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid q(X), s) \,\|\, \tilde{P}(Y \mid q(X), s)) \big]
\end{aligned} \tag{6}$$

In the first equality, assuming $P(X = x, S = s) > 0$ (see footnote 7), we used the fact that, given any $X = x$, the label $Y$ is independent of any query answer $q(X) = q(x)$ and the event $\{S = s\}$. Thus, $P(Y \mid X = x) = P(Y \mid X = x, q(X) = q(x), S = s)$. In the fourth equality we multiplied the term inside the log by the identity $\frac{P(Y \mid q(X), s)}{P(Y \mid q(X), s)}$, where $P(Y \mid q(X), s)$ represents the true posterior of $Y$ given the query answer $q(X)$ and $S = s$. Now observe that, for any fixed $S = s$ and any $q \in Q$,

$$I(X, q(X); Y \mid s) = I(X; Y \mid s) + I(q(X); Y \mid X, s) = I(X; Y \mid s). \tag{7}$$

The second equality is obtained by using the fact that $q(X)$ is a function of $X$. Decomposing $I(X, q(X); Y \mid s)$ another way,

$$I(X, q(X); Y \mid s) = I(q(X); Y \mid s) + I(X; Y \mid q(X), s). \tag{8}$$

From equation 7 and equation 8 we conclude that

$$\min_{q \in Q} I(Y; X \mid q(X), s) \;\equiv\; \min_{q \in Q} -I(q(X); Y \mid s).$$

Substituting the right-hand side of this equivalence into equation 6, we obtain the desired result.

[Footnote 7: For any $x' \in \mathcal{X}$, if $P(X = x', S = s) = 0$, then $x'$ would not contribute to the expectation in the first equation, so we need not consider this case.]

Proof of Proposition 1. Restating the objective from equation V-IP,

$$\min_{f, g}\ \mathbb{E}_{X, S}\Big[ D_{KL}\big( P(Y \mid X) \,\|\, \hat{P}(Y \mid q(X), S) \big) \Big], \quad \text{where } q := g(S) \in Q, \quad \hat{P}(Y \mid q(X), S) := f(\{q, q(X)\} \cup S).$$

Now, for any realization $S = s$ such that $P(S = s) > 0$, we have

$$\begin{aligned}
\min_{\tilde{P} \in \mathcal{P}_Y,\, q \in Q} \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X) \,\|\, \tilde{P}(Y \mid q(X), s)) \big]
&= \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X) \,\|\, \tilde{P}^*_s(Y \mid q^*_s(X), s)) \big] \\
&= \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X) \,\|\, \hat{P}(Y \mid \tilde{q}(X), s)) \big] + \mathbb{E}_{X|s}\Big[ \sum_Y P(Y \mid X) \log \frac{\hat{P}(Y \mid \tilde{q}(X), s)}{\tilde{P}^*_s(Y \mid q^*_s(X), s)} \Big] \\
&= \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X) \,\|\, \hat{P}(Y \mid \tilde{q}(X), s)) \big] - \mathbb{E}_{X|s}\Big[ \sum_Y P(Y \mid X) \log \frac{\tilde{P}^*_s(Y \mid q^*_s(X), s)}{\hat{P}(Y \mid \tilde{q}(X), s)} \Big] \\
&= \mathbb{E}_{X|s}\Big[ D_{KL}\big(P(Y \mid X) \,\|\, \hat{P}(Y \mid \tilde{q}(X), s)\big) - D_{KL}\big(\tilde{P}^*_s(Y \mid q^*_s(X), s) \,\|\, \hat{P}(Y \mid \tilde{q}(X), s)\big) \Big] \\
&\leq \mathbb{E}_{X|s}\big[ D_{KL}(P(Y \mid X) \,\|\, \hat{P}(Y \mid \tilde{q}(X), s)) \big]
\end{aligned} \tag{9}$$

In the first equality we used the definition of $(\tilde{P}^*_s, q^*_s)$ as the solution to the minimization problem on the left-hand side. In the second equality, $\tilde{q} = g(s)$ for any querier $g$ and $\hat{P}(Y \mid \tilde{q}(X), s) = f(\{\tilde{q}, \tilde{q}(X)\} \cup s)$ for any classifier $f$. In the fourth equality we appealed to Lemma 1 to conclude that $\tilde{P}^*_s(Y \mid q^*_s(X), s) = P(Y \mid q^*_s(X), s)$, the true posterior over $Y$ given answer $q^*_s(X)$ and history $s$. In the final step we used the non-negativity of the KL divergence to discard the second term. Since the inequality in equation 9 holds for all $S = s$ and all mappings $f$ and $g$, we conclude that $q^*_s = g^*(s)$ and $\tilde{P}^*_s = f^*(\{q^*_s, q^*_s(X)\} \cup s)$ for any given $S = s$. Equation 4 in the proposition is then proved by using Lemma 1 to characterize $q^*_s$ and $\tilde{P}^*_s$.

B TRAINING PROCEDURE

Consider a mini-batch of $N$ samples $\{(x_i, y_i)\}_{i=1}^N$ from a training set. In the Deep V-IP objective, the KL divergence is equivalent to the cross-entropy loss up to an additive term independent of $\theta$ and $\eta$. The mini-batch estimate of this loss can be expressed as:

$$\min_{\theta, \eta}\ -\frac{1}{N} \sum_{i=1}^N y_i \log \hat{y}_i \quad \text{subject to} \quad q_\eta = \mathrm{argmax}(g_\eta(s_i)), \qquad \hat{y}_i = f_\theta(s_i \cup \{q_\eta, q_\eta(x_i)\}), \tag{10}$$

where $y_i$ is the ground-truth label corresponding to input $x_i$. A schematic PyTorch training step implementing equation 10 is sketched below.
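This sketch illustrates one stochastic-gradient step under the conventions of §3.3; `reveal_answers` is a hypothetical, task-specific helper (the per-dataset versions are described in Appendix C), and the straight-through softmax used here is the estimator from Appendix D.

```python
import torch
import torch.nn.functional as F

def vip_training_step(x, y, s, querier, classifier, optimizer,
                      reveal_answers, tau=1.0):
    """One stochastic-gradient step on objective (10).

    x, y: a mini-batch of inputs and integer labels; s: the sampled masked
    histories; `reveal_answers(s, q_onehot, x)` is a hypothetical helper that
    unmasks the answers of the selected queries (cf. Appendix C)."""
    scores = querier(s)                                  # shape (N, |Q|)
    # Straight-through softmax (Appendix D): hard one-hot in the forward pass,
    # gradients taken through the tempered softmax in the backward pass.
    soft = F.softmax(scores / tau, dim=-1)
    hard = F.one_hot(soft.argmax(dim=-1), scores.shape[-1]).to(scores.dtype)
    q_onehot = hard + soft - soft.detach()
    s_new = reveal_answers(s, q_onehot, x)               # append (q, q(x)) to s
    logits = classifier(s_new)
    # Cross-entropy equals the KL objective up to a constant in (theta, eta).
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```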
Here, $s_i$ is obtained by sampling from $P_S^j$ as defined in §3.3. We optimize the above objective using Stochastic Gradient Descent (or its variants). To optimize objective 10, for every sample $x_i$ in the batch, the sampled history $s_i$ is fed into the querier network $g_\eta$, which outputs a score for every query $q \in Q$. The argmax(.) operator (see Appendix D regarding its differentiability) converts these scores into a $|Q|$-dimensional one-hot vector, with the non-zero entry at the location of the max. We then append this argmax query $q_\eta$ and its answer, $(q_\eta, q_\eta(x_i))$, to $s_i$. The updated history $s_i \cup (q_\eta, q_\eta(x_i))$ is then fed into the classifier $f_\theta$ to obtain a softmax distribution over the labels, denoted $\hat{y}_i$.

C EXPERIMENT DETAILS

All of our experiments are implemented in Python using PyTorch (Paszke et al., 2019) version 1.12. Moreover, all training is done on one computing node with a 64-core 2.10GHz Intel(R) Xeon(R) Gold 6130 CPU, 8 NVIDIA GeForce RTX 2080 GPUs (each with 10GB memory) and 377GB of RAM.

General Optimization Scheme. The following optimization scheme is used in all experiments for both Initial Random Sampling and Subsequent Biased Sampling, unless stated otherwise. We minimize the Deep V-IP objective using Adam (Kingma & Ba, 2014) as our optimizer, with learning rate lr=1e-4, betas=(0.9, 0.999), weight_decay=0 and amsgrad=True (Reddi et al., 2019). We also use a Cosine Annealing learning rate scheduler (Loshchilov & Hutter, 2016) with T_max=50. We train our networks $f_\theta$ and $g_\eta$ for 500 epochs using batch size 128. In both sampling stages, we linearly anneal the temperature parameter $\tau$ in our straight-through softmax estimator from 1.0 to 0.2 over the 500 epochs.

Training Details for RAM and RAM+. To train the RL policy and classification networks we use the popular PPO algorithm (Schulman et al., 2017) with entropy regularization (regularization parameter 0.01). We used an initial learning rate of 3e-5 and clip value 0.2. For a fair comparison, the architectures for the policy (footnote 8) and classification networks are kept the same for RAM, RAM+ and V-IP.

C.1 SPECIES CLASSIFICATION ON CUB

Dataset and Query Set. Caltech-UCSD Birds-200-2011 (CUB-200) (Wah et al., 2011) is a dataset of 200 bird species, containing 11,788 images with 312 annotated binary features for different visual attributes of birds, such as the color or shape of the wing or the head of the bird. We construct the query set Q using these attributes. For example, given an image of a Blue Jay, a possible query might be "is the back colour blue?" and the answer "Yes". The query set construction and data pre-processing steps are the same as in Chattopadhyay et al. (2022). In the original dataset, the annotated attributes for each image are noisy, often containing imprecise descriptions of the bird. Hence, if a certain attribute is true/false for over 50% of the samples in a given category, that attribute is set to true/false for all samples in the same category. We also train a CNN, called the concept network, to answer each query using the training-set annotations. This concept network provides answers for training all the methods compared in §4, namely RAM, RAM+, G-IP and V-IP. Last but not least, our query set Q consists of 312 queries that ask whether each binary attribute is present (1) or absent (−1).

Architecture and Training. A diagram of the architectures is shown in Figure 6. Both $f_\theta$ and $g_\eta$ have the same fully-connected network architecture (except the last linear layer), but they do not share any parameters with each other.
We initialize each architecture randomly, and train using the optimization scheme mentioned at the beginning of this section. The input history is a $|Q|$-dimensional vector with unobserved answers masked out with zeros.

Updating the History. Let the history of query-answer pairs observed after $k$ steps be denoted $S_k$. $S_k$ is represented by a masked vector of dimension 312, and $q_{k+1} = \mathrm{argmax}(g_\eta(S_k))$ (footnote 9) is a one-hot vector of the same dimension, denoting the next query. For a given observation $x^{obs}$, we update $S_k$ using $q_{k+1}$ as follows:

- We obtain the query-answer by performing a point-wise multiplication, that is, $q_{k+1}(x^{obs}) = q_{k+1} \odot x^{obs}$.
- We update the history to $S_{k+1}$ by adding this query-answer $q_{k+1}(x^{obs})$ to $S_k$.

The entire process can be denoted by $S_{k+1} = S_k + q_{k+1} \odot x^{obs}$.

[Footnote 8: This term is from the RL community. In our context, the policy network is exactly the querier network.]
[Footnote 9: Recall $g_\eta$ is our querier function, parameterized by a deep network with weights $\eta$.]

C.2 TOPIC IDENTIFICATION ON THE HUFFINGTON POST NEWS CATEGORY DATASET

Dataset and Query Set. The Huffington Post News Category dataset (HuffingtonNews) is a natural language dataset containing news "headlines" and their short descriptions (continuations of the headline) extracted from Huffington Post news published between 2012 and 2018. We follow the same data-processing procedure as Chattopadhyay et al. (2022): each data point is an extended headline formed by concatenating the headline with its short description. Moreover, we remove redundant categories, including semantically ambiguous and HuffPost-specific ones such as "Impact" and "Worldpost". We also remove categories with a small number of articles, and merge semantically equivalent category names, such as "Arts & Culture" versus "Culture & Art". After processing, there is a total of 10 categories in the dataset. In addition, only the top 1,000 words are kept according to their tf-idf scores (Lavin, 2019), with semantically redundant words removed. For more details, please refer to Chattopadhyay et al. (2022). The query set Q contains binary questions asking whether one of the 1,000 words exists in the headline. The query answer is 1 if the word in question is present and −1 if absent.

Architecture and Training. A diagram of the architectures is shown in Figure 7. Both the classifier $f_\theta$ and the querier $g_\eta$ share the same architecture except the last layer; however, they do not share any parameters. The inputs to $f_\theta$ and $g_\eta$ are masked vectors, with masked values set to 0. To optimize the Deep V-IP objective, we randomly initialize $f_\theta$ and $g_\eta$, and train using the Adam optimizer and a Cosine Annealing learning rate scheduler with the settings mentioned at the beginning of this section. During Subsequent Biased Sampling, we train for 100 epochs using Stochastic Gradient Descent (SGD) instead of Adam, with learning rate lr=1e-4 and momentum=0.9, and a Cosine Annealing learning rate scheduler with T_max=100. The input history is a $|Q|$-dimensional vector with unobserved answers masked out with zeros.

Updating the History. The method for updating the history is equivalent to that for CUB, as described in §C.1. The history is now a masked vector of dimension 1000, since there are 1000 queries in our query set for this dataset.

C.3 IMAGE CLASSIFICATION ON MNIST, KMNIST AND FASHION-MNIST

Each setting mentioned in the following is the same for MNIST, KMNIST and Fashion-MNIST unless stated otherwise.

Dataset and Query Set.
MNIST (LeCun et al., 1998), KMNIST (Clanuwat et al., 2018) and Fashion-MNIST (Xiao et al., 2017b) are gray-scale image datasets, each containing 60,000 training images and 10,000 testing images of size 28 × 28. We follow Chattopadhyay et al. (2022) for the data pre-processing procedure and the design of our query sets for all three datasets. Each gray-scale image is converted into a binary image. For MNIST and KMNIST, we round values greater than or equal to 0.5 up to 1 and values below 0.5 down to −1. For Fashion-MNIST, we round values greater than or equal to 0.1 up to 1 and values below 0.1 down to −1. Each query set Q contains all overlapping 3 × 3 patches over the 28 × 28 pixel space, resulting in 676 queries. Each query answer indicates the 9 pixel intensities at the queried patch. The inputs to the classifier $f_\theta$ and the querier $g_\eta$ are masked images, with pixels zeroed out if they are not part of the current history.

Updating the History. The method for updating the history is equivalent to that for CUB, as described in §C.1, with some differences, described next. For a given observation $x^{obs}$, we update $S_k$ using $q_{k+1}$ as follows:

- We reshape $q_{k+1}$ from a vector of dimension 676 (the number of queries) to a 2D grid of dimension 26 × 26, denoted $\hat{q}_{k+1}$.
- $\hat{q}_{k+1}$ is then converted to a binary matrix with 1s at the locations corresponding to the queried 3 × 3 patch and 0s everywhere else, via a convolution with a kernel of all 1s of size 3 × 3, stride 1 and padding 2.
- We then obtain the query-answer by taking the Hadamard product of the convolved output (from the previous step) with $x^{obs}$. This results in a masked image, $\hat{q}_{k+1}(x^{obs})$, with the queried patch revealed and 0 everywhere else.
- Finally, we update the history to $S_{k+1}$ by adding this query-answer to $S_k$. To account for pixels observed (unmasked) in $\hat{q}_{k+1}(x^{obs})$ that overlap with the history $S_k$, we clip the values to lie between −1 and 1.

The entire process can be summarized as

$$S_{k+1} = \mathrm{Clip}\big( S_k + \mathrm{Conv2D}(\hat{q}_{k+1}) \odot x^{obs},\ \mathrm{minval} = -1,\ \mathrm{maxval} = 1 \big).$$

Architecture and Training. Refer to Figure 8 for a diagram of the architectures of the classifier $f_\theta$ and the querier $g_\eta$. Every Conv is a 2D convolution with a 3 × 3 kernel, stride 1 and padding 1. Moreover, every MaxPool is a 2D max-pooling operator with a 2 × 2 kernel. We initialize each architecture randomly, and train our networks using Adam as our optimizer and a Cosine Annealing learning rate scheduler, with the same settings mentioned at the beginning of this section.

C.4 MEDICAL DIAGNOSIS ON SYMCAT, MUZHI AND DXY

Dataset and Query Set. MuZhi (Wei et al., 2018) and Dxy (Xu et al., 2019) are two real-world medical datasets containing symptoms and diseases extracted from Chinese healthcare websites (https://muzhi.baidu.com/ and https://dxy.com/), where doctors provide online help based on patients' self-reported symptoms and online conversations. We follow the same data-processing procedure as He et al. (2022). MuZhi has 66 symptoms (features) and 4 diseases (classes): children's bronchitis, children's functional dyspepsia, infantile diarrhea infection, and upper respiratory infection. Dxy contains 41 symptoms for 5 diseases: allergic rhinitis, upper respiratory infection, pneumonia, children's hand-foot-mouth disease, and pediatric diarrhea.
Last but not least, our query set Q consists of queries that ask about the presence of each symptom in the patient; the query answer is 1 for Yes, 0 for No and −1 for Can't Say.

SymCAT is a synthetic medical dataset generated from a symptom-checking website called SymCAT, introduced by Peng et al. (2018). The dataset has three versions: SymCAT-200 contains 328 symptoms and 200 diseases, SymCAT-300 contains 349 symptoms and 300 diseases, and SymCAT-400 contains 355 symptoms and 400 diseases. We used the publicly available version of this dataset provided by Nesterov et al. (2022) at https://github.com/SympCheck/NeuralSymptomChecker. Our query set Q consists of queries that ask about the presence/absence of each symptom in the patient; the query answer is 1 for Yes and 0 for No.

Architecture and Training. A diagram of the architecture is shown in Figure 9. We used the set architecture proposed in Ma et al. (2018). We made this choice for a fair comparison with He et al. (2022), which also used the same architecture for their partial-VAEs. The input to the network is a concatenation of the query-answers $\{q(x_j) : q \in Q\}$, trainable positional embeddings $e_j$ (red blocks), and bias terms $b_j$ (blue blocks). Each positional embedding is also multiplied by the corresponding query answer. After the first linear layer, the intermediate embedding is multiplied by a query-answer mask derived from the history, in which each dimension has a value of 1 for queries selected in the history and 0 otherwise. To optimize our objective, we randomly initialize $f_\theta$ and $g_\eta$ and train using the same optimizer and learning rate scheduler settings as mentioned at the beginning of this section. However, we train our algorithms for only 200 epochs, and linearly anneal the straight-through softmax estimator's temperature $\tau$ from 1.0 to 0.2 over the first 50 epochs.

Updating the History. Let the history of query-answer pairs observed after $k$ steps be denoted $S_k$. Since we used set architectures as our querier and classifier networks for these datasets, as proposed in Ma et al. (2018), $S_k$ is represented as a set consisting of embeddings of the query-answer pairs observed so far. The next query, $q_{k+1} = \mathrm{argmax}(g_\eta(S_k))$, is a one-hot vector of dimension equal to the size of the query set used. For a given observation $x^{obs}$, we update $S_k$ using $q_{k+1}$ as follows:

- Let $M$ be a matrix of size $|Q| \times d$, where every row corresponds to a query-answer evaluated at $x^{obs}$ and $d$ is the size of the embeddings used to represent the query-answers. We obtain the answer corresponding to the selected query $q_{k+1}$ by the matrix-vector product $q_{k+1}(x^{obs}) = q_{k+1}^T M$.
- We update the history to $S_{k+1}$ by concatenating $q_{k+1}(x^{obs})$ to $S_k$.

C.5 IMAGE CLASSIFICATION ON CIFAR-10 AND CIFAR-100

Dataset and Query Set. CIFAR-{10, 100} (Krizhevsky et al., 2009) are natural image datasets that contain {10, 100} different classes of objects. They contain 50,000 training images and 10,000 testing images; each RGB image is of size 32 × 32. We design our query set Q following Rangrej & Clark (2021), consisting of all 8 × 8 overlapping patches with stride 4. This results in a query set size $|Q|$ of 49. The inputs to the classifier $f_\theta$ and the querier $g_\eta$ are full-sized 3 × 32 × 32 masked images, with pixels zeroed out if they are not part of the current history.

Architecture and Training.
For CIFAR-10, the architectures used for the classifier $f_\theta$ and the querier $g_\eta$ are both Deep Layer Aggregation networks (DLA) (Yu et al., 2018). $f_\theta$ and $g_\eta$ do not share any parameters with each other. An out-of-the-box implementation was used and can be found here: https://github.com/kuangliu/pytorch-cifar/blob/master/models/dla.py. For CIFAR-100, the architectures used for the classifier $f_\theta$ and the querier $g_\eta$ are both DenseNet169 (Huang et al., 2017). $f_\theta$ and $g_\eta$ do not share any parameters with each other. An out-of-the-box implementation was used and can be found here: https://github.com/kuangliu/pytorch-cifar/blob/master/models/densenet.py. The only change made to the architectures is the last layer, whose dimensions depend on the number of classes or the size of the query set Q. During training, we follow standard data-processing techniques, as in He et al. (2016). For CIFAR-10, we set batch size 128 for both the initial and subsequent sampling stages. For CIFAR-100, we set batch size 64 for both the initial and subsequent sampling stages. For both CIFAR-10 and CIFAR-100, we randomly initialize $f_\theta$ and $g_\eta$, and train them for 500 epochs during Initial Random Sampling using the Adam optimizer and a Cosine Annealing learning rate scheduler, with the settings mentioned above. During Subsequent Biased Sampling, we optimize using Stochastic Gradient Descent (SGD), with learning rate lr=0.01, momentum=0.9 and a Cosine Annealing learning rate scheduler with T_max=50, for 100 epochs.

Updating the History. The method for updating the history is similar to that for MNIST, KMNIST and Fashion-MNIST, as described in §C.3. For a given observation $x^{obs}$, we update the history $S_k$ using $q_{k+1}$ as follows:

- We reshape $q_{k+1}$ from a vector of dimension 49 (the number of queries) to a 2D grid of dimension 7 × 7, denoted $\hat{q}_{k+1}$.
- $\hat{q}_{k+1}$ is then converted to a binary matrix with 1s at the locations corresponding to the queried 8 × 8 patch and 0s everywhere else, via a 2D transposed convolution with a kernel of all 1s of size 8 × 8, stride 4 and no padding.
- We then obtain the query-answer by taking the Hadamard product of the convolved output (from the previous step) with $x^{obs}$. This results in a masked image, $\hat{q}_{k+1}(x^{obs})$, with the queried patch revealed and 0 everywhere else.
- Finally, we update the history to $S_{k+1}$ by adding this query-answer to $S_k$. Any pixel $(i, j)$ that is observed (unmasked) in $\hat{q}_{k+1}(x^{obs})$ and is also observed in the history $S_k$ is handled by the correction below.

The entire process can be summarized as

$$S'_{k+1} = S_k + \mathrm{TransposedConv2D}(\hat{q}_{k+1}) \odot x^{obs}; \qquad \forall (i, j):\ S_{k+1}[i, j] = \begin{cases} S_k[i, j] & \text{if } S'_{k+1}[i, j] = 2\, S_k[i, j], \\ S'_{k+1}[i, j] & \text{otherwise.} \end{cases} \tag{11}$$

D STRAIGHT-THROUGH SOFTMAX ESTIMATOR

As mentioned in §3.3 in the main text, we employ the straight-through softmax gradient estimator to differentiate through the argmax operation. We now describe this estimator in detail. Consider the following optimization problem,

$$\min_{\theta \in \mathbb{R}^d} f(\mathrm{argmax}(\theta)). \tag{12}$$

Let $Z := \mathrm{argmax}(\theta)$. We will assume $f$ is differentiable in its input. The straight-through softmax estimator of the gradient of $f$ w.r.t. $\theta$ is defined as

$$\nabla^{ST}_\theta f := \frac{\partial f}{\partial Z}\, \frac{d\, \mathrm{softmax}_\tau(\theta)}{d\theta}, \quad \text{where } \mathrm{softmax}_\tau(\theta) := \Big[ \frac{e^{\theta_1/\tau}}{\sum_{i=1}^d e^{\theta_i/\tau}},\ \frac{e^{\theta_2/\tau}}{\sum_{i=1}^d e^{\theta_i/\tau}},\ \dots,\ \frac{e^{\theta_d/\tau}}{\sum_{i=1}^d e^{\theta_i/\tau}} \Big]$$

and $\tau$ is the temperature parameter. Notice that $\lim_{\tau \to 0} \mathrm{softmax}_\tau(\theta) \to \mathrm{argmax}(\theta)$. A minimal PyTorch sketch of this estimator is given below.
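One possible PyTorch realization of this estimator is the following: the forward pass returns the hard one-hot argmax, while gradients flow through the tempered softmax. This is a sketch of the standard straight-through trick, not necessarily identical to the released code.

```python
import torch
import torch.nn.functional as F

def straight_through_argmax(theta, tau=1.0):
    """Forward: one-hot argmax(theta). Backward: gradient of softmax(theta/tau).

    The non-differentiable argmax is replaced in the backward pass by the
    gradient of a tempered softmax, as described above."""
    soft = F.softmax(theta / tau, dim=-1)
    index = soft.argmax(dim=-1)
    hard = F.one_hot(index, theta.shape[-1]).to(theta.dtype)
    # Value equals `hard`; gradient w.r.t. theta flows through `soft`.
    return hard + soft - soft.detach()

# Toy usage: scores for 5 queries; gradients reach theta via the softmax.
theta = torch.randn(5, requires_grad=True)
z = straight_through_argmax(theta, tau=0.5)
(z * torch.arange(5.0)).sum().backward()
print(z, theta.grad)
```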
Thus, we replace the gradient of the argmax operation, which is either 0 almost everywhere or does not exist, with a surrogate biased estimate. Equation 12 can then be optimized using the straight-through estimator by iteratively carrying out the update $\theta \leftarrow \theta - \eta \nabla^{ST}_\theta f$, where $\eta$ is the learning rate. In our experiments we start with $\tau = 1.0$ and linearly anneal it down to 0.2 over the course of training.

E ABLATION STUDIES

In §3.3 we discussed two possible architectures for operating on histories of arbitrary sequence length: the set-based architecture (as in Figure 9) and the fixed-size-input, masking-based architecture (as in Figure 6). In Figure 10, we compare the two architectures on the same task of bird-species identification using the same query set (see Table 1). We see that the fixed-size-input, masking-based architecture performs better in terms of the average number of queries needed to reach the same performance. Based on this observation, we use the latter architecture in all our experiments except on the medical datasets, where we used the set-based architecture for a fair comparison with the BSODA method, which uses a similar set-based architecture for its partial-VAEs.

In §3.3 we also discussed that a sampling distribution $P_S$ that assigns positive mass to every element in $\bar{K}$ would make learning a good querier function challenging, since the network has to learn over an exponentially (in the size of Q) large number of histories. Instead, we proposed to sequentially bias the sampling according to our current estimate of the querier function. We validate the usefulness of this biased sampling strategy in Figure 11. In most datasets we observe that biasing the sampling distribution for $S$ helps learn better querying strategies (in terms of accuracy vs. explanation-length trade-offs). Notice that in most datasets, biased sampling without the initial random sampling ultimately ends up learning a slightly better strategy (in terms of accuracy vs. explanation-length trade-offs). However, since biased sampling requires multiple forward passes through our querier network to generate histories, it is much slower than the initial random sampling (IRS) scheme for $P_S$. Thus, when computational resources are not a concern, one could do away with the initial random sampling; under a computational budget, however, random sampling allows one to find a quick solution which can then be fine-tuned using biased sampling. This fine-tuning will potentially require fewer epochs to reach good performance than training with biased sampling from scratch.

F EXTENDED RESULTS

In Figure 12, we show the trade-off between accuracy and explanation length (average number of queries) on the KMNIST and Fashion-MNIST datasets as the stopping-criterion parameter $\epsilon$ is varied. In both datasets, the "MAP criterion" is used as the stopping criterion. V-IP performs better than the RL-based RAM and RAM+ on both datasets. V-IP is competitive with G-IP, eventually surpassing it in terms of accuracy for longer query-answer chains. In addition, Table 4 shows extended AUC values for the test-accuracy versus explanation-length curves on different datasets. A simplified table is shown in Table 2.

G COMPARING V-IP WITH BLACK-BOX DEEP NETWORKS

An important aspect of the framework introduced by Chattopadhyay et al. (2022) is that the end-user defines queries that are interpretable to them. Given this set of queries, Q, V-IP learns to efficiently compose them into concise explanations for model predictions (in terms of query-answer chains).
This raises the question: how much do we lose in terms of performance by constraining ourselves to an interpretable Q? In Table 5, we report results studying this question. "Acc. w/ V-IP given ϵ" is the test accuracy obtained by V-IP after termination using our stopping criterion. "Acc. w/ V-IP given Q(X)" is the accuracy the classifier network (trained jointly with the querier network using the Deep V-IP objective) obtains when seeing all the query answers Q(X). On all datasets, V-IP learns to predict with short explanations (average number of queries) and a test accuracy close to what can be achieved if all answers were observed (col. 6). "Acc. w/ Black-Box given Q(X)" reports the accuracy a black-box deep network obtains by training on feature vectors comprised of all query answers Q(X) using the standard cross-entropy loss. Columns 6 and 7 show that training a classifier network with the Deep V-IP objective results in only a minor loss in performance compared to training with the standard supervised classification cross-entropy loss. "Acc. w/ Black-Box given X" reports test accuracies obtained by training black-box deep networks on the whole input X to produce a single output (the classification label). Comparing these values with the accuracies reported in col. 6, we see that basing predictions on an interpretable query set almost always results in a drop in accuracy. This is expected, since interpretability can be seen as a constraint on learning. For example, there is a drop of about 15% on the HuffingtonNews dataset, since our queries concern the presence or absence of words, which completely ignores the linguistic structure present in documents. Similarly, for the binary image classification tasks (MNIST, KMNIST and Fashion-MNIST), the queries are binary patches, which can be easily interpreted as edges, foregrounds and backgrounds. This binarization, however, results in a drop in performance, especially on Fashion-MNIST, where it is harder to distinguish between some classes, like coat and shirt, without grayscale information.

H ADDITIONAL QUERY-ANSWER CHAINS

We show additional trajectories for different tasks and datasets: CUB-200 (Figure 13), MNIST (Figure 14), Fashion-MNIST (Figure 15), KMNIST (Figure 16), HuffingtonNews (Figure 18), CIFAR-10 (Figure 19) and CIFAR-100 (Figure 20). In every figure, we see that the correct predictions are explained by an interpretable sequence of query-answer pairs. The evolution of the posterior $P(Y \mid q_{1:k}(x^{obs}))$, as more and more queries are asked, gives insight into the model's decision-making process, as we see its belief shift among the possible labels for $x^{obs}$. This adds an additional layer of transparency to the model.

10 Number taken from Chattopadhyay et al. (2022), where the authors fine-tuned a BERT Large Uncased Transformer model to classify documents.
11 Number taken from Koh et al. (2020), which trained a CNN on raw images of birds from the CUB-200 dataset.
12 Number reported from Hu et al. (2018).
13 Number reported from Clanuwat et al. (2018).
14 Number reported from Xiao et al. (2017a).
15 Trained a Deep Layer Aggregation model (Yu et al., 2018) to classify CIFAR-10 images from scratch.
16 Number reported from Huang et al. (2017).
1. What is the focus and contribution of the paper regarding interpretable models?
2. What are the strengths of the proposed approach, particularly in its formulation and implementation?
3. What are the weaknesses of the paper, especially regarding the optimization process and the model's performance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This article positions itself in the context of designing "interpretable by design" models: models built from the start to provide an interpretable explanation alongside their predictions. To achieve this goal, the model performs its prediction by iteratively making queries about the example to classify, within a predefined set of interpretable queries; the sequence of queries serves as the explanation for the prediction. The paper follows the Information Pursuit (IP) method, where the next query is chosen by maximizing the mutual information between the variable to predict and the result of the query given the past query history. Previous work in the IP framework learned a joint generative model over the query answers and the final prediction, using MCMC sampling on that model to compute the mutual information required to choose the next query. The paper shows that the IP selection strategy can be formulated as the solution to a variational optimization problem involving two functions: a classifier that predicts the output variable given the history of queries, and a querier that chooses the next query. This formulation alleviates the need for learning a joint generative model, drastically reducing the computational cost. The paper provides extensive experimental comparison with other models from the literature and on multiple tasks, proving the proposed V-IP model to be very competitive across the board.

Strengths And Weaknesses
Strengths: The proposed approach is an elegant and well-grounded variational reformulation of the IP strategy, which is itself a strong strategy in this context. The formulation of the optimization problem and its implementation using deep neural networks are clear and detailed. The experimental validation is extensive and discusses in depth the advantages and limitations of the proposed V-IP. The proposed model is competitive with the previous state of the art at a significantly smaller computing cost, which is a strong contribution.

Weaknesses: Regarding the straight-through optimization of the softmax of the querier model, as discussed in appendix D this requires the following function to be differentiable w.r.t. the one-hot output of the softmax. In V-IP, the following function is the concatenation of the next query result to the queried dataset. How is that step made differentiable? I suppose this is related to the masking-based architecture, but the paper does not seem to explain it. This is a minor point, but I think figure 3a illustrates well how query sets consisting of observing small patches of an image are very artificial given the high-level problem at hand (interpretability). The 4th observation is the one that tips the balance of the model from cat to dog, and yet it mostly consists of a black square of background. Could that maybe illustrate some overfitting in the model?

Clarity, Quality, Novelty And Reproducibility
The article is very clear and well written. It provides a new, solid and elegant reformulation of the problem of Information Pursuit. The appendix provides (almost; see weakness 1 above) all necessary information to reproduce all the experiments in the paper.
ICLR
Title On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning

Abstract Semi-Supervised Learning (SSL) methods have shown superior performance when unlabeled data are drawn from the same distribution as labeled data. Among them, Pseudo-Labeling (PL) is a simple and widely used method that creates pseudo-labels for unlabeled data according to the predictions of the training model itself. However, when there are unlabeled Out-Of-Distribution (OOD) data from other classes, these methods suffer from severe performance degradation and can even get worse than merely training on labeled data. In this paper, we empirically analyze PL in class-mismatched SSL. We aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? First, we show that the major problem of PL is imbalanced pseudo-labels on OOD data. Second, we find that when labeled with their ground truths, OOD data are beneficial to classification performance on In-Distribution (ID) data. Based on these findings, we propose a model which consists of two components: Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL re-balances pseudo-labels on ID classes to filter out OOD data while also addressing the imbalance problem. SEC uses balanced clustering on OOD data to create pseudo-labels on extra classes, simulating the process of training with their ground truths. Experiments show that our method achieves steady improvement over the supervised baseline and state-of-the-art performance under all class mismatch ratios on different benchmarks. N/A

1 INTRODUCTION

Figure 1: Realistic Semi-Supervised Learning may simultaneously contain unlabeled ID and OOD data. ID data come from the same classes as labeled data, while OOD data come from classes that are not seen in labeled data.

Deep Semi-Supervised Learning (SSL) methods are proposed to reduce dependency on massive labeled data by utilizing large amounts of cheap, accessible unlabeled data. Pseudo-Labeling (Lee et al., 2013) is a simple but effective and widely used method that creates pseudo-labels according to the predictions of the training model itself; SSL can then be transformed into standard supervised learning. Other representative SSL methods are consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019), holistic methods (Berthelot et al., 2019; Sohn et al., 2020) and generative methods (Kingma et al., 2014). The recent development of SSL shows that these methods have achieved performance competitive with supervised learning methods. However, all of these SSL methods achieve their good results under the assumption that the unlabeled data are drawn from the same distribution as the labeled data. This assumption can easily be violated in real-world applications. One of the common cases is that some unlabeled data come from unseen classes. For example, as illustrated in Figure 1, in image classification we can collect many unlabeled images from the internet, but they usually cover broader category concepts than the labeled data. Oliver et al. (2018) have shown that under such class-mismatched conditions, the performance of traditional SSL methods is damaged. To deal with this problem, several methods have been proposed.
These methods include filtering out OOD data (Yu et al., 2020; Chen et al., 2020), down-weighting OOD data (Chen et al., 2020), and re-using OOD data via neural style transfer (Luo et al., 2021) or self-supervised learning (Huang et al., 2021). Although these methods achieve good results, why OOD data damage performance and how OOD data can help remain unclear. Here, we focus on analyzing Pseudo-Labeling (PL) in class-mismatched SSL and give some answers to these two questions.

In this paper, we empirically analyze PL in the class-mismatched SSL setting. Our experiments aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? For question (1), we investigate the pseudo-labels created by PL. The main finding is that pseudo-labels on OOD data tend to be imbalanced, while on ID data they remain balanced. We further show that PL's performance is damaged by such imbalance on OOD data. For question (2), several strategies for labeling OOD data are investigated. We conclude that it is beneficial to label OOD data as classes different from the ID classes, and that performance can be further improved when the pseudo-labels partition unlabeled OOD data into their semantic clusters.

Based on the experimental analysis, we propose a two-branched model called the Υ-Model, which processes unlabeled data according to their confidence scores on ID classes. The first branch performs re-balanced pseudo-labeling on high-confidence data. It utilizes the property of imbalanced pseudo-labels on OOD data, truncating the number of pseudo-labeled data for each class to their minimum. This procedure filters out many OOD data and also prevents the negative effect of imbalanced pseudo-labels. For the other branch, semantic exploration clustering is performed on low-confidence data. These data are considered OOD, and their semantics are mined by clustering into different partitions on extra classes. The clustering result provides better pseudo-labels for these OOD data than vanilla PL. Experiments on different SSL benchmarks show that our model achieves steady improvement over the supervised baseline. We summarize our contributions as follows:

• We analyze the Pseudo-Labeling model on ID and OOD data. The findings lead to two primary conclusions: (1) Imbalance of pseudo-labels on OOD data damages PL's performance. (2) The best pseudo-labels for unlabeled OOD data are those that differ from the ID classes and partition the OOD data into their semantic clusters.

• We propose the two-branched Υ-Model. One branch re-balances pseudo-labels on ID classes and filters out OOD data. The other branch explores the semantics of OOD data by clustering on extra classes.

• Experiments on different SSL benchmarks empirically validate the effectiveness of our model.

2 PRELIMINARY

2.1 CLASS-MISMATCHED SSL

Similar to the SSL problem, the training dataset of the class-mismatched SSL problem contains $n$ ID labeled samples $D_l = \{(x_i^l, y_i^l)\}_{i=1}^{n}$ and $m$ unlabeled samples $D_u = \{x_i^u\}_{i=1}^{m}$ (usually $m \gg n$), with $y_i^l \in \mathcal{Y}_{ID} = \{1, \ldots, K_{ID}\}$. Different from SSL, the underlying ground truth $y^u$ of unlabeled data may differ from that of labeled data, i.e., $y_j^u \in \mathcal{Y}_{ID} \cup \mathcal{Y}_{OOD}$, $\mathcal{Y}_{OOD} = \{K_{ID}+1, \ldots, K_{ID}+K_{OOD}\}$. The goal of class-mismatched SSL is to correctly classify ID samples into $\mathcal{Y}_{ID}$ using a labeled set with ID samples and an unlabeled set possibly containing OOD samples.
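To make this setting concrete, the sketch below constructs a class-mismatched split in the style of the CIFAR-10 setup used later in Section 3.1 (the 6 animal classes as ID, the 4 vehicle classes as OOD); the helper function and its parameters are our own illustration, not released code.

```python
import numpy as np
from torchvision.datasets import CIFAR10

ID_CLASSES = [2, 3, 4, 5, 6, 7]   # bird, cat, deer, dog, frog, horse (animals)
OOD_CLASSES = [0, 1, 8, 9]        # airplane, automobile, ship, truck (vehicles)

def class_mismatched_split(labels, n_labeled_per_class=400,
                           n_unlabeled=20000, mismatch_ratio=0.5, seed=0):
    # Returns indices of the labeled ID set and of an unlabeled set in which
    # a `mismatch_ratio` fraction of the samples comes from OOD classes.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    labeled_idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], n_labeled_per_class, replace=False)
        for c in ID_CLASSES])
    id_pool = np.setdiff1d(np.where(np.isin(labels, ID_CLASSES))[0], labeled_idx)
    ood_pool = np.where(np.isin(labels, OOD_CLASSES))[0]
    n_ood = int(n_unlabeled * mismatch_ratio)
    unlabeled_idx = np.concatenate([
        rng.choice(id_pool, n_unlabeled - n_ood, replace=False),
        rng.choice(ood_pool, n_ood, replace=False)])
    return labeled_idx, rng.permutation(unlabeled_idx)

train = CIFAR10(root="./data", train=True, download=True)
labeled_idx, unlabeled_idx = class_mismatched_split(train.targets,
                                                    mismatch_ratio=0.5)
```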
2.2 PSEUDO-LABELING

Pseudo-Labeling (PL) leverages the idea that we can use the model itself to obtain artificial labels for unlabeled data (Lee et al., 2013). PL first performs supervised learning on labeled data to get a pre-trained model $f$, which outputs the probability of belonging to each ID class. It then creates the pseudo-label for each unlabeled sample:

$$y' = \begin{cases} \arg\max_{y \in \mathcal{Y}_{ID}} f(y \mid x), & c(x) > \tau \\ \text{reject}, & \text{otherwise,} \end{cases} \quad (1)$$

$$c(x) = \max_{y \in \mathcal{Y}_{ID}} f(y \mid x), \quad (2)$$

where $c(x)$ is the confidence score for $x$. All the pseudo-labeled unlabeled data are treated as labeled data for the next supervised learning generation. PL iteratively performs supervised learning and pseudo-label creation until a stopping condition is met.

3 ANALYSIS OF PSEUDO-LABELING IN CLASS-MISMATCHED SSL

In class-mismatched SSL, vanilla PL can only create pseudo-labels on ID classes, even for OOD data. In this section, we analyze how these OOD data influence vanilla PL and what better pseudo-labels for them would be.

3.1 SETUP

We use CIFAR-10 (Krizhevsky et al., 2009) as our experimental dataset. The dataset contains 10 categories: 6 animal classes and 4 vehicle classes. Following Guo et al. (2020), we perform a classification task on the animal classes (denoted as classes 0-5) and select 400 images per class to construct the labeled dataset, i.e., 2,400 labeled examples. The other 4 vehicle classes are taken as OOD classes (denoted as classes 6-9). 20,000 images are randomly selected from all 10 classes as the unlabeled dataset. We vary the ratio of unlabeled images to modulate the class distribution mismatch. For example, a ratio of 50% means half of the unlabeled data come from the animal classes and the rest come from the vehicle classes. We use Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) as our backbone. We also adopt data augmentation techniques including random resized crop, random color distortion, and random horizontal flip. We train our network for 400 epochs. In each epoch, we iterate over the unlabeled set and randomly sample labeled data; each unlabeled and labeled minibatch contains 128 samples. We adopt Adam as the optimization algorithm with initial learning rate $3 \times 10^{-3}$. We report the accuracy averaged over the last 20 epochs, pretending there is no reliable (i.e., sufficiently large) validation set to perform early stopping (Oliver et al., 2018).

3.2 IMBALANCE OF PSEUDO-LABELS ON OOD DATA

In this section, we analyze the pre-trained model that creates the first set of pseudo-labels, and the final model trained by Pseudo-Labeling.

Pretrained model. First, we plot the distribution of confidence scores on OOD data and ID data. Figure 2(a) shows that, in line with findings in OOD detection (Hendrycks & Gimpel, 2017), the proportion of high-confidence samples is larger among ID data than among OOD data. However, in class-mismatched SSL the unlabeled data come in much larger quantities: when the class mismatch ratio is large, there are quite a few OOD data with high confidence scores. We show in the final-model experiments that these high-confidence OOD data damage performance. Secondly, we study pseudo-labels on both ID data and OOD data. Figure 2(b) shows that pseudo-labels on ID data are balanced; however, they are rather imbalanced on OOD data (Figure 2(c)).
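These observations can be quantified with the two metrics later defined in Appendix C.1: the KL divergence between the pseudo-label distribution and the uniform distribution, and the majority-to-minority class ratio. A minimal sketch:

```python
import numpy as np

def imbalance_metrics(pseudo_labels, num_classes):
    # KL(q || uniform) and majority/minority ratio of the pseudo-label
    # distribution q (Appendix C.1).
    counts = np.bincount(pseudo_labels, minlength=num_classes).astype(float)
    q = counts / counts.sum()
    u = np.full(num_classes, 1.0 / num_classes)
    kl = np.sum(q * np.log(np.clip(q, 1e-12, None) / u))
    ratio = counts.max() / max(counts.min(), 1.0)
    return kl, ratio
```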
This difference is attributed to the different distributions the two sets are drawn from: samples with certain patterns are biased toward certain classes. ID data spread uniformly over the ID classes because they are sampled from the same distribution, but OOD data are unlikely to spread uniformly over the ID classes, since they bear little relevance to ID data.

Final Pseudo-Labeling model. As an old saying goes, a good beginning is half done. The imbalance of the first set of pseudo-labels, however, starts the PL model off badly when there is a large portion of OOD data, putting the model in danger of imbalanced learning. We run vanilla PL and show that the imbalance of pseudo-labels harms performance. Figure 3(a) shows the performance of the PL model under different OOD ratios. In accordance with Oliver et al. (2018), the PL model degrades as the portion of OOD data grows. Figure 3(b) displays the confusion matrix of the PL model on the whole test set containing both ID and OOD data. Since only 6 classes are known to us, the confusion matrix is rectangular. We can see that almost all OOD samples (classes 6-9) are classified as class 0, which means the imbalance effect on OOD data gets even worse as PL training goes on. A possible reason is that, unlike pseudo-labeling on ID data, supervision from labeled data cannot help correct pseudo-labels on OOD data, so the imbalance continuously deteriorates. The imbalance on OOD data also influences classification performance on ID data: samples of the majority class (class 0) overwhelm the loss and gradient, leading to a degenerate model (Lin et al., 2017). We can see that the PL model mistakenly classifies much of the data from classes 1-5 into class 0.

3.3 PSEUDO-LABELING STRATEGY FOR OOD DATA

The previous section shows that OOD data hurt the performance of vanilla PL. This raises the questions: assuming we already know which data are OOD, how should we use them? Is omitting them the best option? If not, what are better pseudo-labels for them? To answer these questions, we investigate four strategies for creating pseudo-labels for OOD data:

• Baseline. This baseline omits all the OOD data and trains only on the labeled ID data.

• Re-Assigned Labeling. This strategy assigns the data of each OOD class to an ID class. It ensures that different OOD classes are assigned to different ID classes, keeping the semantics between OOD classes unchanged. For example, (ship, truck, airplane, automobile) can be assigned to (bird, cat, deer, dog). This strategy can be seen as training a classifier of "super-classes".

• Open-Set Labeling. This strategy is named after the related setting of Open-Set Recognition (Scheirer et al., 2013; Bendale & Boult, 2016). It treats all OOD data as one unified class $K_{ID}+1$; the model thus outputs probabilities over $K_{ID}+1$ classes.

• Oracle Labeling. This strategy uses the ground truths of the OOD data; the model thus outputs probabilities over $K_{ID}+K_{OOD}$ classes.

Note that Open-Set Labeling and Oracle Labeling can classify samples into more than $K_{ID}$ classes. During evaluation, however, we only classify samples into the $K_{ID}$ ID classes. For these models, the predicted label $\hat{y}$ of a test sample $x$ is calculated as:

$$\hat{y}(x) = \arg\max_{y \in \mathcal{Y}_{ID}} f(y \mid x) \quad (3)$$

The overall comparison of the four strategies is illustrated in Figure 4, where we also report test accuracy when the class mismatch ratio is 100%.
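For clarity, the sketch below spells out how each strategy maps the ground-truth labels of OOD samples (which are available in this controlled experiment) to training targets; the function and the default assignment are our own illustration, using 0-indexed labels.

```python
import numpy as np

K_ID, K_OOD = 6, 4   # the animal (ID) / vehicle (OOD) split of Section 3.1

def relabel_ood(ood_labels, strategy, assignment=None):
    # ood_labels: ground-truth labels in {K_ID, ..., K_ID + K_OOD - 1}.
    ood_labels = np.asarray(ood_labels)
    if strategy == "baseline":        # omit OOD data entirely
        return None
    if strategy == "re-assigned":     # each OOD class -> a distinct ID class
        if assignment is None:
            assignment = {K_ID + i: i for i in range(K_OOD)}
        return np.array([assignment[y] for y in ood_labels])
    if strategy == "open-set":        # all OOD data -> one unified extra class
        return np.full_like(ood_labels, K_ID)
    if strategy == "oracle":          # keep the ground-truth OOD classes
        return ood_labels
    raise ValueError(f"unknown strategy: {strategy}")
```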
From Figure 4, we can draw several important conclusions. (1) Re-Assigned Labeling slightly underperforms the baseline.1 This indicates that assigning samples of OOD classes to ID classes does not help the model distinguish between ID classes, even if we somehow know which OOD data are semantically different. It also reveals that performing vanilla PL on OOD data may never help, even if done perfectly. (2) Open-Set Labeling outperforms the baseline, which indicates that labeling OOD data as a class other than the ID classes improves performance. (3) Oracle Labeling improves performance and achieves the best result among the four strategies. This means that, in addition to labeling OOD data as extra classes, the model achieves better results if we further assign OOD data with different semantics to different classes.

3.4 SUMMARY OF SECTION

In this section, we studied the behavior of the Pseudo-Labeling model in class-mismatched SSL. We summarize several important conclusions here:

Conclusion 1: A classification model trained with labeled ID data creates imbalanced pseudo-labels on OOD data, while on ID data the pseudo-labels remain balanced.

Conclusion 2: The vanilla PL process makes the imbalance of pseudo-labels deteriorate, damaging classification performance on ID data.

Conclusion 3: Labeling OOD data as ID classes does not help and may even hurt performance a little.

Conclusion 4: It is beneficial to label OOD data as extra classes different from the ID classes. If we can further label semantically different OOD data as different classes, performance can be further improved.

4 METHOD

Based on the findings in Section 3, we propose the Υ-Model (named after its shape) for class-mismatched SSL. The Υ-Model trains a classifier $f$ that outputs a posterior distribution over $K_{ID}+K$ classes, i.e., $f(y|x) \in \mathbb{R}^{K_{ID}+K}$, $\mathbf{1}^\top f(y|x) = 1$. $K$ is the number of extra classes, which can be known in advance (i.e., $K = K_{OOD}$) or set as a hyper-parameter. Similar to vanilla PL, we define confidence in the same form as Equation 2. However, this confidence differs slightly from its original definition in Hendrycks & Gimpel (2017), since we only compute the maximum probability over the $K_{ID}$ ID classes instead of all classes; we therefore rename it In-Distribution confidence (ID confidence). For evaluation, we predict labels using Equation 3. The Υ-Model aims to solve the following problems:

Problem 1: How to avoid imbalanced pseudo-labels in the PL model? (Conclusions 1, 2)

Problem 2: How to avoid labeling OOD data as ID? (Conclusion 3)

Problem 3: How to create proper pseudo-labels for unlabeled OOD data? (Conclusion 4)

1 Assigning 4 OOD classes to 4 of the 6 ID classes causes imbalance, but we tested the accuracy on the selected 4 classes and found a similar result.

The Υ-Model consists of two main branches: Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL acts on high-confidence data to solve Problems 1 and 2. SEC acts on low-confidence data to solve Problem 3. We describe the two branches in the following sections. An overview of the Υ-Model is illustrated in Figure 5.

4.1 RE-BALANCED PSEUDO-LABELING

As illustrated in Section 3.3, the main problem of vanilla PL is that a large number of OOD data with high confidence scores receive imbalanced pseudo-labels. One possible solution is to re-weight the unlabeled samples (Guo et al., 2020) or to use other methods from the imbalanced-learning field. However, even if we solve the imbalanced-learning problem, labeling OOD data as ID classes may still damage performance (Conclusion 3). In this paper, we use a simple method, Re-balanced Pseudo-Labeling, to simultaneously address imbalance (Problem 1) and incorrect recognition (Problem 2). It produces a set $P$ of pseudo-labeled samples in three steps:

$$N = \min_{y \in \mathcal{Y}_{ID}} \left| \{x \in D_u \mid f(y \mid x) > \tau\} \right|, \quad (4)$$

$$\tau_y = \text{Nth\_biggest}(\{f(y \mid x) \mid x \in D_u\}), \quad y = 1, 2, \ldots, K_{ID}, \quad (5)$$

$$P = \bigcup_{y \in \mathcal{Y}_{ID}} \{(x, y) \mid f(y \mid x) \geq \tau_y,\ x \in D_u\}, \quad (6)$$

where $\text{Nth\_biggest}$ denotes the $N$-th biggest value of the given set. RPL first calculates the minimum, over ID classes, of the number of pseudo-labeled samples per class (Equation 4), and then truncates the number of pseudo-labels of each ID class to that number (Equations 5 and 6).
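A minimal sketch of these three steps, assuming the predicted ID-class probabilities are available as a matrix and reading Equation 6 as keeping the top-$N$ samples per class:

```python
import numpy as np

def rebalanced_pseudo_labeling(probs, tau=0.95):
    # probs: (m, K_ID) array of predicted ID-class probabilities f(y | x).
    m, k_id = probs.shape
    # Equation 4: N = smallest per-class count of samples scoring above tau.
    n = int((probs > tau).sum(axis=0).min())
    indices, labels = [], []
    for y in range(k_id):
        # Equations 5-6: keep only the top-N most confident samples for class y.
        top_n = np.argsort(-probs[:, y])[:n]
        indices.append(top_n)
        labels.append(np.full(n, y))
    return np.concatenate(indices), np.concatenate(labels)
```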
The process of RPL is illustrated in Figure 5(a). First, it enforces the pseudo-labels on ID classes to be balanced, solving Problem 1. Second, as shown in Section 3, the set of high-confidence data is a mixture of ID and OOD data. By Conclusion 1, the pseudo-label distribution of such a set is the sum of an imbalanced and a balanced distribution, and is thus still imbalanced. However, by selecting only the top-$N$ most confident samples for each ID class, we keep ID data and omit many OOD data, since confidence on ID data tends to be higher than on OOD data (Hendrycks & Gimpel, 2017). This solves Problem 2.

4.2 SEMANTIC EXPLORATION CLUSTERING

As demonstrated in Section 3.3, if we know a set of samples is OOD, labeling them as a unified class $K_{ID}+1$ improves performance, but the best approach is to use their ground truths (Conclusion 4). Since the data are unlabeled, their ground truths are inaccessible, so we resort to Deep Clustering methods (Caron et al., 2018; Asano et al., 2020) to mine their semantics and approximate the process of learning with the ground truths. Specifically, we use the balanced clustering method of Asano et al. (2020); Caron et al. (2020) to create pseudo-labels for these OOD data. Assuming there are $M$ samples recognized as OOD, we first compute their soft targets:

$$\min_{Q \in U(K,M)} \langle Q, -\log P \rangle, \quad U(K,M) := \left\{ Q \in \mathbb{R}_+^{K \times M} \;\middle|\; Q\mathbf{1} = \frac{1}{K}\mathbf{1},\ Q^\top \mathbf{1} = \frac{1}{M}\mathbf{1} \right\}, \quad (7)$$

where $P \in \mathbb{R}_+^{K \times M}$, $P_{ij} = \hat{f}(K_{ID}+i \mid x_j)$, and $\hat{f}$ is the posterior distribution normalized over the extra classes, i.e., $\hat{f}(K_{ID}+i \mid x_j) = f(K_{ID}+i \mid x_j) / \sum_{k=1}^{K} f(K_{ID}+k \mid x_j)$. We use the Sinkhorn-Knopp algorithm (Cuturi, 2013) to optimize $Q$. Once we obtain $Q$, we harden the labels by picking the class with the maximum predicted probability and mapping it to the extra $K$ classes:

$$\hat{y}_j = K_{ID} + \arg\max_i Q_{ij}. \quad (8)$$

$\hat{y}_j$ is used as the pseudo-label for $x_j$. We perform SEC on the set of data with ID confidence lower than a threshold $\gamma$, i.e., $\{x \mid c(x) < \gamma\}$.
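A minimal sketch of SEC, using plain Sinkhorn-Knopp normalization onto the constraint set $U(K, M)$; the number of iterations and the implicit regularization strength are our own choices, not the paper's exact settings:

```python
import numpy as np

def sec_pseudo_labels(probs_extra, k_id, n_iters=50):
    # probs_extra: (M, K) posteriors over the K extra classes for the M
    # low-confidence samples, already normalized over the extra classes.
    P = np.clip(probs_extra.T, 1e-12, None)   # (K, M), P_ij = f_hat(K_ID+i | x_j)
    K, M = P.shape
    Q = P.copy()
    for _ in range(n_iters):                   # alternate the two marginal
        Q /= Q.sum(axis=1, keepdims=True) * K  # constraints: rows sum to 1/K,
        Q /= Q.sum(axis=0, keepdims=True) * M  # columns sum to 1/M
    return k_id + Q.argmax(axis=0)             # Equation 8: harden and shift
```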
5 RELATED WORK

Class-Mismatched Semi-Supervised Learning. Deep Semi-Supervised Learning suffers from performance degradation when there are unseen classes in the unlabeled data (Oliver et al., 2018); the larger the proportion of such out-of-distribution (OOD) data, the more the performance drops. Several methods have been proposed to cope with this class-mismatch problem. Chen et al. (2020) formulate a sequence of ensemble models, aggregated accumulatively on the fly, for joint self-distillation and OOD filtering. Guo et al. (2020) re-weight the unlabeled data by meta-learning to decrease the negative effect of OOD data. Huang et al. (2020) recycle transferable OOD data by means of adversarial learning. Different from all these methods, we conduct a comprehensive study of Pseudo-Labeling (PL) and give useful guidance on how to do better in class-mismatched SSL.

Pseudo-Labeling. Pseudo-Labeling, also known as self-training, is a simple and effective approach to Deep Semi-Supervised Learning (Lee et al., 2013; Shi et al., 2018; Arazo et al., 2020; Iscen et al., 2019). Despite its simplicity, it has been widely applied to diverse fields such as image classification (Xie et al., 2020), natural language processing (He et al., 2020), and object detection (Rosenberg et al., 2005). The use of hard labels makes Pseudo-Labeling closely related to entropy minimization (Grandvalet & Bengio, 2004).

Deep Clustering. Deep clustering methods improve on traditional clustering methods by leveraging the representation power of DNNs. A common approach is to transform data into low-dimensional feature vectors and apply traditional clustering methods (Yang et al., 2017; Caron et al., 2018). In Self-Supervised Learning, clustering methods are used to learn meaningful representations for downstream tasks (Caron et al., 2018; Asano et al., 2020; Caron et al., 2020). Modern deep clustering can learn semantically meaningful clusters and achieves competitive results against supervised learning (Gansbeke et al., 2020).

6 EXPERIMENTS

To validate the effectiveness of our Υ-Model, we conduct experiments on different benchmarks.

Dataset. We test our method on two datasets, as in Oliver et al. (2018). (1) CIFAR10: we use the same configuration as in Section 3.1. (2) SVHN: the dataset contains 10 categories, the digits "0"-"9". We select "0"-"5" as ID classes and the rest as OOD. For each class, we randomly select 100 images as labeled data; meanwhile, 20,000 images are randomly selected from all 10 classes as the unlabeled dataset. The class-mismatch ratio is set to {0%, 25%, 50%, 75%, 100%}.

Implementation Details. We use the same network and training protocol as in Section 3.1. We first train a classification model on labeled data only for 100 epochs, without RPL and SEC, and then update pseudo-labels every 2 epochs. For both datasets, we set $\tau = 0.95$, $\gamma = 0.3$, $K = 4$. We use an exponential moving average of the model for final evaluation, as in Athiwaratkun et al. (2019).
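A common way to maintain such an exponential moving average is sketched below; the decay value is illustrative, and this simple version averages parameters only, not BatchNorm statistics.

```python
import copy
import torch

class EMAModel:
    # Keeps an exponential moving average of a model's parameters.
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.ema = copy.deepcopy(model).eval()
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for p_ema, p in zip(self.ema.parameters(), model.parameters()):
            p_ema.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```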
6.1 COMPARISON WITH TRADITIONAL SSL METHODS

In this subsection, we compare our method with four traditional SSL methods: Pseudo-Labeling (Lee et al., 2013), Π-Model (Laine & Aila, 2017), Mean Teacher (Tarvainen & Valpola, 2017), and VAT (Miyato et al., 2019). Figures 6(a) and 6(b) show the results. The traditional methods suffer from performance degradation as the mismatch ratio increases; they usually drop below the supervised baseline when the mismatch ratio exceeds 50% on CIFAR10 and SVHN. In contrast, our method achieves steady improvement under all class mismatch ratios. The reasons can be attributed as follows. First, our method is aware of the existence of OOD data: we do not treat OOD data like ID data, which can hurt performance. Second, we reuse OOD data by exploring their semantics, which Section 3.3 showed to be useful. Therefore, even when the class-mismatch ratio reaches 100%, the performance of the Υ-Model remains better than the supervised baseline.

6.2 COMPARISON WITH CLASS-MISMATCHED SSL METHODS

In this subsection, we compare our method with two existing class-mismatched SSL methods: UASD (Chen et al., 2020) and DS3L (Guo et al., 2020). For a fair comparison, we use Pseudo-Labeling as the base method of DS3L. From Figures 6(c) and 6(d), we can see that our method is superior to both methods in all settings. Notably, DS3L underperforms the supervised baseline when all the unlabeled data are drawn from OOD classes. This is because DS3L uses a down-weighting strategy to alleviate the negative effect of OOD data and does not change the form of the unsupervised loss, whereas we showed in Section 3.3 that labeling OOD data as ID classes damages performance anyhow. On the contrary, the Υ-Model uses the OOD data in the right way, simulating the process of training on them with their ground-truth labels. As a result, our method shows superiority especially under large class-mismatch ratios. We also notice that the performance curve of the Υ-Model has a U-shape (most obvious on CIFAR10). A possible reason is that RPL and SEC compete with each other: RPL tends to push samples toward high predictions on ID classes, while SEC tends to push samples toward high predictions on OOD classes. When the class-mismatch ratio is 0% (100%), RPL (SEC) dominates the other, and the dominant branch works without interference. However, when the class-mismatch ratio is 50%, they compete fiercely, causing many ID or OOD samples to be recognized incorrectly.

6.3 ABLATION STUDY

In this section, we validate the functionality of RPL and SEC. We conduct experiments on the CIFAR10 benchmark as in the analysis of Section 3.

Validation of the effectiveness of RPL and SEC. We conduct ablation studies under different class-mismatch ratios and report the averaged test accuracy and standard deviation over five runs; as usual, we vary the class-mismatch ratio. Table 1 displays the results. First, comparing the first and second lines of the table, RPL not only outperforms vanilla PL in the high class-mismatch-ratio scenario but also improves in the low class-mismatch-ratio scenario. This reveals that balanced pseudo-labels always help: once the model creates imbalanced pseudo-labels, it deteriorates unless there are sufficient measures to correct it. Second, comparing the second and third lines shows that RPL alone alleviates the performance degradation but cannot prevent it, in accordance with Conclusion 3. With SEC, the Υ-Model achieves better results than the supervised baseline when the class-mismatch ratio is high. Moreover, comparing the third and last lines, we see that performance improves when we cluster OOD data and explore their semantics instead of labeling them with a single unified class.

RPL helps filter out OOD data and solves the imbalance problem. Notably, the last two lines of Table 1 show that without RPL, SEC alone cannot achieve better performance than the supervised baseline. We show the reason here. Figure 7(a) plots the proportion of OOD data that are pseudo-labeled as ID classes. It reveals that without RPL, i.e., using vanilla PL, the number of incorrectly recognized OOD data keeps increasing as training proceeds, while with RPL this ratio rapidly drops to 0. This proves that RPL helps filter out OOD data by exploiting the imbalance property of OOD pseudo-labels. Further, we present the confusion matrix of the Υ-Model on the full test set (all 10 classes) of CIFAR10. Compared to vanilla PL in Figure 3(b), the Υ-Model does not suffer from the imbalance problem, and as a result its performance is not degraded.

Effect of the number of extra classes K. We vary the number of extra classes $K$. Figure 7(c) shows the results on CIFAR10 with a class-mismatch ratio of 100%. The gray dashed line is the supervised baseline; the red dashed line is the Oracle Labeling strategy of Section 3.3, which is the upper bound of the Υ-Model in this setup. Without SEC ($K = 0$), the Υ-Model underperforms the supervised baseline. With SEC ($K \geq 1$), the Υ-Model is always better than the baseline, and it reaches its best performance when $K$ equals the actual number of OOD classes. This demonstrates that by simulating the process of training on OOD data with their ground truths, SEC helps the classification model on ID data.
7 CONCLUSION

In this paper, we analyze Pseudo-Labeling in class-mismatched semi-supervised learning, where there are unlabeled OOD data from other classes. We show that Pseudo-Labeling suffers from performance degradation due to imbalanced pseudo-labels on OOD data. The correct way to use OOD data is to label them as classes different from the ID classes while also partitioning them according to their semantics. Based on this analysis, we propose the Υ-Model and empirically validate its effectiveness. We believe our findings are not only beneficial to PL methods, but also instructive for other methods, such as consistency regularization and holistic methods, on how to use OOD data effectively.

A ALGORITHM

Algorithm 1 Υ-Model algorithm
Input: Labeled dataset $D_l = \{(x_i^l, y_i^l)\}_{i=1}^{n}$ and unlabeled dataset $D_u = \{x_i^u\}_{i=1}^{m}$; classification model $f_\phi$ with parameters $\phi$; ID class number $K_{ID}$; extra class number $K$; total number of epochs $E$; pre-training epochs $E_{pt}$; interval $E_{pl}$ between pseudo-label updates; pseudo-labeled set $P$; confidence function $c$.
1: function REBALANCEDPSEUDOLABELING(D_u, f, τ)
2:   N ← min_{y ∈ Y_ID} |{x ∈ D_u | f(y|x) > τ}|
3:   τ_y ← Nth_biggest({f(y|x) | x ∈ D_u}), y = 1, 2, ..., K_ID
4:   P ← ⋃_{y ∈ Y_ID} {(x, y) | f(y|x) ≥ τ_y, x ∈ D_u}
5:   return P
6: function SEMANTICEXPLORATIONCLUSTERING(D_u, f, γ)
7:   S ← {x | c(x) < γ}
8:   M ← |S|
9:   P_ij ← f(K_ID + i | x_j) / Σ_{k=1}^{K} f(K_ID + k | x_j), i = 1, ..., K, j = 1, ..., M
10:  solve Equation 7 with the Sinkhorn-Knopp algorithm to obtain Q
11:  ŷ_j ← K_ID + argmax_i Q_ij, j = 1, ..., M
12:  C ← {(x_j, ŷ_j)}_{j=1}^{M}
13:  return C
14: for e = 1 to E do
15:   if e < E_pt then
16:     train f_φ with standard supervised learning on D_l   ▷ Pre-training phase
17:   else
18:     train f_φ with standard supervised learning on D_l ∪ P   ▷ PL training phase
19:   if e ≥ E_pt and e mod E_pl = 0 then
20:     P ← ∅
21:     P_τ ← REBALANCEDPSEUDOLABELING(D_u, f_φ, τ)   ▷ Perform RPL
22:     P_γ ← SEMANTICEXPLORATIONCLUSTERING(D_u, f_φ, γ)   ▷ Perform SEC
23:     P ← P_τ ∪ P_γ
24: return classification model f_φ

B EMBEDDING VISUALIZATION

We visualize the embeddings of the supervised baseline, vanilla PL, and the Υ-Model on CIFAR10's test set with all classes using t-SNE. Figure 8 shows the results. For the supervised baseline, OOD data are mixed with ID data, since the baseline never sees unlabeled OOD data. PL mixes OOD data with samples of a certain class (class 0), because their pseudo-labels are biased toward this class; moreover, the OOD classes cannot be clearly distinguished from one another. In contrast, the Υ-Model not only keeps ID data distinguishable but also forms meaningful clusters on OOD data.

C UNIVERSALITY OF ANALYSIS CONCLUSIONS

To illustrate the universality of the conclusions in Section 3, we experiment on five kinds of datasets. (We use (n/m) to denote n ID classes and m OOD classes.)

• CIFAR10 (6/4) and CIFAR10 (5/5): both created from CIFAR10 (Krizhevsky et al., 2009). CIFAR10 (6/4) takes the 6 animal classes as ID classes and the 4 vehicle classes as OOD classes. CIFAR10 (5/5) has 3 animal and 2 vehicle classes for both the ID and OOD classes. We select 400 labeled samples for each ID class and 20,000 unlabeled samples in total from the ID and OOD classes.

• SVHN (6/4): we select the digits "0"-"5" as ID classes and the rest as OOD. We select 100 labeled samples for each ID class and 20,000 unlabeled samples in total.

• CIFAR100 (50/50): created from CIFAR100 (Krizhevsky et al., 2009). The first 50 classes are taken as ID classes and the rest as OOD classes. We select 100 labeled samples for each ID class and 20,000 unlabeled samples in total.
• Tiny ImageNet (100/100): created from Tiny ImageNet, a subset of ImageNet (Deng et al., 2009) with images downscaled to 64 × 64, from 200 classes. The first 100 classes are taken as ID classes and the rest as OOD classes. We select 100 labeled samples for each ID class and 40,000 unlabeled samples in total.

Here we use C to denote CIFAR and TIN to denote Tiny ImageNet for short. For each dataset, we use the same experimental setup as in Section 3.

C.1 IMBALANCE OF PSEUDO-LABELS

To measure the extent of imbalance of pseudo-labels on OOD data, we compute two metrics:

• The KL divergence between the pseudo-label distribution $q$ and the uniform distribution $u$: $kl = KL(q \,\|\, u) = \sum_i q_i \log \frac{q_i}{u_i}$.

• The ratio of the "majority class" to the "minority class": $r = \frac{\max_i q_i}{\min_i q_i}$.

The results on these datasets are displayed in Table 2.

C.2 PSEUDO-LABELING STRATEGY FOR OOD DATA

We report the test accuracy of the four pseudo-labeling strategies of Section 3.3, with the class-mismatch ratio set to 100%. Table 3 shows the results. Note that Re-Assigned Labeling has many possibilities: with $n$ ID classes and $m$ OOD classes, there are $A_n^m$ possible assignments, so it is impossible to experiment with all of them. Instead, we randomly choose 10 possible assignments and report the maximum performance among them.

D COMPARISON ON MORE DATASETS

We compare our method with vanilla PL and the two class-mismatched methods of Section 6.2. We use the following hyperparameters:

• CIFAR10 (6/4): τ = 0.95, γ = 0.3, E_pt = 50, E_pl = 2, K = 4
• SVHN (6/4): τ = 0.95, γ = 0.3, E_pt = 50, E_pl = 2, K = 4
• CIFAR100 (50/50): τ = 0.95, γ = 0.18, E_pt = 50, E_pl = 2, K = 20
• Tiny ImageNet (100/100): τ = 0.9, γ = 0.15, E_pt = 50, E_pl = 2, K = 20

For CIFAR100 (50/50) and Tiny ImageNet (100/100), we use a weight factor $\lambda$ to trade off the loss on the labeled set $D_l$ and the pseudo-labeled set $P$, which ramps up according to

$$\lambda = \exp\left(-5 \times \left(1 - \min\left(\frac{iter}{40{,}000},\, 1\right)\right)^2\right),$$

where $iter$ is the number of training steps since epoch $E_{pt}$.
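For reference, this ramp-up schedule can be written as a small helper (the function name is illustrative):

```python
import numpy as np

def rampup_weight(step, rampup_steps=40000):
    # Weight factor for the pseudo-labeled loss, ramping up from exp(-5)
    # to 1 over `rampup_steps` training steps (steps counted from the end
    # of pre-training, i.e., from epoch E_pt).
    t = min(step / rampup_steps, 1.0)
    return float(np.exp(-5.0 * (1.0 - t) ** 2))
```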
1. What is the focus of the paper regarding semi-supervised learning?
2. What are the strengths of the proposed approach, particularly its originality?
3. What are the weaknesses of the paper, especially regarding the experiment section?
4. Do you have any concerns about the evaluation of the proposed method on other datasets?
5. How does the reviewer assess the significance of the paper's contribution to the field of semi-supervised learning?
Summary Of The Paper Review
Summary Of The Paper
The authors tackle pseudo-labeling in class-mismatched semi-supervised learning, where there are unlabeled out-of-distribution data from other classes. The authors propose a model consisting of (1) Re-balanced Pseudo-Labeling, which re-balances pseudo-labels on ID classes to filter out OOD data, and (2) Semantic Exploration Clustering, which uses balanced clustering on OOD data to create pseudo-labels on extra classes.

Review
Strength
This paper has an interesting observation on how OOD data influence pseudo-labels and the resulting model performance. The proposed method also seems to be original.
Weakness
I have several concerns about the experiments. First, stronger methods should be leveraged as the vanilla semi-supervised learning baselines. In particular, as the problem definition of this paper is related to pseudo-labels in semi-supervised learning, recent "pseudo-label based" semi-supervised learning methods such as FixMatch [1] and ReMixMatch [2] should be considered; VAT and Mean Teacher seem to be outdated. It would also be interesting to investigate combining the proposed Υ-Model with existing pseudo-labeling-based semi-supervised learning methods.
[1] Sohn et al., FixMatch: Simplifying semi-supervised learning with consistency and confidence. NeurIPS 2020.
[2] ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. ICLR 2020.
It would be better to evaluate the proposed method on other datasets such as CIFAR100, STL, or perhaps ImageNet. To comprehensively understand the behavior of the proposed method, it should be evaluated on additional datasets (at least on MNIST).
ICLR
Title On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning Abstract Semi-Supervised Learning (SSL) methods have shown superior performance when unlabeled data are drawn from the same distribution with labeled data. Among them, Pseudo-Labeling (PL) is a simple and widely used method that creates pseudo-labels for unlabeled data according to predictions of the training model itself. However, when there are unlabeled Out-Of-Distribution (OOD) data from other classes, these methods suffer from severe performance degradation and even get worse than merely training on labeled data. In this paper, we empirically analyze PL in class-mismatched SSL. We aim to answer the following questions: (1) How do OOD data influence PL? (2) What are the better pseudo-labels for OOD data? First, we show that the major problem of PL is imbalanced pseudolabels on OOD data. Second, we find that when labeled as their ground truths, OOD data are beneficial to classification performance on In-Distribution (ID) data. Based on the findings, we propose our model which consists of two components – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL re-balances pseudo-labels on ID classes to filter out OOD data while also addressing the imbalance problem. SEC uses balanced clustering on OOD data to create pseudo-labels on extra classes, simulating the process of training with their ground truths. Experiments show that our method achieves steady improvement over supervised baseline and state-of-the-art performance under all class mismatch ratios on different benchmarks. N/A 1 INTRODUCTION labeled data cat dog unlabeled data cat dog plane car ID data OOD data Figure 1: Realistic Semi-Supervised Learning may simultaneously contain unlabeled ID and OOD data. ID data come from the same classes as labeled data while OOD data come from classes that are not seen in labeled data. Deep Semi-Supervised Learning (SSL) methods are proposed to reduce dependency on massive labeled data by utilizing a number of cheap, accessible unlabeled data. Pseudo-Labeling (Lee et al., 2013) is a simple but effective and widely used method that creates pseudo-labels according to predictions of the training model itself. Then SSL can be transformed to standard supervised learning. Other representative SSL methods are consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019), holistic methods (Berthelot et al., 2019; Sohn et al., 2020) and generative methods (Kingma et al., 2014). The recent development of SSL shows that these methods have achieved competitive performance to supervised learning methods. However, all of these SSL methods achieve their good results based on an assumption that the unlabeled data are drawn from the same distribution as the labeled data. This assumption can be easily violated in real-world applications. One of the common cases is that some unlabeled data come from unseen classes. For example, as is illustrated in Figure 1, in image classification, we can collect a lot of unlabeled images from the internet but usually they cover broader category concepts than labeled data. Oliver et al. (2018) have shown that on such class-mismatched conditions, performance of traditional SSL methods is damaged. To deal with this problem, several methods have been proposed. 
These methods include filtering out OOD data (Yu et al., 2020; Chen et al., 2020), down weighting OOD data (Chen et al., 2020) and re-use OOD data by neural style transfer (Luo et al., 2021) or self-supervised learning (Huang et al., 2021). Although these methods achieve good results, why do OOD data damage performance and how will OOD data help remain unclear. Here, we focus on analyzing Pseudo-Labeling (PL) in class-mismatched SSL and give some answers to these two questions. In this paper, we empirically analyze PL in class-mismatched SSL setting. These experiments aim to answer the following questions: (1) How do OOD data influence PL? (2) What are the better pseudo-labels for OOD data? For question (1), we investigate pseudo-labels created by PL. The main finding is that pseudo-labels on OOD data tend to be imbalanced while on ID data, it remains balanced. We further show that PL’s performance is damaged due to such imbalance on OOD data. For question (2), several strategies for labeling OOD data are investigated. We conclude that it is beneficial when labeling OOD data as a class different from ID data, and the performance can be further improved when the pseudo-labels partition unlabeled OOD data into their semantic clusters. Based on the experimental analysis, we propose a two-branched model called Υ-Model, which processes unlabeled data according to their confidence score on ID classes. The first branch performs re-balanced pseudo-labeling on high-confidence data. It utilizes the property of imbalanced pseudolabels on OOD data, truncating the number of pseudo-labeled data for each class to their minimum. This procedure filters out many OOD data and also prevents the negative effect of imbalanced pseudo-labels. For the other branch, semantic exploration clustering is performed on low-confidence data. They are considered as OOD data and their semantics will be mined by clustering into different partitions on extra classes. The clustering result provides better pseudo-labels for these OOD data than vanilla PL. Experiments on different SSL benchmarks show that our model can achieve steady improvement in comparison to supervised baseline. We summarize our contributions as follows: • We analyze the Pseudo-Labeling model for ID and OOD data. The findings lead to two primary conclusions: (1) Imbalance of pseudo-labels on OOD data damages PL’s performance. (2) Best pseudo-labels for unlabeled OOD data are those different from ID classes and partitioning them into their semantic clusters. • We propose our two-branched Υ-Model. One branch re-balances pseudo-labels on ID classes and filter out OOD data. The other branch explores semantics of OOD data by clustering on extra classes. • Experiments on different SSL benchmarks empirically validate effectiveness of our model. 2 PRELIMINARY 2.1 CLASS-MISMATCHED SSL Similar to the SSL problem, the training dataset of the class-mismatched SSL problem contrains n ID labeled samples Dl = {(xli, yli)}ni=1 and m unlabeled samples Du = {xui}mi=1, (usually, m ≫ n,) yli ∈ YID = {1, . . . ,KID}, while different from SSL, the underlying ground truth yu of unlabeled data may be different from labeled data. i.e., yuj ∈ YID ∪ YOOD,YOOD = {KID + 1, . . . ,KID + KOOD}. The goal of class-mismatched SSL is to correctly classify ID samples into YID using labeled set with ID samples and unlabeled set possibly with OOD samples. 
2.2 PSEUDO-LABELING Pseudo-Labeling (PL) leverages the idea that we can use the model itself to obtain artificial labels for unlabeled data (Lee et al., 2013). PL first perform supervised learning on labeled data to get a pre-trained model f , which outputs the probability of belonging to each ID class. It then creates the pseudo-labels for each unlabeled sample: y′ = { argmaxy∈YID f(y|x) , c(x) > τ reject , otherwise , (1) c(x) = max y∈YID f(y|x), (2) where c(x) is the confidence score for x. All the pseudo-labeled unlabel data will be treated as labeled data for the next supervised learning generation. PL iteratively performs supervised learning and pseudo-label creation until stop condition. 3 ANALYSIS OF PSEUDO-LABELING IN CLASS-MISMATCHED SSL In class-mismatched SSL, vanilla PL can only create pseudo-labels on ID classes even for OOD data. We will analyze how these OOD data influence vanilla PL and what are the better pseudo-labels for them in this section. 3.1 SETUP We use CIFAR-10 (Krizhevsky et al., 2009) as our experimental dataset. The data set contains 10 categories – 6 animal classes and 4 vehicle classes. Following Guo et al. (2020), we perform a classification task on animal classes (denoted as class 0-5) and select 400 images per class to construct the labeled data set, i.e., 2,400 labeled examples. The other 4 vehicle classes are taken as OOD classes (denoted as classes 6-9). 20,000 images are randomly selected from all the 10 classes as the unlabeled data set. We vary the ratio of unlabeled images to modulate class distribution mismatch. For example, the extent is 50% means half of the unlabeled data comes from animal classes and the others come from vehicle classes. We use Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) as our backbone. We also adopt data augmentation techniques including random resized crop, random color distortion and random horizontal flip. We train our network for 400 epochs. For each epoch, we iterate over the unlabeled set and random sample labeled data, each unlabeled and labeled minibatch contains 128 samples. We adopt Adam as the optimization algorithm with the initial learning rate 3× 10−3. We report the averaged accuracy of the last 20 epochs, pretending there is no reliable (too small) validation set to perform early stop (Oliver et al., 2018). 3.2 IMBALANCE OF PSEUDO-LABELS ON OOD DATA In this section, we analyze the pre-trained model that creates the first set of pseudo-labels, and the final model trained by Pseudo-Labeling. Pretrained model. First, we draw the distribution of confidence score on OOD data and ID data. Figure 2(a) tells that, like what is concluded in OOD detection (Hendrycks & Gimpel, 2017), proportion of high-confidence data in ID data is larger than OOD data. However, in class-mismatched SSL, the unlabeled data are in much larger quantities. When the class mismatch ratio is large, there are quite a few OOD data with high confidence scores. We will show in the final model experiments that these high-confidence OOD data damages performance. Secondly, we study pseudo-labels on both ID data and OOD data. Figure 2(b) shows that pseudo-labels ID data is balanced. However, they are rather imbalanced on OOD data (Figure 2(c)). This is attributed to the different distribution they are drawn from. Samples with certain pattern bias to certain classes. ID data bias to ID classes uniformly because they are sampled by the same distribution. 
However, with little probability, OOD data will also bias to ID classes uniformly since they have little relevance to ID data. Final Pseudo-Labeling model. As an old saying goes, a good beginning is half done. However, such imbalance of the first set of pseudo-labels starts PL model badly when there is a large portion of OOD data, putting the model in danger of imbalanced learning. We run vanilla PL and show that the imbalance of pseudo-labels harms the performance. Figure 3(a) shows the performance of PL model with different OOD ratios. In accord with Oliver et al. (2018), PL model degrades as the portion of OOD data gets larger. Figure 3(b) displays the confusion matrix of the PL model on the whole test set containing both ID and OOD data. Since only 6 classes are known to us, the confusion matrix is a rectangle. We can see almost all the OOD samples (class 6-9) are classified as class 0, which means the imbalance effect on OOD data gets even worse as the PL training goes on. The possible reason is that, unlike Pseudo-labeling on ID data, supervision of labeled data can not help correct pseudo-labels on OOD data. Thus the imbalance continuously deteriorates. The imbalance on OOD data also influences classification performance on ID data. Samples of major classes (class 0) overwhelm the loss and gradient, leading to a degenerate model (Lin et al., 2017). We can see the PL model mistakenly classifies many of data with class 1-5 into class 0. 3.3 PSEUDO-LABELING STRATEGY FOR OOD DATA The previous section shows OOD data hurt performance of vanilla PL. Then here comes the question: Assuming that we already know which data are OOD, how do we use these OOD data? Is omitting them the best way? If not, what are the better pseudo-labels for them? To answer these questions, we investigate four strategies to create pseudo-labels for OOD data: • Baseline. This baseline omits all the OOD data and only trains on the labeled ID data. • Re-Assigned Labeling. This strategy assigns data of each OOD class to an ID class. It ensures that different OOD class is assigned to different ID class, keeping the semantics unchanged between OOD classes. For example, (ship, trunk, airline, automobile) can be assigned to (bird, cat, deer, dog). This strategy can be seen as training a classifier of “super-classes”. • Open-Set Labeling. This strategy is named after the related setting – Open-Set Recognition (Scheirer et al., 2013; Bendale & Boult, 2016). This strategy treats all OOD data as one unified class KID + 1. Thus this model outputs probability over KID + 1 classes. • Oracle Labeling. This strategy uses the ground truth of OOD data. Thus this model outputs probability over KID +KOOD classes. Note that Open-Set Labeling and Oracle Labeling can classify samples into more than KID classes. However, during evaluation, we only classify samples into KID ID classes. For these model, the predicted label ŷ of a test sample x is calculated as: ŷ(x) = argmax y∈YID f(y|x) (3) The overall comparison of the four strategies is illustrated in Figure 4. We also report test accuracy when class mismatch ratio is 100%. From Figure 4, we can get several important conclusions. (1) Re-Assigned Labeling underperforms baseline a little1. This indicates that assign samples with OOD classes to ID classes does not help the model distinguish between ID classes even if we somehow know which OOD data are semantically different. It also reveals that performing vanilla PL on OOD data may never help even if we do it perfectly. 
(2) Open-Set Labeling outperforms baseline, which indicates it improves the performance if we label the OOD data as a class other than ID classes. (3) We can see Oracle Labeling improves the performance and achieves the best result among the four strategies. It means that in addition to label OOD data as extra classes, if we can further assign OOD data with different semantics to different classes, the model will achieve better results. 3.4 SUMMARY OF SECTION In this section, we study the behavior of Pseudo-Labeling model in class-mismatched SSL. We summarize several important conclusions here: Conclusion 1: Classification model trained with labeled ID data creates imbalanced pseudo-labels on OOD data while on ID data, it remains balanced. Conclusion 2: The vanilla PL process makes the imbalance of pseudo-labels deteriorate, damaging the classification performance on ID data. Conclusion 3: Labeling OOD data as ID classes does not help and may even perform a little worse. Conclusion 4: It is beneficial to label OOD data as extra classes different from ID classes. If we can further label semantically different OOD data as different classes, the performance can be further improved. 4 METHOD Based on the findings in Section 3, we proposed Υ-Model (named after its shape) for classmismatched SSL. Υ-Model trains a classifier f that will output the posterior distribution over KID + K classes, i.e., f(y|x) ∈ RKID+K ,1⊤f(y|x) = 1. K is the number of extra classes, which can be known in advance (i.e., K = KOOD) or be set as a hyper-parameter. Similar to vanilla PL, we define confidence with the same form as Equation 2. However, this confidence is a little different from its original definition in Hendrycks & Gimpel (2017), for we only calculate the maximum probability of the KID classes instead of all. Therefore, we rename it to In-Distribution confidence (ID confidence). For evaluation, we predict labels using Equation 3. Υ-Model aims to solve the following questions: Problem 1: how to avoid imbalanced pseudo-labels in PL model? (Conclusion 1, 2) Problem 2: how to avoid labeling OOD data as ID? (Conclusion 3) Problem 3: how to create proper pseudo-labels for unlabeled OOD data? (Conclusion 4) 1Assign 4 OOD classes to 4 ID classes of 6 causes imbalance. But we test the accuracy on selected 4 classes and find they show a similar result. Υ-Model consists of two main branches – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL acts on high-confidence data to solve Problem 1, 2. SEC acts on low-confidence data to solve Problem 3. We describe the two branches in the following sections. The overview of Υ-Model is illustrated in Figure 5. 4.1 RE-BALANCED PSEUDO-LABELING As is illustrated in Section 3.3, the main problem of vanilla PL is that a large number of OOD data with high confidence scores have imbalanced pseudo-labels. One possible solution is re-weighting the unlabeled sample (Guo et al., 2020) or using other methods in the imbalance learning field. However, even if we solve the problem of imbalance learning, labeling OOD data as ID classes also may damage the performance (Conclusion 3). In this paper, we use a simple method – Re-balanced Pseudo Labeling – to simultaneously solve imbalance (Problem 1) and incorrect recognition (Problem 2). It produces a set P of pseudo-labeled samples in three steps: N = min y∈YID |{x ∈ Du | f(y | x) > τ}| , (4) τy = Nth biggest({f(y | x) | x ∈ Du}), y = 1, 2, . . . 
4.2 SEMANTIC EXPLORATION CLUSTERING

As demonstrated in Section 3.3, if we know that a set of samples is OOD, labeling them as one unified class K_ID + 1 improves performance, but the best option is to use their ground truths (Conclusion 4). However, their ground truths are inaccessible since the samples are unlabeled. We therefore resort to Deep Clustering methods (Caron et al., 2018; Asano et al., 2020) to mine their semantics and approximate the process of learning with ground truths. Here, we use the balanced clustering method of Asano et al. (2020); Caron et al. (2020) to create pseudo-labels for these OOD data. Assuming there are M samples recognized as OOD, we first compute their soft targets:

min_{Q ∈ U(K,M)} ⟨Q, −log P⟩,  U(K,M) := {Q ∈ ℝ₊^{K×M} | Q1 = (1/K)1, Q⊤1 = (1/M)1},   (7)

where P ∈ ℝ₊^{K×M} with P_ij = f̂(K_ID + i | x_j), and f̂ is the posterior distribution normalized over the extra classes, i.e., f̂(K_ID + i | x_j) = f(K_ID + i | x_j) / Σ_{k=1}^{K} f(K_ID + k | x_j). We use the Sinkhorn-Knopp algorithm (Cuturi, 2013) to optimize Q. Once we obtain Q, we harden the label by picking the class with the maximum predicted probability and mapping it to the extra K classes:

ŷ_j = K_ID + argmax_i Q_ij.   (8)

ŷ_j is used as the pseudo-label for x_j. We perform SEC on the set of data whose ID confidence is lower than a threshold γ, i.e., {x | c(x) < γ}.
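The balanced assignment in Equation 7 can be approximated with a few Sinkhorn-Knopp scaling iterations. Below is a minimal sketch; it is our own simplified version with an entropic-regularization weight `eps` and a fixed iteration count, both illustrative choices rather than the exact settings of the cited papers:

```python
import numpy as np

def sec_pseudo_labels(probs_extra, eps=0.1, n_iter=50):
    """Balanced clustering pseudo-labels for OOD-like samples (Eq. 7-8).

    probs_extra: (K, M) matrix with probs_extra[i, j] = f_hat(K_ID + i | x_j),
    i.e., the model's posterior renormalized over the K extra classes.
    Returns M hard labels in {0, ..., K-1} (to be offset by K_ID afterwards).
    """
    K, M = probs_extra.shape
    Q = np.power(probs_extra, 1.0 / eps)    # entropy-regularized transport kernel
    Q /= Q.sum()
    for _ in range(n_iter):                 # Sinkhorn-Knopp row/column scaling
        Q /= Q.sum(axis=1, keepdims=True)   # rows to equal mass ...
        Q /= K                              # ... each cluster holds 1/K in total
        Q /= Q.sum(axis=0, keepdims=True)   # columns to equal mass ...
        Q /= M                              # ... each sample carries 1/M in total
    return Q.argmax(axis=0)                 # Eq. 8: harden by per-sample argmax

rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(4), size=1000).T     # K = 4 extra classes, M = 1000
labels = sec_pseudo_labels(P_hat)
print(np.bincount(labels, minlength=4))            # roughly balanced clusters
```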
5 RELATED WORK

Class-Mismatched Semi-Supervised Learning. Deep Semi-Supervised Learning suffers from performance degradation when there are unseen classes in the unlabeled data (Oliver et al., 2018), and the larger the proportion of such out-of-distribution (OOD) data, the larger the performance drop. Several methods have been proposed to cope with this class-mismatch problem. Chen et al. (2020) formulate a sequence of ensemble models, aggregated accumulatively on the fly, for joint self-distillation and OOD filtering. Guo et al. (2020) re-weight the unlabeled data by meta-learning to decrease the negative effect of OOD data. Huang et al. (2020) recycle transferable OOD data by means of adversarial learning. Different from all these methods, we conduct a comprehensive study of Pseudo-Labeling (PL) and give useful guidance on how to do better in class-mismatched SSL.

Pseudo-Labeling. Pseudo-Labeling, also known as self-training, is a simple and effective method for Deep Semi-Supervised Learning (Lee et al., 2013; Shi et al., 2018; Arazo et al., 2020; Iscen et al., 2019). Despite its simplicity, it has been widely applied to diverse fields such as image classification (Xie et al., 2020), natural language processing (He et al., 2020) and object detection (Rosenberg et al., 2005). The use of hard labels makes Pseudo-Labeling closely related to entropy minimization (Grandvalet & Bengio, 2004).

Deep Clustering. Deep clustering methods improve on traditional clustering methods by leveraging the representation power of DNNs. A common approach is to transform the data into low-dimensional feature vectors and then apply traditional clustering methods (Yang et al., 2017; Caron et al., 2018). In Self-Supervised Learning, clustering methods are used to learn meaningful representations for downstream tasks (Caron et al., 2018; Asano et al., 2020; Caron et al., 2020). Modern Deep Clustering can learn semantically meaningful clusters and achieves competitive results against supervised learning (Gansbeke et al., 2020).

6 EXPERIMENTS

To validate the effectiveness of our Υ-Model, we conduct experiments on different benchmarks.

Dataset. We test our method on two datasets, as in Oliver et al. (2018). (1) CIFAR10: we use the same configuration as in Section 3.1. (2) SVHN: the dataset contains 10 categories, the digits “0”-“9”. We select the digits “0”-“5” as ID classes and the rest as OOD. For each ID class, we randomly select 100 images as labeled data; meanwhile, 20,000 images are randomly selected from all 10 classes as the unlabeled dataset. The class-mismatch ratio is set to {0%, 25%, 50%, 75%, 100%}.

Implementation Details. We use the same network and training protocol as in Section 3.1. We first train a classification model on labeled data only for 100 epochs, without RPL and SEC, and then update pseudo-labels every 2 epochs. For both datasets, we set τ = 0.95, γ = 0.3, K = 4. We use an exponential moving average (EMA) model for final evaluation, as in Athiwaratkun et al. (2019).
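As an implementation note, an EMA evaluation model simply keeps a running average of the training weights. A minimal PyTorch-style sketch (our illustration; the decay of 0.999 is a common choice and not necessarily the paper's value):

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    # theta_ema <- decay * theta_ema + (1 - decay) * theta
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

model = torch.nn.Linear(10, 6 + 4)      # stand-in for WRN-28-2 with K_ID + K outputs
ema_model = copy.deepcopy(model)        # evaluation copy, updated only via EMA
for p in ema_model.parameters():
    p.requires_grad_(False)

ema_update(ema_model, model)            # called once per optimization step
```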
6.1 COMPARISON WITH TRADITIONAL SSL METHODS

In this subsection, we compare our method with four traditional SSL methods – Pseudo-Labeling (Lee et al., 2013), Π-Model (Laine & Aila, 2017), Mean Teacher (Tarvainen & Valpola, 2017) and VAT (Miyato et al., 2019). Figures 6(a) and 6(b) show the results. The traditional methods suffer from performance degradation as the mismatch ratio increases, usually falling below the supervised baseline once the mismatch ratio exceeds 50% on CIFAR10 and SVHN. In contrast, our method achieves steady improvement under all class-mismatch ratios. The reasons can be attributed as follows. First, our method is aware of the existence of OOD data: we do not treat OOD data like ID data, which can hurt performance. Second, we reuse OOD data by exploring their semantics, which Section 3.3 shows to be useful. Therefore, even when the class-mismatch ratio reaches 100%, the performance of Υ-Model remains better than the supervised baseline.

6.2 COMPARISON WITH CLASS-MISMATCHED SSL METHODS

In this subsection, we compare our method with two existing class-mismatched SSL methods – UASD (Chen et al., 2020) and DS3L (Guo et al., 2020). For a fair comparison, we use Pseudo-Labeling as the base method of DS3L. From Figures 6(c) and 6(d), we can see that our method is superior to these two methods in all settings. Notably, DS3L underperforms the supervised baseline when all the unlabeled data are drawn from OOD classes. This is because DS3L uses a down-weighting strategy to alleviate the negative effect of OOD data and does not change the form of the unsupervised loss, whereas we have shown in Section 3.3 that labeling OOD data as ID classes damages performance regardless. In contrast, Υ-Model uses the OOD data in the right way – simulating the process of training on them with their ground-truth labels. As a result, our method shows superiority especially under large class-mismatch ratios. We also notice that the performance curve of Υ-Model appears U-shaped (most obvious on CIFAR10). A possible reason is that RPL and SEC compete with each other: RPL tends to make samples receive high predictions on ID classes, while SEC tends to make samples receive high predictions on OOD classes. When the class-mismatch ratio is 0% (100%), RPL (SEC) dominates the other, so one works without any disturbance from the other. However, when the class-mismatch ratio is 50%, they compete fiercely with each other, causing many incorrectly recognized ID or OOD samples.

6.3 ABLATION STUDY

In this section, we validate the functionality of RPL and SEC. We conduct experiments on the CIFAR10 benchmark, as in the analysis of Section 3.

Validation of the effectiveness of RPL and SEC. We conduct ablation studies under different class-mismatch ratios and report the averaged test accuracy and standard deviation over five runs. Table 1 displays the results. Firstly, comparing the first and second lines of the table, RPL not only outperforms vanilla PL in the high class-mismatch-ratio scenario but also improves in the low class-mismatch-ratio scenario. This reveals that balanced pseudo-labels always help: once the model creates imbalanced pseudo-labels, it deteriorates if there are no measures to correct them. Secondly, comparing the second and third lines shows that RPL alone alleviates the performance degradation but cannot prevent it, in accord with Conclusion 3. With SEC, Υ-Model achieves better results than the supervised baseline even when the class-mismatch ratio is high. Besides, comparing the third and last lines, we see that clustering the OOD data and exploring their semantics, instead of labeling them with one unified class, further improves performance.

RPL helps filter out OOD data and solves the imbalance problem. Notably, the last two lines of Table 1 show that without RPL, SEC alone cannot achieve better performance than the supervised baseline. We show the reason here. Figure 7(a) plots the proportion of OOD data that are pseudo-labeled as ID classes. It reveals that without RPL, i.e., using vanilla PL, the number of incorrectly recognized OOD data keeps increasing as training proceeds, while with RPL this ratio rapidly drops to 0. This proves that RPL helps filter out OOD data by exploiting the imbalance property of OOD data. Further, we present the confusion matrix of Υ-Model on the full test set (all 10 classes) of CIFAR10. Compared to vanilla PL in Figure 3(b), Υ-Model does not suffer from the imbalance problem, and as a result its performance is not degraded.

Effect of the extra class number K. We vary the number of extra classes K. Figure 7(c) shows the result on CIFAR10 with a class-mismatch ratio of 100%. The gray dashed line is the supervised baseline; the red dashed line is the Oracle Labeling strategy from Section 3.3, which is the upper bound of Υ-Model in this setup. Without SEC (K = 0), Υ-Model underperforms the supervised baseline. With SEC (K ≥ 1), Υ-Model is always better than the baseline, and it reaches its best performance when K equals the actual number of OOD classes. This demonstrates that, by simulating the process of training on OOD data with ground truths, SEC helps the classification model on ID data.
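The quantity plotted in Figure 7(a) is straightforward to track during training. A hedged sketch of such a diagnostic (our own code; it assumes a boolean OOD mask over the unlabeled set, which is available in these controlled benchmarks but not in deployment):

```python
import numpy as np

def ood_labeled_as_id_ratio(pseudo_labeled_ids, is_ood):
    """Fraction of OOD unlabeled samples currently pseudo-labeled as an ID class.

    pseudo_labeled_ids: indices into the unlabeled set that received an
        ID-class pseudo-label this round (the set P restricted to ID classes).
    is_ood: boolean array over the unlabeled set, True for OOD samples.
    """
    if is_ood.sum() == 0:
        return 0.0
    hits = is_ood[np.asarray(pseudo_labeled_ids, dtype=int)]
    return hits.sum() / is_ood.sum()

is_ood = np.array([False, False, True, True, True])
print(ood_labeled_as_id_ratio([0, 2], is_ood))   # 1 of 3 OOD samples -> 0.333...
```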
7 CONCLUSION

In this paper, we analyze Pseudo-Labeling in class-mismatched semi-supervised learning, where the unlabeled set contains OOD data from other classes. We show that Pseudo-Labeling suffers from performance degradation due to imbalanced pseudo-labels on OOD data. The correct way to use OOD data is to label them as classes different from the ID classes, while also partitioning them according to their semantics. Based on this analysis, we propose Υ-Model and empirically validate its effectiveness. We believe our findings are not only beneficial to PL methods, but also instructive for other methods, such as consistency regularization and holistic methods, on how to use OOD data effectively.

A ALGORITHM

Algorithm 1 Υ-Model algorithm
Input: Labeled dataset D_l = {(x_i^l, y_i^l)}_{i=1}^{n} and unlabeled dataset D_u = {x_i^u}_{i=1}^{m}; classification model f_ϕ parameterized by ϕ, ID class number K_ID, extra class number K, total epoch number E, pretraining epochs E_pt, interval to update pseudo-labels E_pl, pseudo-labeled set P, confidence calculation function c.

1: function REBALANCEDPSEUDOLABELING(D_u, f, τ)
2:   N ← min_{y ∈ Y_ID} |{x ∈ D_u | f(y|x) > τ}|
3:   τ_y ← Nth-biggest({f(y|x) | x ∈ D_u}), y = 1, 2, . . . , K_ID
4:   P ← ⋃_{y ∈ Y_ID} {(x, y) | f(y|x) ≥ τ_y, x ∈ D_u}
5:   return P
6: function SEMANTICEXPLORATIONCLUSTERING(D_u, f, γ)
7:   S ← {x | c(x) < γ}
8:   M ← |S|
9:   P_ij ← f(K_ID + i | x_j) / Σ_{k=1}^{K} f(K_ID + k | x_j),  i = 1, . . . , K,  j = 1, . . . , M
10:  solve Equation 7 with the Sinkhorn-Knopp algorithm to get Q
11:  ŷ_j ← K_ID + argmax_i Q_ij,  j = 1, . . . , M
12:  C ← {(x_j, ŷ_j)}_{j=1}^{M}
13:  return C
14: for e = 1 to E do
15:   if e < E_pt then
16:     train f_ϕ with standard supervised learning on D_l  ▷ Pre-training phase
17:   else
18:     train f_ϕ with standard supervised learning on D_l ∪ P  ▷ PL training phase
19:   if e ≥ E_pt and e % E_pl = 0 then
20:     P ← ∅
21:     P_τ ← REBALANCEDPSEUDOLABELING(D_u, f_ϕ, τ)  ▷ Perform RPL
22:     P_γ ← SEMANTICEXPLORATIONCLUSTERING(D_u, f_ϕ, γ)  ▷ Perform SEC
23:     P ← P_τ ∪ P_γ
24: return classification model f_ϕ
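Read as code, the outer loop of Algorithm 1 amounts to the following control-flow sketch. Everything here is a placeholder scaffold of our own: `train_one_epoch`, `rpl` and `sec` stand in for the supervised-training, RPL and SEC routines above and are not real library calls:

```python
def upsilon_model_loop(f, D_l, D_u, train_one_epoch, rpl, sec,
                       E=400, E_pt=50, E_pl=2, tau=0.95, gamma=0.3):
    """Control-flow sketch of Algorithm 1; the three callables are placeholders."""
    P = set()                                    # pseudo-labeled set
    for e in range(1, E + 1):
        if e < E_pt:
            train_one_epoch(f, D_l)              # pre-training on labeled data only
        else:
            train_one_epoch(f, D_l | P)          # PL phase: labeled + pseudo-labeled
        if e >= E_pt and e % E_pl == 0:          # refresh pseudo-labels periodically
            P_tau = rpl(D_u, f, tau)             # RPL on high-confidence data
            P_gamma = sec(D_u, f, gamma)         # SEC on low-confidence data
            P = P_tau | P_gamma
    return f
```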
B EMBEDDING VISUALIZATION

We visualize the embeddings of the supervised baseline, vanilla PL and Υ-Model on CIFAR10's test set over all classes using t-SNE. Figure 8 shows the result. For the supervised baseline, OOD data are mixed with ID data, since the baseline never sees the unlabeled OOD data. PL mixes OOD data with samples of a particular class (class 0), because their pseudo-labels are biased toward this class; moreover, the OOD classes cannot be clearly distinguished from one another. In contrast, Υ-Model not only makes the ID data distinguishable but also forms meaningful clusters on the OOD data.

C UNIVERSALITY OF ANALYSIS CONCLUSIONS

To illustrate the universality of the conclusions in Section 3, we experiment on five kinds of datasets in total. (We use (n/m) to denote n ID classes and m OOD classes.)

• CIFAR10 (6/4) and CIFAR10 (5/5): both created from CIFAR10 (Krizhevsky et al., 2009). CIFAR10 (6/4) takes the 6 animal classes as ID classes and the 4 vehicle classes as OOD classes. CIFAR10 (5/5) uses 3 animal and 2 vehicle classes for each of the ID and OOD splits. We select 400 labeled samples for each ID class and 20,000 unlabeled samples in total from the ID and OOD classes.
• SVHN (6/4): we select the digits “0”-“5” as ID classes and the rest as OOD. We select 100 labeled samples for each ID class and 20,000 unlabeled samples in total.
• CIFAR100 (50/50): created from CIFAR100 (Krizhevsky et al., 2009). The first 50 classes are taken as ID classes and the rest as OOD classes. We select 100 labeled samples for each ID class and 20,000 unlabeled samples in total.
• Tiny ImageNet (100/100): created from Tiny ImageNet, a subset of ImageNet (Deng et al., 2009) with images downscaled to 64 × 64, covering 200 classes. The first 100 classes are taken as ID classes and the rest as OOD classes. We select 100 labeled samples for each ID class and 40,000 unlabeled samples in total.

For short, we use C to denote CIFAR and TIN to denote Tiny ImageNet. For each dataset, we use the same experimental setup as in Section 3.

C.1 IMBALANCE OF PSEUDO-LABELS

To demonstrate the imbalance of pseudo-labels on OOD data, we compute two metrics measuring the extent of imbalance:

• The KL divergence between the pseudo-label distribution q and the uniform distribution u: kl = KL(q‖u) = Σ_i q_i log(q_i / u_i).
• The ratio of the “majority class” to the “minority class”: r = max_i q_i / min_i q_i.

The results on these datasets are displayed in Table 2.

C.2 PSEUDO-LABELING STRATEGY FOR OOD DATA

We report the test accuracy of the four pseudo-labeling strategies from Section 3.3, with the class-mismatch ratio set to 100%. Table 3 shows the results. Note that Re-Assigned Labeling has many possibilities: with n ID classes and m OOD classes, there are A_n^m = n!/(n − m)! possible assignments, so experimenting on all of them is infeasible. Instead, we randomly choose 10 possible assignments and report the maximum performance among them.

D COMPARISON ON MORE DATASETS

We compare our method with vanilla PL and the two class-mismatched methods from Section 6.2. We use the following hyperparameters:

• CIFAR10 (6/4): τ = 0.95, γ = 0.3, E_pt = 50, E_pl = 2, K = 4
• SVHN (6/4): τ = 0.95, γ = 0.3, E_pt = 50, E_pl = 2, K = 4
• CIFAR100 (50/50): τ = 0.95, γ = 0.18, E_pt = 50, E_pl = 2, K = 20
• Tiny ImageNet (100/100): τ = 0.9, γ = 0.15, E_pt = 50, E_pl = 2, K = 20

For CIFAR100 (50/50) and Tiny ImageNet (100/100), we use a weight factor λ to trade off the loss on the labeled set D_l against the loss on the pseudo-labeled set P. It ramps up according to λ = exp(−5 × (1 − min(iter/40,000, 1))²), where iter is the number of training steps counted from E_pt.
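For completeness, the two imbalance metrics of Appendix C.1 and the ramp-up weight of Appendix D translate directly into code (a small sketch of the stated formulas; the example inputs are invented):

```python
import numpy as np

def imbalance_metrics(pseudo_labels, num_classes):
    """KL(q || uniform) and majority/minority ratio of a pseudo-label histogram."""
    q = np.bincount(pseudo_labels, minlength=num_classes) / len(pseudo_labels)
    u = np.full(num_classes, 1.0 / num_classes)
    nz = q > 0                                              # 0 log 0 := 0
    kl = float(np.sum(q[nz] * np.log(q[nz] / u[nz])))
    ratio = q.max() / q.min() if q.min() > 0 else np.inf
    return kl, ratio

def ramp_up_weight(it, ramp_length=40_000):
    """lambda = exp(-5 * (1 - min(iter / 40000, 1))^2), iter counted from E_pt."""
    return float(np.exp(-5.0 * (1.0 - min(it / ramp_length, 1.0)) ** 2))

print(imbalance_metrics(np.array([0, 0, 0, 1, 2]), num_classes=6))
print(ramp_up_weight(0), ramp_up_weight(40_000))            # ~0.0067, 1.0
```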
1. What is the focus of the paper regarding semi-supervised learning?
2. What are the strengths of the proposed approach, particularly in addressing class mismatch SSL?
3. What are the weaknesses of the paper, especially regarding the choice of dataset and comparison with state-of-the-art methods?
4. How does the reviewer assess the novelty and effectiveness of the proposed Y-model?
5. Are there any questions regarding the feasibility and scalability of the method, especially when applied to large-scale datasets?
Summary Of The Paper Review
Summary Of The Paper
This paper studies the semi-supervised learning task when there are unlabeled out-of-distribution data from other classes. Several interesting issues of class-mismatch SSL are studied, including the reasons for the performance degradation of PL on OOD data and how to better pseudo-label OOD data to provide a more balanced semantic distribution. To address the above-mentioned problems, Y-model, consisting of two components – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC) – is proposed. The authors conducted several experiments, such as on CIFAR-10 and SVHN, to show the effectiveness of the proposed method.

Review
Pros:
(1) OOD SSL is a very interesting topic, and it is also a realistic setting in real-world applications.
(2) The insight that pseudo-labeled data are often class-imbalanced is also interesting and can guide future research in this direction.
(3) The authors conducted comprehensive experiments and ablation studies on some small-scale benchmarks.
(4) This paper is well-written and easy to follow.

Cons:
(1) CIFAR-10 and SVHN are much easier than CIFAR-100, TinyImageNet or ImageNet. To show the effectiveness of this method on large-scale datasets, it may be necessary to conduct some experiments on larger benchmarks, especially when the performance improvements on small-scale benchmarks are relatively marginal (about 1.7~2.8% on CIFAR-10 as in Table 1 and ~3% on SVHN as shown in Figure 6).
(2) The current state-of-the-art method in semi-supervised learning is FixMatch [1]; however, the authors only compared against worse-performing SSL methods like Mean Teacher. Therefore, can we still obtain similar observations and problems when using a better SSL framework like FixMatch? Will the method still perform better when it is integrated with FixMatch?
(3) OOD SSL is in fact investigated by a concurrent work [2], which was released on arXiv about 10 months ago and was in submission to ICLR 2022 as well. The rating of this paper will not be reduced for not comparing with [2]; however, I am curious about the key differences between this paper and [2]. If the authors can provide some comparisons and insights, I would be very grateful.
(4) The idea of simply truncating the number of pseudo-labeled data to the minimum is quite brute-force. Are there any ablation studies on other popular long-tailed recognition methods such as re-weighting, re-sampling, margin losses, or even multi-expert frameworks?
(5) Will the code of this method be made publicly available?
(6) Using an expensive and time-consuming clustering method on a large-scale benchmark like ImageNet may pose a big challenge to computational efficiency.

[1] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. FixMatch: Simplifying semi-supervised learning with consistency and confidence. In NeurIPS, 2020.
[2] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-World Semi-Supervised Learning. arXiv preprint arXiv:2102.03526, 2021.
ICLR
Title On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning

Abstract Semi-Supervised Learning (SSL) methods have shown superior performance when unlabeled data are drawn from the same distribution as labeled data. Among them, Pseudo-Labeling (PL) is a simple and widely used method that creates pseudo-labels for unlabeled data according to the predictions of the training model itself. However, when there are unlabeled Out-Of-Distribution (OOD) data from other classes, these methods suffer from severe performance degradation and can even fall below merely training on labeled data. In this paper, we empirically analyze PL in class-mismatched SSL. We aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? First, we show that the major problem of PL is imbalanced pseudo-labels on OOD data. Second, we find that, when labeled as their ground truths, OOD data are beneficial to classification performance on In-Distribution (ID) data. Based on these findings, we propose our model, which consists of two components – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL re-balances pseudo-labels on ID classes to filter out OOD data while also addressing the imbalance problem. SEC uses balanced clustering on OOD data to create pseudo-labels on extra classes, simulating the process of training with their ground truths. Experiments show that our method achieves steady improvement over the supervised baseline and state-of-the-art performance under all class-mismatch ratios on different benchmarks.

1 INTRODUCTION

Figure 1: Realistic Semi-Supervised Learning may simultaneously contain unlabeled ID and OOD data. ID data come from the same classes as labeled data, while OOD data come from classes that are not seen in labeled data.

Deep Semi-Supervised Learning (SSL) methods are proposed to reduce the dependency on massive labeled data by utilizing a large number of cheap, accessible unlabeled data. Pseudo-Labeling (Lee et al., 2013) is a simple but effective and widely used method that creates pseudo-labels according to the predictions of the training model itself; SSL can then be transformed into standard supervised learning. Other representative SSL methods are consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019), holistic methods (Berthelot et al., 2019; Sohn et al., 2020) and generative methods (Kingma et al., 2014). The recent development of SSL shows that these methods have achieved competitive performance with supervised learning methods. However, all of these SSL methods achieve their good results under the assumption that the unlabeled data are drawn from the same distribution as the labeled data. This assumption can easily be violated in real-world applications. One common case is that some unlabeled data come from unseen classes. For example, as illustrated in Figure 1, in image classification we can collect a lot of unlabeled images from the internet, but they usually cover broader category concepts than the labeled data. Oliver et al. (2018) have shown that under such class-mismatched conditions, the performance of traditional SSL methods is damaged. Several methods have been proposed to deal with this problem.
These methods include filtering out OOD data (Yu et al., 2020; Chen et al., 2020), down-weighting OOD data (Chen et al., 2020), and re-using OOD data via neural style transfer (Luo et al., 2021) or self-supervised learning (Huang et al., 2021). Although these methods achieve good results, why OOD data damage performance, and how OOD data can help, remain unclear. Here, we focus on analyzing Pseudo-Labeling (PL) in class-mismatched SSL and give some answers to these two questions.

In this paper, we empirically analyze PL in the class-mismatched SSL setting. These experiments aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? For question (1), we investigate the pseudo-labels created by PL. The main finding is that pseudo-labels on OOD data tend to be imbalanced, while on ID data they remain balanced. We further show that PL's performance is damaged by this imbalance on OOD data. For question (2), several strategies for labeling OOD data are investigated. We conclude that it is beneficial to label OOD data as classes different from the ID data, and that performance can be further improved when the pseudo-labels partition unlabeled OOD data into their semantic clusters.

Based on this experimental analysis, we propose a two-branched model called Υ-Model, which processes unlabeled data according to their confidence scores on ID classes. The first branch performs re-balanced pseudo-labeling on high-confidence data. It exploits the imbalance property of pseudo-labels on OOD data, truncating the number of pseudo-labeled data for each class to their minimum. This procedure filters out many OOD data and also prevents the negative effect of imbalanced pseudo-labels. The other branch performs semantic exploration clustering on low-confidence data. These samples are considered OOD, and their semantics are mined by clustering them into different partitions over extra classes. The clustering result provides better pseudo-labels for these OOD data than vanilla PL. Experiments on different SSL benchmarks show that our model achieves steady improvement over the supervised baseline. We summarize our contributions as follows:

• We analyze the Pseudo-Labeling model on ID and OOD data. The findings lead to two primary conclusions: (1) the imbalance of pseudo-labels on OOD data damages PL's performance; (2) the best pseudo-labels for unlabeled OOD data are those that differ from the ID classes and partition the OOD data into their semantic clusters.
• We propose our two-branched Υ-Model. One branch re-balances pseudo-labels on ID classes and filters out OOD data; the other explores the semantics of OOD data by clustering over extra classes.
• Experiments on different SSL benchmarks empirically validate the effectiveness of our model.

2 PRELIMINARY

2.1 CLASS-MISMATCHED SSL

As in the standard SSL problem, the training data of the class-mismatched SSL problem contain n labeled ID samples D_l = {(x_i^l, y_i^l)}_{i=1}^{n} and m unlabeled samples D_u = {x_i^u}_{i=1}^{m} (usually m ≫ n), with y_i^l ∈ Y_ID = {1, . . . , K_ID}. Different from standard SSL, the underlying ground truth y^u of an unlabeled sample may lie outside the labeled classes, i.e., y_j^u ∈ Y_ID ∪ Y_OOD, where Y_OOD = {K_ID + 1, . . . , K_ID + K_OOD}. The goal of class-mismatched SSL is to correctly classify ID samples into Y_ID using the labeled set of ID samples and an unlabeled set that possibly contains OOD samples.
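To make the setting concrete, the following sketch builds a class-mismatched unlabeled set at a given mismatch ratio (our own toy construction over integer labels; the real experiments use CIFAR-10 images as described later in Section 3.1):

```python
import numpy as np

def make_mismatched_unlabeled(labels, id_classes, ood_classes,
                              n_unlabeled=20000, mismatch=0.5, seed=0):
    """Sample unlabeled indices with `mismatch` fraction drawn from OOD classes."""
    rng = np.random.default_rng(seed)
    id_pool = np.flatnonzero(np.isin(labels, id_classes))
    ood_pool = np.flatnonzero(np.isin(labels, ood_classes))
    n_ood = int(round(n_unlabeled * mismatch))
    idx = np.concatenate([
        rng.choice(id_pool, n_unlabeled - n_ood, replace=False),
        rng.choice(ood_pool, n_ood, replace=False),
    ])
    rng.shuffle(idx)
    return idx

labels = np.repeat(np.arange(10), 5000)             # toy CIFAR-10-like labels
idx = make_mismatched_unlabeled(labels, id_classes=range(6),
                                ood_classes=range(6, 10), mismatch=0.5)
print(len(idx))                                      # 20000, half from classes 6-9
```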
2.2 PSEUDO-LABELING

Pseudo-Labeling (PL) leverages the idea that the model itself can be used to obtain artificial labels for unlabeled data (Lee et al., 2013). PL first performs supervised learning on labeled data to get a pre-trained model f, which outputs the probability of a sample belonging to each ID class. It then creates a pseudo-label for each unlabeled sample:

y′ = argmax_{y ∈ Y_ID} f(y|x)  if c(x) > τ;  reject otherwise,   (1)
c(x) = max_{y ∈ Y_ID} f(y|x),   (2)

where c(x) is the confidence score of x. All pseudo-labeled unlabeled data are treated as labeled data for the next generation of supervised learning. PL iteratively alternates between supervised learning and pseudo-label creation until a stopping condition is met.

3 ANALYSIS OF PSEUDO-LABELING IN CLASS-MISMATCHED SSL

In class-mismatched SSL, vanilla PL can only create pseudo-labels on ID classes, even for OOD data. In this section, we analyze how these OOD data influence vanilla PL and what better pseudo-labels for them would be.

3.1 SETUP

We use CIFAR-10 (Krizhevsky et al., 2009) as our experimental dataset. The dataset contains 10 categories – 6 animal classes and 4 vehicle classes. Following Guo et al. (2020), we perform a classification task on the animal classes (denoted as classes 0-5) and select 400 images per class to construct the labeled dataset, i.e., 2,400 labeled examples. The other 4 vehicle classes are taken as OOD classes (denoted as classes 6-9). 20,000 images are randomly selected from all 10 classes as the unlabeled dataset. We vary the ratio of unlabeled images to modulate the class distribution mismatch; for example, a ratio of 50% means that half of the unlabeled data come from the animal classes and the rest from the vehicle classes. We use Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) as our backbone. We also adopt data augmentation techniques including random resized crop, random color distortion and random horizontal flip. We train the network for 400 epochs. For each epoch, we iterate over the unlabeled set and randomly sample labeled data; each unlabeled and labeled minibatch contains 128 samples. We adopt Adam as the optimizer with an initial learning rate of 3 × 10⁻³. We report the averaged accuracy of the last 20 epochs, pretending there is no reliable (i.e., sufficiently large) validation set to perform early stopping (Oliver et al., 2018).

3.2 IMBALANCE OF PSEUDO-LABELS ON OOD DATA

In this section, we analyze the pre-trained model that creates the first set of pseudo-labels, and the final model trained by Pseudo-Labeling.

Pretrained model. First, we plot the distribution of confidence scores on OOD data and ID data. Figure 2(a) shows that, consistent with findings in OOD detection (Hendrycks & Gimpel, 2017), the proportion of high-confidence samples is larger among ID data than among OOD data. However, in class-mismatched SSL the unlabeled data are in much larger quantities; when the class mismatch ratio is large, there are quite a few OOD data with high confidence scores. We show in the final-model experiments that these high-confidence OOD data damage performance. Secondly, we study the pseudo-labels on both ID and OOD data. Figure 2(b) shows that pseudo-labels on ID data are balanced, whereas they are rather imbalanced on OOD data (Figure 2(c)). This is attributed to the different distributions they are drawn from: samples with certain patterns are biased toward certain classes. ID data bias toward the ID classes uniformly because they are sampled from the same distribution.
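Equations 1 and 2 above translate directly into a few lines. A minimal sketch (our illustration; `probs` stands for the softmax outputs of the pre-trained model):

```python
import numpy as np

def pseudo_label(probs, tau=0.95):
    """Vanilla PL (Eq. 1-2): label by argmax, reject low-confidence samples.

    probs: (num_unlabeled, K_ID) softmax outputs over the ID classes.
    Returns (labels, mask) where mask[u] is True iff sample u is kept.
    """
    conf = probs.max(axis=1)          # Eq. 2: c(x) = max_y f(y | x)
    labels = probs.argmax(axis=1)     # Eq. 1: y' = argmax_y f(y | x)
    return labels, conf > tau         # rejected samples have mask False

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(6) * 0.2, size=8)
labels, keep = pseudo_label(probs)
print(labels[keep])                   # pseudo-labels of accepted samples only
```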
1. What is the main contribution of the paper in semi-supervised learning?
2. What are the strengths of the proposed method, particularly in addressing class mismatch issues?
3. What are the weaknesses of the paper, especially regarding the motivation and technical novelty of the proposed approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper focuses on the problem of semi-supervised learning (SSL) with class mismatch. The authors first empirically analyze Pseudo-Labeling (PL) in class-mismatched SSL and propose a new method that consists of two components – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). Experiments show that the proposed method achieves steady improvement over the supervised baseline and state-of-the-art performance under all class-mismatch ratios on different benchmarks.

Review
Strengths:
(1) This paper provides an empirical analysis of the Pseudo-Labeling model on ID and OOD data.
(2) This paper proposes a novel two-branched Υ-Model to solve the class-mismatch issue in SSL.
(3) Experiments on different SSL benchmarks empirically validate the effectiveness of the proposed method.

Weaknesses:
(1) The motivation for imbalanced pseudo-labels is not clear.
(2) The technical novelty and depth of the proposed approach are limited.
(3) The experiments are not extensive.
ICLR
Title On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning Abstract Semi-Supervised Learning (SSL) methods have shown superior performance when unlabeled data are drawn from the same distribution with labeled data. Among them, Pseudo-Labeling (PL) is a simple and widely used method that creates pseudo-labels for unlabeled data according to predictions of the training model itself. However, when there are unlabeled Out-Of-Distribution (OOD) data from other classes, these methods suffer from severe performance degradation and even get worse than merely training on labeled data. In this paper, we empirically analyze PL in class-mismatched SSL. We aim to answer the following questions: (1) How do OOD data influence PL? (2) What are the better pseudo-labels for OOD data? First, we show that the major problem of PL is imbalanced pseudolabels on OOD data. Second, we find that when labeled as their ground truths, OOD data are beneficial to classification performance on In-Distribution (ID) data. Based on the findings, we propose our model which consists of two components – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL re-balances pseudo-labels on ID classes to filter out OOD data while also addressing the imbalance problem. SEC uses balanced clustering on OOD data to create pseudo-labels on extra classes, simulating the process of training with their ground truths. Experiments show that our method achieves steady improvement over supervised baseline and state-of-the-art performance under all class mismatch ratios on different benchmarks. N/A 1 INTRODUCTION labeled data cat dog unlabeled data cat dog plane car ID data OOD data Figure 1: Realistic Semi-Supervised Learning may simultaneously contain unlabeled ID and OOD data. ID data come from the same classes as labeled data while OOD data come from classes that are not seen in labeled data. Deep Semi-Supervised Learning (SSL) methods are proposed to reduce dependency on massive labeled data by utilizing a number of cheap, accessible unlabeled data. Pseudo-Labeling (Lee et al., 2013) is a simple but effective and widely used method that creates pseudo-labels according to predictions of the training model itself. Then SSL can be transformed to standard supervised learning. Other representative SSL methods are consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019), holistic methods (Berthelot et al., 2019; Sohn et al., 2020) and generative methods (Kingma et al., 2014). The recent development of SSL shows that these methods have achieved competitive performance to supervised learning methods. However, all of these SSL methods achieve their good results based on an assumption that the unlabeled data are drawn from the same distribution as the labeled data. This assumption can be easily violated in real-world applications. One of the common cases is that some unlabeled data come from unseen classes. For example, as is illustrated in Figure 1, in image classification, we can collect a lot of unlabeled images from the internet but usually they cover broader category concepts than labeled data. Oliver et al. (2018) have shown that on such class-mismatched conditions, performance of traditional SSL methods is damaged. To deal with this problem, several methods have been proposed. 
These methods include filtering out OOD data (Yu et al., 2020; Chen et al., 2020), down-weighting OOD data (Chen et al., 2020), and re-using OOD data via neural style transfer (Luo et al., 2021) or self-supervised learning (Huang et al., 2021). Although these methods achieve good results, why OOD data damage performance and how OOD data can help remain unclear. Here, we focus on analyzing Pseudo-Labeling (PL) in class-mismatched SSL and give some answers to these two questions.

In this paper, we empirically analyze PL in the class-mismatched SSL setting. These experiments aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? For question (1), we investigate pseudo-labels created by PL. The main finding is that pseudo-labels on OOD data tend to be imbalanced, while on ID data they remain balanced. We further show that PL's performance is damaged due to such imbalance on OOD data. For question (2), several strategies for labeling OOD data are investigated. We conclude that it is beneficial to label OOD data as classes different from the ID classes, and the performance can be further improved when the pseudo-labels partition unlabeled OOD data into their semantic clusters.

Based on the experimental analysis, we propose a two-branched model called Υ-Model, which processes unlabeled data according to their confidence score on ID classes. The first branch performs re-balanced pseudo-labeling on high-confidence data. It utilizes the property of imbalanced pseudo-labels on OOD data, truncating the number of pseudo-labeled data for each class to their minimum. This procedure filters out many OOD data and also prevents the negative effect of imbalanced pseudo-labels. For the other branch, semantic exploration clustering is performed on low-confidence data. They are considered OOD data, and their semantics are mined by clustering into different partitions on extra classes. The clustering result provides better pseudo-labels for these OOD data than vanilla PL. Experiments on different SSL benchmarks show that our model achieves steady improvement over the supervised baseline. We summarize our contributions as follows:

• We analyze the Pseudo-Labeling model for ID and OOD data. The findings lead to two primary conclusions: (1) Imbalance of pseudo-labels on OOD data damages PL's performance. (2) The best pseudo-labels for unlabeled OOD data are those different from the ID classes that partition the OOD data into their semantic clusters.
• We propose our two-branched Υ-Model. One branch re-balances pseudo-labels on ID classes and filters out OOD data. The other branch explores the semantics of OOD data by clustering on extra classes.
• Experiments on different SSL benchmarks empirically validate the effectiveness of our model.

2 PRELIMINARY

2.1 CLASS-MISMATCHED SSL

Similar to the SSL problem, the training set of the class-mismatched SSL problem contains $n$ ID labeled samples $D_l = \{(x_i^l, y_i^l)\}_{i=1}^n$ and $m$ unlabeled samples $D_u = \{x_i^u\}_{i=1}^m$ (usually $m \gg n$), where $y_i^l \in \mathcal{Y}_{ID} = \{1, \dots, K_{ID}\}$. Different from SSL, the underlying ground truth $y^u$ of unlabeled data may differ from that of labeled data, i.e., $y_j^u \in \mathcal{Y}_{ID} \cup \mathcal{Y}_{OOD}$, with $\mathcal{Y}_{OOD} = \{K_{ID}+1, \dots, K_{ID}+K_{OOD}\}$. The goal of class-mismatched SSL is to correctly classify ID samples into $\mathcal{Y}_{ID}$ using the labeled set with ID samples and the unlabeled set possibly containing OOD samples.
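To make the setup concrete, below is a minimal sketch of how such a class-mismatched split could be constructed (the function and variable names are illustrative, not the authors' code; a synthetic label array stands in for CIFAR-10, whose configuration is detailed in Section 3.1):

```python
import numpy as np

rng = np.random.default_rng(0)

def class_mismatched_split(labels, id_classes, n_labeled_per_class,
                           n_unlabeled, mismatch_ratio):
    """Build a class-mismatched SSL split from an array of dataset labels.

    `mismatch_ratio` is the fraction of unlabeled data drawn from OOD classes.
    """
    labels = np.asarray(labels)
    id_idx = np.flatnonzero(np.isin(labels, id_classes))
    ood_idx = np.flatnonzero(~np.isin(labels, id_classes))

    # Labeled set D_l: n_labeled_per_class examples from each ID class.
    labeled = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), n_labeled_per_class, replace=False)
        for c in id_classes
    ])

    # Unlabeled set D_u: a mixture of ID and OOD examples.
    n_ood = int(mismatch_ratio * n_unlabeled)
    unlabeled = np.concatenate([
        rng.choice(np.setdiff1d(id_idx, labeled), n_unlabeled - n_ood, replace=False),
        rng.choice(ood_idx, n_ood, replace=False),
    ])
    return labeled, unlabeled

# Toy usage with a CIFAR-10-sized label array: 6 ID classes, 4 OOD classes.
fake_labels = rng.integers(0, 10, size=50_000)
labeled, unlabeled = class_mismatched_split(
    fake_labels, id_classes=list(range(6)), n_labeled_per_class=400,
    n_unlabeled=20_000, mismatch_ratio=0.5)
```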
2.2 PSEUDO-LABELING

Pseudo-Labeling (PL) leverages the idea that we can use the model itself to obtain artificial labels for unlabeled data (Lee et al., 2013). PL first performs supervised learning on labeled data to get a pre-trained model $f$, which outputs the probability of belonging to each ID class. It then creates a pseudo-label for each unlabeled sample:

$$y' = \begin{cases} \arg\max_{y \in \mathcal{Y}_{ID}} f(y \mid x), & c(x) > \tau \\ \text{reject}, & \text{otherwise}, \end{cases} \tag{1}$$
$$c(x) = \max_{y \in \mathcal{Y}_{ID}} f(y \mid x), \tag{2}$$

where $c(x)$ is the confidence score for $x$. All pseudo-labeled unlabeled data are treated as labeled data for the next supervised learning generation. PL iteratively performs supervised learning and pseudo-label creation until a stopping condition is met.

3 ANALYSIS OF PSEUDO-LABELING IN CLASS-MISMATCHED SSL

In class-mismatched SSL, vanilla PL can only create pseudo-labels on ID classes, even for OOD data. In this section, we analyze how these OOD data influence vanilla PL and what better pseudo-labels for them are.

3.1 SETUP

We use CIFAR-10 (Krizhevsky et al., 2009) as our experimental dataset. The dataset contains 10 categories – 6 animal classes and 4 vehicle classes. Following Guo et al. (2020), we perform a classification task on the animal classes (denoted as classes 0-5) and select 400 images per class to construct the labeled data set, i.e., 2,400 labeled examples. The other 4 vehicle classes are taken as OOD classes (denoted as classes 6-9). 20,000 images are randomly selected from all 10 classes as the unlabeled data set. We vary the ratio of unlabeled images to modulate the class distribution mismatch. For example, a ratio of 50% means half of the unlabeled data come from animal classes and the others come from vehicle classes. We use Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) as our backbone. We also adopt data augmentation techniques including random resized crop, random color distortion and random horizontal flip. We train our network for 400 epochs. For each epoch, we iterate over the unlabeled set and randomly sample labeled data; each unlabeled and labeled minibatch contains 128 samples. We adopt Adam as the optimization algorithm with initial learning rate $3 \times 10^{-3}$. We report the averaged accuracy of the last 20 epochs, pretending there is no reliable (it would be too small) validation set to perform early stopping (Oliver et al., 2018).

3.2 IMBALANCE OF PSEUDO-LABELS ON OOD DATA

In this section, we analyze the pre-trained model that creates the first set of pseudo-labels, and the final model trained by Pseudo-Labeling.

Pre-trained model. First, we draw the distribution of confidence scores on OOD data and ID data. Figure 2(a) shows that, as concluded in OOD detection (Hendrycks & Gimpel, 2017), the proportion of high-confidence data is larger among ID data than among OOD data. However, in class-mismatched SSL, the unlabeled data come in much larger quantities. When the class mismatch ratio is large, there are quite a few OOD data with high confidence scores. We will show in the final-model experiments that these high-confidence OOD data damage performance. Second, we study pseudo-labels on both ID data and OOD data. Figure 2(b) shows that pseudo-labels on ID data are balanced. However, they are rather imbalanced on OOD data (Figure 2(c)). This is attributed to the different distributions they are drawn from: samples with a certain pattern bias toward certain classes. ID data bias toward ID classes uniformly because they are sampled from the same distribution, whereas OOD data are unlikely to bias toward ID classes uniformly since they have little relevance to the ID data.
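The PL rule of Equations (1)-(2) and the imbalance measurement just discussed can be sketched as follows (a minimal illustration with hypothetical names; `probs` stands for the model's predicted ID-class probabilities, and the KL metric matches the one later used in Appendix C.1):

```python
import numpy as np

def pseudo_label(probs, tau=0.95):
    """Vanilla PL rule, Eq. (1)-(2): label by argmax, reject low confidence.

    probs: (N, K_ID) array of predicted ID-class probabilities.
    Returns pseudo-labels with -1 marking rejected samples.
    """
    conf = probs.max(axis=1)           # c(x) in Eq. (2)
    labels = probs.argmax(axis=1)      # argmax_y f(y|x)
    labels[conf <= tau] = -1           # reject, Eq. (1)
    return labels

def imbalance_of(labels, k_id):
    """KL divergence between the pseudo-label histogram and the uniform
    distribution (the imbalance metric also used in Appendix C.1)."""
    accepted = labels[labels >= 0]
    q = np.bincount(accepted, minlength=k_id) / max(len(accepted), 1)
    q = np.clip(q, 1e-12, None)
    u = np.full(k_id, 1.0 / k_id)
    return float(np.sum(q * np.log(q / u)))

# Toy check: predictions biased toward one class give a large KL value.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.r_[10.0, np.ones(5)], size=1000)  # biased to class 0
print(imbalance_of(pseudo_label(probs, tau=0.5), k_id=6))
```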
Final Pseudo-Labeling model. As the old saying goes, a good beginning is half done. However, such imbalance in the first set of pseudo-labels starts the PL model off badly when there is a large portion of OOD data, putting the model in danger of imbalanced learning. We run vanilla PL and show that the imbalance of pseudo-labels harms performance. Figure 3(a) shows the performance of the PL model with different OOD ratios. In accordance with Oliver et al. (2018), the PL model degrades as the portion of OOD data gets larger. Figure 3(b) displays the confusion matrix of the PL model on the whole test set containing both ID and OOD data. Since only 6 classes are known to us, the confusion matrix is a rectangle. We can see that almost all the OOD samples (classes 6-9) are classified as class 0, which means the imbalance effect on OOD data gets even worse as PL training goes on. A possible reason is that, unlike pseudo-labeling on ID data, supervision from labeled data cannot help correct pseudo-labels on OOD data, so the imbalance continuously deteriorates. The imbalance on OOD data also influences classification performance on ID data. Samples of the majority class (class 0) overwhelm the loss and gradient, leading to a degenerate model (Lin et al., 2017). We can see the PL model mistakenly classifies much of the data from classes 1-5 into class 0.

3.3 PSEUDO-LABELING STRATEGY FOR OOD DATA

The previous section shows that OOD data hurt the performance of vanilla PL. Then here comes the question: assuming we already know which data are OOD, how do we use them? Is omitting them the best way? If not, what are better pseudo-labels for them? To answer these questions, we investigate four strategies for creating pseudo-labels for OOD data:

• Baseline. This baseline omits all the OOD data and only trains on the labeled ID data.
• Re-Assigned Labeling. This strategy assigns data of each OOD class to an ID class. It ensures that different OOD classes are assigned to different ID classes, keeping the semantics between OOD classes unchanged. For example, (ship, truck, airplane, automobile) can be assigned to (bird, cat, deer, dog). This strategy can be seen as training a classifier of "super-classes".
• Open-Set Labeling. This strategy is named after the related setting – Open-Set Recognition (Scheirer et al., 2013; Bendale & Boult, 2016). It treats all OOD data as one unified class $K_{ID}+1$. Thus this model outputs probabilities over $K_{ID}+1$ classes.
• Oracle Labeling. This strategy uses the ground truth of OOD data. Thus this model outputs probabilities over $K_{ID}+K_{OOD}$ classes.

Note that Open-Set Labeling and Oracle Labeling can classify samples into more than $K_{ID}$ classes. However, during evaluation, we only classify samples into the $K_{ID}$ ID classes. For these models, the predicted label $\hat{y}$ of a test sample $x$ is calculated as:

$$\hat{y}(x) = \arg\max_{y \in \mathcal{Y}_{ID}} f(y \mid x) \tag{3}$$

The overall comparison of the four strategies is illustrated in Figure 4. We also report test accuracy when the class mismatch ratio is 100%. From Figure 4, we can draw several important conclusions. (1) Re-Assigned Labeling slightly underperforms the baseline¹. This indicates that assigning samples of OOD classes to ID classes does not help the model distinguish between ID classes, even if we somehow know which OOD data are semantically different. It also reveals that performing vanilla PL on OOD data may never help, even if we do it perfectly.
(2) Open-Set Labeling outperforms the baseline, which indicates that performance improves if we label the OOD data as a class other than the ID classes. (3) Oracle Labeling improves the performance and achieves the best result among the four strategies. This means that, in addition to labeling OOD data as extra classes, if we can further assign OOD data with different semantics to different classes, the model will achieve even better results.

3.4 SUMMARY OF SECTION

In this section, we study the behavior of the Pseudo-Labeling model in class-mismatched SSL. We summarize several important conclusions here: Conclusion 1: A classification model trained with labeled ID data creates imbalanced pseudo-labels on OOD data, while on ID data they remain balanced. Conclusion 2: The vanilla PL process makes the imbalance of pseudo-labels deteriorate, damaging classification performance on ID data. Conclusion 3: Labeling OOD data as ID classes does not help and may even perform slightly worse. Conclusion 4: It is beneficial to label OOD data as extra classes different from ID classes. If we can further label semantically different OOD data as different classes, the performance can be further improved.

4 METHOD

Based on the findings in Section 3, we propose Υ-Model (named after its shape) for class-mismatched SSL. Υ-Model trains a classifier $f$ that outputs a posterior distribution over $K_{ID}+K$ classes, i.e., $f(y \mid x) \in \mathbb{R}^{K_{ID}+K}$, $\mathbf{1}^\top f(y \mid x) = 1$. $K$ is the number of extra classes, which can be known in advance (i.e., $K = K_{OOD}$) or be set as a hyper-parameter. Similar to vanilla PL, we define confidence in the same form as Equation 2. However, this confidence is slightly different from its original definition in Hendrycks & Gimpel (2017), for we only calculate the maximum probability over the $K_{ID}$ ID classes instead of over all classes. Therefore, we rename it In-Distribution confidence (ID confidence). For evaluation, we predict labels using Equation 3. Υ-Model aims to solve the following problems: Problem 1: how to avoid imbalanced pseudo-labels in the PL model? (Conclusions 1, 2) Problem 2: how to avoid labeling OOD data as ID? (Conclusion 3) Problem 3: how to create proper pseudo-labels for unlabeled OOD data? (Conclusion 4)

¹ Assigning 4 OOD classes to 4 of the 6 ID classes causes imbalance. But we tested the accuracy on the selected 4 classes and found a similar result.

Υ-Model consists of two main branches – Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL acts on high-confidence data to solve Problems 1 and 2. SEC acts on low-confidence data to solve Problem 3. We describe the two branches in the following sections. An overview of Υ-Model is illustrated in Figure 5.

4.1 RE-BALANCED PSEUDO-LABELING

As illustrated in Section 3.3, the main problem of vanilla PL is that a large number of OOD data with high confidence scores have imbalanced pseudo-labels. One possible solution is re-weighting the unlabeled samples (Guo et al., 2020) or using other methods from the imbalanced learning field. However, even if we solve the imbalanced learning problem, labeling OOD data as ID classes may still damage performance (Conclusion 3). In this paper, we use a simple method – Re-balanced Pseudo-Labeling – to simultaneously solve the imbalance (Problem 1) and the incorrect recognition (Problem 2). It produces a set $P$ of pseudo-labeled samples in three steps (see the code sketch after the equations):

$$N = \min_{y \in \mathcal{Y}_{ID}} \left|\{x \in D_u \mid f(y \mid x) > \tau\}\right|, \tag{4}$$
$$\tau_y = N\text{th biggest}(\{f(y \mid x) \mid x \in D_u\}), \quad y = 1, 2, \dots, K_{ID}, \tag{5}$$
$$P = \bigcup_{y \in \mathcal{Y}_{ID}} \{(x, y) \mid f(y \mid x) \ge \tau_y,\; x \in D_u\}, \tag{6}$$

where $N\text{th biggest}$ denotes the $N$-th biggest value of the given set. RPL first calculates the minimum number of pseudo-labeled samples over the ID classes by Equation 4. Then it truncates the number of pseudo-labels of each ID class to that number by Equations 5 and 6. The process of RPL is illustrated in Figure 5(a). First, it enforces the pseudo-labels on ID classes to be balanced, solving Problem 1. Second, as shown in Section 3, the set of high-confidence data is a mixture of ID and OOD data. Due to Conclusion 1, the pseudo-label distribution of such a set is a sum of an imbalanced and a balanced one, and thus still imbalanced. However, by selecting only the top-$N$ most confident samples for each ID class, we keep ID data and omit many OOD data, since confidence on ID data tends to be higher than on OOD data (Hendrycks & Gimpel, 2017). This process solves Problem 2.
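Below is a minimal sketch of the RPL step in Equations (4)-(6) (illustrative names; `probs` is assumed to hold $f(y \mid x)$ on the unlabeled set, restricted to the $K_{ID}$ ID classes):

```python
import numpy as np

def rebalanced_pseudo_labeling(probs, tau=0.95):
    """Re-balanced Pseudo-Labeling, Eq. (4)-(6).

    probs: (M, K_ID) predicted ID-class probabilities on the unlabeled set.
    Returns (sample index, pseudo-label) pairs, exactly N per ID class.
    """
    k_id = probs.shape[1]
    # Eq. (4): N = smallest per-class count of samples with f(y|x) > tau.
    n = int(min((probs[:, y] > tau).sum() for y in range(k_id)))
    pairs = []
    for y in range(k_id):
        # Eq. (5)-(6): tau_y is the N-th biggest score for class y, so keeping
        # all samples with f(y|x) >= tau_y amounts to taking the top-N.
        top = np.argsort(probs[:, y])[::-1][:n]
        pairs.extend((int(j), y) for j in top)
    return pairs
```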
4.2 SEMANTIC EXPLORATION CLUSTERING

As demonstrated in Section 3.3, if we know a set of samples is OOD, labeling them as a unified class $K_{ID}+1$ improves performance, but the best strategy is to use their ground truths (Conclusion 4). However, it is impossible to access their ground truths since they are unlabeled. We resort to Deep Clustering methods (Caron et al., 2018; Asano et al., 2020) to mine their semantics and approximate the process of learning with ground truths. Here, we use the balanced clustering method of Asano et al. (2020); Caron et al. (2020) to create pseudo-labels for these OOD data. Assuming there are $M$ samples recognized as OOD, we first compute their soft targets:

$$\min_{Q \in U(K,M)} \langle Q, -\log P \rangle, \qquad U(K,M) := \left\{ Q \in \mathbb{R}_+^{K \times M} \;\middle|\; Q\mathbf{1} = \tfrac{1}{K}\mathbf{1},\; Q^\top \mathbf{1} = \tfrac{1}{M}\mathbf{1} \right\}, \tag{7}$$

where $P \in \mathbb{R}_+^{K \times M}$, $P_{ij} = \hat{f}(K_{ID}+i \mid x_j)$, and $\hat{f}$ is the posterior distribution normalized over the extra classes, i.e., $\hat{f}(K_{ID}+i \mid x_j) = f(K_{ID}+i \mid x_j) / \sum_{k=1}^{K} f(K_{ID}+k \mid x_j)$. We use the Sinkhorn-Knopp algorithm (Cuturi, 2013) to optimize $Q$. Once we get $Q$, we harden the labels by picking the class with the maximum predicted probability and mapping it to the extra $K$ classes:

$$\hat{y}_j = K_{ID} + \arg\max_i Q_{ij}. \tag{8}$$

$\hat{y}_j$ is used as the pseudo-label for $x_j$. We perform SEC on the set of data with ID confidence lower than a threshold $\gamma$, i.e., $\{x \mid c(x) < \gamma\}$.
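As a small illustration of Equations (7)-(8), the sketch below runs Sinkhorn-Knopp on the normalized extra-class posteriors. It solves the entropy-regularized form of (7) that Sinkhorn-Knopp handles, taking the kernel to be $P$ itself; all names are illustrative:

```python
import numpy as np

def sec_pseudo_labels(extra_probs, k_id, n_iters=100):
    """Semantic Exploration Clustering, Eq. (7)-(8), via Sinkhorn-Knopp.

    extra_probs: (M, K) posteriors f(K_ID + i | x_j) on the K extra classes.
    Returns pseudo-labels in {k_id, ..., k_id + K - 1}.
    """
    m, k = extra_probs.shape
    p = extra_probs / extra_probs.sum(axis=1, keepdims=True)   # f_hat in Eq. (7)
    q = np.clip(p.T, 1e-12, None)      # (K, M) kernel, scaled into U(K, M) below
    for _ in range(n_iters):
        q /= q.sum(axis=1, keepdims=True) * k   # rows sum to 1/K
        q /= q.sum(axis=0, keepdims=True) * m   # columns sum to 1/M
    return k_id + q.argmax(axis=0)     # Eq. (8): harden and shift indices
```

The balanced row/column constraints of $U(K, M)$ are what force the OOD pseudo-labels to spread over all $K$ extra classes rather than collapse onto one.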
5 RELATED WORK

Class-Mismatched Semi-Supervised Learning. Deep Semi-Supervised Learning suffers from performance degradation when there are unseen classes in the unlabeled data (Oliver et al., 2018). As the proportion of such out-of-distribution (OOD) data gets larger, the performance drops more. To cope with this class-mismatch problem, several methods have been proposed. Chen et al. (2020) formulate a sequence of ensemble models aggregated accumulatively on-the-fly for joint self-distillation and OOD filtering. Guo et al. (2020) re-weight the unlabeled data by meta-learning to decrease the negative effect of OOD data. Huang et al. (2020) recycle transferable OOD data by means of adversarial learning. Different from all these methods, we conduct a comprehensive study on Pseudo-Labeling (PL) and give useful guidance on how to do better in class-mismatched SSL.

Pseudo-Labeling. The method of Pseudo-Labeling, also known as self-training, is a simple and effective way to perform Deep Semi-Supervised Learning (Lee et al., 2013; Shi et al., 2018; Arazo et al., 2020; Iscen et al., 2019). Despite its simplicity, it has been widely applied in diverse fields such as image classification (Xie et al., 2020), natural language processing (He et al., 2020) and object detection (Rosenberg et al., 2005). The use of a hard label makes Pseudo-Labeling closely related to entropy minimization (Grandvalet & Bengio, 2004).

Deep Clustering. Deep clustering methods improve the ability of traditional clustering methods by leveraging the representation power of DNNs. A common approach is to transform data into low-dimensional feature vectors and apply traditional clustering methods (Yang et al., 2017; Caron et al., 2018). In Self-Supervised Learning, clustering methods are used to learn meaningful representations for downstream tasks (Caron et al., 2018; Asano et al., 2020; Caron et al., 2020). Modern Deep Clustering can learn semantically meaningful clusters and achieves competitive results against supervised learning (Gansbeke et al., 2020).

6 EXPERIMENTS

To validate the effectiveness of our Υ-Model, we conduct experiments on different benchmarks.

Dataset. We test our method on two datasets as in Oliver et al. (2018): (1) CIFAR10: we use the same configuration as in Section 3.1. (2) SVHN: the dataset contains 10 categories – digits “0”-“9”. We select digits “0”-“5” as ID classes and the rest as OOD. For each ID class, we randomly select 100 images as labeled data. Meanwhile, 20,000 images are randomly selected from all 10 classes as the unlabeled data set. The class-mismatch ratio is set to {0%, 25%, 50%, 75%, 100%}.

Implementation Details. We use the same network and training protocol as in Section 3.1. We first train a classification model only on labeled data for 100 epochs without RPL and SEC. We update pseudo-labels every 2 epochs. For both datasets, we set $\tau = 0.95$, $\gamma = 0.3$, $K = 4$. We use an exponential moving average model for final evaluation as in Athiwaratkun et al. (2019).

6.1 COMPARISON WITH TRADITIONAL SSL METHODS

In this subsection, we compare our method with four traditional SSL methods – Pseudo-Labeling (Lee et al., 2013), Π-Model (Laine & Aila, 2017), Mean Teacher (Tarvainen & Valpola, 2017) and VAT (Miyato et al., 2019). Figures 6(a) and 6(b) show the results. Traditional methods suffer from performance degradation as the mismatch ratio increases. They usually get worse than the supervised baseline when the mismatch ratio is larger than 50% on CIFAR10 and SVHN. In contrast, our method achieves steady improvement under all class-mismatch ratios. The reasons can be attributed as follows. First, our method is aware of the existence of OOD data: we do not treat OOD data like ID data, which can hurt performance. Second, we re-use OOD data by exploring their semantics, which proves to be useful in Section 3.3. Therefore, even when the class-mismatch ratio reaches 100%, the performance of Υ-Model is still better than the supervised baseline.

6.2 COMPARISON WITH CLASS-MISMATCHED SSL METHODS

In this subsection, we compare our method with two existing class-mismatched SSL methods – UASD (Chen et al., 2020) and DS3L (Guo et al., 2020). For a fair comparison, we use Pseudo-Labeling as the base method of DS3L. From Figures 6(c) and 6(d), we can see that our method is superior to these two methods in all settings. It is noticeable that DS3L underperforms the supervised baseline when all the unlabeled data are drawn from OOD classes. This is attributed to the fact that DS3L uses a down-weighting strategy to alleviate the negative effect of OOD data and does not change the form of the unsupervised loss. But we have shown in Section 3.3 that labeling OOD data as ID classes damages performance anyhow. On the contrary, Υ-Model uses the OOD data in the right way – simulating the process of training with their ground truth labels.
As a result, our method shows superiority especially under large class-mismatch ratios. We also notice that the performance curve of Υ-Model exhibits a U-shape (obvious on CIFAR10). A possible reason is that RPL and SEC compete with each other: RPL tends to make samples get high predictions on ID classes while SEC tends to make samples get high predictions on OOD classes. When the class-mismatch ratio reaches 0% (100%), RPL (SEC) dominates the other. In this circumstance, one works without any disturbance from the other. However, when the class-mismatch ratio is 50%, they compete fiercely with each other, causing many incorrectly recognized ID or OOD samples.

6.3 ABLATION STUDY

In this section, we validate the functionality of RPL and SEC. We conduct experiments on the CIFAR10 benchmark as in Section 3.

Validation of the effectiveness of RPL and SEC. We conduct ablation studies under different class-mismatch ratios and report the averaged test accuracy and standard deviation over five runs. As before, we vary the class-mismatch ratio. Table 1 displays the results. First, comparing the first and second lines of the table, RPL not only outperforms vanilla PL in the high class-mismatch-ratio scenario but also improves in the low class-mismatch-ratio scenario. This reveals that balanced pseudo-labels always help, since once the model creates imbalanced pseudo-labels, it deteriorates when there are no measures to correct them. Second, comparing the second and third lines shows that RPL alone alleviates the performance degradation but cannot prevent it, in accord with Conclusion 3. When using SEC, Υ-Model gets better results than the supervised baseline even when the class-mismatch ratio is high. Besides, comparing the third and last lines, we see that when we cluster OOD data and explore their semantics instead of labeling them with a unified class, the performance improves.

RPL helps filter out OOD data and solves the imbalance problem. It is noticeable from the last two lines of Table 1 that without RPL, SEC alone cannot achieve better performance than the supervised baseline. We show the reason here. Figure 7(a) plots the proportion of OOD data that are pseudo-labeled as ID classes. It reveals that without RPL, i.e., using vanilla PL, the number of incorrectly recognized OOD data keeps increasing as training proceeds, while with RPL this ratio rapidly drops to 0. This proves that RPL helps filter out OOD data by utilizing the imbalance property of pseudo-labels on OOD data. Further, we present the confusion matrix of Υ-Model on the full test set (all 10 classes) of CIFAR10. Compared to vanilla PL in Figure 3(b), Υ-Model does not suffer from the imbalance problem, and as a result its performance is not degraded.

Effect of the number of extra classes K. We vary the number of extra classes $K$. Figure 7(c) shows the result on CIFAR10 with class-mismatch ratio 100%. The gray dashed line is the supervised baseline; the red dashed line is the Oracle Labeling strategy from Section 3.3, which is the upper bound of Υ-Model in this setup. Without SEC ($K = 0$), Υ-Model underperforms the supervised baseline. Using SEC ($K \ge 1$), Υ-Model is always better than the baseline. It also reaches its best performance when $K$ equals the actual number of OOD classes. This demonstrates that by simulating the process of training on OOD data with ground truths, SEC helps the classification model on ID data.
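To summarize Sections 4.1-4.2 operationally, one full pseudo-label update of Υ-Model might be sketched as below. It reuses the hypothetical `rebalanced_pseudo_labeling` and `sec_pseudo_labels` helpers from the earlier sketches, routing samples by ID confidence as in the two-branch description of Section 4; all names and defaults are illustrative:

```python
import numpy as np

def upsilon_pseudo_label_update(probs, k_id, tau=0.95, gamma=0.3):
    """One pseudo-label update of Υ-Model (cf. Algorithm 1 in Appendix A).

    probs: (M, k_id + K) posteriors over ID and extra classes.
    Returns (unlabeled index, pseudo-label) pairs from both branches.
    """
    id_conf = probs[:, :k_id].max(axis=1)      # ID confidence c(x)
    high = np.flatnonzero(id_conf >= gamma)    # candidates for RPL
    low = np.flatnonzero(id_conf < gamma)      # candidates for SEC

    # RPL branch (Eq. 4-6) on high-confidence data; map indices back.
    rpl = [(int(high[j]), y)
           for j, y in rebalanced_pseudo_labeling(probs[high, :k_id], tau)]

    # SEC branch (Eq. 7-8) on low-confidence data; labels land in extra classes.
    sec_labels = sec_pseudo_labels(probs[low, k_id:], k_id)
    sec = list(zip(low.tolist(), sec_labels.tolist()))
    return rpl + sec
```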
7 CONCLUSION

In this paper, we analyze Pseudo-Labeling in class-mismatched semi-supervised learning, where there are unlabeled OOD data from other classes. We show that Pseudo-Labeling suffers from performance degradation due to imbalanced pseudo-labels on OOD data. The correct way to use OOD data is to label them as classes different from the ID classes while also partitioning them according to their semantics. Based on this analysis, we propose Υ-Model and empirically validate its effectiveness. We believe our findings are not only beneficial to PL methods, but also instructive to other methods, such as consistency regularization and holistic methods, on how to effectively use OOD data.

A ALGORITHM

Algorithm 1 Υ-Model algorithm
Input: Labeled dataset $D_l = \{(x_i^l, y_i^l)\}_{i=1}^n$ and unlabeled dataset $D_u = \{x_i^u\}_{i=1}^m$; classification model $f_\phi$ parameterized by $\phi$, ID class number $K_{ID}$, extra class number $K$, total epoch number $E$, pretraining epochs $E_{pt}$, interval to update pseudo-labels $E_{pl}$, pseudo-labeled set $P$, confidence function $c$.
1: function REBALANCEDPSEUDOLABELING($D_u$, $f$, $\tau$)
2:   $N \leftarrow \min_{y \in \mathcal{Y}_{ID}} |\{x \in D_u \mid f(y \mid x) > \tau\}|$
3:   $\tau_y \leftarrow N\text{th biggest}(\{f(y \mid x) \mid x \in D_u\})$, $y = 1, 2, \dots, K_{ID}$
4:   $P \leftarrow \bigcup_{y \in \mathcal{Y}_{ID}} \{(x, y) \mid f(y \mid x) \ge \tau_y, x \in D_u\}$
5:   return $P$
6: function SEMANTICEXPLORATIONCLUSTERING($D_u$, $f$, $\gamma$)
7:   $S \leftarrow \{x \mid c(x) < \gamma\}$
8:   $M \leftarrow |S|$
9:   $P_{ij} \leftarrow f(K_{ID}+i \mid x_j) / \sum_{k=1}^K f(K_{ID}+k \mid x_j)$, $i = 1, \dots, K$, $j = 1, \dots, M$
10:  Solve Equation 7 by the Sinkhorn-Knopp algorithm to get $Q$
11:  $\hat{y}_j \leftarrow K_{ID} + \arg\max_i Q_{ij}$, $j = 1, 2, \dots, M$
12:  $C \leftarrow \{(x_j, \hat{y}_j)\}_{j=1}^M$
13:  return $C$
14: for $e = 1$ to $E$ do
15:   if $e < E_{pt}$ then
16:     train $f_\phi$ with standard supervised learning on $D_l$ ▷ Pre-training phase
17:   else
18:     train $f_\phi$ with standard supervised learning on $D_l \cup P$ ▷ PL training phase
19:   if $e \ge E_{pt}$ and $e \bmod E_{pl} = 0$ then
20:     $P \leftarrow \emptyset$
21:     $P_\tau \leftarrow$ REBALANCEDPSEUDOLABELING($D_u$, $f_\phi$, $\tau$) ▷ Perform RPL
22:     $P_\gamma \leftarrow$ SEMANTICEXPLORATIONCLUSTERING($D_u$, $f_\phi$, $\gamma$) ▷ Perform SEC
23:     $P \leftarrow P_\tau \cup P_\gamma$
24: return classification model $f_\phi$

B EMBEDDING VISUALIZATION

We visualize the embeddings of the supervised baseline, vanilla PL and Υ-Model on CIFAR10's test set with all classes using t-SNE. Figure 8 shows the result. For the supervised baseline, OOD data are mixed with ID data since the baseline never sees unlabeled OOD data. PL mixes OOD data with samples of a certain class (class 0), which is attributed to their pseudo-labels being biased toward this class; the OOD classes also cannot be clearly distinguished from one another. In contrast, Υ-Model not only makes ID data distinguishable but also forms meaningful clusters on OOD data.

C UNIVERSALITY OF ANALYSIS CONCLUSIONS

To illustrate the universality of the conclusions in Section 3, we experiment on five kinds of datasets in total. (We use (n/m) to represent n ID classes and m OOD classes.)

• CIFAR10 (6/4) and CIFAR10 (5/5): both created from CIFAR10 (Krizhevsky et al., 2009). CIFAR10 (6/4) takes the 6 animal classes as ID classes and the 4 vehicle classes as OOD classes. CIFAR10 (5/5) has 3 animal and 2 vehicle classes for both ID and OOD classes. We select 400 labeled samples for each ID class and 20,000 unlabeled samples in total from ID and OOD classes.
• SVHN (6/4): we select digits “0”-“5” as ID classes and the rest as OOD. We select 100 labeled samples for each ID class and 20,000 unlabeled samples in total.
• CIFAR100 (50/50): created from CIFAR100 (Krizhevsky et al., 2009). The first 50 classes are taken as ID classes and the rest as OOD classes. We select 100 labeled samples for each ID class and 20,000 unlabeled samples in total.
• Tiny ImageNet (100/100): created from Tiny ImageNet, which is a subset of ImageNet (Deng et al., 2009) with images downscaled to 64 × 64, from 200 classes. The first 100 classes are taken as ID classes and the rest as OOD classes. We select 100 labeled samples for each ID class and 40,000 unlabeled samples in total.

For short, we use C to denote CIFAR and TIN to denote Tiny ImageNet. For each dataset, we use the same experimental setup as in Section 3.

C.1 IMBALANCE OF PSEUDO-LABELS

To demonstrate the imbalance of pseudo-labels on OOD data, we compute two metrics to measure the extent of the imbalance:

• The KL divergence between the pseudo-label distribution $q$ and the uniform distribution $u$: $kl = \mathrm{KL}(q \,\|\, u) = \sum_i q_i \log \frac{q_i}{u_i}$.
• The ratio of the “majority class” to the “minority class”: $r = \frac{\max_i q_i}{\min_i q_i}$.

The results on these datasets are displayed in Table 2.

C.2 PSEUDO-LABELING STRATEGY FOR OOD DATA

We report the test accuracy of the four pseudo-labeling strategies from Section 3.3. The class-mismatch ratio is set to 100%. Table 3 shows the results. Note that Re-Assigned Labeling has many possibilities: if there are $n$ ID classes and $m$ OOD classes, $A_n^m$ possible assignments exist, and it is impossible to experiment on all of them. To deal with this, we randomly choose 10 possible assignments and report the maximum performance among them.

D COMPARISON ON MORE DATASETS

We compare our method with vanilla PL and the two class-mismatched methods from Section 6.2. We use the following hyperparameters:

• CIFAR10 (6/4): $\tau = 0.95$, $\gamma = 0.3$, $E_{pt} = 50$, $E_{pl} = 2$, $K = 4$
• SVHN (6/4): $\tau = 0.95$, $\gamma = 0.3$, $E_{pt} = 50$, $E_{pl} = 2$, $K = 4$
• CIFAR100 (50/50): $\tau = 0.95$, $\gamma = 0.18$, $E_{pt} = 50$, $E_{pl} = 2$, $K = 20$
• Tiny ImageNet (100/100): $\tau = 0.9$, $\gamma = 0.15$, $E_{pt} = 50$, $E_{pl} = 2$, $K = 20$

For CIFAR100 (50/50) and Tiny ImageNet (100/100), we use a weight factor $\lambda$ to trade off the loss on the labeled set $D_l$ and the pseudo-labeled set $P$; it ramps up according to

$$\lambda = \exp\left(-5 \times \left(1 - \min\left(\tfrac{iter}{40{,}000},\, 1\right)\right)^2\right),$$

where $iter$ is the number of training steps counted from $E_{pt}$.
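As a small illustration, the ramp-up schedule above can be transcribed directly (illustrative code, not the authors' implementation):

```python
import math

def rampup_weight(step, rampup_steps=40_000):
    """Sigmoid-shaped ramp-up for the pseudo-label loss weight lambda.

    `step` counts training steps from the end of pretraining (E_pt).
    """
    t = min(step / rampup_steps, 1.0)
    return math.exp(-5.0 * (1.0 - t) ** 2)

# lambda grows from exp(-5) ~ 0.0067 at step 0 to 1.0 at 40,000 steps.
print(rampup_weight(0), rampup_weight(20_000), rampup_weight(40_000))
```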
1. What is the focus of the paper regarding semi-supervised learning?
2. What are the strengths of the proposed Upsilon model?
3. Do you have any concerns or suggestions for improving the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper works on the class-mismatch semi-supervised learning problem, where the assumption is that the unlabeled samples include class labels that do not appear in the labeled data, i.e., out-of-distribution (OOD) data, in addition to the class labels that appear in the labeled data, i.e., in-distribution (ID) data. It is known that OOD data included in the unlabeled data can degrade the performance of semi-supervised learning algorithms. This paper focuses on one of the semi-supervised learning methods, the pseudo-labeling method. The paper first investigates how using pseudo-labels in the class-mismatch semi-supervised learning setup can be problematic. The experiments show that the proportion of high-confidence data is larger among ID data than among OOD data. Experiments also show that pseudo-labels (w.r.t. class predictions) are more balanced on ID data while they are imbalanced on OOD data at the beginning of training (with a pretrained model). This phenomenon becomes exaggerated at the end of training, where almost all of the OOD data is predicted into a single ID class. Assuming we know the labels of the unlabeled data, the paper performs further experiments that compare a few methods, and finds that it is ideal to have access to the underlying class labels of the OOD data and to solve the problem as a multi-class classification problem with K_id + K_ood (number of ID and OOD classes) classes. Based on these observations, the paper proposes the Upsilon model, which consists of Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). RPL aims to balance the distributions of pseudo-labels, based on the observation that pseudo-labels become balanced/imbalanced on ID/OOD data, respectively. SEC is a workaround for not having ground truth labels for the OOD data. Experiments show comparisons with both traditional SSL methods and class-mismatch SSL methods, and show how the proposed Upsilon model works better than these baselines. An ablation study shows why the components within the Upsilon model are necessary. Review Strength The paper gives experimental insight into the behavior of pseudo-labels under the class-mismatch problem, and provides an algorithm that can alleviate the issues that are discussed. It performs better not only than the pseudo-label baseline but also than two recent class-mismatch SSL baselines (especially with a high mismatch ratio), so the benefits shown in the experimental results are one of the main strengths of the paper. The empirical comparison of open-set labeling and oracle labeling is interesting and motivates the proposed methodology, making the story clear. The ablation study further motivates the design of the Upsilon method. Weaknesses Since Upsilon is based on the findings and discussion in Section 3, it would be better to have at least one more dataset (for example, SVHN, which is used in a later section) to see whether the findings are specific to a single dataset. In Figure 2, it would be interesting to investigate whether similar results hold with random splits for ID/OOD classes, instead of the animal/vehicle split. (I think the SVHN experiments later on somehow show that the splits don't matter so much, since the 0-5/6-9 split is not as semantically meaningful as the one in CIFAR-10.) It is interesting and surprising that open-set labeling is worse than oracle labeling, although both are evaluated with only ID classes.
It would be interesting to have some discussion about the potential underlying mechanisms of this experimental result.

Minor comments
In the caption of Figure 3, it says "A lot of ID samples are misclassified into one class." but is this correct? Should it be "OOD samples"?
Which animal class is class 0 in Fig. 2(c) and Fig. 3(b)?
Table 1: typo "aof"

=======================
After rebuttal
Thank you for the additional experiments and for answering my questions. I would like to keep my positive score. The answer to "It is interesting .. experimental result." makes sense. So there may be some implicit transfer learning going on.
ICLR
Title Restricted Strong Convexity of Deep Learning Models with Smooth Activations

Abstract We consider the problem of optimization of deep learning models with smooth activation functions. While there exist influential results on the problem from the “near initialization” perspective, we shed considerable new light on the problem. In particular, we make two key technical contributions for such models with $L$ layers, width $m$, and initialization variance $\sigma_0^2$. First, for suitable $\sigma_0^2$, we establish an $O\big(\frac{\mathrm{poly}(L)}{\sqrt{m}}\big)$ upper bound on the spectral norm of the Hessian of such models, considerably sharpening prior results. Second, we introduce a new analysis of optimization based on Restricted Strong Convexity (RSC) which holds as long as the squared norm of the average gradient of predictors is $\Omega\big(\frac{\mathrm{poly}(L)}{\sqrt{m}}\big)$ for the square loss. We also present results for more general losses. The RSC based analysis does not need the “near initialization” perspective and guarantees geometric convergence for gradient descent (GD). To the best of our knowledge, ours is the first result establishing geometric convergence of GD based on RSC for deep learning models, thus becoming an alternative sufficient condition for convergence that does not depend on the widely-used Neural Tangent Kernel (NTK). We share preliminary experimental results supporting our theoretical advances.

1 INTRODUCTION

Recent years have seen advances in understanding convergence of gradient descent (GD) and variants for deep learning models (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022; Ji & Telgarsky, 2019; Oymak & Soltanolkotabi, 2020; Nguyen, 2021). Despite the fact that such optimization problems are non-convex, a series of recent results have shown that GD has geometric convergence and finds near-global solutions “near initialization” for wide networks. Such analysis is typically done based on the Neural Tangent Kernel (NTK) (Jacot et al., 2018), in particular by showing that the NTK is positive definite “near initialization,” in turn implying the optimization problem satisfies a condition closely related to the Polyak-Łojasiewicz (PL) condition, which in turn implies geometric convergence to the global minima (Liu et al., 2022; Nguyen, 2021). Such results have been generalized to more flexible forms of “lazy learning” where similar guarantees hold (Chizat et al., 2019).
However, there are concerns regarding whether such “near initialization” or “lazy learning” truly explains the optimization behavior of realistic deep learning models (Geiger et al., 2020; Yang & Hu, 2020; Fort et al., 2020; Chizat et al., 2019). Our work focuses on the optimization of deep models with smooth activation functions, which have become increasingly popular in recent years (Du et al., 2019; Liu et al., 2022; Huang & Yau, 2020). Much of the theoretical convergence analysis of GD has focused on ReLU networks (Allen-Zhu et al., 2019; Nguyen, 2021). Some progress has also been made for deep models with smooth activations, but existing results are based on a variant of the NTK analysis, and the requirements on the width of such models are high (Du et al., 2019; Liu et al., 2022). Based on this background and context, the motivating question behind our work is: are there other (meaningful) sufficient conditions beyond NTK which lead to (geometric) convergence of GD for deep learning optimization?

Based on this motivation, we make two technical contributions in this paper which shed light on the optimization of deep learning models with smooth activations, $L$ layers, width $m$, and initialization variance $\sigma_0^2$. First, for suitable $\sigma_0^2$, we establish an $O\big(\frac{\mathrm{poly}(L)}{\sqrt{m}}\big)$ upper bound on the spectral norm of the Hessian of such models (Section 4). The bound holds over a large layerwise spectral norm (instead of Frobenius norm) ball $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$ around the random initialization $\theta_0$, where the radius $\rho < \sqrt{m}$, arguably much bigger than what real-world deep models need. Our analysis builds on and sharpens recent prior work on the topic (Liu et al., 2020). While our analysis holds for Gaussian random initialization of weights with any variance $\sigma_0^2$, the $\mathrm{poly}(L)$ dependence arises when $\sigma_0^2 \le \frac{1}{4+o(1)} \cdot \frac{1}{m}$ (we handle the $\frac{1}{m}$ scaling explicitly).

Second, based on our Hessian spectral norm bound, we introduce a new approach to the analysis of optimization of deep models with smooth activations based on the concept of Restricted Strong Convexity (RSC) (Section 5) (Wainwright, 2019; Negahban et al., 2012; Negahban & Wainwright, 2012; Banerjee et al., 2014; Chen & Banerjee, 2015). While RSC has been a core theme in high-dimensional statistics, especially for linear models and convex losses (Wainwright, 2019), to the best of our knowledge RSC has not been considered in the context of non-convex optimization of overparameterized deep models. For a normalized total loss function $\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, \hat{y}_i)$, $\hat{y}_i = f(\theta; x_i)$, with predictor or neural network model $f$ parameterized by the vector $\theta$ and data points $\{x_i, y_i\}_{i=1}^n$, when $\ell$ corresponds to the square loss we show that the total loss function satisfies RSC on a suitable restricted set $Q^t_\kappa \subset \mathbb{R}^p$ (Definition 5.2 in Section 5) at step $t$ as long as $\big\| \frac{1}{n}\sum_{i=1}^n \nabla_\theta f(\theta_t; x_i) \big\|_2^2 = \Omega\big(\frac{1}{\sqrt{m}}\big)$. We also present similar results for general losses, for which additional assumptions are needed. We show that the RSC property implies a Restricted Polyak-Łojasiewicz (RPL) condition on $Q^t_\kappa$, in turn implying a geometric one-step decrease of the loss towards the minimum in $Q^t_\kappa$, and subsequently implying a geometric decrease of the loss towards the minimum in the large (layerwise spectral norm) ball $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$. The geometric convergence due to RSC is a novel approach in the context of deep learning optimization which does not depend on properties of the NTK.
Thus, the RSC condition provides an alternative sufficient condition for geometric convergence in deep learning optimization to the widely-used NTK condition.

The rest of the paper is organized as follows. We briefly present related work in Section 2 and discuss the problem setup in Section 3. We establish the Hessian spectral norm bound in Section 4 and introduce the RSC based optimization analysis in Section 5. We present experimental results corresponding to the RSC condition in Section 6 and conclude in Section 7. All technical proofs are in the Appendix.

2 RELATED WORK

The literature on gradient descent and variants for deep learning is increasingly large, and we refer the reader to the following surveys for an overview of the field (Fan et al., 2021; Bartlett et al., 2021). Among the theoretical works, we consider (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022) the closest to our work in terms of their study of convergence of multi-layer neural networks. For a literature review on shallow and/or linear networks, we refer to the recent survey (Fang et al., 2021). Due to the rapidly growing related work, we only refer to the most related or recent work for most parts. Du et al. (2019); Zou & Gu (2019); Allen-Zhu et al. (2019); Liu et al. (2022) considered optimization of the square loss, which we also consider for our main results, and we also present extensions to a more general class of loss functions. Zou & Gu (2019); Zou et al. (2020); Allen-Zhu et al. (2019); Nguyen & Mondelli (2020); Nguyen (2021); Nguyen et al. (2021) analyzed deep ReLU networks. Instead, we consider smooth activation functions, similar to (Du et al., 2019; Liu et al., 2022). The convergence analysis of gradient descent in (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022) relied on the near-constancy of the NTK for wide neural networks (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019; Liu et al., 2020), which yields certain desirable properties for their training using gradient descent based methods. One such property is related to the PL condition (Karimi et al., 2016; Nguyen, 2021), formulated as the PL∗ condition in (Liu et al., 2022). Our work uses a different optimization analysis based on RSC (Wainwright, 2019; Negahban et al., 2012; Negahban & Wainwright, 2012), related to a restricted version of the PL condition. Furthermore, Du et al. (2019); Allen-Zhu et al. (2019); Zou & Gu (2019); Zou et al. (2020) showed convergence in value to a global minimizer of the total loss, as we also do.

3 PROBLEM SETUP: DEEP LEARNING WITH SMOOTH ACTIVATIONS

Consider a training set $\mathcal{D} = \{x_i, y_i\}_{i=1}^n$, $x_i \in \mathcal{X} \subseteq \mathbb{R}^d$, $y_i \in \mathcal{Y} \subseteq \mathbb{R}$. We denote by $X \in \mathbb{R}^{n \times d}$ the matrix whose $i$-th row is $x_i^\top$. For a suitable loss function $\ell$, the goal is to minimize the empirical loss

$$\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, \hat{y}_i) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, f(\theta; x_i)),$$

where the prediction $\hat{y}_i := f(\theta; x_i)$ is from a deep model with parameter vector $\theta \in \mathbb{R}^p$. In our setting, $f$ is a feed-forward multi-layer (fully-connected) neural network of depth $L$ with widths $m_l$, $l \in [L] := \{1, \dots, L\}$, given by

$$\alpha^{(0)}(x) = x, \qquad \alpha^{(l)}(x) = \phi\!\left(\frac{1}{\sqrt{m_{l-1}}} W^{(l)} \alpha^{(l-1)}(x)\right), \; l = 1, \dots, L, \qquad f(\theta; x) = \alpha^{(L+1)}(x) = \frac{1}{\sqrt{m_L}} v^\top \alpha^{(L)}(x), \tag{1}$$

where $W^{(l)} \in \mathbb{R}^{m_l \times m_{l-1}}$, $l \in [L]$, are the layer-wise weight matrices, $v \in \mathbb{R}^{m_L}$ is the last-layer vector, $\phi(\cdot)$ is the smooth (pointwise) activation function, and the total set of parameters is

$$\theta := \big(\mathrm{vec}(W^{(1)})^\top, \dots, \mathrm{vec}(W^{(L)})^\top, v^\top\big)^\top \in \mathbb{R}^{\sum_{k=1}^L m_k m_{k-1} + m_L}, \tag{2}$$

with $m_0 = d$.
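To make the architecture in Equation (1) concrete, here is a minimal NumPy sketch of the forward pass; the Gaussian initialization shown is illustrative only (the paper's precise variance choice appears in Assumption 2 below), and all names are hypothetical:

```python
import numpy as np

def forward(weights, v, x, phi=np.tanh):
    """Forward pass of Eq. (1): alpha^(l) = phi(W^(l) alpha^(l-1) / sqrt(m_{l-1}));
    output f(theta; x) = v^T alpha^(L) / sqrt(m_L)."""
    alpha = x
    for W in weights:                          # W^(l) has shape (m_l, m_{l-1})
        alpha = phi(W @ alpha / np.sqrt(alpha.shape[0]))
    return v @ alpha / np.sqrt(alpha.shape[0])

# Toy instantiation: L = 3 layers of width m = 256, input dimension d = 16.
rng = np.random.default_rng(0)
d, m, L, sigma0 = 16, 256, 3, 0.5              # sigma0 illustrative; see Assumption 2
dims = [d] + [m] * L
weights = [sigma0 * rng.standard_normal((dims[l + 1], dims[l])) for l in range(L)]
v = rng.standard_normal(m)
v /= np.linalg.norm(v)                         # unit last-layer vector, ||v_0||_2 = 1
x = rng.standard_normal(d)
x *= np.sqrt(d) / np.linalg.norm(x)            # data normalization ||x||_2 = sqrt(d)
print(forward(weights, v, x))
```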
For simplicity, we assume that the width of all layers is the same, i.e., $m_l = m$, $l \in [L]$, so that $\theta \in \mathbb{R}^{Lm^2+m}$. For simplicity, we also consider deep models with a single output, i.e., $f(\theta; x) \in \mathbb{R}$ as in (Du et al., 2019), but our results can be extended to multi-dimensional outputs as in (Zou & Gu, 2019), using $V \in \mathbb{R}^{m_L \times k}$ for $k$ outputs at the last layer; see Appendix C.

Define the pointwise loss $\ell_i := \ell(y_i, \cdot): \mathbb{R} \to \mathbb{R}_+$ and denote its first and second derivatives as $\ell_i' := \frac{d\ell(y_i, \hat{y}_i)}{d\hat{y}_i}$ and $\ell_i'' := \frac{d^2\ell(y_i, \hat{y}_i)}{d\hat{y}_i^2}$. The particular case of the square loss is $\ell(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2$. We denote the gradient and Hessian of $f(\cdot; x_i): \mathbb{R}^p \to \mathbb{R}$ as $\nabla_i f := \frac{\partial f(\theta; x_i)}{\partial \theta}$ and $\nabla_i^2 f := \frac{\partial^2 f(\theta; x_i)}{\partial \theta^2}$. The neural tangent kernel (NTK) $K_{\mathrm{ntk}}(\cdot; \theta) \in \mathbb{R}^{n \times n}$ corresponding to parameter $\theta$ is defined by $K_{\mathrm{ntk}}(x_i, x_j; \theta) = \langle \nabla_i f, \nabla_j f \rangle$. By the chain rule, the gradient and Hessian of the empirical loss w.r.t. $\theta$ are given by

$$\frac{\partial \mathcal{L}(\theta)}{\partial \theta} = \frac{1}{n}\sum_{i=1}^n \ell_i' \nabla_i f \qquad \text{and} \qquad \frac{\partial^2 \mathcal{L}(\theta)}{\partial \theta^2} = \frac{1}{n}\sum_{i=1}^n \left[ \ell_i'' \nabla_i f \nabla_i f^\top + \ell_i' \nabla_i^2 f \right].$$

Let $\|\cdot\|_2$ denote the spectral norm for matrices and the $L_2$-norm for vectors. We make the following assumption regarding the activation function $\phi$:

Assumption 1 (Activation function). The activation $\phi$ is 1-Lipschitz, i.e., $|\phi'| \le 1$, and $\beta_\phi$-smooth, i.e., $|\phi''| \le \beta_\phi$.

Remark 3.1. Our analysis holds for any $\varsigma_\phi$-Lipschitz smooth activation, with a dependence on $\varsigma_\phi$ in most key results. The main (qualitative) conclusions stay true if $\varsigma_\phi \le 1 + o(1)$ or $\varsigma_\phi = \mathrm{poly}(L)$, which is typically satisfied for commonly used smooth activations and moderate values of $L$.

We define two types of balls over parameters that will be used throughout our analysis.

Definition 3.1 (Norm balls). Given $\bar{\theta} \in \mathbb{R}^p$ of the form (2) with parameters $\bar{W}^{(l)}$, $l \in [L]$, and $\bar{v}$, we define

$$B^{\mathrm{Spec}}_{\rho,\rho_1}(\bar{\theta}) := \left\{ \theta \in \mathbb{R}^p \text{ as in (2)} \;\middle|\; \|W^{(l)} - \bar{W}^{(l)}\|_2 \le \rho,\; l \in [L],\; \|v - \bar{v}\|_2 \le \rho_1 \right\}, \tag{3}$$
$$B^{\mathrm{Euc}}_{\rho}(\bar{\theta}) := \left\{ \theta \in \mathbb{R}^p \text{ as in (2)} \;\middle|\; \|\theta - \bar{\theta}\|_2 \le \rho \right\}. \tag{4}$$

Remark 3.2. The layerwise spectral norm ball $B^{\mathrm{Spec}}_{\rho,\rho_1}$ plays a key role in our analysis. The last-layer radius $\rho_1$ gives more flexibility, and we will usually assume $\rho_1 \le \rho$; e.g., we could choose the desirable operating regime of $\rho < \sqrt{m}$ and $\rho_1 = O(1)$. Our analysis in fact goes through for any choice of $\rho, \rho_1$, and the detailed results indicate the specific dependencies on both.

4 SPECTRAL NORM OF THE HESSIAN OF THE MODEL

We start with the following assumption regarding the random initialization of the weights.

Assumption 2 (Initialization weights and data normalization). The initialization weights are $w^{(l)}_{0,ij} \sim \mathcal{N}(0, \sigma_0^2)$ for $l \in [L]$, where $\sigma_0 = \frac{\sigma_1}{2\left(1+\sqrt{\frac{\log m}{2m}}\right)}$, $\sigma_1 > 0$, and $v_0$ is a random unit vector with $\|v_0\|_2 = 1$. Further, we assume the input data satisfy $\|x_i\|_2 = \sqrt{d}$, $i \in [n]$.

We focus on bounding the spectral norm of the Hessian, $\|\nabla^2_\theta f(\theta; x)\|_2$, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$ and any input $x \in \mathbb{R}^d$ with $\|x\|_2 = \sqrt{d}$. The assumption $\|x\|_2 = \sqrt{d}$ is for convenient scaling; such assumptions are common in the literature (Allen-Zhu et al., 2019; Oymak & Soltanolkotabi, 2020; Nguyen et al., 2021). Prior work (Liu et al., 2020) considered a similar analysis for $\theta \in B^{\mathrm{Euc}}_{\rho}(\theta_0)$, effectively a layerwise Frobenius norm ball, which is much smaller than $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, the layerwise spectral norm ball. We choose a unit value for the last layer's weight norm for convenience, since our results hold under appropriate scaling for any other constant in $O(1)$. All missing proofs are in Appendix A.
Theorem 4.1 (Hessian Spectral Norm Bound). Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1 - \frac{2(L+1)}{m}$, for any $x_i$, $i \in [n]$, we have

$$\|\nabla^2_\theta f(\theta; x_i)\|_2 \le \frac{c_H}{\sqrt{m}}, \tag{5}$$

with $c_H = O(L^5(1+\gamma^{6L})(1+\rho_1))$, where $\gamma := \sigma_1 + \frac{\rho}{\sqrt{m}}$.

Remark 4.1 (Desirable operating regimes). The constant $\gamma$ needs careful scrutiny, as $c_H$ depends on $\gamma^{6L}$. Let us choose $\rho_1 = O(\mathrm{poly}(L))$. For any choice of the spectral norm radius $\rho < \sqrt{m}$, we can choose $\sigma_1 \le 1 - \frac{\rho}{\sqrt{m}}$, ensuring $\gamma \le 1$ and hence $c_H = O(\mathrm{poly}(L))$. If $\rho = O(1)$, we can keep $\sigma_1 = 1$ so that $\gamma = 1 + \frac{O(1)}{\sqrt{m}}$, and $c_H = O(\mathrm{poly}(L))$ as long as $L < \sqrt{m}$, which is common. Both of these give good choices for $\sigma_1$ and a desirable operating regime for the result. If we choose $\sigma_1 > 1$, an undesirable operating regime, then $c_H = O(c^{\Theta(L)})$, $c > 1$, and we will need $m = \Omega(c^{\Theta(L)})$ for the result to be of interest.

Remark 4.2 (Recent Related Work). In recent work, Liu et al. (2020) analyzed the Hessian spectral norm and showed that $c_H = \tilde{O}(\rho^{3L})$ for $\theta \in B^{\mathrm{Euc}}_\rho(\theta_0)$ (logarithmic terms hidden in $\tilde{O}(\cdot)$). Our analysis builds on and sharpens the result in (Liu et al., 2020) in three respects: (a) we have $c_H = O(\mathrm{poly}(L)(1+\gamma^{6L}))$ for $\rho_1 = O(\mathrm{poly}(L))$, where we can choose $\sigma_1$ to make $\gamma \le 1$ and thus obtain $c_H = O(\mathrm{poly}(L))$, instead of the worse $c_H = \tilde{O}(\rho^{3L})$ in Liu et al. (2020)¹; (b) even for the same $\rho$, our results hold for the much larger spectral norm ball $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$ compared to the Euclidean norm ball $B^{\mathrm{Euc}}_\rho(\theta_0)$ in (Liu et al., 2020); and (c) to avoid an exponential term, the bound in (Liu et al., 2020) needs $\rho \le 1$, whereas our result can use radius $\rho < \sqrt{m}$ for all intermediate-layer matrices and $\rho_1 = O(\mathrm{poly}(L))$ for the last-layer vector. Moreover, as a consequence of (b) and (c), our results hold for a larger (spectral norm) ball whose radius can increase with $m$, unlike the results in Liu et al. (2020), which hold for a smaller (Euclidean) ball with constant radius, i.e., “near initialization.”

Remark 4.3 (Exact constant $c_H$). For completeness, we show the exact expression of the constant $c_H$ in Theorem 4.1 so that the dependencies on the different factors are clear. Let $h(l) := \gamma^{l-1} + |\phi(0)| \sum_{i=1}^{l-1} \gamma^{i-1}$. Then,

$$c_H = 2L\big(L^2\gamma^{2L} + L\gamma^L + 1\big)\,(1+\rho_1)\,\psi_H\,\max_{l \in [L]} \gamma^{L-l} + 2L\gamma^L \max_{l \in [L]} h(l), \tag{6}$$

where

$$\psi_H = \max_{1 \le l_1 < l_2 \le L} \left\{ \beta_\phi (h(l_1))^2,\;\; h(l_1)\Big(\tfrac{\beta_\phi}{2}\big(\gamma^2 + (h(l_2))^2\big) + 1\Big),\;\; \beta_\phi \gamma^2 h(l_1) h(l_2) \right\}. \tag{7}$$

The source of the terms will be discussed shortly. Note the dependence on $\rho_1$, the radius for the last layer in $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, and why $\rho_1 = O(\mathrm{poly}(L))$ is a desirable operating regime.

¹ See the end of Appendix A for a quick note about the network architecture in our work and the one in (Liu et al., 2020).

Next, we give a high-level outline of the proof of Theorem 4.1.

Proof sketch. Our analysis follows the structure developed in Liu et al. (2020), but is considerably sharper, as discussed in Remark 4.2. We start by defining the following quantities:

$$Q_\infty(f) := \max_{1 \le l \le L} \left\| \frac{\partial f}{\partial \alpha^{(l)}} \right\|_\infty, \quad \frac{\partial f}{\partial \alpha^{(l)}} \in \mathbb{R}^m, \qquad Q_2(f) := \max_{1 \le l \le L} \left\| \frac{\partial \alpha^{(l)}}{\partial w^{(l)}} \right\|_2, \quad w^{(l)} := \mathrm{vec}(W^{(l)}),\; \frac{\partial \alpha^{(l)}}{\partial w^{(l)}} \in \mathbb{R}^{m \times m^2},$$

and $Q_{2,2,1}(f)$ is the maximum over $1 \le l_1 < l_2 < l_3 \le L$ of the three quantities

$$\left\| \frac{\partial^2 \alpha^{(l_1)}}{\partial (w^{(l_1)})^2} \right\|_{2,2,1}, \qquad \left\| \frac{\partial \alpha^{(l_1)}}{\partial w^{(l_1)}} \right\|_2 \left\| \frac{\partial^2 \alpha^{(l_2)}}{\partial \alpha^{(l_2-1)} \partial w^{(l_2)}} \right\|_{2,2,1}, \qquad \left\| \frac{\partial \alpha^{(l_1)}}{\partial w^{(l_1)}} \right\|_2 \left\| \frac{\partial \alpha^{(l_2)}}{\partial w^{(l_2)}} \right\|_2 \left\| \frac{\partial^2 \alpha^{(l_3)}}{\partial (\alpha^{(l_3-1)})^2} \right\|_{2,2,1},$$

where for an order-3 tensor $T \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ we define the $(2,2,1)$-norm as

$$\|T\|_{2,2,1} := \sup_{\|x\|_2 = \|z\|_2 = 1} \sum_{k=1}^{d_3} \Big| \sum_{i=1}^{d_1} \sum_{j=1}^{d_2} T_{ijk}\, x_i z_j \Big|, \quad x \in \mathbb{R}^{d_1},\; z \in \mathbb{R}^{d_2}.$$

The following result from (Liu et al., 2020) provides an upper bound on the spectral norm of the Hessian.
Theorem 4.2 (Liu et al. (2020), Theorem 3.1). Under Assumption 1, assuming there is a $\delta$ such that $\left\| \frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}} \right\|_2 \le \delta$, with $C_1 \le L^2\delta^{2L} + L\delta^L + L$ and $C_2 \le L\delta^L$, we have

$$\|\nabla^2_\theta f(\theta; x)\|_2 \le 2C_1 Q_{2,2,1}(f)\, Q_\infty(f) + \frac{2}{\sqrt{m}}\, C_2\, Q_2(f). \tag{8}$$

To prove Theorem 4.1, we show that Theorem 4.2 holds with high probability with $\delta = \gamma$, $Q_2(f) = O(L(1+\gamma^L))$, $Q_{2,2,1}(f) = O(L^3(1+\gamma^{3L}))$, and $Q_\infty(f) = O\big(\frac{(1+\gamma^L)(1+\rho_1)}{\sqrt{m}}\big)$. Thus the upper bound (8) becomes $O\big(\frac{\mathrm{poly}(L)(1+\gamma^{6L})(1+\rho_1)}{\sqrt{m}}\big)$, providing a benign polynomial dependence on $L$ when $\gamma \le 1$, rather than an exponential dependence on the radius $\rho$ as in (Liu et al., 2020).

The analysis used for bounding the spectral norm of the Hessian can be used to establish additional bounds, which we believe are of independent interest, some of which will be used later in Section 5. First, we bound the norms of the gradient of the predictor and of the loss w.r.t. the weight vector $\theta$ and the input data $x$.

Lemma 4.1 (Predictor gradient bounds). Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1 - \frac{2(L+1)}{m}$, we have

$$\|\nabla_\theta f(\theta; x)\|_2 \le \varrho \qquad \text{and} \qquad \|\nabla_x f(\theta; x)\|_2 \le \frac{\gamma^L}{\sqrt{m}}(1+\rho_1), \tag{9}$$

with $\varrho^2 = (h(L+1))^2 + \frac{1}{m}(1+\rho_1)^2 \sum_{l=1}^L (h(l))^2 \gamma^{2(L-l)}$, $\gamma = \sigma_1 + \frac{\rho}{\sqrt{m}}$, and $h(l) = \gamma^{l-1} + |\phi(0)| \sum_{i=1}^{l-1} \gamma^{i-1}$.

Remark 4.4. Our analysis in Lemma 4.1 provides a bound on the Lipschitz constant of the predictor, a quantity which has generated interest in recent work on robust training (Salman et al., 2019; Cohen et al., 2020; Bubeck & Sellke, 2021).

Under the assumption of the square loss, further bounds can be obtained.

Lemma 4.2 (Loss bounds). Consider the square loss. Under Assumptions 1 and 2, for $\gamma = \sigma_1 + \frac{\rho}{\sqrt{m}}$, each of the following inequalities holds with probability at least $1 - \frac{2(L+1)}{m}$: $\mathcal{L}(\theta_0) \le c_{0,\sigma_1}$ and $\mathcal{L}(\theta) \le c_{\rho_1,\gamma}$ for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, where $c_{a,b} = \frac{2}{n}\sum_{i=1}^n y_i^2 + 2(1+a)^2 |g(b)|^2$ and $g(a) = a^L + |\phi(0)| \sum_{i=1}^L a^i$ for any $a, b \in \mathbb{R}$.

Corollary 4.1 (Loss gradient bound). Consider the square loss. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1 - \frac{2(L+1)}{m}$, we have $\|\nabla_\theta \mathcal{L}(\theta)\|_2 \le 2\sqrt{\mathcal{L}(\theta)}\,\varrho \le 2\sqrt{c_{\rho_1,\gamma}}\,\varrho$, with $\varrho$ as in Lemma 4.1 and $c_{\rho_1,\gamma}$ as in Lemma 4.2.

5 OPTIMIZATION GUARANTEES WITH RESTRICTED STRONG CONVEXITY

We focus on minimizing the empirical loss $\mathcal{L}(\theta)$ over $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, the layerwise spectral norm ball in (3). Our analysis is based on Restricted Strong Convexity (RSC) (Negahban et al., 2012; Banerjee et al., 2014; Chen & Banerjee, 2015; Wainwright, 2019), which relaxes the definition of strong convexity by only requiring strong convexity in certain directions or over a subset of the ambient space. We introduce the following specific definition of RSC with respect to a tuple $(S, \theta)$.

Definition 5.1 (Restricted Strong Convexity (RSC)). A function $\mathcal{L}$ is said to satisfy $\alpha$-restricted strong convexity ($\alpha$-RSC) with respect to the tuple $(S, \theta)$ if for any $\theta' \in S \subseteq \mathbb{R}^p$ and some fixed $\theta \in \mathbb{R}^p$, we have

$$\mathcal{L}(\theta') \ge \mathcal{L}(\theta) + \langle \theta' - \theta, \nabla_\theta \mathcal{L}(\theta) \rangle + \frac{\alpha}{2}\|\theta' - \theta\|_2^2,$$

with $\alpha > 0$. Note that $\mathcal{L}$ being $\alpha$-RSC w.r.t. $(S, \theta)$ does not require $\mathcal{L}$ to be convex on $\mathbb{R}^p$.

Let us consider a sequence of iterates $\{\theta_t\}_{t \ge 0} \subset \mathbb{R}^p$. Our RSC analysis relies on the following $Q^t_\kappa$ sets at step $t$, which avoid directions almost orthogonal to the average gradient of the predictor. We use the following notation: for two vectors $\pi$ and $\bar{\pi}$, $\cos(\pi, \bar{\pi})$ denotes the cosine of the angle between $\pi$ and $\bar{\pi}$.

Definition 5.2 ($Q^t_\kappa$ sets). For an iterate $\theta_t \in \mathbb{R}^p$, let $\bar{g}_t = \frac{1}{n}\sum_{i=1}^n \nabla_\theta f(\theta_t; x_i)$. For any $\kappa \in (0, 1]$, define $Q^t_\kappa := \{\theta \in \mathbb{R}^p \mid |\cos(\theta - \theta_t, \bar{g}_t)| \ge \kappa\}$. We define the set $B_t := Q^t_\kappa \cap B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0) \cap B^{\mathrm{Euc}}_{\rho_2}(\theta_t)$.
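To make Definition 5.2 concrete, here is a minimal sketch (hypothetical names) of computing $\bar{g}_t$ from per-example predictor gradients and testing whether a candidate parameter vector lies in $Q^t_\kappa$:

```python
import numpy as np

def average_predictor_gradient(per_example_grads):
    """g_bar_t = (1/n) sum_i grad_theta f(theta_t; x_i); rows are per-example grads."""
    return per_example_grads.mean(axis=0)

def in_Q_kappa(theta, theta_t, g_bar, kappa=0.5, eps=1e-12):
    """Membership test for Q^t_kappa: |cos(theta - theta_t, g_bar)| >= kappa."""
    d = theta - theta_t
    cos = d @ g_bar / (np.linalg.norm(d) * np.linalg.norm(g_bar) + eps)
    return abs(cos) >= kappa

# Toy check: moving along g_bar itself always lies in Q^t_kappa (cosine = 1).
rng = np.random.default_rng(0)
G = rng.standard_normal((128, 1000))    # n = 128 per-example gradients, p = 1000
g_bar = average_predictor_gradient(G)
theta_t = rng.standard_normal(1000)
print(in_Q_kappa(theta_t + 0.1 * g_bar, theta_t, g_bar))   # True
```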
We focus on establishing RSC w.r.t. the tuple $(B_t, \theta_t)$, where $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$ is the feasible set for the optimization and $B^{\mathrm{Euc}}_{\rho_2}(\theta_t)$ is a Euclidean ball around the current iterate.

Assumption 3 (Loss function). The loss $\ell_i$, $i \in [n]$, is (i) strongly convex, i.e., $\ell_i'' \ge a > 0$, and (ii) smooth, i.e., $\ell_i'' \le b$.

Assumption 3 is satisfied by commonly used loss functions such as the square loss, where $a = b = 2$. We state the RSC result for the square loss; the result for other losses and the proofs of all technical results in this section are in Appendix B.

Theorem 5.1 (RSC for Square Loss). For the square loss, under Assumptions 1 and 2, with probability at least $1 - \frac{2(L+1)}{m}$, for all $\theta' \in Q^t_\kappa \cap B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0) \cap B^{\mathrm{Euc}}_{\rho_2}(\theta_t)$ with $\theta_t \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$,

$$\mathcal{L}(\theta') \ge \mathcal{L}(\theta_t) + \langle \theta' - \theta_t, \nabla_\theta \mathcal{L}(\theta_t) \rangle + \frac{\alpha_t}{2}\|\theta' - \theta_t\|_2^2, \quad \text{with} \quad \alpha_t = c_1 \|\bar{g}_t\|_2^2 - \frac{c_2}{\sqrt{m}}, \tag{10}$$

where $\bar{g}_t = \frac{1}{n}\sum_{i=1}^n \nabla_\theta f(\theta_t; x_i)$, $c_1 = 2\kappa^2$ and $c_2 = 2 c_H (2\varrho\rho_2 + \sqrt{c_{\rho_1,\gamma}})$, with $c_H$ as in Theorem 4.1, $\varrho$ as in Lemma 4.1, and $c_{\rho_1,\gamma}$ as in Lemma 4.2. Consequently, $\mathcal{L}$ satisfies RSC w.r.t. $(Q^t_\kappa \cap B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0) \cap B^{\mathrm{Euc}}_{\rho_2}(\theta_t), \theta_t)$ whenever $\alpha_t > 0$.

Remark 5.1. The RSC condition $\alpha_t > 0$ is satisfied at iteration $t$ as long as $\|\bar{g}_t\|_2^2 > \frac{c_2}{c_1\sqrt{m}}$, where $c_1, c_2$ are exactly specified in Theorem 5.1. Indeed, if $\gamma$ (and hence $\sigma_1$ and $\rho$) is chosen according to the desirable operating regimes (see Remark 4.1), $\rho_1 = O(\mathrm{poly}(L))$ and $\rho_2 = O(\mathrm{poly}(L))$, then we can use the bounds from Lemma 4.2 and obtain that the RSC condition is satisfied when $\|\bar{g}_t\|_2^2 > \frac{O(\mathrm{poly}(L))}{\sqrt{m}}$. The condition is arguably mild, does not need the NTK condition $\lambda_{\min}(K_{\mathrm{ntk}}(\cdot; \theta_t)) > 0$, and is expected to hold until convergence (see Remark 5.3). Moreover, it is a local condition at step $t$ and has no dependence on being “near initialization” in the sense of $\theta_t \in B^{\mathrm{Euc}}_\rho(\theta_0)$ for $\rho = O(1)$ as in (Liu et al., 2020; 2022).

For the convergence analysis, we also need to establish a smoothness property of the total loss.

Theorem 5.2 (Local Smoothness for Square Loss). For the square loss, under Assumptions 1 and 2, with probability at least $1 - \frac{2(L+1)}{m}$, for all $\theta, \theta' \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$,

$$\mathcal{L}(\theta') \le \mathcal{L}(\theta) + \langle \theta' - \theta, \nabla_\theta \mathcal{L}(\theta) \rangle + \frac{\beta}{2}\|\theta' - \theta\|_2^2, \quad \text{with} \quad \beta = 2\varrho^2 + \frac{2 c_H \sqrt{c_{\rho_1,\gamma}}}{\sqrt{m}}, \tag{11}$$

with $c_H$ as in Theorem 4.1, $\varrho$ as in Lemma 4.1, and $c_{\rho_1,\gamma}$ as in Lemma 4.2. Consequently, $\mathcal{L}$ is locally $\beta$-smooth. Moreover, if $\gamma$ (and hence $\sigma_1$ and $\rho$) is chosen according to the desirable operating regimes (see Remark 4.1) and $\rho_1 = O(\mathrm{poly}(L))$, then $\beta = O(\mathrm{poly}(L))$.

Remark 5.2. As in the case of standard strong convexity and smoothness, the RSC and smoothness parameters in Theorems 5.1 and 5.2 satisfy $\alpha_t < \beta$. To see this, note that $\alpha_t < 2\kappa^2 \|\bar{g}_t\|_2^2 \le 2\varrho^2 \le \beta$, where the second inequality follows since $\kappa \le 1$ and $\|\bar{g}_t\|_2^2 \le \varrho^2$ by Lemma 4.1.

Next, we show that the RSC condition w.r.t. the tuple $(B_t, \theta_t)$ implies a restricted Polyak-Łojasiewicz (RPL) condition w.r.t. the tuple $(B_t, \theta_t)$, unlike the standard PL condition, which holds without restrictions (Karimi et al., 2016).

Lemma 5.1 (RSC ⇒ RPL). Let $B_t := Q^t_\kappa \cap B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0) \cap B^{\mathrm{Euc}}_{\rho_2}(\theta_t)$. In the setting of Theorem 5.1, if $\alpha_t > 0$, then the tuple $(B_t, \theta_t)$ satisfies the Restricted Polyak-Łojasiewicz (RPL) condition, i.e.,

$$\mathcal{L}(\theta_t) - \inf_{\theta \in B_t} \mathcal{L}(\theta) \le \frac{1}{2\alpha_t}\|\nabla_\theta \mathcal{L}(\theta_t)\|_2^2, \tag{12}$$

with probability at least $1 - \frac{2(L+1)}{m}$.

For the rest of the convergence analysis, we make the following assumption, where $T$ can be viewed as the stopping time, so the convergence analysis holds while the assumptions are satisfied.

Assumption 4 (Iterates' conditions). For the iterates $\{\theta_t\}_{t=0,1,\dots,T}$: (A4.1) $\alpha_t > 0$; (A4.2) $\theta_t \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$.

Remark 5.3 (Assumption (A4.1)).
Remark 5.3 (Assumption (A4.1)). From Remark 5.1, (A4.1) is satisfied as long as ∥ḡ_t∥₂² > c₂/(c₁√m), where c₁, c₂ are as in Theorem 5.1, which is arguably a mild condition. In Section 6 we present empirical findings showing that this condition on ∥ḡ_t∥₂² behaves well in practice.
We now consider the particular case of gradient descent (GD) for the iterates: θ_{t+1} = θ_t − η_t ∇L(θ_t), where η_t is chosen so that θ_{t+1} ∈ B^Spec_{ρ,ρ₁}(θ₀) and ρ₂ is chosen so that θ_{t+1} ∈ B^Euc_{ρ₂}(θ_t), which are sufficient for the analysis of Theorem 5.1 — we specify suitable choices in the sequel (see Remark 5.4). Given RPL w.r.t. (B_t, θ_t), gradient descent leads to a strict decrease of the loss in B_t.
Lemma 5.2 (Local Loss Reduction in B_t). Let α_t, β be as in Theorems 5.1 and 5.2 respectively, and B_t := Q^t_κ ∩ B^Spec_{ρ,ρ₁}(θ₀) ∩ B^Euc_{ρ₂}(θ_t). Consider Assumptions 1, 2, and 4, and gradient descent with step size η_t = ω_t/β, ω_t ∈ (0, 2). Then, for any θ̄_{t+1} ∈ arginf_{θ∈B_t} L(θ), we have with probability at least (1 − 2(L+1)/m),
L(θ_{t+1}) − L(θ̄_{t+1}) ≤ (1 − (α_t ω_t/β)(2 − ω_t)) (L(θ_t) − L(θ̄_{t+1})) . (13)
Building on Lemma 5.2, we show that GD in fact leads to a geometric decrease in the loss relative to the minimum value of L(·) in the set B^Spec_{ρ,ρ₁}(θ₀).
Theorem 5.3 (Global Loss Reduction in B^Spec_{ρ,ρ₁}(θ₀)). Let α_t, β be as in Theorems 5.1 and 5.2 respectively, and B_t := Q^t_κ ∩ B^Spec_{ρ,ρ₁}(θ₀) ∩ B^Euc_{ρ₂}(θ_t). Let θ* ∈ arginf_{θ∈B^Spec_{ρ,ρ₁}(θ₀)} L(θ), θ̄_{t+1} ∈ arginf_{θ∈B_t} L(θ), and γ_t := (L(θ̄_{t+1}) − L(θ*))/(L(θ_t) − L(θ*)). Consider Assumptions 1, 2, and 4, and gradient descent with step size η_t = ω_t/β, ω_t ∈ (0, 2). Then, with probability at least (1 − 2(L+1)/m), we have γ_t ∈ [0, 1) and
L(θ_{t+1}) − L(θ*) ≤ (1 − (α_t ω_t/β)(1 − γ_t)(2 − ω_t)) (L(θ_t) − L(θ*)) . (14)
As long as the conditions in Theorem 5.3 hold across iterations, there is a geometric decrease in the loss. For Assumption 4, we have discussed (A4.1) in Remark 5.1, and we discuss (A4.2) next.
Remark 5.4 (Assumption (A4.2)). Suppose we run gradient descent iterations until some stopping time T > 0. Given radius ρ < √m, Assumption (A4.2), θ_t ∈ B^Spec_{ρ,ρ₁}(θ₀) for t = 0, . . . , T, can be verified empirically. Alternatively, we can choose suitable step sizes η_t to ensure the property using the geometric convergence from Theorem 5.3. Assume that our goal is to get L(θ_T) − L(θ*) ≤ ϵ. Then, with χ_T := min_{t∈[T]} (α_t ω_t/β)(1 − γ_t)(2 − ω_t), Assumption (A4.1) along with Remark 5.2 ensures χ_T < 1. Then, it suffices to have T = ⌈log((L(θ₀) − L(θ*))/ϵ) / log(1/(1 − χ_T))⌉ = Θ(log(1/ϵ)). Then, to ensure θ_t ∈ B^Spec_{ρ,ρ₁}(θ₀), t ∈ [T], in the case of the square loss, since ∥∇L(θ_t)∥₂ ≤ c for some constant c (see Corollary 4.1), it suffices to have η_t ≤ min{ρ, ρ₁}/Θ(log(1/ϵ)). Moreover, we point out that having ρ₂ ≥ η_t c ensures ∥θ_{t+1} − θ_t∥₂ ≤ ρ₂, i.e., θ_{t+1} ∈ B^Euc_{ρ₂}(θ_t), which in this case can be guaranteed if ρ₂ ≥ min{ρ, ρ₁}/Θ(log(1/ϵ)). The argument above is informal, but it illustrates that Assumption (A4.1) along with suitable constant step sizes η_t would ensure (A4.2). Thus, Assumption (A4.1), which ensures the RSC condition, is the main assumption behind the analysis.
The conditions in Assumption 4 (see Remarks 5.1 and 5.4) along with Theorem 5.3 imply that the RSC based convergence analysis holds for a much larger layerwise spectral norm ball B^Spec_{ρ,ρ₁}(θ₀), with any radius ρ < √m and ρ₁ = O(poly(L)).
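The GD iteration analyzed above is standard; as a small sketch (with a hypothetical loss_and_grad oracle returning (L(θ), ∇_θL(θ))), one can record the loss trajectory and inspect the geometric decrease predicted by (14).

    import numpy as np

    def gd_trajectory(loss_and_grad, theta0, beta, omega=1.0, max_iters=3000, tol=1e-3):
        # θ_{t+1} = θ_t − η_t ∇L(θ_t) with η_t = ω_t/β, ω_t ∈ (0, 2),
        # as in Lemma 5.2 / Theorem 5.3 (here ω_t is held constant).
        theta, losses = theta0, []
        eta = omega / beta
        for _ in range(max_iters):
            loss, grad = loss_and_grad(theta)
            losses.append(loss)
            if loss < tol:
                break
            theta = theta - eta * grad
        return theta, losses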
Remark 5.5 (RSC and NTK). In the context of the square loss, the NTK condition for geometric convergence needs λ_min(K_ntk(·; θ_t)) ≥ c₀ > 0 for every t, i.e., uniformly bounded away from 0 by a constant c₀ > 0. The NTK condition can also be written as
inf_{v: ∥v∥₂=1} ∥Σ_{i=1}^{n} vᵢ ∇_θ f(θ_t; xᵢ)∥₂² ≥ c₀ > 0 . (15)
In contrast, the proposed RSC condition (Theorem 5.1) needs
∥(1/n) Σ_{i=1}^{n} ∇_θ f(θ_t; xᵢ)∥₂² ≥ c̄₀/√m , (16)
where m is the width and c̄₀ = c₂/c₁, with c₁, c₂ the constants defined in Theorem 5.1. As a quadratic form on the NTK, the RSC condition can be viewed as using a specific v in (15), namely vᵢ = 1/√n for i ∈ [n], since the RSC condition is ∥Σ_{i=1}^{n} (1/√n) ∇_θ f(θ_t; xᵢ)∥₂² ≥ c̄₀ n/√m. For m = Ω(n²), the RSC condition is more general since NTK ⇒ RSC, but the converse is not necessarily true.
Remark 5.6 (RSC covers different settings than NTK). The NTK condition may be violated in certain settings, e.g., when ∇_θ f(θ_t; xᵢ), i = 1, . . . , n, are linearly dependent, when xᵢ ≈ x_j for some i ≠ j, when layer widths are small (m_l < n), etc., but the optimization may still work in practice. The RSC condition provides a way to analyze convergence in such settings. The RSC condition gets violated when (1/n) Σ_{i=1}^{n} ∇_θ f(θ_t; xᵢ) ≈ 0, which does not seem to happen in practice (see Section 6); future work will focus on understanding this phenomenon. Finally, note that it is possible to construct a set of gradient vectors which satisfies the NTK condition but violates the RSC condition. Our perspective is to view the NTK and the RSC as two different sufficient conditions: geometric convergence of gradient descent (GD) is guaranteed as long as one of them is satisfied at any step.

6 RSC CONDITION: EXPERIMENTAL RESULTS

In this section, we present experimental results verifying the RSC condition ∥(1/n) Σ_{i=1}^{n} ∇_θ f(θ_t; xᵢ)∥₂² = Ω(poly(L)/√m), t = 1, . . . , T, on standard benchmarks: CIFAR-10, MNIST, and Fashion-MNIST. For simplicity, as before, we use ḡ_t = (1/n) Σ_{i=1}^{n} ∇_θ f(θ_t; xᵢ). In Figure 1(a), we consider CIFAR-10 and show the trajectory of ∥ḡ_t∥₂ over iterations t for different values of the network width m. For every width, the value of ∥ḡ_t∥₂ stabilizes to a constant value over iterations, empirically validating the RSC condition ∥ḡ_t∥₂² = Ω(poly(L)/√m). Interestingly, the smallest value of ∥ḡ_t∥₂ seems to increase with the width. To study the width dependence further, in Figure 1(b) we plot min_{t∈[T]} ∥ḡ_t∥₂ as a function of the width m for several values of the width. The plot shows that min_{t∈[T]} ∥ḡ_t∥₂ increases steadily with m, illustrating that the RSC condition is empirically satisfied more comfortably for wider networks. In Figures 1(c) and (d), we show similar plots for MNIST and Fashion-MNIST, illustrating the same phenomenon of min_{t∈[T]} ∥ḡ_t∥₂ increasing with m. For the experiments, the network architecture was a 3-layer fully connected neural network with the tanh activation function. The training algorithm is gradient descent (GD) with a constant learning rate, chosen appropriately to keep the training in the NTK regime. Since we are using GD, we use 512 randomly chosen training points for the experiments. The stopping criterion is either a training loss below 10⁻³ or more than 3000 iterations.
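A condensed sketch of the measurement behind Figure 1 is given below, reusing the hypothetical model_grad and loss_and_grad oracles from the earlier sketches; the actual experiments use the setup described above.

    import numpy as np

    def min_avg_grad_norm(loss_and_grad, model_grad, theta0, X, beta,
                          omega=1.0, max_iters=3000, tol=1e-3):
        # Track min_t ||ḡ_t||_2 along a GD trajectory, the quantity plotted
        # against the width m in Figure 1(b)-(d).
        theta, smallest = theta0, np.inf
        eta = omega / beta
        for _ in range(max_iters):
            g_bar = np.stack([model_grad(theta, x) for x in X]).mean(axis=0)
            smallest = min(smallest, np.linalg.norm(g_bar))
            loss, grad = loss_and_grad(theta)
            if loss < tol:
                break
            theta = theta - eta * grad
        return smallest  # repeat over several widths m and plot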
7 CONCLUSIONS

In this paper, we revisit deep learning optimization for feedforward models with smooth activations and make two technical contributions. First, we bound the spectral norm of the Hessian over a large layerwise spectral norm radius ball, highlighting the role of initialization in such analysis. Second, we introduce a new approach to showing geometric convergence in deep learning optimization using restricted strong convexity (RSC). Our analysis sheds considerable new light on deep learning optimization problems, underscores the importance of the initialization variance, and introduces an RSC based alternative to the prevailing NTK based analysis, which may fuel future work.

ACKNOWLEDGMENTS

AB is grateful for support from the National Science Foundation (NSF) through awards IIS 21-31335, OAC 21-30835, DBI 20-21898, as well as a C3.ai research award. MB and LZ are grateful for support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning through awards DMS-2031883 and 814639, as well as NSF IIS-1815697 and the TILOS institute (NSF CCF-2112665).

A SPECTRAL NORM OF THE HESSIAN

We establish the main theorem from Section 4 in this Appendix.
Theorem 4.1 (Hessian Spectral Norm Bound). Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2(L+1)/m), for any xᵢ, i ∈ [n], we have
∥∇²_θ f(θ; xᵢ)∥₂ ≤ c_H/√m , (5)
with c_H = O(L⁵(1 + γ^{6L})(1 + ρ₁)), where γ := σ₁ + ρ/√m.

A.1 ANALYSIS OUTLINE

Our analysis follows that of Liu et al. (2020) and sharpens it to get a better dependence on the depth L of the neural network. We start by defining the following quantities:
Q_∞(f) := max_{1≤l≤L} { ∥∂f/∂α^(l)∥_∞ } , ∂f/∂α^(l) ∈ ℝ^m , (17)
Q₂(f) := max_{1≤l≤L} { ∥∂α^(l)/∂w^(l)∥₂ } , w^(l) := vec(W^(l)) , ∂α^(l)/∂w^(l) ∈ ℝ^{m×m²} , (18)
Q_{2,2,1}(f) := max_{1≤l₁<l₂<l₃≤L} { ∥∂²α^(l₁)/(∂w^(l₁))²∥_{2,2,1} , ∥∂α^(l₁)/∂w^(l₁)∥₂ ∥∂²α^(l₂)/(∂α^(l₂−1)∂w^(l₂))∥_{2,2,1} , (19)
∥∂α^(l₁)/∂w^(l₁)∥₂ ∥∂α^(l₂)/∂w^(l₂)∥₂ ∥∂²α^(l₃)/(∂α^(l₃−1))²∥_{2,2,1} } , (20)
where for an order-3 tensor T ∈ ℝ^{d₁×d₂×d₃} we define the (2,2,1)-norm as follows:
∥T∥_{2,2,1} := sup_{∥x∥₂=∥z∥₂=1} Σ_{k=1}^{d₃} | Σ_{i=1}^{d₁} Σ_{j=1}^{d₂} T_{ijk} xᵢ z_j | , x ∈ ℝ^{d₁}, z ∈ ℝ^{d₂} . (21)
We will also use the notation W^(L+1) := v. A key result established in Liu et al. (2020) provides an upper bound on the spectral norm of the Hessian:
Theorem 4.2 (Liu et al. (2020), Theorem 3.1). Under Assumption 1, assuming there is δ such that ∥∂α^(l)/∂α^(l−1)∥₂ ≤ δ for all l, with C₁ ≤ L²δ^{2L} + Lδ^L + L and C₂ ≤ Lδ^L, we have
∥∇²_θ f(θ; x)∥₂ ≤ 2C₁ Q_{2,2,1}(f) Q_∞(f) + 2√m C₂ Q₂(f) . (8)
In order to prove Theorem 4.1, we prove that Theorem 4.2 holds with high probability, where
• δ = γ follows from Lemma A.3,
• Q₂(f) = O(L(1 + γ^L)) follows from Lemma A.4,
• Q_{2,2,1}(f) = O(L³(1 + γ^{3L})) follows from Lemma A.4 and Lemma A.5, and
• Q_∞(f) = O((1 + γ^L)(1 + ρ₁)/√m) follows from Lemma A.7,
while also establishing precise constants to get a proper form for the constant c_H in Theorem 4.1. As a result, c_H = O(L⁵(1 + γ^{6L})(1 + ρ₁)).
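The (2,2,1)-norm in (21) is a supremum over unit vectors and is not a standard library routine; the following sketch computes a simple Monte Carlo lower bound for it (a heuristic check by random search, not a certified evaluation).

    import numpy as np

    def norm_221_lower_bound(T, trials=1000, rng=np.random.default_rng(0)):
        # Lower bound on ||T||_{2,2,1} = sup_{||x||=||z||=1} Σ_k |Σ_ij T_ijk x_i z_j|
        # obtained by sampling random unit vectors x, z.
        best = 0.0
        for _ in range(trials):
            x = rng.normal(size=T.shape[0]); x /= np.linalg.norm(x)
            z = rng.normal(size=T.shape[1]); z /= np.linalg.norm(z)
            best = max(best, np.abs(np.einsum('ijk,i,j->k', T, x, z)).sum())
        return best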
A.2 SPECTRAL NORMS OF W^(l) AND L2 NORMS OF α^(l)

We start by bounding the spectral norm of the layer-wise matrices at initialization.
Lemma A.1. Consider any l ∈ [L]. If the parameters are initialized as w^(l)_{0,ij} ∼ N(0, σ₀²) with σ₀ = σ₁/(2(1 + √(log m/(2m)))) as in Assumption 2, then with probability at least (1 − 2/m), we have
∥W^(l)₀∥₂ ≤ σ₁√m . (22)
Proof. For an (m_l × m_{l−1}) random matrix W^(l)₀ with i.i.d. entries w^(l)_{0,ij} ∼ N(0, σ₀²), with probability at least (1 − 2 exp(−t²/(2σ₀²))), the largest singular value of W^(l)₀ is bounded by
σ_max(W^(l)₀) ≤ σ₀(√m_l + √m_{l−1}) + t . (23)
This concentration result can be derived as follows: notice that W^(l)₀ = σ₀ W̄^(l)₀, where w̄^(l)_{0,ij} ∼ N(0, 1); thus we can use the expectation bound E[∥W^(l)₀∥₂] = σ₀ E[∥W̄^(l)₀∥₂] ≤ σ₀(√m_l + √m_{l−1}) from Gordon's Theorem for Gaussian matrices (Vershynin, 2012, Theorem 5.32) in the Gaussian concentration result for Lipschitz functions (Vershynin, 2012, Proposition 3.4), considering that B ↦ ∥σ₀B∥₂ is a σ₀-Lipschitz function when the matrix B is treated as a vector. Let us choose t = σ₀√(2 log m) so that (23) holds with probability at least (1 − 2/m). Then, to obtain (22):
Case 1: l = 1. With m₀ = d and m₁ = m, ∥W^(1)₀∥₂ ≤ σ₀(√d + √m + √(2 log m)) ≤ σ₀(2√m + √(2 log m)), since we are in the over-parameterized regime m ≥ d.
Case 2: 2 ≤ l ≤ L. With m_l = m_{l−1} = m, ∥W^(l)₀∥₂ ≤ σ₀(2√m + √(2 log m)).
Now, using σ₀ = σ₁/(2(1 + √(log m/(2m)))) in both cases completes the proof.
Next we bound the spectral norm of the layerwise matrices over the ball.
Proposition A.1. Under Assumption 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2/m),
∥W^(l)∥₂ ≤ (σ₁ + ρ/√m)√m , l ∈ [L].
Proof. By the triangle inequality, for l ∈ [L], ∥W^(l)∥₂ ≤ ∥W^(l)₀∥₂ + ∥W^(l) − W^(l)₀∥₂ ≤(a) σ₁√m + ρ, where (a) follows from Lemma A.1. This completes the proof.
Next, we show that the output α^(l) of layer l has an L2 norm bounded by O(√m).
Lemma A.2. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2l/m), we have
∥α^(l)∥₂ ≤ √m (σ₁ + ρ/√m)^l + √m Σ_{i=1}^{l} (σ₁ + ρ/√m)^{i−1} |ϕ(0)| = (γ^l + |ϕ(0)| Σ_{i=1}^{l} γ^{i−1}) √m .
Proof. Following Allen-Zhu et al. (2019); Liu et al. (2020), we prove the result by recursion. First, recall that since ∥x∥₂² = d, we have ∥α^(0)∥₂ = √d. Then, since m₀ = d and ϕ is 1-Lipschitz,
∥ϕ((1/√d) W^(1) α^(0))∥₂ − ∥ϕ(0)∥₂ ≤ ∥ϕ((1/√d) W^(1) α^(0)) − ϕ(0)∥₂ ≤ ∥(1/√d) W^(1) α^(0)∥₂ ,
so that
∥α^(1)∥₂ = ∥ϕ((1/√d) W^(1) α^(0))∥₂ ≤ ∥(1/√d) W^(1) α^(0)∥₂ + ∥ϕ(0)∥₂ ≤ (1/√d) ∥W^(1)∥₂ ∥α^(0)∥₂ + |ϕ(0)|√m ≤ (σ₁ + ρ/√m)√m + |ϕ(0)|√m ,
where we used Proposition A.1 in the last inequality, which holds with probability at least 1 − 2/m. For the inductive step, we assume that for some l − 1 we have
∥α^(l−1)∥₂ ≤ √m (σ₁ + ρ/√m)^{l−1} + √m Σ_{i=1}^{l−1} (σ₁ + ρ/√m)^{i−1} |ϕ(0)| ,
which holds with probability at least 1 − 2(l−1)/m. Since ϕ is 1-Lipschitz, for layer l we have
∥ϕ((1/√m) W^(l) α^(l−1))∥₂ − ∥ϕ(0)∥₂ ≤ ∥ϕ((1/√m) W^(l) α^(l−1)) − ϕ(0)∥₂ ≤ ∥(1/√m) W^(l) α^(l−1)∥₂ ,
so that
∥α^(l)∥₂ = ∥ϕ((1/√m) W^(l) α^(l−1))∥₂ ≤ ∥(1/√m) W^(l) α^(l−1)∥₂ + ∥ϕ(0)∥₂ ≤ (1/√m) ∥W^(l)∥₂ ∥α^(l−1)∥₂ + √m |ϕ(0)| ≤(a) (σ₁ + ρ/√m) ∥α^(l−1)∥₂ + √m |ϕ(0)| =(b) √m (σ₁ + ρ/√m)^l + √m Σ_{i=1}^{l} (σ₁ + ρ/√m)^{i−1} |ϕ(0)| ,
where (a) follows from Proposition A.1 and (b) from the inductive step. Since we have used Proposition A.1 l times, after a union bound our result holds with probability at least 1 − 2l/m. This completes the proof.
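Lemma A.2 is easy to sanity-check numerically. The sketch below runs the forward pass and compares ∥α^(l)∥₂ with the bound (γ^l + |ϕ(0)| Σ_{i≤l} γ^{i−1})√m; tanh, for which ϕ(0) = 0 so the bound reduces to γ^l √m, is used as a concrete smooth activation.

    import numpy as np

    def check_activation_norms(Ws, x, sigma1, rho, phi=np.tanh):
        # Compare ||α^(l)||_2 against the Lemma A.2 bound, layer by layer.
        m = Ws[-1].shape[0]
        gamma = sigma1 + rho / np.sqrt(m)
        phi0 = abs(float(phi(0.0)))
        alpha = x
        for l, W in enumerate(Ws, start=1):
            alpha = phi(W @ alpha / np.sqrt(alpha.shape[0]))  # 1/√m_{l−1} scaling
            bound = (gamma ** l
                     + phi0 * sum(gamma ** (i - 1) for i in range(1, l + 1))) * np.sqrt(m)
            print(l, np.linalg.norm(alpha), bound)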
A.3 SPECTRAL NORMS OF ∂α^(l)/∂w^(l) AND ∂α^(l)/∂α^(l−1)

Recall that in our setup, the layerwise outputs and pre-activations are respectively given by:
α^(l) = ϕ(α̃^(l)) , α̃^(l) := (1/√m_{l−1}) W^(l) α^(l−1) . (24)
Lemma A.3. Consider any l ∈ {2, . . . , L}. Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2/m),
∥∂α^(l)/∂α^(l−1)∥₂² ≤ (σ₁ + ρ/√m)² = γ² . (25)
Proof. By definition, we have
[∂α^(l)/∂α^(l−1)]_{i,j} = (1/√m) ϕ′(α̃^(l)_i) W^(l)_{ij} . (26)
Since ∥A∥₂ = sup_{∥v∥₂=1} ∥Av∥₂, so that ∥A∥₂² = sup_{∥v∥₂=1} Σᵢ ⟨aᵢ, v⟩², we have that for 2 ≤ l ≤ L,
∥∂α^(l)/∂α^(l−1)∥₂² = sup_{∥v∥₂=1} (1/m) Σ_{i=1}^{m} (ϕ′(α̃^(l)_i) Σ_{j=1}^{m} W^(l)_{ij} v_j)² ≤(a) sup_{∥v∥₂=1} (1/m) ∥W^(l) v∥₂² = (1/m) ∥W^(l)∥₂² ≤(b) γ² ,
where (a) follows from ϕ being 1-Lipschitz by Assumption 1 and (b) from Proposition A.1. This completes the proof.
Lemma A.4. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2l/m),
∥∂α^(l)/∂w^(l)∥₂² ≤ (1/m) [√m (σ₁ + ρ/√m)^{l−1} + √m Σ_{i=1}^{l−1} (σ₁ + ρ/√m)^{i−1} |ϕ(0)|]² = (γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1})² . (27)
Proof. Note that the parameter vector w^(l) = vec(W^(l)) can be indexed with j ∈ [m] and with j′ ∈ [d] when l = 1 and j′ ∈ [m] when l ≥ 2. Then we have
[∂α^(l)/∂w^(l)]_{i,jj′} = [∂α^(l)/∂W^(l)]_{i,jj′} = (1/√m) ϕ′(α̃^(l)_i) α^(l−1)_{j′} 1[i=j] . (28)
For l ∈ {2, . . . , L}, noting that ∂α^(l)/∂w^(l) ∈ ℝ^{m×m²} and ∥V∥_F = ∥vec(V)∥₂ for any matrix V, we have
∥∂α^(l)/∂w^(l)∥₂² = sup_{∥V∥_F=1} (1/m) Σ_{i=1}^{m} (ϕ′(α̃^(l)_i) Σ_{j,j′=1}^{m} α^(l−1)_{j′} 1[i=j] V_{jj′})² ≤ sup_{∥V∥_F=1} (1/m) ∥V α^(l−1)∥₂² ≤ (1/m) sup_{∥V∥_F=1} ∥V∥₂² ∥α^(l−1)∥₂² ≤(a) (1/m) ∥α^(l−1)∥₂² ≤(b) (1/m) [√m (σ₁ + ρ/√m)^{l−1} + √m Σ_{i=1}^{l−1} (σ₁ + ρ/√m)^{i−1} |ϕ(0)|]² = (γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1})² ,
where (a) follows from ∥V∥₂² ≤ ∥V∥_F² for any matrix V, and (b) from Lemma A.2. The l = 1 case follows in a similar manner:
∥∂α^(1)/∂w^(1)∥₂² ≤ (1/d) ∥α^(0)∥₂² = (1/d) ∥x∥₂² = 1 ,
which satisfies the stated form for l = 1. This completes the proof.

A.4 (2,2,1)-NORMS OF ORDER-3 TENSORS

Lemma A.5. Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), each of the following inequalities holds with probability at least (1 − 2l/m):
∥∂²α^(l)/(∂α^(l−1))²∥_{2,2,1} ≤ β_ϕ γ² , (29)
∥∂²α^(l)/(∂α^(l−1)∂W^(l))∥_{2,2,1} ≤ (β_ϕ/2)(γ² + (γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1})²) + 1 , (30)
for l = 2, . . . , L; and
∥∂²α^(l)/(∂W^(l))²∥_{2,2,1} ≤ β_ϕ (γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1})² , (31)
for l ∈ [L].
Proof. For inequality (29), note that from (26) we obtain (∂²α^(l)/(∂α^(l−1))²)_{i,j,k} = (1/m) ϕ′′(α̃^(l)_i) W^(l)_{ik} W^(l)_{ij}, and so
∥∂²α^(l)/(∂α^(l−1))²∥_{2,2,1} = sup_{∥v₁∥₂=∥v₂∥₂=1} (1/m) Σ_{i=1}^{m} |ϕ′′(α̃^(l)_i) (W^(l)v₁)ᵢ (W^(l)v₂)ᵢ| ≤ sup_{∥v₁∥₂=∥v₂∥₂=1} (1/m) β_ϕ Σ_{i=1}^{m} |(W^(l)v₁)ᵢ (W^(l)v₂)ᵢ| ≤(a) sup_{∥v₁∥₂=∥v₂∥₂=1} (1/(2m)) β_ϕ Σ_{i=1}^{m} ((W^(l)v₁)ᵢ² + (W^(l)v₂)ᵢ²) ≤ (1/(2m)) β_ϕ sup_{∥v₁∥₂=∥v₂∥₂=1} (∥W^(l)v₁∥₂² + ∥W^(l)v₂∥₂²) ≤ (1/(2m)) β_ϕ (∥W^(l)∥₂² + ∥W^(l)∥₂²) ≤(b) β_ϕ (σ₁ + ρ/√m)² = β_ϕ γ² , (32)
where (a) follows from 2ab ≤ a² + b² for a, b ∈ ℝ, and (b) from Proposition A.1, with probability at least 1 − 2/m.
For inequality (30), carefully following the chain rule in (28) we obtain
(∂²α^(l)/(∂α^(l−1)∂W^(l)))_{i,jj′,k} = (1/m) ϕ′′(α̃^(l)_i) W^(l)_{ik} α^(l−1)_{j′} 1[j=i] + (1/√m) ϕ′(α̃^(l)_i) 1[i=j] 1[j′=k] .
Then we have
∥∂²α^(l)/(∂α^(l−1)∂W^(l))∥_{2,2,1} = sup_{∥v₁∥₂=∥V₂∥_F=1} Σ_{i=1}^{m} | Σ_{k=1}^{m} Σ_{j=1}^{m} Σ_{j′=1}^{m} ((1/m) ϕ′′(α̃^(l)_i) W^(l)_{ik} α^(l−1)_{j′} 1[j=i] + (1/√m) ϕ′(α̃^(l)_i) 1[i=j] 1[j′=k]) v_{1,k} V_{2,jj′} |
= sup_{∥v₁∥₂=∥V₂∥_F=1} Σ_{i=1}^{m} | (1/m) ϕ′′(α̃^(l)_i) (Σ_{j′=1}^{m} α^(l−1)_{j′} V_{2,ij′}) (Σ_{k=1}^{m} W^(l)_{ik} v_{1,k}) + (1/√m) Σ_{k=1}^{m} ϕ′(α̃^(l)_i) v_{1,k} V_{2,ik} |
≤ sup_{∥v₁∥₂=∥V₂∥_F=1} (1/m) β_ϕ Σ_{i=1}^{m} |(W^(l)v₁)ᵢ (V₂α^(l−1))ᵢ| + (1/√m) Σ_{i=1}^{m} Σ_{k=1}^{m} |v_{1,k} V_{2,ik}|
≤ sup_{∥v₁∥₂=∥V₂∥_F=1} (1/(2m)) β_ϕ Σ_{i=1}^{m} ((W^(l)v₁)ᵢ² + (V₂α^(l−1))ᵢ²) + (1/√m) Σ_{i=1}^{m} ∥v₁∥₂ ∥V_{2,i,:}∥₂
= sup_{∥v₁∥₂=∥V₂∥_F=1} (1/(2m)) β_ϕ (∥W^(l)v₁∥₂² + ∥V₂α^(l−1)∥₂²) + (1/√m) Σ_{i=1}^{m} ∥V_{2,i,:}∥₂
≤(a) (1/(2m)) β_ϕ (∥W^(l)∥₂² + ∥α^(l−1)∥₂²) + ∥V₂∥_F ≤(b) (β_ϕ/2)(γ² + (γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1})²) + 1 ,
where (a) follows from ∥V₂α^(l−1)∥₂ ≤ ∥V₂∥₂ ∥α^(l−1)∥₂ ≤ ∥V₂∥_F ∥α^(l−1)∥₂ = ∥α^(l−1)∥₂ and Σ_{i=1}^{m} ∥V_{2,i,:}∥₂ ≤ √m √(Σ_{i=1}^{m} ∥V_{2,i,:}∥₂²), and (b) follows from Proposition A.1 and Lemma A.2, which altogether holds with probability at least 1 − 2l/m.
For the last inequality (31), we start with the analysis for l ≥ 2. Carefully following the chain rule in (28) we obtain (∂²α^(l)/(∂W^(l))²)_{i,jj′,kk′} = (1/m) ϕ′′(α̃^(l)_i) α^(l−1)_{k′} α^(l−1)_{j′} 1[j=i] 1[k=i]. Then we have
∥∂²α^(l)/(∂W^(l))²∥_{2,2,1} = sup_{∥V₁∥_F=∥V₂∥_F=1} Σ_{i=1}^{m} | Σ_{j,j′=1}^{m} Σ_{k,k′=1}^{m} (1/m) ϕ′′(α̃^(l)_i) α^(l−1)_{k′} α^(l−1)_{j′} 1[j=i] 1[k=i] V_{1,jj′} V_{2,kk′} |
= sup_{∥V₁∥_F=∥V₂∥_F=1} Σ_{i=1}^{m} | (ϕ′′(α̃^(l)_i)/m) (Σ_{j′=1}^{m} α^(l−1)_{j′} V_{1,ij′}) (Σ_{k′=1}^{m} α^(l−1)_{k′} V_{2,ik′}) |
≤ sup_{∥V₁∥_F=∥V₂∥_F=1} (1/m) β_ϕ Σ_{i=1}^{m} |(V₁α^(l−1))ᵢ (V₂α^(l−1))ᵢ| ≤ sup_{∥V₁∥_F=∥V₂∥_F=1} (1/(2m)) β_ϕ Σ_{i=1}^{m} ((V₁α^(l−1))ᵢ² + (V₂α^(l−1))ᵢ²)
= sup_{∥V₁∥_F=∥V₂∥_F=1} (1/(2m)) β_ϕ (∥V₁α^(l−1)∥₂² + ∥V₂α^(l−1)∥₂²) ≤ (1/(2m)) β_ϕ (∥α^(l−1)∥₂² + ∥α^(l−1)∥₂²) ≤ β_ϕ (γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1})² ,
which holds with probability at least 1 − 2(l−1)/m. For the case l = 1, it is easy to show that (∂²α^(1)/(∂W^(1))²)_{i,jj′,kk′} = (1/d) ϕ′′(α̃^(1)_i) x_{k′} x_{j′} 1[j=i] 1[k=i], and so ∥∂²α^(1)/(∂W^(1))²∥_{2,2,1} ≤ β_ϕ. This completes the proof.

A.5 L∞ NORM OF ∂f/∂α^(l)

Let b^(l) := ∂f/∂α^(l) ∈ ℝ^m for any l ∈ [L], and let b^(l)₀ denote b^(l) at initialization. By a direct calculation, we have
b^(l) = ∂f/∂α^(l) = (Π_{l′=l+1}^{L} ∂α^(l′)/∂α^(l′−1)) ∂f/∂α^(L) = (Π_{l′=l+1}^{L} (1/√m) (W^(l′))ᵀ D^(l′)) (1/√m) v ,
where D^(l′) is the diagonal matrix of activation derivatives, i.e., D^(l′)_{ii} = ϕ′(α̃^(l′)_i). Note that we also have the following recursion:
b^(l) = ∂f/∂α^(l) = (∂α^(l+1)/∂α^(l)) ∂f/∂α^(l+1) = (1/√m) (W^(l+1))ᵀ D^(l+1) b^(l+1) .
Lemma A.6. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least 1 − 2(L−l+1)/m,
∥b^(l)∥₂ ≤ (1/√m) (σ₁ + ρ/√m)^{L−l} (1 + ρ₁) (33)
and
∥b^(l)₀∥₂ ≤ σ₁^{L−l}/√m ≤ γ^{L−l}/√m . (34)
Proof. First, note that ∥b^(L)∥₂ = (1/√m) ∥v∥₂ ≤ (1/√m)(∥v₀∥₂ + ∥v − v₀∥₂) ≤ (1/√m)(1 + ρ₁), where the inequality follows from Proposition A.1. Now, for the inductive step, assume ∥b^(l)∥₂ ≤ (σ₁ + ρ/√m)^{L−l} (1/√m)(1 + ρ₁) with probability at least 1 − 2l/m. Then,
∥b^(l−1)∥₂ = ∥(∂α^(l)/∂α^(l−1)) b^(l)∥₂ ≤ ∥∂α^(l)/∂α^(l−1)∥₂ ∥b^(l)∥₂ ≤ (σ₁ + ρ/√m)(σ₁ + ρ/√m)^{L−l} (1/√m)(1 + ρ₁) = (σ₁ + ρ/√m)^{L−l+1} (1/√m)(1 + ρ₁) ,
where the last inequality follows from Lemma A.3, with probability at least 1 − 2(l+1)/m. Since we use Proposition A.1 once at layer L and then Lemma A.3 (L − l) times to reach layer l, everything holds altogether with probability at least 1 − 2(L−l+1)/m. We have finished the proof by induction.
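The recursion for b^(l) translates directly into code. The sketch below uses tanh as a concrete smooth activation (so ϕ′(z) = 1 − tanh²(z)); its output can be compared against the bounds in Lemmas A.6 and A.7.

    import numpy as np

    def backward_vectors(Ws, v, x):
        # b^(L) = v/√m and b^(l) = (W^(l+1))ᵀ D^(l+1) b^(l+1) / √m,
        # with D^(l)_{ii} = ϕ'(α̃^(l)_i); here ϕ = tanh.
        m = Ws[-1].shape[0]
        alpha, pre = x, []
        for W in Ws:
            z = W @ alpha / np.sqrt(alpha.shape[0])
            pre.append(z)
            alpha = np.tanh(z)
        bs = [v / np.sqrt(m)]
        for l in range(len(Ws) - 1, 0, -1):  # build b^(L−1), ..., b^(1)
            D = 1.0 - np.tanh(pre[l]) ** 2
            bs.append(Ws[l].T @ (D * bs[-1]) / np.sqrt(m))
        return bs[::-1]  # entry l−1 holds b^(l)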
Lemma A.7. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least 1 − 2(L−l)/m,
∥b^(l)∥_∞ ≤ (γ^{L−l}/√m)(1 + ρ₁) . (35)
Proof. For any l ∈ [L], by definition the i-th component of b^(l), i.e., b^(l)_i, takes the form b^(l)_i = (∂α^(L)/∂α^(l)_i) ∂f/∂α^(L) = (∂α^(L)/∂α^(l)_i) (1/√m) v. Then, with W^(l)_{:,i} denoting the i-th column of the matrix W^(l),
∥∂α^(L)/∂α^(l)_i∥₂ =(a) ∥(ϕ′(α̃^(l)_i)/√m) (W^(l)_{:,i})ᵀ Π_{l′=l+2}^{L} (∂α^(l′)/∂α^(l′−1))∥₂ ≤(b) (1/√m) ∥W^(l)_{:,i}∥₂ Π_{l′=l+2}^{L} ∥∂α^(l′)/∂α^(l′−1)∥₂ ≤(c) (1/√m) ∥W^(l)_{:,i}∥₂ γ^{L−l−1} ≤(d) γ · γ^{L−l−1} = γ^{L−l} , (36)
where (a) follows from ∂α^(l+1)/∂α^(l)_i = (1/√m) ϕ′(α̃^(l)_i)(W^(l)_{:,i})ᵀ, (b) from ϕ being 1-Lipschitz, (c) from Lemma A.3, and (d) from ∥W^(l)_{:,i}∥₂ ≤ ∥W^(l)∥₂ and Proposition A.1, which altogether holds with probability 1 − 2(L−l)/m. Therefore, for every i ∈ [m],
|b^(l)_i| ≤ |(1/√m)(∂α^(L)/∂α^(l)_i) v| ≤ (1/√m) ∥∂α^(L)/∂α^(l)_i∥₂ ∥v∥₂ ≤ (1/√m) γ^{L−l}(1 + ρ₁) ,
where the last inequality follows from (36) and ∥v∥₂ ≤ ∥v₀∥₂ + ∥v − v₀∥₂ ≤ 1 + ρ₁. This completes the proof.

A.6 USEFUL BOUNDS

Lemma 4.1 (Predictor gradient bounds). Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2(L+1)/m), we have
∥∇_θ f(θ; x)∥₂ ≤ ϱ and ∥∇_x f(θ; x)∥₂ ≤ (γ^L/√m)(1 + ρ₁) , (9)
with ϱ² = (h(L+1))² + (1/m)(1 + ρ₁)² Σ_{l=1}^{L} (h(l))² γ^{2(L−l)}, γ = σ₁ + ρ/√m, h(l) = γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1}.
Proof. We first prove the bound on the gradient with respect to the weights. Using the chain rule, ∂f/∂w^(l) = (∂α^(l)/∂w^(l)) Π_{l′=l+1}^{L} (∂α^(l′)/∂α^(l′−1)) (∂f/∂α^(L)), and so
∥∂f/∂w^(l)∥₂² ≤ ∥∂α^(l)/∂w^(l)∥₂² ∥Π_{l′=l+1}^{L} (∂α^(l′)/∂α^(l′−1)) (∂f/∂α^(L))∥₂² ≤(a) ∥∂α^(l)/∂w^(l)∥₂² γ^{2(L−l)} · (1/m)(1 + ρ₁)² ≤(b) (γ^{l−1} + |ϕ
1. What is the focus of the paper regarding the training of deep learning models? 2. What are the strengths and weaknesses of the proposed approach in analyzing gradient descent's convergence? 3. Are there any concerns or minor comments regarding the paper's content? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper? 5. Are there any questions regarding the paper's results, comparisons, or extensions?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
In the paper, the authors analyze the convergence of gradient descent (GD) for training a deep learning model with a smooth activation function. The paper is presented for a fully connected neural network with a linear last layer, output dimension of 1, and equal width for all layers. Under reasonable assumptions on the loss function, activation function, randomness of the initialization, etc., the authors show that GD converges (geometrically) to the minimizer in the neighborhood around the initialization. The analysis establishes an upper bound on the Hessian of the loss function in the neighborhood and then uses Restricted Strong Convexity of the loss function to argue for convergence to the minimizer. The approach for showing convergence differs from NTK approaches, which are generally restricted to wide networks.
Strengths And Weaknesses
Overall the paper provides a new approach to analyze the behavior of gradient descent when training deep learning models. Below I highlight my concerns and include some minor comments.
Weaknesses
The analysis presented is for a NN with the same width for all layers and with output dimension of 1. The authors point to another paper for the extension to multidimensional output. It would be beneficial to provide a remark, at the least, on the central steps required for extending the result. Another restriction on the network is the activation function. Does the analysis extend to networks with a smooth activation in the last layer? Does it extend to non-smooth activation functions?
In my understanding, the result presented is for convergence to a neighborhood near the initialization. The main point made in the paper is that the network does not have to be as wide for convergence to hold, unlike for NTK approaches. A comparison between the two methods on the required width size is missing.
The experimental results presented are very limited. Following the results presented in the paper, the average gradient satisfies ∥ḡ_t∥₂² = Ω(poly(L)/√m). However, Figure 1b shows that the norm of the average gradient grows with the network width. What accounts for this difference between the theoretical and experimental results?
In the abstract, the authors state "...norm of the Hessian of such models...". This should be reworded to indicate that the Hessian is of the loss function.
On page 2, the authors state the set Q^t_κ without an explanation of what the set is (or a reference to where it is defined).
In Assumption 2, the initialization for v₀ is stated in a roundabout way. I believe it is simply sampled uniformly at random from the unit sphere.
In Theorem 4.1, the bound on the maximum of the Hessian should either depend on n or hold for all xᵢ. This should be made clear.
In Definition 5.2, there is a missing inner product in cos(θ − θ_t, ḡ_t).
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well written and the main paper provides sufficient clarity on the main proof ideas. However some terms are used without being properly introduced.
Novelty and quality: The paper provides (and according to the authors, they are the first to do so) a convergence analysis that uses the Restricted Strong Convexity argument in the context of deep learning models. However the scope of the paper is limited to particular neural networks (see the comment above) and not enough comparison is provided between their result and NTK.
ICLR
Title
Restricted Strong Convexity of Deep Learning Models with Smooth Activations
Abstract
We consider the problem of optimization of deep learning models with smooth activation functions. While there exist influential results on the problem from the "near initialization" perspective, we shed considerable new light on it. In particular, we make two key technical contributions for such models with L layers, width m, and initialization variance σ₀². First, for suitable σ₀², we establish an O(poly(L)/√m) upper bound on the spectral norm of the Hessian of such models, considerably sharpening prior results. Second, we introduce a new analysis of optimization based on Restricted Strong Convexity (RSC), which holds as long as the squared norm of the average gradient of the predictors is Ω(poly(L)/√m) for the square loss. We also present results for more general losses. The RSC based analysis does not need the "near initialization" perspective and guarantees geometric convergence for gradient descent (GD). To the best of our knowledge, ours is the first result establishing geometric convergence of GD based on RSC for deep learning models, thus becoming an alternative sufficient condition for convergence that does not depend on the widely-used Neural Tangent Kernel (NTK). We share preliminary experimental results supporting our theoretical advances.

1 INTRODUCTION

Recent years have seen advances in understanding convergence of gradient descent (GD) and variants for deep learning models (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022; Ji & Telgarsky, 2019; Oymak & Soltanolkotabi, 2020; Nguyen, 2021). Despite the fact that such optimization problems are non-convex, a series of recent results have shown that GD has geometric convergence and finds a near-global solution "near initialization" for wide networks. Such analysis is typically done based on the Neural Tangent Kernel (NTK) (Jacot et al., 2018), in particular by showing that the NTK is positive definite "near initialization," in turn implying that the optimization problem satisfies a condition closely related to the Polyak-Łojasiewicz (PL) condition, which in turn implies geometric convergence to the global minima (Liu et al., 2022; Nguyen, 2021). Such results have been generalized to more flexible forms of "lazy learning" where similar guarantees hold (Chizat et al., 2019).
However, there are concerns regarding whether such "near initialization" or "lazy learning" truly explains the optimization behavior in realistic deep learning models (Geiger et al., 2020; Yang & Hu, 2020; Fort et al., 2020; Chizat et al., 2019). Our work focuses on the optimization of deep models with smooth activation functions, which have become increasingly popular in recent years (Du et al., 2019; Liu et al., 2022; Huang & Yau, 2020). Much of the theoretical convergence analysis of GD has focused on ReLU networks (Allen-Zhu et al., 2019; Nguyen, 2021). Some progress has also been made for deep models with smooth activations, but existing results are based on a variant of the NTK analysis, and the requirements on the width of such models are high (Du et al., 2019; Liu et al., 2022). Based on such background and context, the motivating question behind our work is: Are there other (meaningful) sufficient conditions beyond NTK which lead to (geometric) convergence of GD for deep learning optimization?
Based on such motivation, we make two technical contributions in this paper which shed light on the optimization of deep learning models with smooth activations and with L layers, width m, and initialization variance σ₀². First, for suitable σ₀², we establish an O(poly(L)/√m) upper bound on the spectral norm of the Hessian of such models (Section 4). The bound holds over a large layerwise spectral norm (instead of Frobenius norm) ball B^Spec_{ρ,ρ₁}(θ₀) around the random initialization θ₀, where the radius ρ < √m is arguably much bigger than what real world deep models need. Our analysis builds on and sharpens recent prior work on the topic (Liu et al., 2020). While our analysis holds for Gaussian random initialization of the weights with any variance σ₀², the poly(L) dependence happens when σ₀² ≤ (1/(4 + o(1))) · (1/m) (we handle the 1/m scaling explicitly).
Second, based on our Hessian spectral norm bound, we introduce a new approach to the analysis of optimization of deep models with smooth activations based on the concept of Restricted Strong Convexity (RSC) (Section 5) (Wainwright, 2019; Negahban et al., 2012; Negahban & Wainwright, 2012; Banerjee et al., 2014; Chen & Banerjee, 2015). While RSC has been a core theme in high-dimensional statistics, especially for linear models and convex losses (Wainwright, 2019), to the best of our knowledge RSC has not been considered in the context of non-convex optimization of overparameterized deep models. For a normalized total loss function L(θ) = (1/n) Σ_{i=1}^{n} ℓ(yᵢ, ŷᵢ), ŷᵢ = f(θ; xᵢ), with predictor or neural network model f parameterized by the vector θ and data points {xᵢ, yᵢ}_{i=1}^{n}, when ℓ corresponds to the square loss we show that the total loss function satisfies RSC on a suitable restricted set Q^t_κ ⊂ ℝ^p (Definition 5.2 in Section 5) at step t as long as ∥(1/n) Σ_{i=1}^{n} ∇_θ f(θ_t; xᵢ)∥₂² = Ω(1/√m). We also present similar results for general losses, for which additional assumptions are needed. We show that the RSC property implies a Restricted Polyak-Łojasiewicz (RPL) condition on Q^t_κ, in turn implying a geometric one-step decrease of the loss towards the minimum in Q^t_κ, and subsequently a geometric decrease of the loss towards the minimum in the large (layerwise spectral norm) ball B^Spec_{ρ,ρ₁}(θ₀). The geometric convergence due to RSC is a novel approach in the context of deep learning optimization which does not depend on properties of the NTK.
Thus, the RSC condition provides an alternative sufficient condition for geometric convergence in deep learning optimization to the widely-used NTK condition.
The rest of the paper is organized as follows. We briefly present related work in Section 2 and discuss the problem setup in Section 3. We establish the Hessian spectral norm bound in Section 4 and introduce the RSC based optimization analysis in Section 5. We present experimental results corresponding to the RSC condition in Section 6 and conclude in Section 7. All technical proofs are in the Appendix.

2 RELATED WORK

The literature on gradient descent and variants for deep learning is increasingly large, and we refer the reader to the following surveys for an overview of the field (Fan et al., 2021; Bartlett et al., 2021). Among the theoretical works, we consider (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022) as the closest to our work in terms of their study of convergence for multi-layer neural networks. For a literature review on shallow and/or linear networks, we refer to the recent survey (Fang et al., 2021). Due to the rapidly growing related work, we only refer to the most related or recent work for most parts. Du et al. (2019); Zou & Gu (2019); Allen-Zhu et al. (2019); Liu et al. (2022) considered optimization of the square loss, which we also consider for our main results, and we also present extensions to a more general class of loss functions. Zou & Gu (2019); Zou et al. (2020); Allen-Zhu et al. (2019); Nguyen & Mondelli (2020); Nguyen (2021); Nguyen et al. (2021) analyzed deep ReLU networks. Instead, we consider smooth activation functions, similar to (Du et al., 2019; Liu et al., 2022). The convergence analysis of gradient descent in (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022) relied on the near constancy of the NTK for wide neural networks (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019; Liu et al., 2020), which yields certain desirable properties for their training using gradient descent based methods. One such property is related to the PL condition (Karimi et al., 2016; Nguyen, 2021), formulated as the PL* condition in (Liu et al., 2022). Our work uses a different optimization analysis based on RSC (Wainwright, 2019; Negahban et al., 2012; Negahban & Wainwright, 2012), related to a restricted version of the PL condition. Furthermore, Du et al. (2019); Allen-Zhu et al. (2019); Zou & Gu (2019); Zou et al. (2020) showed convergence in value to a global minimizer of the total loss, as we also do.

3 PROBLEM SETUP: DEEP LEARNING WITH SMOOTH ACTIVATIONS

Consider a training set D = {xᵢ, yᵢ}_{i=1}^{n}, xᵢ ∈ X ⊆ ℝ^d, yᵢ ∈ Y ⊆ ℝ. We denote by X ∈ ℝ^{n×d} the matrix whose i-th row is xᵢᵀ. For a suitable loss function ℓ, the goal is to minimize the empirical loss: L(θ) = (1/n) Σ_{i=1}^{n} ℓ(yᵢ, ŷᵢ) = (1/n) Σ_{i=1}^{n} ℓ(yᵢ, f(θ; xᵢ)), where the prediction ŷᵢ := f(θ; xᵢ) is from a deep model with parameter vector θ ∈ ℝ^p. In our setting f is a feed-forward multi-layer (fully-connected) neural network of depth L with widths m_l, l ∈ [L] := {1, . . . , L}, given by
α^(0)(x) = x , α^(l)(x) = ϕ((1/√m_{l−1}) W^(l) α^(l−1)(x)) , l = 1, . . . , L , f(θ; x) = α^(L+1)(x) = (1/√m_L) vᵀ α^(L)(x) , (1)
where W^(l) ∈ ℝ^{m_l×m_{l−1}}, l ∈ [L], are the layer-wise weight matrices, v ∈ ℝ^{m_L} is the last layer vector, ϕ(·) is the smooth (pointwise) activation function, and the total set of parameters is
θ := (vec(W^(1))ᵀ, . . . , vec(W^(L))ᵀ, vᵀ)ᵀ ∈ ℝ^{Σ_{k=1}^{L} m_k m_{k−1} + m_L} , (2)
with m₀ = d.
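For concreteness, the forward pass in (1) can be written in a few lines of NumPy; the sketch below uses tanh as one admissible smooth activation and, for simplicity, works in the equal-width setting adopted next.

    import numpy as np

    def forward(Ws, v, x, phi=np.tanh):
        # (1): α^(0) = x, α^(l) = ϕ(W^(l) α^(l−1)/√m_{l−1}), f(θ;x) = vᵀα^(L)/√m_L.
        alpha = x
        for W in Ws:  # W^(l) has shape (m_l, m_{l−1})
            alpha = phi(W @ alpha / np.sqrt(alpha.shape[0]))
        return float(v @ alpha / np.sqrt(alpha.shape[0]))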
For simplicity, we will assume that the width of all the layers is the same, i.e., m_l = m, l ∈ [L], so that θ ∈ ℝ^{Lm²+m}. For simplicity, we also consider deep models with only one output, i.e., f(θ; x) ∈ ℝ as in (Du et al., 2019), but our results can be extended to multi-dimensional outputs as in (Zou & Gu, 2019), using V ∈ ℝ^{m_L×k} for k outputs at the last layer; see Appendix C. Define the pointwise loss ℓᵢ := ℓ(yᵢ, ·) : ℝ → ℝ₊ and denote its first and second derivatives as ℓᵢ′ := dℓ(yᵢ, ŷᵢ)/dŷᵢ and ℓᵢ′′ := d²ℓ(yᵢ, ŷᵢ)/dŷᵢ². The particular case of the square loss is ℓ(yᵢ, ŷᵢ) = (yᵢ − ŷᵢ)². We denote the gradient and Hessian of f(·; xᵢ) : ℝ^p → ℝ as ∇ᵢf := ∂f(θ; xᵢ)/∂θ and ∇ᵢ²f := ∂²f(θ; xᵢ)/∂θ². The neural tangent kernel (NTK) K_ntk(·; θ) ∈ ℝ^{n×n} corresponding to parameter θ is defined by K_ntk(xᵢ, x_j; θ) = ⟨∇ᵢf, ∇_jf⟩. By the chain rule, the gradient and Hessian of the empirical loss w.r.t. θ are given by ∂L(θ)/∂θ = (1/n) Σ_{i=1}^{n} ℓᵢ′ ∇ᵢf and ∂²L(θ)/∂θ² = (1/n) Σ_{i=1}^{n} [ℓᵢ′′ ∇ᵢf ∇ᵢfᵀ + ℓᵢ′ ∇ᵢ²f]. Let ∥·∥₂ denote the spectral norm for matrices and the L2-norm for vectors. We make the following assumption regarding the activation function ϕ:
Assumption 1 (Activation function). The activation ϕ is 1-Lipschitz, i.e., |ϕ′| ≤ 1, and β_ϕ-smooth, i.e., |ϕ′′| ≤ β_ϕ.
Remark 3.1. Our analysis holds for any ς_ϕ-Lipschitz smooth activation, with a dependence on ς_ϕ in most key results. The main (qualitative) conclusions stay true if ς_ϕ ≤ 1 + o(1) or ς_ϕ = poly(L), which is typically satisfied for commonly used smooth activations and moderate values of L.
We define two types of balls over the parameters that will be used throughout our analysis.
Definition 3.1 (Norm balls). Given θ̄ ∈ ℝ^p of the form (2) with parameters W̄^(l), l ∈ [L], and v̄, we define
B^Spec_{ρ,ρ₁}(θ̄) := {θ ∈ ℝ^p as in (2) : ∥W^(l) − W̄^(l)∥₂ ≤ ρ, l ∈ [L], ∥v − v̄∥₂ ≤ ρ₁} , (3)
B^Euc_ρ(θ̄) := {θ ∈ ℝ^p as in (2) : ∥θ − θ̄∥₂ ≤ ρ} . (4)
Remark 3.2. The layerwise spectral norm ball B^Spec_{ρ,ρ₁} plays a key role in our analysis. The last layer radius ρ₁ gives more flexibility, and we will usually assume ρ₁ ≤ ρ; e.g., we could choose the desirable operating regime of ρ < √m and ρ₁ = O(1). Our analysis in fact goes through for any choice of ρ, ρ₁, and the detailed results indicate the specific dependencies on both ρ and ρ₁.

4 SPECTRAL NORM OF THE HESSIAN OF THE MODEL

We start with the following assumption regarding the random initialization of the weights.
Assumption 2 (Initialization weights and data normalization). The initialization weights are w^(l)_{0,ij} ∼ N(0, σ₀²) for l ∈ [L], where σ₀ = σ₁/(2(1 + √(log m/(2m)))), σ₁ > 0, and v₀ is a random unit vector with ∥v₀∥₂ = 1. Further, we assume the input data satisfy ∥xᵢ∥₂ = √d, i ∈ [n].
We focus on bounding the spectral norm of the Hessian ∥∇²_θ f(θ; x)∥₂ for θ ∈ B^Spec_{ρ,ρ₁}(θ₀) and any input x ∈ ℝ^d with ∥x∥₂ = √d. The assumption ∥x∥₂ = √d is for convenient scaling; such assumptions are common in the literature (Allen-Zhu et al., 2019; Oymak & Soltanolkotabi, 2020; Nguyen et al., 2021). Prior work (Liu et al., 2020) has considered a similar analysis for θ ∈ B^Euc_ρ(θ₀), effectively a layerwise Frobenius norm ball, which is much smaller than B^Spec_{ρ,ρ₁}(θ₀), the layerwise spectral norm ball. We choose a unit value for the last layer's weight norm for convenience, since our results hold under appropriate scaling for any other constant in O(1). All missing proofs are in Appendix A.
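Assumption 2 is straightforward to implement; the sketch below draws the initialization accordingly (a sketch under the stated assumptions, with a fixed RNG seed purely for reproducibility) and can be combined with the forward pass above. A quick empirical check of the initialization's spectral norm against σ₁√m (cf. Lemma A.1) can be added via np.linalg.norm(W, 2).

    import numpy as np

    def init_theta(d, m, L, sigma1, rng=np.random.default_rng(0)):
        # Assumption 2: W^(l)_{0,ij} ~ N(0, σ0²) with
        # σ0 = σ1 / (2(1 + sqrt(log m / (2m)))), and v0 a random unit vector.
        sigma0 = sigma1 / (2.0 * (1.0 + np.sqrt(np.log(m) / (2.0 * m))))
        Ws = [rng.normal(0.0, sigma0, size=(m, d))]
        Ws += [rng.normal(0.0, sigma0, size=(m, m)) for _ in range(L - 1)]
        v0 = rng.normal(size=m)
        v0 /= np.linalg.norm(v0)
        return Ws, v0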
Theorem 4.1 (Hessian Spectral Norm Bound). Under Assumptions 1 and 2, for θ ∈ B^Spec_{ρ,ρ₁}(θ₀), with probability at least (1 − 2(L+1)/m), for any xᵢ, i ∈ [n], we have
∥∇²_θ f(θ; xᵢ)∥₂ ≤ c_H/√m , (5)
with c_H = O(L⁵(1 + γ^{6L})(1 + ρ₁)), where γ := σ₁ + ρ/√m.
Remark 4.1 (Desirable operating regimes). The constant γ needs careful scrutiny, as c_H depends on γ^{6L}. Let us choose ρ₁ = O(poly(L)). For any choice of the spectral norm radius ρ < √m, we can choose σ₁ ≤ 1 − ρ/√m, ensuring γ ≤ 1 and hence c_H = O(poly(L)). If ρ = O(1), we can keep σ₁ = 1 so that γ = 1 + O(1)/√m, and c_H = O(poly(L)) as long as L < √m, which is common. Both of these give good choices for σ₁ and desirable operating regimes for the result. If we choose σ₁ > 1, an undesirable operating regime, then c_H = O(c^{Θ(L)}), c > 1, and we will need m = Ω(c^{Θ(L)}) for the result to be of interest.
Remark 4.2 (Recent Related Work). In recent work, Liu et al. (2020) analyzed the Hessian spectral norm bound and showed that c_H = Õ(ρ^{3L}) for θ ∈ B^Euc_ρ(θ₀) (logarithmic terms hidden in Õ(·)). Our analysis builds on and sharpens the result in (Liu et al., 2020) in three respects: (a) we have c_H = O(poly(L)(1 + γ^{6L})) for ρ₁ = O(poly(L)), where we can choose σ₁ to make γ ≤ 1 and thus obtain c_H = O(poly(L)), instead of the worse c_H = Õ(ρ^{3L}) in Liu et al. (2020)¹; (b) even for the same ρ, our results hold for a much larger spectral norm ball B^Spec_{ρ,ρ₁}(θ₀) compared to their Euclidean norm ball B^Euc_ρ(θ₀) in (Liu et al., 2020); and (c) to avoid an exponential term, the bound in (Liu et al., 2020) needs ρ ≤ 1, whereas our result can use radius ρ < √m for all intermediate layer matrices and ρ₁ = O(poly(L)) for the last layer vector. Moreover, as a consequence of (b) and (c), our results hold for a larger (spectral norm) ball whose radius can increase with m, unlike the results in Liu et al. (2020), which hold for a smaller (Euclidean) ball with constant radius, i.e., "near initialization."
Remark 4.3 (Exact constant c_H). For completeness, we show the exact expression of the constant c_H in Theorem 4.1, so that the dependencies on the different factors are clear. Let h(l) := γ^{l−1} + |ϕ(0)| Σ_{i=1}^{l−1} γ^{i−1}. Then,
c_H = 2L(L²γ^{2L} + Lγ^L + 1) · (1 + ρ₁) · ψ_H · max_{l∈[L]} γ^{L−l} + 2Lγ^L max_{l∈[L]} h(l) , (6)
where
ψ_H = max_{1≤l₁<l₂≤L} { β_ϕ (h(l₁))² , h(l₁) ((β_ϕ/2)(γ² + (h(l₂))²) + 1) , β_ϕ γ² h(l₁) h(l₂) } . (7)
The source of these terms will be discussed shortly. Note the dependence on ρ₁, the radius for the last layer in B^Spec_{ρ,ρ₁}(θ₀), and why ρ₁ = O(poly(L)) is a desirable operating regime.
¹ See the end of Appendix A for a quick note about the network architecture in our work and the one in (Liu et al., 2020).
Next, we give a high level outline of the proof of Theorem 4.1.
Proof sketch. Our analysis follows the structure developed in Liu et al. (2020), but is considerably sharper, as discussed in Remark 4.2. We start by defining the following quantities: Q_∞(f) := max_{1≤l≤L} {∥∂f/∂α^(l)∥_∞}, ∂f/∂α^(l) ∈ ℝ^m; Q₂(f) := max_{1≤l≤L} {∥∂α^(l)/∂w^(l)∥₂}, w^(l) := vec(W^(l)), ∂α^(l)/∂w^(l) ∈ ℝ^{m×m²}; and Q_{2,2,1}(f) is the maximum over 1 ≤ l₁ < l₂ < l₃ ≤ L of the three quantities ∥∂²α^(l₁)/(∂w^(l₁))²∥_{2,2,1}, ∥∂α^(l₁)/∂w^(l₁)∥₂ ∥∂²α^(l₂)/(∂α^(l₂−1)∂w^(l₂))∥_{2,2,1}, and ∥∂α^(l₁)/∂w^(l₁)∥₂ ∥∂α^(l₂)/∂w^(l₂)∥₂ ∥∂²α^(l₃)/(∂α^(l₃−1))²∥_{2,2,1}, where for an order-3 tensor T ∈ ℝ^{d₁×d₂×d₃} we define the (2,2,1)-norm as ∥T∥_{2,2,1} := sup_{∥x∥₂=∥z∥₂=1} Σ_{k=1}^{d₃} |Σ_{i=1}^{d₁} Σ_{j=1}^{d₂} T_{ijk} xᵢ z_j|, x ∈ ℝ^{d₁}, z ∈ ℝ^{d₂}. The following result in (Liu et al., 2020) provides an upper bound on the spectral norm of the Hessian.
Theorem 4.2 (Liu et al. (2020), Theorem 3.1).
Under Assumption 1, assuming there is δ such that ∥∂α^(l)/∂α^(l−1)∥₂ ≤ δ for all l, with C₁ ≤ L²δ^{2L} + Lδ^L + L and C₂ ≤ Lδ^L, we have
∥∇²_θ f(θ; x)∥₂ ≤ 2C₁ Q_{2,2,1}(f) Q_∞(f) + 2√m C₂ Q₂(f) . (8)
In order to prove Theorem 4.1, we prove that Theorem 4.2 holds with high probability with δ = γ, Q₂(f) = O(L(1 + γ^L)), Q_{2,2,1}(f) = O(L³(1 + γ^{3L})), and Q_∞(f) = O((1 + γ^L)(1 + ρ₁)/√m). Thus the upper bound (8) becomes O(poly(L)(1 + γ^{6L})(1 + ρ₁)/√m), providing a benign polynomial dependence on L when γ ≤ 1, rather than an exponential dependence on the radius ρ as in (Liu et al., 2020).
We focus on establishing RSC w.r.t. the tuple (Bt, θt), where BSpecρ,ρ1 (θ0) becomes the feasible set for the optimization and B Euc ρ2 (θt) is a Euclidean ball around the current iterate. Assumption 3 (Loss function). The loss ℓi, i ∈ [n], is (i) strongly convex, i.e., ℓ′′i ≥ a > 0 and (ii) smooth, i.e., ℓ′′i ≤ b. Assumption 3 is satisfied by commonly used loss functions such as square loss, where a = b = 2. We state the RSC result for square loss; the result for other losses and proofs of all technical results in this section are in Appendix B. Theorem 5.1 (RSC for Square Loss). For square loss, under Assumptions 1 and 2, with probability at least (1− 2(L+1)m ), ∀θ ′ ∈ Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt) with θt ∈ B Spec ρ,ρ1 (θ0), L(θ′) ≥ L(θt) + ⟨θ′ − θt,∇θL(θt)⟩+ αt 2 ∥θ′ − θt∥22 , with αt = c1 ∥ḡt∥ 2 2 − c2√ m , (10) where ḡt = 1n ∑n i=1 ∇θf(θt;xi), c1 = 2κ2 and c2 = 2cH(2ϱρ2 + √ cρ1,γ), with cH as in Theorem 4.1, ϱ as in Lemma 4.1, and cρ1,γ as in Lemma 4.2. Consequently, L satisfies RSC w.r.t. (Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt), θt) whenever αt > 0. Remark 5.1. The RSC condition αt > 0 is satisfied at iteration t as long as ∥ḡt∥22 > c2c1√m where c1, c2 are exactly specified in Theorem 5.1. Indeed, if γ (and so σ1 and ρ) is chosen according to the desirable operating regimes (see Remark 4.1), ρ1 = O(poly(L)) and ρ2 = O(poly(L)), then we can use the bounds from Lemma 4.2 and obtain that the RSC condition is satisfied when ∥ḡt∥22 > O(poly(L))√ m . The condition is arguably mild, does not need the NTK condition λmin(Kntk(·; θt)) > 0, and is expected to hold till convergence (see Remark 5.3). Moreover, it is a local condition at step t and has no dependence on being ”near initialization” in the sense of θt ∈ BEucρ (θ0) for ρ = O(1) as in (Liu et al., 2020; 2022). For the convergence analysis, we also need to establish a smoothness property of the total loss. Theorem 5.2 (Local Smoothness for Square Loss). For square loss, under Assumptions 1 and 2, with probability at least (1− 2(L+1)m ), ∀θ, θ ′ ∈ BSpecρ,ρ1 (θ0), L(θ′) ≤ L(θ) + ⟨θ′ − θ,∇θL(θ)⟩+ β 2 ∥θ′ − θ∥22 , with β = 2ϱ2 + 2cH √ cρ1,γ√ m , (11) with cH as in Theorem 4.1, ϱ as in Lemma 4.1, and cρ1,γ as in Lemma 4.2. Consequently, L is locally β-smooth. Moreover, if γ (and so σ1 and ρ) is chosen according to the desirable operating regimes (see Remark 4.1) and ρ1 = O(poly(L)), then β = O(poly(L)). Remark 5.2. Similar to the case of the standard strong convexity and smoothness, the RSC and smoothness parameters respectively in Theorems 5.1 and 5.2 satisfy αt < β. To see this note that αt < 2κ 2∥ḡt∥22 ≤ 2ϱ2 ≤ β, where the second inequality follows since κ ≤ 1, and ∥ḡt∥22 ≤ ϱ2 using Lemma 4.1. Next, we show that the RSC condition w.r.t. the tuple (Bt, θt) implies a restricted Polyak-Łojasiewicz (RPL) condition w.r.t. the tuple (Bt, θt), unlike standard PL which holds without restrictions (Karimi et al., 2016). Lemma 5.1 (RSC ⇒ RPL). Let Bt := Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt). In the setting of Theorem 5.1, if αt > 0, then the tuple (Bt, θt) satisfies the Restricted Polyak-Łojasiewicz (RPL) condition, i.e., L(θt)− inf θ∈Bt L(θ) ≤ 1 2αt ∥∇θL(θt)∥22 , (12) with probability at least (1− 2(L+1)m ). For the rest of the convergence analysis, we make the following assumption where T can be viewed as the stopping time so the convergence analysis holds given the assumptions are satisfied. Assumption 4 (Iterates’ conditions). For iterates {θt}t=0,1,...,T : (A4.1) αt > 0; (A4.2) θt ∈ BSpecρ,ρ1 (θ0). Remark 5.3 (Assumption (A4.1)). 
From Remark 5.1, (A4.1) is satisfied as long as ∥ḡt∥22 > c2c1√m where c1, c2 are as in Theorem 5.1, which is arguably a mild condition. In Section 6 we will present some empirical findings that show that this condition on ∥ḡt∥22 behaves well empirically. We now consider the particular case of gradient descent (GD) for the iterates: θt+1 = θt − ηt∇L(θt), where ηt is chosen so that θt+1 ∈ BSpecρ,ρ1 (θ0) and ρ2 is chosen so that θt+1 ∈ B Euc ρ2 (θt), which are sufficient for the analysis of Theorem 5.1 — we specify suitable choices in the sequel (see Remark 5.4). Given RPL w.r.t. (Bt, θt), gradient descent leads to a strict decrease of loss in Bt. Lemma 5.2 (Local Loss Reduction in Bt). Let αt, β be as in Theorems 5.1 and 5.2 respectively, and Bt := Qtκ ∩ BSpecρ,ρ1 (θ0) ∩ B Euc ρ2 (θt). Consider Assumptions 1, 2, and 4, and gradient descent with step size ηt = ωtβ , ωt ∈ (0, 2). Then, for any θt+1 ∈ arginfθ∈Bt L(θ), we have with probability at least (1− 2(L+1)m ), L(θt+1)− L(θt+1) ≤ ( 1− αtωt β (2− ωt) ) (L(θt)− L(θt+1)) . (13) Building on Lemma 5.2, we show that GD in fact leads to a geometric decrease in the loss relative to the minimum value of L(·) in the set BSpecρ,ρ1 (θ0). Theorem 5.3 (Global Loss Reduction in BSpecρ,ρ1 (θ0)). Let αt, β be as in Theorems 5.1 and 5.2 respectively, and Bt := Qtκ ∩ BSpecρ,ρ1 (θ0) ∩ B Euc ρ2 (θt). Let θ ∗ ∈ arginfθ∈BSpecρ,ρ1 (θ0) L(θ), θt+1 ∈ arginfθ∈Bt L(θ), and γt := L(θt+1)−L(θ∗) L(θt)−L(θ∗) . Let αt, β be as in Theorems 5.1 and 5.2 respectively, and Bt := Qtκ ∩ BSpecρ,ρ1 (θ0) ∩ B Euc ρ2 (θt). Consider Assumptions 1, 2, and 4, and gradient descent with step size ηt = ωtβ , ωt ∈ (0, 2). Then, with probability at least (1− 2(L+1) m ), we have we have γt ∈ [0, 1) and L(θt+1)− L(θ∗) ≤ ( 1− αtωt β (1− γt)(2− ωt) ) (L(θt)− L(θ∗)) . (14) As long as the conditions in Theorem 5.3 are kept across iterations, there will be a geometric decrease in loss. For Assumption 4, we have discussed (A4.1) in Remark 5.1, and we discuss (A4.2) next. Remark 5.4 (Assumption (A4.2)). Consider we run gradient descent iterations until some stopping time T > 0. Given radius ρ < √ m , Assumption (A4.2) θt ∈ BSpecρ,ρ1 (θ0), t = 0, . . . , T , can be verified empirically. Alternatively, we can choose suitable step sizes ηt to ensure the property using the geometric convergence from Theorem 5.3. Assume that our goal is to get L(θT )− L(θ∗) ≤ ϵ. Then, with χT := mint∈[T ] αtωtβ (1 − γt)(2 − ωt), Assumption (A4.1) along with Remark 5.2 ensures χT < 1. Then, it suffices to have T = ⌈log(L(θ0)−L(θ ∗) ϵ )/ log 1 1−χT ⌉ = Θ(log 1 ϵ ). Then, to ensure θt ∈ BSpecρ,ρ1 (θ0), t ∈ [T ], in the case of the square loss, since ∥∇L(θt)∥2 ≤ c for some constant c (see Corollary 4.1), it suffices to have ηt ≤ min{ρ,ρ1}Θ(log 1ϵ ) . Moreover, we point out that having ρ2 ≥ ηtc ensures ∥θt+1 − θt∥2 ≤ ρ2 ⇒ θt+1 ∈ BEucρ2 (θt), which in this case can be guaranteed if ρ2 ≥ min{ρ,ρ1}Θ(log 1ϵ ) . The argument above is informal, but illustrates that Assumption (A4.1) along with suitable constant step sizes ηt would ensure (A4.2). Thus, Assumption (A4.1), which ensures the RSC condition, is the main assumption behind the analysis. The conditions in Assumption 4 (see Remarks 5.1 and 5.4) along with Theorem 5.3 imply that the RSC based convergence analysis holds for a much larger layerwise spectral radius norm ball BSpecρ,ρ1 (θ0) with any radius ρ < √ m and ρ1 = O(poly(L)). Remark 5.5 (RSC and NTK). 
In the context of square loss, the NTK condition for geometric convergence needs λmin(Kntk(·; θt)) ≥ c0 > 0 for every t, i.e., uniformly bounded away from 0 by a constant c0 > 0. The NTK condition can also be written as inf v:∥v∥2=1 ∥∥∥∥∥ n∑ i=1 vi∇θf(θt;xi) ∥∥∥∥∥ 2 2 ≥ c0 > 0 . (15) In contrast, the proposed RSC condition (Theorem 5.1) needs∥∥∥∥∥ 1n n∑ i=1 ∇θf(θt;xi) ∥∥∥∥∥ 2 2 ≥ c̄0√ m , (16) where m is the width and c̄0 = c2c1 where c1, c2 are constants defined in Theorem 5.1. As a quadratic form on the NTK, the RSC condition can be viewed as using a specific v in (15), i.e., vi = 1√n for i ∈ [n], since the RSC condition is ∥∥∥∑ni=1 1√n∇θf(θt;xi)∥∥∥22 ≥ c̄0n√m . For m = Ω(n2), the RSC condition is more general since NTK ⇒ RSC, but the converse is not necessarily true. Remark 5.6 (RSC covers different settings than NTK). The NTK condition may be violated in certain settings, e.g., ∇θf(θt;xi), i = 1, . . . , n are linearly dependent, xi ≈ xj for some i ̸= j, layer widths are small ml < n, etc., but the optimization may work in practice. The RSC condition provides a way to analyze convergence in such settings. The RSC condition gets violated when 1 n ∑n i=1 ∇θf(θt;xi) ≈ 0, which does not seem to happen in practice (see Section 6), and future work will focus on understanding the phenomena. Finally, note that it is possible to construct a set of gradient vectors which satisfy the NTK condition but violates the RSC condition. Our perspective is to view the NTK and the RSC as two different sufficient conditions and geometric convergence of gradient descent (GD) is guaranteed as long as one of them is satisfied in any step. 6 RSC CONDITION: EXPERIMENTAL RESULTS In this section, we present experimental results verifying the RSC condition∥∥ 1 n ∑n i=1 ∇θf(θt;xi) ∥∥2 2 = Ω ( poly(L)√ m ) , t = 1, . . . , T , on standard benchmarks: CIFAR- 10, MNIST, and Fashion-MNIST. For simplicity, as before, we use ḡt = 1n ∑n i=1 ∇θf(θt;xi). In Figure 1(a), we consider CIFAR-10 and show the trajectory of ∥ḡt∥2 over iterations t, for different values of the network width m. For any width, the value of ∥ḡt∥2 stabilizes to a constant value over iterations, empirically validating the RSC condition ∥ḡt∥22 = Ω(poly(L)/ √ m). Interestingly, the smallest value of ∥ḡt∥2 seems to increase with the width. To study the width dependence further, in Figure 1(b), we plot mint∈[T ] ∥ḡt∥2 as a function of width m for several values of the width. The plot shows that mint∈[T ] ∥ḡt∥2 increases steadily with m illustrating that the RSC condition is empirically satisfied more comfortably for wider networks. In Figure 1(c) and (d), we show similar plots for MNIST and Fashion-MNIST illustrating the same phenomena of mint∈[T ] ∥ḡt∥2 increasing with m. For the experiments, the network architecture we used had 3-layer fully connected neural network with tanh activation function. The training algorithm is gradient descent (GD) width constant learning rate, chosen appropriately to keep the training in NTK regime. Since we are using GD, we use 512 randomly chosen training points for the experiments. The stopping criteria is either training loss < 10−3 or number of iterations larger than 3000. 7 CONCLUSIONS In this paper, we revisit deep learning optimization for feedforward models with smooth activations, and make two technical contributions. First, we bound the spectral norm of the Hessian over a large layerwise spectral norm radius ball, highlighting the role of initialization in such analysis. 
Second, we introduce a new approach to showing geometric convergence in deep learning optimization using restricted strong convexity (RSC). Our analysis sheds considerably new light on deep learning optimization problems, underscores the importance of initialization variance, and introduces a RSC based alternative to the prevailing NTK based analysis, which may fuel future work. ACKNOWLEDGMENTS AB is grateful for support from the National Science Foundation (NSF) through awards IIS 21-31335, OAC 21-30835, DBI 20-21898, as well as a C3.ai research award. MB and LZ are grateful for support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning2 through awards DMS-2031883 and 814639 as well as NSF IIS-1815697 and the TILOS institute (NSF CCF-2112665). A SPECTRAL NORM OF THE HESSIAN We establish the main theorem from Section 4 in this Appendix. Theorem 4.1 (Hessian Spectral Norm Bound). Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least (1− 2(L+1)m ), for any xi, i ∈ [n], we have∥∥∇2θf(θ;xi)∥∥2 ≤ cH√m , (5) with cH = O(L5(1 + γ6L)(1 + ρ1)) where γ := σ1 + ρ√m . A.1 ANALYSIS OUTLINE Our analysis follows that of Liu et al. (2020) and sharpens the analysis to get better dependence on the depth L of the neural network. We start by defining the following quantities: Q∞(f) := max 1≤l≤L {∥∥∥∥ ∂f∂α(l) ∥∥∥∥ ∞ } , ∂f ∂α(l) ∈ Rm , (17) Q2(f) := max 1≤l≤L {∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥ 2 } , w(l) := vec(W (l)) , ∂α(l) ∂w(l) ∈ Rm×m 2 , (18) Q2,2,1(f) := max 1≤l1<l2<l3≤L {∥∥∥∥ ∂2α(l1) ∂w(l1) 2 ∥∥∥∥ 2,2,1 , ∥∥∥∥ ∂α(l1)∂w(l1) ∥∥∥∥ 2 ∥∥∥∥ ∂2α(l2)∂α(l2−1)∂w(l2) ∥∥∥∥ 2,2,1 , (19) ∥∥∥∥ ∂α(l1)∂w(l1) ∥∥∥∥ 2 ∥∥∥∥ ∂α(l2)∂w(l2) ∥∥∥∥ 2 ∥∥∥∥ ∂2α(l3) ∂α(l3−1) 2 ∥∥∥∥ 2,2,1 } (20) where for an order-3 tensor T ∈ Rd1×d2×d3 we define the (2, 2, 1)−norm as follows, ∥T∥2,2,1 := sup ∥x∥2=∥z∥2=1 d3∑ k=1 ∣∣∣∣∣∣ d1∑ i=1 d2∑ j=1 Tijkxizj ∣∣∣∣∣∣ , x ∈ Rd1 , z ∈ Rd2 . (21) We will also use the notation W (L+1) := v. A key result established in Liu et al. (2020) provides an upper bound to the spectral norm of the Hessian: Theorem 4.2 (Liu et al. (2020), Theorem 3.1). Under Assumptions 1, assuming there is δ such that∥∥∥ ∂α(l)∂α(l−1) ∥∥∥2 ≤ δ, with C1 ≤ L2δ2L + LδL + L and C2 ≤ LδL, we have∥∥∇2θf(θ;x)∥∥2 ≤ 2C1Q2,2,1(f)Q∞(f) + 2√mC2Q2(f) , (8) In order to prove Theorem 4.1, we prove that Theorem 4.2 holds with high-probability where • δ = γ follows from Lemma A.3, • Q2(f) = O(L(1 + γL)) follows from Lemma A.4, • Q2,2,1(f) = O(L3(1 + γ3L)) follows from Lemma A.4 and Lemma A.5, and • Q∞(f) = O ( (1+γL)(1+ρ1)√ m ) follows from Lemma A.7 , while also establishing precise constants to get a proper form for the constant cH in Theorem 4.1. As a result, cH ≤ O(L 5(1+γ6L)(1+ρ1)√ m ). A.2 SPECTRAL NORMS OF W (l) AND L2 NORMS OF α(l) We start by bounding the spectral norm of the layer-wise matrices at initialization. Lemma A.1. Consider any l ∈ [L]. If the parameters are initialized as w(l)0,ij ∼ N (0, σ20) where σ0 = σ1 2(1+ √ log m 2m ) as in Assumption 2, then with probability at least ( 1− 2m ) , we have ∥W (l)0 ∥2 ≤ σ1 √ m . (22) Proof. For a (ml ×ml−1) random matrix W (l)0 with i.i.d. entries w (l) 0,ij ∈ N (0, σ20), with probability at least (1− 2 exp(−t2/2σ20)), the largest singular value of W0 is bounded by σmax(W (ℓ) 0 ) ≤ σ0( √ ml + √ ml−1) + t . 
A.2 SPECTRAL NORMS OF $W^{(l)}$ AND $L_2$ NORMS OF $\alpha^{(l)}$

We start by bounding the spectral norm of the layer-wise matrices at initialization.

Lemma A.1. Consider any $l \in [L]$. If the parameters are initialized as $w^{(l)}_{0,ij} \sim \mathcal{N}(0,\sigma_0^2)$ where $\sigma_0 = \frac{\sigma_1}{2\big(1+\sqrt{\frac{\log m}{2m}}\big)}$ as in Assumption 2, then with probability at least $\big(1-\frac{2}{m}\big)$, we have
$$\|W^{(l)}_0\|_2 \;\leq\; \sigma_1\sqrt{m}~. \qquad (22)$$

Proof. For an $(m_l \times m_{l-1})$ random matrix $W^{(l)}_0$ with i.i.d. entries $w^{(l)}_{0,ij} \sim \mathcal{N}(0,\sigma_0^2)$, with probability at least $(1-2\exp(-t^2/2\sigma_0^2))$, the largest singular value of $W^{(l)}_0$ is bounded as
$$\sigma_{\max}(W^{(l)}_0) \;\leq\; \sigma_0(\sqrt{m_l}+\sqrt{m_{l-1}}) + t~. \qquad (23)$$
This concentration result can be derived as follows: notice that $W^{(l)}_0 = \sigma_0\bar W^{(l)}_0$, where $\bar w^{(l)}_{0,ij} \sim \mathcal{N}(0,1)$; thus we can use the expectation bound $\mathbb{E}[\|W^{(l)}_0\|_2] = \sigma_0\,\mathbb{E}[\|\bar W^{(l)}_0\|_2] \leq \sigma_0(\sqrt{m_l}+\sqrt{m_{l-1}})$ from Gordon's Theorem for Gaussian matrices (Vershynin, 2012, Theorem 5.32) in the Gaussian concentration result for Lipschitz functions (Vershynin, 2012, Proposition 3.4), considering that $B \mapsto \|\sigma_0 B\|_2$ is a $\sigma_0$-Lipschitz function when the matrix $B$ is treated as a vector. Let us choose $t = \sigma_0\sqrt{2\log m}$ so that (23) holds with probability at least $(1-\frac{2}{m})$. Then, to obtain (22):

Case 1: $l = 1$. With $m_0 = d$ and $m_1 = m$, $\|W^{(1)}_0\|_2 \leq \sigma_0(\sqrt{d}+\sqrt{m}+\sqrt{2\log m}) \leq \sigma_0(2\sqrt{m}+\sqrt{2\log m})$, since we are in the over-parameterized regime $m \geq d$.

Case 2: $2 \leq l \leq L$. With $m_l = m_{l-1} = m$, $\|W^{(l)}_0\|_2 \leq \sigma_0(2\sqrt{m}+\sqrt{2\log m})$.

Now, using $\sigma_0 = \frac{\sigma_1}{2\big(1+\sqrt{\frac{\log m}{2m}}\big)}$ in both cases completes the proof.

Next, we bound the spectral norm of the layerwise matrices anywhere in the ball.

Proposition A.1. Under Assumption 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1-\frac{2}{m}\big)$,
$$\|W^{(l)}\|_2 \;\leq\; \Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)\sqrt{m}~, \qquad l \in [L]~.$$

Proof. By the triangle inequality, for $l \in [L]$,
$$\|W^{(l)}\|_2 \;\leq\; \|W^{(l)}_0\|_2 + \|W^{(l)}-W^{(l)}_0\|_2 \;\overset{(a)}{\leq}\; \sigma_1\sqrt{m}+\rho~,$$
where (a) follows from Lemma A.1 and the definition of $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$. This completes the proof.

Next, we show that the output $\alpha^{(l)}$ of layer $l$ has an $L_2$ norm bounded by $O(\sqrt{m})$.

Lemma A.2. Consider any $l \in [L]$. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1-\frac{2l}{m}\big)$, we have
$$\|\alpha^{(l)}\|_2 \;\leq\; \sqrt{m}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{l} + \sqrt{m}\sum_{i=1}^{l}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{i-1}|\phi(0)| \;=\; \Big(\gamma^{l}+|\phi(0)|\sum_{i=1}^{l}\gamma^{i-1}\Big)\sqrt{m}~.$$

Proof. Following Allen-Zhu et al. (2019); Liu et al. (2020), we prove the result by recursion. First, recall that since $\|x\|_2^2 = d$, we have $\|\alpha^{(0)}\|_2 = \sqrt{d}$. Then, since $m_0 = d$ and $\phi$ is 1-Lipschitz,
$$\Big\|\phi\Big(\tfrac{1}{\sqrt{d}}W^{(1)}\alpha^{(0)}\Big)\Big\|_2 - \|\phi(0)\mathbf{1}\|_2 \;\leq\; \Big\|\phi\Big(\tfrac{1}{\sqrt{d}}W^{(1)}\alpha^{(0)}\Big)-\phi(0)\mathbf{1}\Big\|_2 \;\leq\; \Big\|\tfrac{1}{\sqrt{d}}W^{(1)}\alpha^{(0)}\Big\|_2~,$$
so that
$$\|\alpha^{(1)}\|_2 \;\leq\; \frac{1}{\sqrt{d}}\|W^{(1)}\|_2\|\alpha^{(0)}\|_2 + |\phi(0)|\sqrt{m} \;\leq\; \Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)\sqrt{m} + |\phi(0)|\sqrt{m}~,$$
where we used Proposition A.1 in the last inequality, which holds with probability at least $1-\frac{2}{m}$. For the inductive step, we assume that for some $l-1$ we have
$$\|\alpha^{(l-1)}\|_2 \;\leq\; \sqrt{m}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{l-1} + \sqrt{m}\sum_{i=1}^{l-1}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{i-1}|\phi(0)|~,$$
which holds with probability at least $1-\frac{2(l-1)}{m}$. Since $\phi$ is 1-Lipschitz, for layer $l$ we have, by the same argument as above,
$$\|\alpha^{(l)}\|_2 \;\leq\; \frac{1}{\sqrt{m}}\|W^{(l)}\|_2\|\alpha^{(l-1)}\|_2 + \sqrt{m}|\phi(0)| \;\overset{(a)}{\leq}\; \Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)\|\alpha^{(l-1)}\|_2 + \sqrt{m}|\phi(0)| \;\overset{(b)}{\leq}\; \sqrt{m}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{l} + \sqrt{m}\sum_{i=1}^{l}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{i-1}|\phi(0)|~,$$
where (a) follows from Proposition A.1 and (b) from the inductive hypothesis. Since we have used Proposition A.1 $l$ times, after a union bound the result holds with probability at least $1-\frac{2l}{m}$. This completes the proof.
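As a quick sanity check of Lemma A.1 (ours, not from the paper): with the stated $\sigma_0$, the spectral norm of a square Gaussian layer matrix should fall below $\sigma_1\sqrt{m}$ in essentially every draw once $m$ is moderately large. The width and trial count below are arbitrary.

```python
# Numerical check (ours) of Lemma A.1: ||W_0||_2 <= sigma_1 * sqrt(m) w.h.p.
import numpy as np

rng = np.random.default_rng(2)
sigma1, m, trials = 1.0, 1024, 20
sigma0 = sigma1 / (2.0 * (1.0 + np.sqrt(np.log(m) / (2.0 * m))))
ratios = [np.linalg.norm(rng.normal(0.0, sigma0, (m, m)), ord=2) / (sigma1 * np.sqrt(m))
          for _ in range(trials)]
print('max ||W_0||_2 / (sigma_1 sqrt(m)) over trials:', round(max(ratios), 4))  # expect < 1
```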
A.3 SPECTRAL NORMS OF $\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}$ AND $\frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}}$

Recall that in our setup the layerwise outputs and pre-activations are respectively given by
$$\alpha^{(l)} = \phi\big(\tilde\alpha^{(l)}\big)~, \qquad \tilde\alpha^{(l)} := \frac{1}{\sqrt{m_{l-1}}}W^{(l)}\alpha^{(l-1)}~. \qquad (24)$$

Lemma A.3. Consider any $l \in \{2,\ldots,L\}$. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1-\frac{2}{m}\big)$,
$$\Big\|\frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}}\Big\|_2^2 \;\leq\; \Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^2 \;=\; \gamma^2~. \qquad (25)$$

Proof. By definition, we have
$$\Big[\frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}}\Big]_{i,j} = \frac{1}{\sqrt{m}}\,\phi'(\tilde\alpha^{(l)}_i)\,W^{(l)}_{ij}~. \qquad (26)$$
Since $\|A\|_2 = \sup_{\|v\|_2=1}\|Av\|_2$, so that $\|A\|_2^2 = \sup_{\|v\|_2=1}\sum_i\langle a_i,v\rangle^2$ where $a_i$ is the $i$-th row of $A$, we have for $2 \leq l \leq L$,
$$\Big\|\frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}}\Big\|_2^2 = \sup_{\|v\|_2=1}\frac{1}{m}\sum_{i=1}^m\Big(\phi'(\tilde\alpha^{(l)}_i)\sum_{j=1}^m W^{(l)}_{ij}v_j\Big)^2 \overset{(a)}{\leq} \sup_{\|v\|_2=1}\frac{1}{m}\|W^{(l)}v\|_2^2 = \frac{1}{m}\|W^{(l)}\|_2^2 \overset{(b)}{\leq} \gamma^2~,$$
where (a) follows from $\phi$ being 1-Lipschitz by Assumption 1, and (b) from Proposition A.1. This completes the proof.

Lemma A.4. Consider any $l \in [L]$. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1-\frac{2l}{m}\big)$,
$$\Big\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\Big\|_2^2 \;\leq\; \frac{1}{m}\Big[\sqrt{m}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{l-1}+\sqrt{m}\sum_{i=1}^{l-1}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{i-1}|\phi(0)|\Big]^2 = \Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2~. \qquad (27)$$

Proof. Note that the parameter vector $w^{(l)} = \mathrm{vec}(W^{(l)})$ can be indexed by pairs $jj'$ with $j \in [m]$, and with $j' \in [d]$ when $l = 1$ and $j' \in [m]$ when $l \geq 2$. Then we have
$$\Big[\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\Big]_{i,jj'} = \Big[\frac{\partial \alpha^{(l)}}{\partial W^{(l)}}\Big]_{i,jj'} = \frac{1}{\sqrt{m}}\,\phi'(\tilde\alpha^{(l)}_i)\,\alpha^{(l-1)}_{j'}\,\mathbb{1}_{[i=j]}~. \qquad (28)$$
For $l \in \{2,\ldots,L\}$, noting that $\frac{\partial \alpha^{(l)}}{\partial w^{(l)}} \in \mathbb{R}^{m\times m^2}$ and $\|V\|_F = \|\mathrm{vec}(V)\|_2$ for any matrix $V$, we have
$$\Big\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\Big\|_2^2 = \sup_{\|V\|_F=1}\frac{1}{m}\sum_{i=1}^m\Big(\phi'(\tilde\alpha^{(l)}_i)\sum_{j,j'=1}^m\alpha^{(l-1)}_{j'}\mathbb{1}_{[i=j]}V_{jj'}\Big)^2 \leq \sup_{\|V\|_F=1}\frac{1}{m}\|V\alpha^{(l-1)}\|_2^2 \leq \frac{1}{m}\sup_{\|V\|_F=1}\|V\|_2^2\|\alpha^{(l-1)}\|_2^2 \overset{(a)}{\leq} \frac{1}{m}\|\alpha^{(l-1)}\|_2^2 \overset{(b)}{\leq} \Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2~,$$
where (a) follows from $\|V\|_2^2 \leq \|V\|_F^2$ for any matrix $V$, and (b) from Lemma A.2. The $l = 1$ case follows in a similar manner:
$$\Big\|\frac{\partial \alpha^{(1)}}{\partial w^{(1)}}\Big\|_2^2 \;\leq\; \frac{1}{d}\|\alpha^{(0)}\|_2^2 = \frac{1}{d}\|x\|_2^2 = 1~,$$
which satisfies the stated form for $l = 1$. That completes the proof.

A.4 $(2,2,1)$-NORMS OF ORDER-3 TENSORS

Lemma A.5. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, each of the following inequalities holds with probability at least $\big(1-\frac{2l}{m}\big)$:
$$\Big\|\frac{\partial^2 \alpha^{(l)}}{(\partial \alpha^{(l-1)})^2}\Big\|_{2,2,1} \;\leq\; \beta_\phi\gamma^2~, \qquad (29)$$
$$\Big\|\frac{\partial^2 \alpha^{(l)}}{\partial \alpha^{(l-1)}\partial W^{(l)}}\Big\|_{2,2,1} \;\leq\; \frac{\beta_\phi}{2}\Big[\gamma^2+\Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2\Big]+1~, \qquad (30)$$
for $l = 2,\ldots,L$; and
$$\Big\|\frac{\partial^2 \alpha^{(l)}}{(\partial W^{(l)})^2}\Big\|_{2,2,1} \;\leq\; \beta_\phi\Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2~, \qquad (31)$$
for $l \in [L]$.

Proof. For inequality (29), note that from (26) we obtain $\big(\frac{\partial^2 \alpha^{(l)}}{(\partial \alpha^{(l-1)})^2}\big)_{i,j,k} = \frac{1}{m}\phi''(\tilde\alpha^{(l)}_i)W^{(l)}_{ik}W^{(l)}_{ij}$, and so
$$\Big\|\frac{\partial^2 \alpha^{(l)}}{(\partial \alpha^{(l-1)})^2}\Big\|_{2,2,1} = \sup_{\|v_1\|_2=\|v_2\|_2=1}\frac{1}{m}\sum_{i=1}^m\big|\phi''(\tilde\alpha^{(l)}_i)(W^{(l)}v_1)_i(W^{(l)}v_2)_i\big| \leq \sup_{\|v_1\|_2=\|v_2\|_2=1}\frac{\beta_\phi}{m}\sum_{i=1}^m\big|(W^{(l)}v_1)_i(W^{(l)}v_2)_i\big| \overset{(a)}{\leq} \sup_{\|v_1\|_2=\|v_2\|_2=1}\frac{\beta_\phi}{2m}\sum_{i=1}^m\big[(W^{(l)}v_1)_i^2+(W^{(l)}v_2)_i^2\big] \leq \frac{\beta_\phi}{2m}\big(\|W^{(l)}\|_2^2+\|W^{(l)}\|_2^2\big) \overset{(b)}{\leq} \beta_\phi(\sigma_1+\rho/\sqrt{m})^2 = \beta_\phi\gamma^2~, \qquad (32)$$
where (a) follows from $2ab \leq a^2+b^2$ for $a,b \in \mathbb{R}$, and (b) from Proposition A.1, with probability at least $1-\frac{2}{m}$. For inequality (30), carefully following the chain rule in (28) we obtain
$$\Big(\frac{\partial^2 \alpha^{(l)}}{\partial \alpha^{(l-1)}\partial W^{(l)}}\Big)_{i,jj',k} = \frac{1}{m}\phi''(\tilde\alpha^{(l)}_i)W^{(l)}_{ik}\alpha^{(l-1)}_{j'}\mathbb{1}_{[j=i]} + \frac{1}{\sqrt{m}}\phi'(\tilde\alpha^{(l)}_i)\mathbb{1}_{[i=j]}\mathbb{1}_{[j'=k]}~.$$
Then, we have
$$\Big\|\frac{\partial^2 \alpha^{(l)}}{\partial \alpha^{(l-1)}\partial W^{(l)}}\Big\|_{2,2,1} = \sup_{\|v_1\|_2=\|V_2\|_F=1}\sum_{i=1}^m\Big|\sum_{k=1}^m\sum_{j=1}^m\sum_{j'=1}^m\Big(\frac{1}{m}\phi''(\tilde\alpha^{(l)}_i)W^{(l)}_{ik}\alpha^{(l-1)}_{j'}\mathbb{1}_{[j=i]}+\frac{1}{\sqrt{m}}\phi'(\tilde\alpha^{(l)}_i)\mathbb{1}_{[i=j]}\mathbb{1}_{[j'=k]}\Big)v_{1,k}V_{2,jj'}\Big|$$
$$= \sup_{\|v_1\|_2=\|V_2\|_F=1}\sum_{i=1}^m\Big|\frac{1}{m}\phi''(\tilde\alpha^{(l)}_i)\Big(\sum_{j'=1}^m\alpha^{(l-1)}_{j'}V_{2,ij'}\Big)\Big(\sum_{k=1}^m W^{(l)}_{ik}v_{1,k}\Big)+\frac{1}{\sqrt{m}}\phi'(\tilde\alpha^{(l)}_i)\sum_{k=1}^m v_{1,k}V_{2,ik}\Big|$$
$$\leq \sup_{\|v_1\|_2=\|V_2\|_F=1}\frac{\beta_\phi}{m}\sum_{i=1}^m\big|(W^{(l)}v_1)_i(V_2\alpha^{(l-1)})_i\big| + \frac{1}{\sqrt{m}}\sum_{i=1}^m\sum_{k=1}^m|v_{1,k}V_{2,ik}|$$
$$\leq \sup_{\|v_1\|_2=\|V_2\|_F=1}\frac{\beta_\phi}{2m}\sum_{i=1}^m\big[(W^{(l)}v_1)_i^2+(V_2\alpha^{(l-1)})_i^2\big] + \frac{1}{\sqrt{m}}\sum_{i=1}^m\|v_1\|_2\|V_{2,i,:}\|_2$$
$$= \sup_{\|v_1\|_2=\|V_2\|_F=1}\frac{\beta_\phi}{2m}\big(\|W^{(l)}v_1\|_2^2+\|V_2\alpha^{(l-1)}\|_2^2\big) + \frac{1}{\sqrt{m}}\sum_{i=1}^m\|V_{2,i,:}\|_2$$
$$\overset{(a)}{\leq} \frac{\beta_\phi}{2m}\big(\|W^{(l)}\|_2^2+\|\alpha^{(l-1)}\|_2^2\big) + \|V_2\|_F \;\overset{(b)}{\leq}\; \frac{\beta_\phi}{2}\Big[\gamma^2+\Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2\Big]+1~,$$
where (a) follows from $\|V_2\alpha^{(l-1)}\|_2 \leq \|V_2\|_2\|\alpha^{(l-1)}\|_2 \leq \|V_2\|_F\|\alpha^{(l-1)}\|_2 = \|\alpha^{(l-1)}\|_2$ and $\sum_{i=1}^m\|V_{2,i,:}\|_2 \leq \sqrt{m}\sqrt{\sum_{i=1}^m\|V_{2,i,:}\|_2^2}$, and (b) follows from Proposition A.1 and Lemma A.2, which altogether holds with probability at least $1-\frac{2l}{m}$.

For the last inequality (31), we start with the analysis for $l \geq 2$. Carefully following the chain rule in (28) we obtain $\big(\frac{\partial^2\alpha^{(l)}}{(\partial W^{(l)})^2}\big)_{i,jj',kk'} = \frac{1}{m}\phi''(\tilde\alpha^{(l)}_i)\alpha^{(l-1)}_{k'}\alpha^{(l-1)}_{j'}\mathbb{1}_{[j=i]}\mathbb{1}_{[k=i]}$. Then we have
$$\Big\|\frac{\partial^2\alpha^{(l)}}{(\partial W^{(l)})^2}\Big\|_{2,2,1} = \sup_{\|V_1\|_F=\|V_2\|_F=1}\sum_{i=1}^m\Big|\frac{\phi''(\tilde\alpha^{(l)}_i)}{m}\Big(\sum_{j'=1}^m\alpha^{(l-1)}_{j'}V_{1,ij'}\Big)\Big(\sum_{k'=1}^m\alpha^{(l-1)}_{k'}V_{2,ik'}\Big)\Big|$$
$$\leq \sup_{\|V_1\|_F=\|V_2\|_F=1}\frac{\beta_\phi}{m}\sum_{i=1}^m\big|(V_1\alpha^{(l-1)})_i(V_2\alpha^{(l-1)})_i\big| \leq \sup_{\|V_1\|_F=\|V_2\|_F=1}\frac{\beta_\phi}{2m}\sum_{i=1}^m\big[(V_1\alpha^{(l-1)})_i^2+(V_2\alpha^{(l-1)})_i^2\big]$$
$$= \sup_{\|V_1\|_F=\|V_2\|_F=1}\frac{\beta_\phi}{2m}\big(\|V_1\alpha^{(l-1)}\|_2^2+\|V_2\alpha^{(l-1)}\|_2^2\big) \leq \frac{\beta_\phi}{2m}\big(\|\alpha^{(l-1)}\|_2^2+\|\alpha^{(l-1)}\|_2^2\big) \leq \beta_\phi\Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2~,$$
which holds with probability at least $1-\frac{2(l-1)}{m}$. For the case $l = 1$, it is easy to show that $\big(\frac{\partial^2\alpha^{(1)}}{(\partial W^{(1)})^2}\big)_{i,jj',kk'} = \frac{1}{d}\phi''(\tilde\alpha^{(1)}_i)x_{k'}x_{j'}\mathbb{1}_{[j=i]}\mathbb{1}_{[k=i]}$, and so $\big\|\frac{\partial^2\alpha^{(1)}}{(\partial W^{(1)})^2}\big\|_{2,2,1} \leq \beta_\phi$. This completes the proof.

A.5 $L_\infty$ NORM OF $\frac{\partial f}{\partial \alpha^{(l)}}$

Let $b^{(l)} := \frac{\partial f}{\partial \alpha^{(l)}} \in \mathbb{R}^m$ for any $l \in [L]$, and let $b^{(l)}_0$ denote $b^{(l)}$ at initialization. By a direct calculation, we have
$$b^{(l)} = \frac{\partial f}{\partial \alpha^{(l)}} = \Big(\prod_{l'=l+1}^{L}\frac{1}{\sqrt{m}}(W^{(l')})^\top D^{(l')}\Big)\frac{1}{\sqrt{m}}v~,$$
where $D^{(l')}$ is the diagonal matrix of activation derivatives, i.e., $D^{(l')}_{ii} = \phi'(\tilde\alpha^{(l')}_i)$. Note that we also have the following recursion:
$$b^{(l)} = \frac{\partial f}{\partial \alpha^{(l)}} = \Big(\frac{\partial \alpha^{(l+1)}}{\partial \alpha^{(l)}}\Big)^{\!\top}\frac{\partial f}{\partial \alpha^{(l+1)}} = \frac{1}{\sqrt{m}}(W^{(l+1)})^\top D^{(l+1)}b^{(l+1)}~.$$

Lemma A.6. Consider any $l \in [L]$. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1-\frac{2(L-l+1)}{m}$,
$$\|b^{(l)}\|_2 \;\leq\; \frac{1}{\sqrt{m}}\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{L-l}(1+\rho_1) \qquad (33)$$
and
$$\|b^{(l)}_0\|_2 \;\leq\; \frac{\sigma_1^{L-l}}{\sqrt{m}} \;\leq\; \frac{\gamma^{L-l}}{\sqrt{m}}~. \qquad (34)$$

Proof. First, note that $\|b^{(L)}\|_2 = \frac{1}{\sqrt{m}}\|v\|_2 \leq \frac{1}{\sqrt{m}}(\|v_0\|_2+\|v-v_0\|_2) \leq \frac{1}{\sqrt{m}}(1+\rho_1)$, where the last inequality follows from the definition of $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$ and $\|v_0\|_2 = 1$. Now, for the inductive step, assume $\|b^{(l)}\|_2 \leq \big(\sigma_1+\frac{\rho}{\sqrt{m}}\big)^{L-l}\frac{1}{\sqrt{m}}(1+\rho_1)$ holds with probability at least $1-\frac{2(L-l)}{m}$. Then,
$$\|b^{(l-1)}\|_2 = \Big\|\Big(\frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}}\Big)^{\!\top}b^{(l)}\Big\|_2 \leq \Big\|\frac{\partial \alpha^{(l)}}{\partial \alpha^{(l-1)}}\Big\|_2\|b^{(l)}\|_2 \leq \Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)\Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{L-l}\frac{1}{\sqrt{m}}(1+\rho_1) = \Big(\sigma_1+\frac{\rho}{\sqrt{m}}\Big)^{L-l+1}\frac{1}{\sqrt{m}}(1+\rho_1)~,$$
where the middle inequality follows from Lemma A.3. Since we use Proposition A.1 once at layer $L$ and then Lemma A.3 $(L-l)$ times to reach layer $l$, a union bound gives that everything holds altogether with probability at least $1-\frac{2(L-l+1)}{m}$. The bound (34) at initialization follows by the same argument with $\rho = \rho_1 = 0$ and $\sigma_1 \leq \gamma$. This completes the proof by induction.
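The backward recursion for $b^{(l)}$ is easy to check numerically. The sketch below is ours (the depth, width, and tanh activation are illustrative assumptions): it propagates $b^{(l)}$ backwards at initialization, i.e., $\rho = \rho_1 = 0$, and compares $\|b^{(l)}\|_2$ with the bound $\sigma_1^{L-l}/\sqrt{m}$ from (34).

```python
# Numerical sketch (ours) of Lemma A.6 at initialization for a tanh network.
import numpy as np

rng = np.random.default_rng(3)
L, m, d, sigma1 = 4, 512, 10, 1.0
sigma0 = sigma1 / (2.0 * (1.0 + np.sqrt(np.log(m) / (2.0 * m))))  # Assumption 2
x = rng.standard_normal(d)
x *= np.sqrt(d) / np.linalg.norm(x)                    # ||x||_2 = sqrt(d)

Ws = [rng.normal(0.0, sigma0, (m, d))] + \
     [rng.normal(0.0, sigma0, (m, m)) for _ in range(L - 1)]
v = rng.standard_normal(m)
v /= np.linalg.norm(v)                                 # ||v_0||_2 = 1

a, pres = x, []                                        # forward pass
for l, W in enumerate(Ws):
    pre = W @ a / np.sqrt(d if l == 0 else m)          # pre-activation of layer l+1
    pres.append(pre)
    a = np.tanh(pre)

b = v / np.sqrt(m)                                     # b^(L) = v / sqrt(m)
for l in range(L - 1, 0, -1):                          # compute b^(L-1), ..., b^(1)
    Dphi = 1.0 - np.tanh(pres[l]) ** 2                 # phi'(pre-activation, layer l+1)
    b = Ws[l].T @ (Dphi * b) / np.sqrt(m)              # recursion preceding (33)-(34)
    print(f'l={l}: ||b^(l)||_2 = {np.linalg.norm(b):.4f}  '
          f'bound sigma1^(L-l)/sqrt(m) = {sigma1 ** (L - l) / np.sqrt(m):.4f}')
```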
Lemma A.7. Consider any $l \in [L]$. Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1-\frac{2(L-l)}{m}$,
$$\|b^{(l)}\|_\infty \;\leq\; \frac{\gamma^{L-l}}{\sqrt{m}}(1+\rho_1)~. \qquad (35)$$

Proof. For any $l \in [L]$, by definition the $i$-th component of $b^{(l)}$, i.e., $b^{(l)}_i$, takes the form
$$b^{(l)}_i = \Big(\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\Big)^{\!\top}\frac{\partial f}{\partial \alpha^{(L)}} = \Big(\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\Big)^{\!\top}\frac{1}{\sqrt{m}}v~.$$
Then, with $W^{(l+1)}_{:,i}$ denoting the $i$-th column of the matrix $W^{(l+1)}$,
$$\Big\|\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\Big\|_2 \overset{(a)}{=} \Big\|\Big(\prod_{l'=l+2}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\Big)\frac{1}{\sqrt{m}}D^{(l+1)}W^{(l+1)}_{:,i}\Big\|_2 \overset{(b)}{\leq} \frac{1}{\sqrt{m}}\|W^{(l+1)}_{:,i}\|_2\prod_{l'=l+2}^{L}\Big\|\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\Big\|_2 \overset{(c)}{\leq} \frac{1}{\sqrt{m}}\|W^{(l+1)}_{:,i}\|_2\,\gamma^{L-l-1} \overset{(d)}{\leq} \gamma\cdot\gamma^{L-l-1} = \gamma^{L-l}~, \qquad (36)$$
where (a) follows from $\frac{\partial \alpha^{(l+1)}}{\partial \alpha^{(l)}_i} = \frac{1}{\sqrt{m}}D^{(l+1)}W^{(l+1)}_{:,i}$, (b) from $\phi$ being 1-Lipschitz so that $\|D^{(l+1)}\|_2 \leq 1$, (c) from Lemma A.3, and (d) from $\|W^{(l+1)}_{:,i}\|_2 \leq \|W^{(l+1)}\|_2$ and Proposition A.1, which altogether holds with probability at least $1-\frac{2(L-l)}{m}$. Therefore, for every $i \in [m]$,
$$|b^{(l)}_i| = \Big|\frac{1}{\sqrt{m}}\Big(\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\Big)^{\!\top}v\Big| \leq \frac{1}{\sqrt{m}}\Big\|\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\Big\|_2\|v\|_2 \leq \frac{1}{\sqrt{m}}\gamma^{L-l}(1+\rho_1)~,$$
where the last inequality follows from (36) and $\|v\|_2 \leq \|v_0\|_2+\|v-v_0\|_2 \leq 1+\rho_1$. This completes the proof.

A.6 USEFUL BOUNDS

Lemma 4.1 (Predictor gradient bounds). Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1-\frac{2(L+1)}{m}\big)$, we have
$$\|\nabla_\theta f(\theta;x)\|_2 \leq \varrho \qquad \text{and} \qquad \|\nabla_x f(\theta;x)\|_2 \leq \frac{\gamma^L}{\sqrt{m}}(1+\rho_1)~, \qquad (9)$$
with $\varrho^2 = (h(L+1))^2 + \frac{1}{m}(1+\rho_1)^2\sum_{l=1}^L (h(l))^2\gamma^{2(L-l)}$, $\gamma = \sigma_1+\frac{\rho}{\sqrt{m}}$, and $h(l) = \gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}$.

Proof. We first prove the bound on the gradient with respect to the weights. Using the chain rule,
$$\frac{\partial f}{\partial w^{(l)}} = \frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\,\prod_{l'=l+1}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\cdot\frac{\partial f}{\partial \alpha^{(L)}}~,$$
and so
$$\Big\|\frac{\partial f}{\partial w^{(l)}}\Big\|_2^2 \leq \Big\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\Big\|_2^2\,\Big\|\prod_{l'=l+1}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\,\frac{\partial f}{\partial \alpha^{(L)}}\Big\|_2^2 \overset{(a)}{\leq} \Big\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\Big\|_2^2\,\gamma^{2(L-l)}\cdot\frac{1}{m}(1+\rho_1)^2 \overset{(b)}{\leq} \Big(\gamma^{l-1}+|\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}\Big)^2\gamma^{2(L-l)}\,\frac{(1+\rho_1)^2}{m} = (h(l))^2\,\gamma^{2(L-l)}\,\frac{(1+\rho_1)^2}{m}~,$$
where (a) follows from Lemma A.3 and $\|\frac{\partial f}{\partial \alpha^{(L)}}\|_2 = \frac{1}{\sqrt{m}}\|v\|_2 \leq \frac{1+\rho_1}{\sqrt{m}}$, and (b) from Lemma A.4.
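To illustrate the input-gradient bound of Lemma 4.1 at initialization (where $\rho = \rho_1 = 0$, so the stated bound reads $\|\nabla_x f(\theta_0;x)\|_2 \leq \sigma_1^L/\sqrt{m}$), the following is a dependency-free finite-difference sketch of ours. The architecture sizes are arbitrary, and a single random draw only probes the typical case, not the worst case.

```python
# Finite-difference sketch (ours) of the ||grad_x f||_2 bound in Lemma 4.1.
import numpy as np

rng = np.random.default_rng(4)
L, m, d, sigma1, eps = 3, 512, 10, 1.0, 1e-5
sigma0 = sigma1 / (2.0 * (1.0 + np.sqrt(np.log(m) / (2.0 * m))))  # Assumption 2
Ws = [rng.normal(0.0, sigma0, (m, d))] + \
     [rng.normal(0.0, sigma0, (m, m)) for _ in range(L - 1)]
v = rng.standard_normal(m)
v /= np.linalg.norm(v)                                 # ||v_0||_2 = 1

def f(x):
    a = x
    for l, W in enumerate(Ws):
        a = np.tanh(W @ a / np.sqrt(d if l == 0 else m))
    return v @ a / np.sqrt(m)                          # f(theta_0; x)

x = rng.standard_normal(d)
x *= np.sqrt(d) / np.linalg.norm(x)                    # ||x||_2 = sqrt(d)
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(d)])
print('||grad_x f||_2 =', np.linalg.norm(grad), ' bound =', sigma1 ** L / np.sqrt(m))
```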
1. What are the main contributions of the paper regarding single-output deep learning models with smooth activations?
2. How do the two contributions of the paper connect or relate to each other?
3. What are the strengths and weaknesses of the paper, particularly in terms of its assumptions and conditions for restricted strong convexity?
4. Can you provide examples or further explanations regarding the needed conditions for restricted strong convexity, especially regarding the squared norm of the average gradient?
5. How does the paper view the relationship between NTK and RSC, and how do these concepts inform each other in the context of deep learning models?
6. What does "step-size is chosen appropriately to keep training in NTK regimes" mean in the experimental context, and how does this relate to the paper's overall findings?
Summary Of The Paper
The paper makes two contributions for single-output deep learning models with smooth activations:
1. It upper bounds the spectral norm of the model's Hessian over a layer-wise spectral norm ball, which is larger than the Euclidean ball studied in previous work. Hence, the result holds under relaxed conditions on how close the parameters are to initialization.
2. It proves a restricted strong convexity property along the path of GD iterates that avoid directions orthogonal to the average gradient and that stay close to initialization with respect to the layer-wise spectral norm.

Strengths And Weaknesses
Strengths:
- The paper is generally well-written. The authors make a particular effort to make the presentation less "dry", with several remarks on intuitions and interpretations of their results.
- The contributions are made clear, and it is clarified what the novelties are compared to previous works.
- Thm 1 improves on a previous result by (Liu et al.).
- Thm 5.2 provides an alternative criterion for linear convergence that is different from the now-common uniform minimum-eigenvalue bound of the NTK.

Weaknesses:
- The two contributions of the paper are not well connected. Have I missed something on this? Perhaps the authors can clarify how the content of Sec. 4 informs that of Sec. 5.
- How is Lemma 5.1 different from what is already shown in Karimi et al. (that RSC => RPL), other than that here you are talking about restricted PL?
- The result on RSC requires (essentially) two assumptions. (a) The first is that GD travels a path on which iterates are not orthogonal to the average gradient. While it is mentioned that this is something observed in practice, no further evidence is given. Also, it would be useful to add some intuition on why this is needed. (b) The second needed condition is that the squared norm of the average gradient is greater than $c/\sqrt{m}$, where $m$ is the network's width. Remark 5.5 is nice, but it would strengthen the argument if the authors could show an example of a concrete architecture where (20) holds but (19) does not.
- In Remark 5.6 the authors mention: "our perspective is to view the NTK and RSC as two different sufficient conditions ...". This remark is somewhat contradictory to the flavor of comments in the rest of the paper, e.g., Remark 5.5, and also the earlier claim that the layer-wise spectral ball does not require being in the NTK regime.
- In the experiments: what does it mean that the step-size is chosen appropriately to keep training in the NTK regime?
- Minor: above Thm 1, cross-entropy is not strongly convex.

Clarity, Quality, Novelty And Reproducibility
The paper is generally well-written. The authors make a particular effort to make the presentation less "dry", with several remarks on intuitions and interpretations of their results. The contributions are made clear, and it is clarified what the novelties are compared to previous works.
ICLR
In the context of square loss, the NTK condition for geometric convergence needs λmin(Kntk(·; θt)) ≥ c0 > 0 for every t, i.e., uniformly bounded away from 0 by a constant c0 > 0. The NTK condition can also be written as inf v:∥v∥2=1 ∥∥∥∥∥ n∑ i=1 vi∇θf(θt;xi) ∥∥∥∥∥ 2 2 ≥ c0 > 0 . (15) In contrast, the proposed RSC condition (Theorem 5.1) needs∥∥∥∥∥ 1n n∑ i=1 ∇θf(θt;xi) ∥∥∥∥∥ 2 2 ≥ c̄0√ m , (16) where m is the width and c̄0 = c2c1 where c1, c2 are constants defined in Theorem 5.1. As a quadratic form on the NTK, the RSC condition can be viewed as using a specific v in (15), i.e., vi = 1√n for i ∈ [n], since the RSC condition is ∥∥∥∑ni=1 1√n∇θf(θt;xi)∥∥∥22 ≥ c̄0n√m . For m = Ω(n2), the RSC condition is more general since NTK ⇒ RSC, but the converse is not necessarily true. Remark 5.6 (RSC covers different settings than NTK). The NTK condition may be violated in certain settings, e.g., ∇θf(θt;xi), i = 1, . . . , n are linearly dependent, xi ≈ xj for some i ̸= j, layer widths are small ml < n, etc., but the optimization may work in practice. The RSC condition provides a way to analyze convergence in such settings. The RSC condition gets violated when 1 n ∑n i=1 ∇θf(θt;xi) ≈ 0, which does not seem to happen in practice (see Section 6), and future work will focus on understanding the phenomena. Finally, note that it is possible to construct a set of gradient vectors which satisfy the NTK condition but violates the RSC condition. Our perspective is to view the NTK and the RSC as two different sufficient conditions and geometric convergence of gradient descent (GD) is guaranteed as long as one of them is satisfied in any step. 6 RSC CONDITION: EXPERIMENTAL RESULTS In this section, we present experimental results verifying the RSC condition∥∥ 1 n ∑n i=1 ∇θf(θt;xi) ∥∥2 2 = Ω ( poly(L)√ m ) , t = 1, . . . , T , on standard benchmarks: CIFAR- 10, MNIST, and Fashion-MNIST. For simplicity, as before, we use ḡt = 1n ∑n i=1 ∇θf(θt;xi). In Figure 1(a), we consider CIFAR-10 and show the trajectory of ∥ḡt∥2 over iterations t, for different values of the network width m. For any width, the value of ∥ḡt∥2 stabilizes to a constant value over iterations, empirically validating the RSC condition ∥ḡt∥22 = Ω(poly(L)/ √ m). Interestingly, the smallest value of ∥ḡt∥2 seems to increase with the width. To study the width dependence further, in Figure 1(b), we plot mint∈[T ] ∥ḡt∥2 as a function of width m for several values of the width. The plot shows that mint∈[T ] ∥ḡt∥2 increases steadily with m illustrating that the RSC condition is empirically satisfied more comfortably for wider networks. In Figure 1(c) and (d), we show similar plots for MNIST and Fashion-MNIST illustrating the same phenomena of mint∈[T ] ∥ḡt∥2 increasing with m. For the experiments, the network architecture we used had 3-layer fully connected neural network with tanh activation function. The training algorithm is gradient descent (GD) width constant learning rate, chosen appropriately to keep the training in NTK regime. Since we are using GD, we use 512 randomly chosen training points for the experiments. The stopping criteria is either training loss < 10−3 or number of iterations larger than 3000. 7 CONCLUSIONS In this paper, we revisit deep learning optimization for feedforward models with smooth activations, and make two technical contributions. First, we bound the spectral norm of the Hessian over a large layerwise spectral norm radius ball, highlighting the role of initialization in such analysis. 
Second, we introduce a new approach to showing geometric convergence in deep learning optimization using restricted strong convexity (RSC). Our analysis sheds considerably new light on deep learning optimization problems, underscores the importance of initialization variance, and introduces a RSC based alternative to the prevailing NTK based analysis, which may fuel future work. ACKNOWLEDGMENTS AB is grateful for support from the National Science Foundation (NSF) through awards IIS 21-31335, OAC 21-30835, DBI 20-21898, as well as a C3.ai research award. MB and LZ are grateful for support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning2 through awards DMS-2031883 and 814639 as well as NSF IIS-1815697 and the TILOS institute (NSF CCF-2112665). A SPECTRAL NORM OF THE HESSIAN We establish the main theorem from Section 4 in this Appendix. Theorem 4.1 (Hessian Spectral Norm Bound). Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least (1− 2(L+1)m ), for any xi, i ∈ [n], we have∥∥∇2θf(θ;xi)∥∥2 ≤ cH√m , (5) with cH = O(L5(1 + γ6L)(1 + ρ1)) where γ := σ1 + ρ√m . A.1 ANALYSIS OUTLINE Our analysis follows that of Liu et al. (2020) and sharpens the analysis to get better dependence on the depth L of the neural network. We start by defining the following quantities: Q∞(f) := max 1≤l≤L {∥∥∥∥ ∂f∂α(l) ∥∥∥∥ ∞ } , ∂f ∂α(l) ∈ Rm , (17) Q2(f) := max 1≤l≤L {∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥ 2 } , w(l) := vec(W (l)) , ∂α(l) ∂w(l) ∈ Rm×m 2 , (18) Q2,2,1(f) := max 1≤l1<l2<l3≤L {∥∥∥∥ ∂2α(l1) ∂w(l1) 2 ∥∥∥∥ 2,2,1 , ∥∥∥∥ ∂α(l1)∂w(l1) ∥∥∥∥ 2 ∥∥∥∥ ∂2α(l2)∂α(l2−1)∂w(l2) ∥∥∥∥ 2,2,1 , (19) ∥∥∥∥ ∂α(l1)∂w(l1) ∥∥∥∥ 2 ∥∥∥∥ ∂α(l2)∂w(l2) ∥∥∥∥ 2 ∥∥∥∥ ∂2α(l3) ∂α(l3−1) 2 ∥∥∥∥ 2,2,1 } (20) where for an order-3 tensor T ∈ Rd1×d2×d3 we define the (2, 2, 1)−norm as follows, ∥T∥2,2,1 := sup ∥x∥2=∥z∥2=1 d3∑ k=1 ∣∣∣∣∣∣ d1∑ i=1 d2∑ j=1 Tijkxizj ∣∣∣∣∣∣ , x ∈ Rd1 , z ∈ Rd2 . (21) We will also use the notation W (L+1) := v. A key result established in Liu et al. (2020) provides an upper bound to the spectral norm of the Hessian: Theorem 4.2 (Liu et al. (2020), Theorem 3.1). Under Assumptions 1, assuming there is δ such that∥∥∥ ∂α(l)∂α(l−1) ∥∥∥2 ≤ δ, with C1 ≤ L2δ2L + LδL + L and C2 ≤ LδL, we have∥∥∇2θf(θ;x)∥∥2 ≤ 2C1Q2,2,1(f)Q∞(f) + 2√mC2Q2(f) , (8) In order to prove Theorem 4.1, we prove that Theorem 4.2 holds with high-probability where • δ = γ follows from Lemma A.3, • Q2(f) = O(L(1 + γL)) follows from Lemma A.4, • Q2,2,1(f) = O(L3(1 + γ3L)) follows from Lemma A.4 and Lemma A.5, and • Q∞(f) = O ( (1+γL)(1+ρ1)√ m ) follows from Lemma A.7 , while also establishing precise constants to get a proper form for the constant cH in Theorem 4.1. As a result, cH ≤ O(L 5(1+γ6L)(1+ρ1)√ m ). A.2 SPECTRAL NORMS OF W (l) AND L2 NORMS OF α(l) We start by bounding the spectral norm of the layer-wise matrices at initialization. Lemma A.1. Consider any l ∈ [L]. If the parameters are initialized as w(l)0,ij ∼ N (0, σ20) where σ0 = σ1 2(1+ √ log m 2m ) as in Assumption 2, then with probability at least ( 1− 2m ) , we have ∥W (l)0 ∥2 ≤ σ1 √ m . (22) Proof. For a (ml ×ml−1) random matrix W (l)0 with i.i.d. entries w (l) 0,ij ∈ N (0, σ20), with probability at least (1− 2 exp(−t2/2σ20)), the largest singular value of W0 is bounded by σmax(W (ℓ) 0 ) ≤ σ0( √ ml + √ ml−1) + t . 
(23) This concentration result can be easily derived as follows: notice that W0 = σ0W̄ (ℓ) 0 , where w̄ (ℓ) 0,ij ∼ N(0, 1), thus we can use the expectation E[∥W0∥(ℓ)2 ] = σ0E[ ∥∥W̄0∥∥(ℓ)2 ] = σ0(√mℓ + √mℓ−1) from Gordon’s Theorem for Gaussian matrices (Vershynin, 2012, Theorem 5.32) in the Gaussian concentration result for Lipschitz functions (Vershynin, 2012, Proposition 3.4) considering that B 7→ ∥σ0B∥2 is a σ0-Lipschitz function when the matrix B is treated as a vector. Let us choose t = σ0 √ 2 logm so that (23) holds with probability at least (1− 2m ). Then, to obtain (22), Case 1: l = 1. With m0 = d and m1 = m, ∥W (1)0 ∥2 ≤ σ0( √ d+ √ m+ √ 2 logm) ≤ σ0(2 √ m+ √ 2 logm) since we are in the over-parameterized regime m ≥ d. Case 2: 2 ≤ l ≤ L. With ml = ml−1 = m, ∥W (l)0 ∥2 ≤ σ0(2 √ m+ √ 2 logm) . Now, using σ0 = σ1 2(1+ √ log m 2m ) in both cases completes the proof. Next we bound the spectral norm of layerwise matrices. Proposition A.1. Under Assumptions 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2m ) , ∥W (l)∥2 ≤ ( σ1 + ρ√ m )√ m , l ∈ [L]. Proof. By triangle inequality, for l ∈ [L], ∥W (l)∥2 ≤ ∥W (l)0 ∥2 + ∥W (l) −W (l) 0 ∥2 (a) ≤ σ1 √ m+ ρ , where (a) follows from Lemma A.1. This completes the proof. Next, we show that the output α(l) of layer l has an L2 norm bounded by O( √ m). Lemma A.2. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2lm ) , we have ∥α(l)∥2 ≤ √ m ( σ1 + ρ√ m )l + √ m l∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)| = ( γl + |ϕ(0)| l∑ i=1 γi−1 ) √ m . Proof. Following Allen-Zhu et al. (2019); Liu et al. (2020), we prove the result by recursion. First, recall that since ∥x∥22 = d, we have ∥α(0)∥2 = √ d. Then, since m0 = d and ϕ is 1-Lipschitz,∥∥∥∥ϕ( 1√dW (1)α(0) )∥∥∥∥ 2 − ∥ϕ(0)∥2 ≤ ∥∥∥∥ϕ( 1√dW (1)α(0) ) − ϕ(0) ∥∥∥∥ 2 ≤ ∥∥∥∥ 1√dW (1)α(0) ∥∥∥∥ 2 , so that ∥α(1)∥2 = ∥∥∥∥ϕ( 1√dW (1)α(0) )∥∥∥∥ 2 ≤ ∥∥∥∥ 1√dW (1)α(0) ∥∥∥∥ 2 + ∥ϕ(0)∥2 ≤ 1√ d ∥W (1)∥2∥α(0)∥2 + |ϕ(0)| √ m ≤ ( σ1 + ρ√ m )√ m+ |ϕ(0)| √ m , where we used Proposition A.1 in the last inequality, which holds with probability at least 1− 2m . For the inductive step, we assume that for some l − 1, we have ∥α(l−1)∥2 ≤ √ m ( σ1 + ρ√ m )l−1 + √ m l−1∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)|, which holds with the probability at least 1− 2(l−1)m . Since ϕ is 1-Lipschitz, for layer l, we have∥∥∥∥ϕ( 1√mW (l)α(l−1) )∥∥∥∥ 2 − ∥ϕ(0)∥2 ≤ ∥∥∥∥ϕ( 1√mW (l)α(l−1) ) − ϕ(0) ∥∥∥∥ 2 ≤ ∥∥∥∥ 1√mW (l)α(l−1) ∥∥∥∥ 2 , so that ∥α(l)∥2 = ∥∥∥∥ϕ( 1√mW (l)α(l−1) )∥∥∥∥ 2 ≤ ∥∥∥∥ 1√mW (l)α(l−1) ∥∥∥∥ 2 + ∥ϕ(0)∥2 ≤ 1√ m ∥W (l)∥2∥α(l−1)∥2 + √ m|ϕ(0)| (a) ≤ ( σ1 + ρ√ m ) ∥α(l−1)∥2 + √ m|ϕ(0)| (b) = √ m ( σ1 + ρ√ m )l + √ m l∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)|, where (a) follows from Proposition A.1 and (b) from the inductive step. Since we have used Proposition A.1 l times, after a union bound, our result would hold with probability at least 1− 2lm . This completes the proof. A.3 SPECTRAL NORMS OF ∂α (l) ∂w(l) AND ∂α (l) ∂α(l−1) Recall that in our setup, the layerwise outputs and pre-activations are respectively given by: α(l) = ϕ ( α̃(l) ) , α̃(l) := 1 √ ml−1 W (l)α(l−1) . (24) Lemma A.3. Consider any l ∈ {2, . . . , L}. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2m ) , ∥∥∥∥ ∂α(l)∂α(l−1) ∥∥∥∥2 2 ≤ ( σ1 + ρ√ m )2 = γ2 . (25) Proof. By definition, we have [ ∂α(l) ∂α(l−1) ] i,j = 1√ m ϕ′(α̃ (l) i )W (l) ij . 
(26) Since ∥A∥2 = sup∥v∥2=1 ∥Av∥2, so that ∥A∥ 2 2 = sup∥v∥2=1 ∑ i⟨ai,v⟩2, we have that for 2 ≤ l ≤ L, ∥∥∥∥ ∂α(l)∂α(l−1) ∥∥∥∥2 2 = sup ∥v∥2=1 1 m m∑ i=1 ϕ′(α̃(l)i ) m∑ j=1 W (l) ij vj 2 (a) ≤ sup ∥v∥2=1 1 m ∥W (l)v∥22 = 1 m ∥W (l)∥22 (b) ≤ γ2 , where (a) follows from ϕ being 1-Lipschitz by Assumption 1 and (b) from Proposition A.1. This completes the proof. Lemma A.4. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2lm ) ,∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥2 2 ≤ 1 m [ √ m ( σ1 + ρ√ m )l−1 + √ m l−1∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)| ]2 = ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 . (27) Proof. Note that the parameter vector w(l) = vec(W (l)) and can be indexed with j ∈ [m] and j′ ∈ [d] when l = 1 and j′ ∈ [m] when l ≥ 2. Then, we have[ ∂α(l) ∂w(l) ] i,jj′ = [ ∂α(l) ∂W (l) ] i,jj′ = 1√ m ϕ′(α̃ (l) i )α (l−1) j′ 1[i=j] . (28) For l ∈ {2, . . . , L}, noting that ∂α (l) ∂w(l) ∈ Rm×m2 and ∥V ∥F = ∥vec(V )∥2 for any matrix V , we have∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥2 2 = sup ∥V ∥F=1 1 m m∑ i=1 ϕ′(α̃(l)i ) m∑ j,j′=1 α (l−1) j′ 1[i=j]Vjj′ 2 ≤ sup ∥V ∥F=1 1 m ∥V α(l−1)∥22 ≤ 1 m sup ∥V ∥F=1 ∥V ∥22∥α(l−1)∥22 (a) ≤ 1 m ∥α(l−1)∥22 (b) ≤ 1 m [ √ m ( σ1 + ρ√ m )l−1 + √ m l−1∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)| ]2 = ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 where (a) follows from ∥V ∥22 ≤ ∥V ∥ 2 F for any matrix V , and (b) from Lemma A.2. The l = 1 case follows in a similar manner:∥∥∥∥ ∂α(1)∂w(1) ∥∥∥∥2 2 ≤ 1 d ∥α(0)∥22 = 1 d ∥x∥22 = 1 which satisfies the form for l = 1. That completes the proof. A.4 (2, 2, 1)-NORMS OF ORDER 3 TENSORS Lemma A.5. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), each of the following inequalities hold with probability at least ( 1− 2lm ) ,∥∥∥∥ ∂2α(l)(∂α(l−1))2 ∥∥∥∥ 2,2,1 ≤ βϕγ2, (29) ∥∥∥∥ ∂2α(l)∂α(l−1)∂W (l) ∥∥∥∥ 2,2,1 ≤ βϕ 2 γ2 +(γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2+ 1, (30) for l = 2, . . . , L; and ∥∥∥∥ ∂2α(l)(∂W (l))2 ∥∥∥∥ 2,2,1 ≤ βϕ ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 , (31) for l ∈ [L]. Proof. For the inequality (29), note that from (26) we obtain ( ∂2α(l) (∂α(l−1))2 ) i,j,k = 1 mϕ ′′(α̃ (l) i )W (l) ik W (l) ij , and so∥∥∥∥ ∂2α(l)(∂α(l−1))2 ∥∥∥∥ 2,2,1 = sup ∥v1∥2=∥v2∥2=1 1 m m∑ i=1 ∣∣∣ϕ′′(α̃(l)i )(W (l)v1)i(W (l)v2)i∣∣∣ ≤ sup ∥v1∥2=∥v2∥2=1 1 m βϕ m∑ i=1 ∣∣∣(W (l)v1)i(W (l)v2)i∣∣∣ (a) ≤ sup ∥v1∥2=∥v2∥2=1 1 2m βϕ m∑ i=1 (W (l)v1) 2 i + (W (l)v2) 2 i ≤ 1 2m βϕ sup ∥v1∥2=∥v2∥2=1 (∥W (l)v1∥22 + ∥W (l)v2∥22) ≤ 1 2m βϕ(∥W (l)∥22 + ∥W (l)∥22) (b) ≤ βϕ(σ1 + ρ/ √ m)2 = βϕγ 2, (32) where (a) follows from 2ab ≤ a2 + b2 for a, b ∈ R, and (b) from Proposition A.1, with probability at least 1− 2m . For the inequality (30), carefully following the chain rule in (28) we obtain( ∂2α(l) ∂α(l−1)∂W (l) ) i,jj′,k = 1 m ϕ′′(α̃ (l) i )W (l) ik α (l−1) j′ 1[j=i] + 1√ m ϕ′(α̃ (l) i )1[i=j]1[j′=k]. 
Then, we have ∥∥∥∥ ∂2α(l)∂α(l−1)∂W (l) ∥∥∥∥ 2,2,1 = sup ∥v1∥2=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ m∑ k=1 m∑ j=1 m∑ j′=1 ( 1 m ϕ′′(α̃ (l) i )W (l) ik α (l−1) j′ 1[j=i] + 1√ m ϕ′(α̃ (l) i )1[i=j]1[j′=k] ) v1,kV2,jj′ ∣∣∣∣ = sup ∥v1∥2=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ 1m m∑ j′=1 ϕ′′(α̃ (l) i )α (l−1) j′ V2,ij′ ( m∑ k=1 W (l) ik v1,k ) + 1√ m m∑ k=1 ϕ′(α̃ (l) i )v1,kV2,ik ∣∣∣∣∣ ≤ sup ∥v1∥2=∥V2∥F=1 1 m βϕ m∑ i=1 ∣∣∣(W (l)v1)i(V2α(l−1))i∣∣∣+ 1√ m m∑ i=1 m∑ k=1 |v1,kV2,ik| ≤ sup ∥v1∥2=∥v2∥F=1 1 2m βϕ m∑ i=1 (W (l)v1) 2 i + (V2α (l−1))2i + 1√ m m∑ i=1 ∥v1∥2 ∥∥V2,i,:∥∥2 = sup ∥v1∥2=∥V2∥F=1 1 2m βϕ( ∥∥∥W (l)v1∥∥∥2 2 + ∥∥∥V2α(l−1)∥∥∥2 2 ) + 1√ m m∑ i=1 ∥∥V2,i,:∥∥2 (a) ≤ 1 2m βϕ( ∥∥∥W (l)∥∥∥2 2 + ∥∥∥α(l−1)∥∥∥2 2 ) + ∥V2∥F (b) ≤ βϕ 2 γ2 +(γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2+ 1 where (a) follows from ∥∥V2α(l−1)∥∥2 ≤ ∥V2∥2 ∥∥αl−1∥∥ ≤ ∥V2∥F ∥∥αl−1∥∥2 = ∥∥αl−1∥∥2 and∑m i=1 ∥∥V2,i,:∥∥2 ≤ √m√∑mi=1 ∥∥V2,i,:∥∥22, and (b) follows from Proposition A.1 and Lemma A.2, with altogether holds with probability at least 1− 2lm . For the last inequality (31), we start with the analysis for l ≥ 2. Carefully following the chain rule in (28) we obtain ( ∂2α(l) (∂W (l))2 ) i,jj′,kk′ = 1 m ϕ′′(α̃ (l) i )α (l−1) k′ α (l−1) j′ 1[j=i]1[k=i]. Then, we have∥∥∥∥ ∂2α(l)(∂W (l))2 ∥∥∥∥ 2,2,1 = sup ∥V1∥F=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ m∑ j,j′=1 m∑ k,k′=1 ( 1 m ϕ′′(α̃ (l) i )α (l−1) k′ α (l−1) j′ 1[j=i]1[k=i]V1,jj′V2,kk′ )∣∣∣∣∣∣ = sup ∥V1∥F=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ϕ ′′(α̃ (l) i ) m m∑ j′=1 ( α (l−1) j′ V1,ij′ ) m∑ k′=1 ( α (l−1) k′ V2,ik′ )∣∣∣∣∣∣ ≤ sup ∥V1∥F=∥V2∥F=1 1 m βϕ m∑ i=1 ∣∣∣(V1α(l−1))i(V2α(l−1))i∣∣∣ ≤ sup ∥V1∥F=∥v2∥F=1 1 2m βϕ m∑ i=1 (V2α (l−1))2i + (V2α (l−1))2i = sup ∥V1∥F=∥V2∥F=1 1 2m βϕ( ∥∥∥V2α(l−1)∥∥∥2 2 + ∥∥∥V2α(l−1)∥∥∥2 2 ) ≤ 1 2m βϕ( ∥∥∥α(l−1)∥∥∥2 2 + ∥∥∥α(l−1)∥∥∥2 2 ) ≤ βϕ ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 , which holds with probability at least 1 − 2(l−1)m . For the case l = 1, it is easy to show that( ∂2α(1) (∂W (1))2 ) i,jj′,kk′ = 1dϕ ′′(α̃ (1) i )xk′xj′1[j=i]1[k=i] and so ∥∥∥ ∂2α(1)(∂W (1))2 ∥∥∥2,2,1 ≤ βϕ. This completes the proof. A.5 L∞ NORM OF ∂f∂α(l) Let b(l) := ∂f ∂α(l) ∈ Rm for any l ∈ [L]. Let b(l)0 denote b(l) at initialization. By a direct calculation, we have b(l) = ∂f ∂α(l) = ( L∏ l′=l+1 ∂α(l) ∂α(l−1) ) ∂f ∂α(L) = ( L∏ l′=l+1 1√ m (W (l ′))⊤D(l ′) ) 1√ m v , where D(l ′) is a diagonal matrix of the gradient of activations, i.e., D(l ′) ii = ϕ ′(α̃ (l′) i ). Note that we also have the following recursion: b(l) = ∂f ∂α(l) = ∂α(l+1) ∂α(l) ∂f ∂α(l+1) = 1√ m (W (l+1))⊤D(l+1)b(l+1) . Lemma A.6. Consider any l ∈ [L]. Under Assumptions1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least 1− 2(L−l+1)m , ∥b(l)∥2 ≤ 1√ m ( σ1 + ρ√ m )L−l (1 + ρ1) (33) and ∥b(l)0 ∥2 ≤ σL−l1√ m ≤ γ L−l √ m . (34) Proof. First, note that ∥∥b(L)∥∥ 2 = 1√ m ∥v∥2 ≤ 1√ m (∥v0∥2 + ∥v − v0∥2) ≤ 1√ m (1 + ρ1), where the inequality follows from from Proposition A.1. Now, for the inductive step, assume ∥∥b(l)∥∥ 2 ≤( σ1 + ρ√ m )L−l 1√ m (1 + ρ1) with probability at least 1− 2lm . Then,∥∥∥b(l−1)∥∥∥ 2 = ∥∥∥∥ ∂α(l)∂α(l−1)b(l) ∥∥∥∥ 2 ≤ ∥∥∥∥ ∂α(l)∂α(l−1) ∥∥∥∥ 2 ∥∥∥b(l)∥∥∥ 2 ≤ ( σ1 + ρ√ m )( σ1 + ρ√ m )L−l 1√ m (1 + ρ1) = ( σ1 + ρ√ m )L−l+1 1√ m (1 + ρ1) where the last inequality follows from Lemma A.3 with probability at least 1− 2m (l + 1). Since we use Proposition A.1 once at layer L and then Lemma A.3 (L− l) times at layer l, then we have that everything holds altogether with probability at least 1− 2m (L− l + 1). We have finished the proof by induction. Lemma A.7. Consider any l ∈ [L]. 
Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1 - \frac{2(L-l)}{m}$,
$$\big\|b^{(l)}\big\|_\infty \le \frac{\gamma^{L-l}}{\sqrt{m}}(1+\rho_1). \qquad (35)$$
Proof. For any $l \in [L]$, by definition the $i$-th component of $b^{(l)}$, i.e., $b^{(l)}_i$, takes the form $b^{(l)}_i = \frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\frac{\partial f}{\partial \alpha^{(L)}} = \frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\frac{1}{\sqrt{m}}v$. Then, with $W^{(l)}_{:,i}$ denoting the $i$-th column of the matrix $W^{(l)}$,
$$\bigg\|\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\bigg\|_2 \overset{(a)}{=} \bigg\|\frac{\phi'(\tilde{\alpha}^{(l)}_i)}{\sqrt{m}}\,(W^{(l)}_{:,i})^\top \prod_{l'=l+2}^{L}\bigg(\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\bigg)\bigg\|_2 \overset{(b)}{\le} \frac{1}{\sqrt{m}}\big\|W^{(l)}_{:,i}\big\|_2 \prod_{l'=l+2}^{L}\bigg\|\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\bigg\|_2 \overset{(c)}{\le} \frac{1}{\sqrt{m}}\big\|W^{(l)}_{:,i}\big\|_2\,\gamma^{L-l-1} \overset{(d)}{\le} \gamma\cdot\gamma^{L-l-1} = \gamma^{L-l} \qquad (36)$$
where (a) follows from $\frac{\partial \alpha^{(l+1)}}{\partial \alpha^{(l)}_i} = \frac{1}{\sqrt{m}}\phi'(\tilde{\alpha}^{(l)}_i)(W^{(l)}_{:,i})^\top$, (b) from $\phi$ being 1-Lipschitz, (c) from Lemma A.3, and (d) from $\|W^{(l)}_{:,i}\|_2 \le \|W^{(l)}\|_2$ and Proposition A.1, which altogether holds with probability $1 - \frac{2(L-l)}{m}$. Therefore, for every $i \in [m]$,
$$\big|b^{(l)}_i\big| \le \bigg|\frac{1}{\sqrt{m}}\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}v\bigg| \le \frac{1}{\sqrt{m}}\bigg\|\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\bigg\|_2 \|v\|_2 \le \frac{1}{\sqrt{m}}\gamma^{L-l}(1+\rho_1),$$
where the last inequality follows from (36) and $\|v\|_2 \le \|v_0\|_2 + \|v - v_0\|_2 \le 1 + \rho_1$. This completes the proof.

A.6 USEFUL BOUNDS

Lemma 4.1 (Predictor gradient bounds). Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1 - \frac{2(L+1)}{m}\big)$, we have
$$\|\nabla_\theta f(\theta;x)\|_2 \le \varrho \quad \text{and} \quad \|\nabla_x f(\theta;x)\|_2 \le \frac{\gamma^L}{\sqrt{m}}(1+\rho_1), \qquad (9)$$
with $\varrho^2 = (h(L+1))^2 + \frac{1}{m}(1+\rho_1)^2\sum_{l=1}^{L}(h(l))^2\gamma^{2(L-l)}$, $\gamma = \sigma_1 + \frac{\rho}{\sqrt{m}}$, $h(l) = \gamma^{l-1} + |\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}$.

Proof. We first prove the bound on the gradient with respect to the weights. Using the chain rule,
$$\frac{\partial f}{\partial w^{(l)}} = \frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\prod_{l'=l+1}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\frac{\partial f}{\partial \alpha^{(L)}}$$
and so
$$\bigg\|\frac{\partial f}{\partial w^{(l)}}\bigg\|_2^2 \le \bigg\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\bigg\|_2^2 \bigg\|\prod_{l'=l+1}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\frac{\partial f}{\partial \alpha^{(L)}}\bigg\|_2^2 \overset{(a)}{\le} \bigg\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\bigg\|_2^2\,\gamma^{2(L-l)}\cdot\frac{1}{m}(1+\rho_1)^2 \overset{(b)}{\le} \Big(\gamma^{l-1} + |\phi$$
1. What is the focus of the paper regarding deep learning?
2. What are the strengths and weaknesses of the paper's content?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper provides another sufficient condition besides NTK to ensure geometric convergence of deep learning.

Strengths And Weaknesses
The paper provides a good literature review. It is well written and easy to follow. However, it is a dense paper with lots of theoretical proofs to check. The ICLR review period is too short to allow a careful theoretical review. It would be better as a journal paper.

Clarity, Quality, Novelty And Reproducibility
It is clear and has good writing quality.
ICLR
Title
Restricted Strong Convexity of Deep Learning Models with Smooth Activations

Abstract
We consider the problem of optimization of deep learning models with smooth activation functions. While there exist influential results on the problem from the "near initialization" perspective, we shed considerable new light on the problem. In particular, we make two key technical contributions for such models with $L$ layers, $m$ width, and $\sigma_0^2$ initialization variance. First, for suitable $\sigma_0^2$, we establish a $O(\frac{\mathrm{poly}(L)}{\sqrt{m}})$ upper bound on the spectral norm of the Hessian of such models, considerably sharpening prior results. Second, we introduce a new analysis of optimization based on Restricted Strong Convexity (RSC) which holds as long as the squared norm of the average gradient of predictors is $\Omega(\frac{\mathrm{poly}(L)}{\sqrt{m}})$ for the square loss. We also present results for more general losses. The RSC based analysis does not need the "near initialization" perspective and guarantees geometric convergence for gradient descent (GD). To the best of our knowledge, ours is the first result on establishing geometric convergence of GD based on RSC for deep learning models, thus becoming an alternative sufficient condition for convergence that does not depend on the widely-used Neural Tangent Kernel (NTK). We share preliminary experimental results supporting our theoretical advances.

N/A

1 INTRODUCTION
Recent years have seen advances in understanding convergence of gradient descent (GD) and variants for deep learning models (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022; Ji & Telgarsky, 2019; Oymak & Soltanolkotabi, 2020; Nguyen, 2021). Despite the fact that such optimization problems are non-convex, a series of recent results have shown that GD has geometric convergence and finds near global solution "near initialization" for wide networks. Such analysis is typically done based on the Neural Tangent Kernel (NTK) (Jacot et al., 2018), in particular by showing that the NTK is positive definite "near initialization," in turn implying the optimization problem satisfies a condition closely related to the Polyak-Łojasiewicz (PL) condition, which in turn implies geometric convergence to the global minima (Liu et al., 2022; Nguyen, 2021). Such results have been generalized to more flexible forms of "lazy learning" where similar guarantees hold (Chizat et al., 2019).
However, there are concerns regarding whether such "near initialization" or "lazy learning" truly explains the optimization behavior in realistic deep learning models (Geiger et al., 2020; Yang & Hu, 2020; Fort et al., 2020; Chizat et al., 2019). Our work focuses on optimization of deep models with smooth activation functions, which have become increasingly popular in recent years (Du et al., 2019; Liu et al., 2022; Huang & Yau, 2020). Much of the theoretical convergence analysis of GD has focused on ReLU networks (Allen-Zhu et al., 2019; Nguyen, 2021). Some progress has also been made for deep models with smooth activations, but existing results are based on a variant of the NTK analysis, and the requirements on the width of such models are high (Du et al., 2019; Liu et al., 2022). Based on such background and context, the motivating question behind our work is: Are there other (meaningful) sufficient conditions beyond NTK which lead to (geometric) convergence of GD for deep learning optimization?

Based on such motivation, we make two technical contributions in this paper which shed light on optimization of deep learning models with smooth activations and with $L$ layers, $m$ width, and $\sigma_0^2$ initialization variance. First, for suitable $\sigma_0^2$, we establish a $O(\frac{\mathrm{poly}(L)}{\sqrt{m}})$ upper bound on the spectral norm of the Hessian of such models (Section 4). The bound holds over a large layerwise spectral norm (instead of Frobenius norm) ball $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$ around the random initialization $\theta_0$, where the radius $\rho < \sqrt{m}$, arguably much bigger than what real world deep models need. Our analysis builds on and sharpens recent prior work on the topic (Liu et al., 2020). While our analysis holds for Gaussian random initialization of weights with any variance $\sigma_0^2$, the $\mathrm{poly}(L)$ dependence happens when $\sigma_0^2 \le \frac{1}{4+o(1)}\cdot\frac{1}{m}$ (we handle the $\frac{1}{m}$ scaling explicitly).

Second, based on our Hessian spectral norm bound, we introduce a new approach to the analysis of optimization of deep models with smooth activations based on the concept of Restricted Strong Convexity (RSC) (Section 5) (Wainwright, 2019; Negahban et al., 2012; Negahban & Wainwright, 2012; Banerjee et al., 2014; Chen & Banerjee, 2015). While RSC has been a core theme in high-dimensional statistics especially for linear models and convex losses (Wainwright, 2019), to the best of our knowledge, RSC has not been considered in the context of non-convex optimization of overparameterized deep models. For a normalized total loss function $\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, \hat{y}_i)$, $\hat{y}_i = f(\theta; x_i)$, with predictor or neural network model $f$ parameterized by vector $\theta$ and data points $\{x_i, y_i\}_{i=1}^n$, when $\ell$ corresponds to the square loss we show that the total loss function satisfies RSC on a suitable restricted set $Q^t_\kappa \subset \mathbb{R}^p$ (Definition 5.2 in Section 5) at step $t$ as long as $\big\|\frac{1}{n}\sum_{i=1}^n \nabla_\theta f(\theta_t; x_i)\big\|_2^2 = \Omega(\frac{1}{\sqrt{m}})$. We also present similar results for general losses for which additional assumptions are needed. We show that the RSC property implies a Restricted Polyak-Łojasiewicz (RPL) condition on $Q^t_\kappa$, in turn implying a geometric one-step decrease of the loss towards the minimum in $Q^t_\kappa$, and subsequently implying geometric decrease of the loss towards the minimum in the large (layerwise spectral norm) ball $B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$. The geometric convergence due to RSC is a novel approach in the context of deep learning optimization which does not depend on properties of the NTK.
Thus, the RSC condition provides an alternative to the widely-used NTK condition as a sufficient condition for geometric convergence in deep learning optimization. The rest of the paper is organized as follows. We briefly present related work in Section 2 and discuss the problem setup in Section 3. We establish the Hessian spectral norm bound in Section 4 and introduce the RSC based optimization analysis in Section 5. We present experimental results corresponding to the RSC condition in Section 6 and conclude in Section 7. All technical proofs are in the Appendix.

2 RELATED WORK
The literature on gradient descent and variants for deep learning is increasingly large, and we refer the readers to the following surveys for an overview of the field (Fan et al., 2021; Bartlett et al., 2021). Among the theoretical works, we consider (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022) as the closest to our work in terms of their study of convergence on multi-layer neural networks. For a literature review on shallow and/or linear networks, we refer to the recent survey (Fang et al., 2021). Due to the rapidly growing related work, we only refer to the most related or recent work for most parts. Du et al. (2019); Zou & Gu (2019); Allen-Zhu et al. (2019); Liu et al. (2022) considered optimization of square loss, which we also consider for our main results, and we also present extensions to a more general class of loss functions. Zou & Gu (2019); Zou et al. (2020); Allen-Zhu et al. (2019); Nguyen & Mondelli (2020); Nguyen (2021); Nguyen et al. (2021) analyzed deep ReLU networks. Instead, we consider smooth activation functions, similar to (Du et al., 2019; Liu et al., 2022). The convergence analysis of gradient descent in (Du et al., 2019; Allen-Zhu et al., 2019; Zou & Gu, 2019; Zou et al., 2020; Liu et al., 2022) relied on the near constancy of the NTK for wide neural networks (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019; Liu et al., 2020), which yields certain desirable properties for their training using gradient descent based methods. One such property is related to the PL condition (Karimi et al., 2016; Nguyen, 2021), formulated as the PL* condition in (Liu et al., 2022). Our work uses a different optimization analysis based on RSC (Wainwright, 2019; Negahban et al., 2012; Negahban & Wainwright, 2012) related to a restricted version of the PL condition. Furthermore, Du et al. (2019); Allen-Zhu et al. (2019); Zou & Gu (2019); Zou et al. (2020) showed convergence in value to a global minimizer of the total loss, as we also do.

3 PROBLEM SETUP: DEEP LEARNING WITH SMOOTH ACTIVATIONS
Consider a training set $\mathcal{D} = \{x_i, y_i\}_{i=1}^n$, $x_i \in \mathcal{X} \subseteq \mathbb{R}^d$, $y_i \in \mathcal{Y} \subseteq \mathbb{R}$. We will denote by $X \in \mathbb{R}^{n \times d}$ the matrix whose $i$th row is $x_i^\top$. For a suitable loss function $\ell$, the goal is to minimize the empirical loss: $\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, \hat{y}_i) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, f(\theta; x_i))$, where the prediction $\hat{y}_i := f(\theta; x_i)$ is from a deep model, and the parameter vector $\theta \in \mathbb{R}^p$. In our setting $f$ is a feed-forward multi-layer (fully-connected) neural network with depth $L$ and widths $m_l$, $l \in [L] := \{1, \dots, L\}$ given by
$$\alpha^{(0)}(x) = x\,, \qquad \alpha^{(l)}(x) = \phi\Big(\tfrac{1}{\sqrt{m_{l-1}}} W^{(l)} \alpha^{(l-1)}(x)\Big)\,, \;\; l = 1, \dots, L\,, \qquad f(\theta; x) = \alpha^{(L+1)}(x) = \tfrac{1}{\sqrt{m_L}}\, v^\top \alpha^{(L)}(x)\,, \qquad (1)$$
where $W^{(l)} \in \mathbb{R}^{m_l \times m_{l-1}}$, $l \in [L]$ are layer-wise weight matrices, $v \in \mathbb{R}^{m_L}$ is the last layer vector, $\phi(\cdot)$ is the smooth (pointwise) activation function, and the total set of parameters
$$\theta := (\mathrm{vec}(W^{(1)})^\top, \dots, \mathrm{vec}(W^{(L)})^\top, v^\top)^\top \in \mathbb{R}^{\sum_{k=1}^L m_k m_{k-1} + m_L}\,, \qquad (2)$$
with $m_0 = d$.
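To make the architecture in (1)-(2) concrete, the following is a minimal JAX sketch of the scaled model under the equal-width simplification $m_l = m$ adopted in the next paragraph; the function names (`init_params`, `forward`) and the choice of `tanh` as the smooth activation are ours and purely illustrative, not from the paper.

```python
import jax
import jax.numpy as jnp

def init_params(key, d, m, L, sigma0):
    """W^(l)_ij ~ N(0, sigma0^2) for l in [L]; v_0 is a random unit vector."""
    keys = jax.random.split(key, L + 1)
    Ws = [sigma0 * jax.random.normal(keys[0], (m, d))]            # W^(1): m x d
    Ws += [sigma0 * jax.random.normal(k, (m, m)) for k in keys[1:L]]
    v = jax.random.normal(keys[L], (m,))
    return Ws, v / jnp.linalg.norm(v)                             # ||v_0||_2 = 1

def forward(params, x, phi=jnp.tanh):
    """f(theta; x) from (1): alpha^(l) = phi(W^(l) alpha^(l-1) / sqrt(m_{l-1}))."""
    Ws, v = params
    alpha = x                                                     # alpha^(0) = x
    for W in Ws:
        alpha = phi(W @ alpha / jnp.sqrt(W.shape[1]))             # 1/sqrt(m_{l-1}) scaling
    return v @ alpha / jnp.sqrt(alpha.shape[0])                   # (1/sqrt(m_L)) v^T alpha^(L)
```

For instance, `forward(init_params(jax.random.PRNGKey(0), d=16, m=512, L=3, sigma0=0.05), x)` returns the scalar prediction $f(\theta; x)$ for a single input `x`; the numeric values here are placeholders rather than the paper's settings.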
For simplicity, we will assume that the width of all the layers is the same, i.e., ml = m, l ∈ [L], and so that θ ∈ RLm2+m. For simplicity, we also consider deep models with only one output, i.e., f(θ;x) ∈ R as in (Du et al., 2019), but our results can be extended to multi-dimension outputs as in (Zou & Gu, 2019), using V ∈ RmL×k for k outputs at the last layer; see Appendix C. Define the pointwise loss ℓi := ℓ(yi, ·) : R → R+ and denote its first- and second-derivative as ℓ′i := dℓ(yi,ŷi) dŷi and ℓ′′i := d2ℓ(yi,ŷi) dŷ2i . The particular case of square loss is ℓ(yi, ŷi) = (yi − ŷi)2. We denote the gradient and Hessian of f(·;xi) : Rp → R as ∇if := ∂f(θ;xi)∂θ , and ∇ 2 i f := ∂2f(θ;xi) ∂θ2 . The neural tangent kernel (NTK) Kntk(·; θ) ∈ Rn×n corresponding to parameter θ is defined as Kntk(xi,xj ; θ) = ⟨∇if,∇jf⟩. By chain rule, the gradient and Hessian of the empirical loss w.r.t. θ are given by ∂L(θ)∂θ = 1 n ∑n i=1 ℓ ′ i∇if and ∂2L(θ) ∂θ2 = 1 n ∑n i=1 [ ℓ′′i ∇if∇if⊤ + ℓ′i∇2i f ] . Let ∥ · ∥2 denote the spectral norm for matrices and L2-norm for vectors We make the following assumption regarding the activation function ϕ: Assumption 1 (Activation function). The activation ϕ is 1-Lipschitz, i.e., |ϕ′| ≤ 1, and βϕ-smooth, i.e., |ϕ′′l | ≤ βϕ. Remark 3.1. Our analysis holds for any ςϕ-Lipchitz smooth activations, with a dependence on ςϕ on most key results. The main (qualitative) conclusions stay true if ςϕ ≤ 1 + o(1) or ςϕ = poly(L), which is typically satisfied for commonly used smooth activations and moderate values of L. We define two types of balls over parameters that will be used throughout our analysis. Definition 3.1 (Norm balls). Given θ ∈ Rp of the form (2) with parameters W (l), l ∈ [L],v, we define BSpecρ,ρ1 (θ̄) := { θ ∈ Rp as in (2) | ∥W (ℓ) −W (ℓ)∥2 ≤ ρ, ℓ ∈ [L], ∥v − v̄∥2 ≤ ρ1 } , (3) BEucρ (θ̄) := { θ ∈ Rp as in (2) | ∥θ − θ̄∥2 ≤ ρ } . (4) Remark 3.2. The layerwise spectral norm ball BSpecρ,ρ1 plays a key role in our analysis. The last layer radius of ρ1 gives more flexibility, and we will usually assume ρ1 ≤ ρ; e.g., we could choose the desirable operating regime of ρ < √ m and ρ1 = O(1). Our analysis in fact goes through for any choice of ρ, ρ1 and the detailed results will indicate specific dependencies on both ρ and ρ1. 4 SPECTRAL NORM OF THE HESSIAN OF THE MODEL We start with the following assumption regarding the random initialization of the weights. Assumption 2 (Initialization weights and data normalization). The initialization weights w(l)0,ij ∼ N (0, σ20) for l ∈ [L] where σ0 = σ1 2 ( 1+ √ log m√ 2m ) , σ1 > 0, and v0 is a random unit vector with ∥v0∥2 = 1. Further, we assume the input data satisfies: ∥xi∥2 = √ d, i ∈ [n]. We focus on bounding the spectral norm of the Hessian ∥∇2θf(θ;x)∥2 for θ ∈ BSpecρ,ρ1 (θ0) and any input x ∈ Rd with ∥x∥2 = √ d. The assumption ∥x∥2 = √ d is for convenient scaling, such assumptions are common in the literature (Allen-Zhu et al., 2019; Oymak & Soltanolkotabi, 2020; Nguyen et al., 2021). Prior work (Liu et al., 2020) has considered a similar analysis for θ ∈ BEucρ (θ0), effectively the layerwise Frobenius norm ball, which is much smaller than BSpecρ,ρ1 (θ0), the layerwise spectral norm ball. We choose a unit value for the last layer’s weight norm for convenience, since our results hold under appropriate scaling for any other constant in O(1). All missing proofs are in Appendix A. Theorem 4.1 (Hessian Spectral Norm Bound). 
Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least (1− 2(L+1)m ), for any xi, i ∈ [n], we have∥∥∇2θf(θ;xi)∥∥2 ≤ cH√m , (5) with cH = O(L5(1 + γ6L)(1 + ρ1)) where γ := σ1 + ρ√m . Remark 4.1 (Desirable operating regimes). The constant γ needs careful scrutiny as cH depends on γ6L. Let us choose ρ1 = O(poly(L)). For any choice of the spectral norm radius ρ < √ m, we can choose σ1 ≤ 1− ρ√m ensuring γ ≤ 1 and hence cH = O(poly(L)). If ρ = O(1), we can keep σ1 = 1 so that γ = 1+ O(1)√ m , and cH = O(poly(L)) as long as L < √ m, which is common. Both of these give good choices for σ1 and desirable operating regime for the result. If we choose σ1 > 1, an undesirable operating regime, then cH = O(cΘ(L)), c > 1, and we will need m = Ω(cΘ(L)) for the result to be of interest. Remark 4.2 (Recent Related Work). In recent work, Liu et al. (2020) analyzed the Hessian spectral norm bound and showed that cH = Õ(ρ3L) for θ ∈ BEucρ (θ0) (logarithmic terms hidden in Õ(·)). Our analysis builds on and sharpens the result in (Liu et al., 2020) in three respects: (a) we have cH = O(poly(L)(1 + γ 6L)) for ρ1 = O(poly(L)) where we can choose σ1 to make γ ≤ 1 and thus obtain cH = O(poly(L)), instead of the worse cH = Õ(ρ3L) in Liu et al. (2020)1; (b) even for the same ρ, our results hold for a much larger spectral norm ball BSpecρ,ρ1 (θ0) compared to their Euclidean norm ball BEucρ (θ0) in (Liu et al., 2020); and (c) to avoid an exponential term, the bound in (Liu et al., 2020) needs ρ ≤ 1 whereas our result can use radius ρ < √ m for all intermediate layer matrices and ρ1 = O(poly(L)) for the last layer vector. Moreover, as a consequence of (b) and (c), our results hold for a larger (spectral norm) ball whose radius can increase with m, unlike the results in Liu et al. (2020) which hold for a smaller (Euclidean) ball with constant radius, i.e., “near initialization.” Remark 4.3 (Exact constant cH ). For completeness, we show the exact expression of the constant cH in Theorem 4.1 so the dependencies on different factors is clear. Let h(l) := γl−1+|ϕ(0)| ∑l−1 i=1 γ i−1. Then, cH = 2L(L 2γ2L + LγL + 1) · (1 + ρ1) · ψH ·max l∈[L] γL−l + 2LγL max l∈[L] h(l) , (6) where ψH = max 1≤l1<l2≤L { βϕ(h(l1)) 2 , h(l1) ( βϕ 2 (γ2 + (h(l2)) 2) + 1 ) , βϕγ 2h(l1)h(l2) } . (7) The source of the terms will be discussed shortly. Note the dependence on ρ1, the radius for the last layer in BSpecρ,ρ1 (θ0), and why ρ1 = O(poly(L)) is a desirable operating regime. 1See the end of Appendix A for a quick note about the network architecture in our work and the one in (Liu et al., 2020). Next, we give a high level outline of the proof of Theorem 4.1. Proof sketch. Our analysis follows the structure developed in Liu et al. (2020), but is considerably sharper as discussed in Remark 4.2. We start by defining the following quantities: Q∞(f) := max1≤l≤L {∥∥∥ ∂f∂α(l) ∥∥∥∞}, ∂f∂α(l) ∈ Rm, Q2(f) := max1≤l≤L {∥∥∥ ∂α(l)∂w(l) ∥∥∥2}, w(l) := vec(W (l)), ∂α(l) ∂w(l) ∈ Rm×m2 , and Q2,2,1(f) is the maximum over 1 ≤ l1 < l2 < l3 ≤ L among the three quantities ∥∥∥ ∂2α(l1) ∂w(l1) 2 ∥∥∥ 2,2,1 , ∥∥∥ ∂α(l1) ∂w(l1) ∥∥∥ 2 ∥∥∥ ∂2α(l2) ∂α(l2−1)∂w(l2) ∥∥∥ 2,2,1 , and ∥∥∥ ∂α(l1) ∂w(l1) ∥∥∥ 2 ∥∥∥ ∂α(l2) ∂w(l2) ∥∥∥ 2 ∥∥∥ ∂2α(l3) ∂α(l3−1) 2 ∥∥∥ 2,2,1 . where for an order-3 tensor T ∈ Rd1×d2×d3 we define the (2, 2, 1)−norm as ∥T∥2,2,1 := sup∥x∥2=∥z∥2=1 ∑d3 k=1 ∣∣∣∑d1i=1∑d2j=1 Tijkxizj∣∣∣ ,x ∈ Rd1 , z ∈ Rd2 . The following result in (Liu et al., 2020) provides an upper bound to the spectral norm of the Hessian. Theorem 4.2 (Liu et al. 
(2020), Theorem 3.1). Under Assumptions 1, assuming there is δ such that∥∥∥ ∂α(l)∂α(l−1) ∥∥∥2 ≤ δ, with C1 ≤ L2δ2L + LδL + L and C2 ≤ LδL, we have∥∥∇2θf(θ;x)∥∥2 ≤ 2C1Q2,2,1(f)Q∞(f) + 2√mC2Q2(f) , (8) In order to prove Theorem 4.1, we prove that Theorem 4.2 holds with high-probability where δ = γ, Q2(f) = O(L(1+γL)), Q2,2,1(f) = O(L3(1+γ3L)), and Q∞(f) = O ( (1+γL)(1+ρ1)√ m ) . Thus we obtain that the upper bound (4.2) becomes O(poly(L)(1+γ 6L)(1+ρ1)√ m ), providing a benign polynomial dependence on L when γ ≤ 1, rather than an exponential dependence on the radius ρ as in (Liu et al., 2020). The analysis for bounding the spectral norm of the Hessian can be used to establish additional bounds, which we believe are of independent interest, some of which will be used later in Section 5. First, we bound the norms of gradient of the predictor and the loss w.r.t. the weight vector θ and the input data x. Lemma 4.1 (Predictor gradient bounds). Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2(L+1)m ) , we have ∥∇θf(θ;x)∥2 ≤ ϱ and ∥∇xf(θ;x)∥2 ≤ γL√ m (1 + ρ1) , (9) with ϱ2 = (h(L + 1))2 + 1m (1 + ρ1) 2 ∑L l=1(h(l)) 2γ2(L−l), γ = σ1 + ρ√m , h(l) = γ l−1 + |ϕ(0)| ∑l−1 i=1 γ i−1. Remark 4.4. Our analysis in Lemma 4.1 provides a bound on the Lipschitz constant of the predictor, a quantity which has generated interest in recent work on robust training (Salman et al., 2019; Cohen et al., 2020; Bubeck & Sellke, 2021). Under the assumption of square losses, further bounds can be obtained. Lemma 4.2 (Loss bounds). Consider the square loss. Under Assumptions 1, and 2, for γ = σ1+ ρ√m , each of the following inequalities hold with probability at least ( 1− 2(L+1)m ) : L(θ0) ≤ c0,σ1 and L(θ) ≤ cρ1,γ for θ ∈ BSpecρ,ρ1 (θ0), where ca,b = 2 n ∑n i=1 y 2 i + 2(1 + a) 2|g(b)|2 and g(a) = aL + |ϕ(0)| ∑L i=1 a i for any a, b ∈ R. Corollary 4.1 (Loss gradient bound). Consider the square loss. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2(L+1)m ) , we have ∥∇θL(θ)∥2 ≤ 2 √ L(θ)ϱ ≤ 2√cρ1,γϱ, with ϱ as in Lemma 4.1 and cρ1,γ as in Lemma 4.2. 5 OPTIMIZATION GUARANTEES WITH RESTRICTED STRONG CONVEXITY We focus on minimizing the empirical loss L(θ) over θ ∈ BSpecρ,ρ1 (θ0), the layerwise spectral norm ball in (3). Our analysis is based on Restricted Strong Convexity (RSC) (Negahban et al., 2012; Banerjee et al., 2014; Chen & Banerjee, 2015; Wainwright, 2019), which relaxes the definition of strong convexity by only needing strong convexity in certain directions or over a subset of the ambient space. We introduce the following specific definition of RSC with respect to a tuple (S, θ). Definition 5.1 (Restricted Strong Convexity (RSC)). A function L is said to satisfy α-restricted strong convexity (α-RSC) with respect to the tuple (S, θ) if for any θ′ ∈ S ⊆ Rp and some fixed θ ∈ Rp, we have L(θ′) ≥ L(θ) + ⟨θ′ − θ,∇θL(θ)⟩+ α2 ∥θ ′ − θ∥22, with α > 0. Note that L being α-RSC w.r.t. (S, θ) does not need L to be convex on Rp. Let us consider a sequence of iterates {θt}t≥0 ⊂ Rp. Our RSC analysis will rely on the following Qtκ-sets at step t, which avoid directions almost orthogonal to the average gradient of the predictor. We define the following notation: for two vectors π and π̄, cos(π, π̄) denotes the cosine of the angle between π and π̄. Definition 5.2 (Qtκ sets). For iterate θt ∈ Rp, let ḡt = 1n ∑n i=1 ∇θf(θt;xi). For any κ ∈ (0, 1], define Qtκ := {θ ∈ Rp | | cos(θ − θt, ḡt)| ≥ κ}. We define the set Bt := Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt). 
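To illustrate Definition 5.2 and the average predictor gradient $\bar{g}_t$ it is built on, here is a rough JAX sketch, assuming a predictor `forward(params, x)` such as the one sketched in Section 3, of computing $\bar{g}_t = \frac{1}{n}\sum_i \nabla_\theta f(\theta_t; x_i)$ and testing membership in $Q^t_\kappa$; the helper names are ours.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def avg_predictor_grad(forward, params, X):
    """g_bar_t = (1/n) * sum_i grad_theta f(theta_t; x_i), flattened to a vector in R^p."""
    per_sample = jax.vmap(jax.grad(forward), in_axes=(None, 0))(params, X)
    mean_tree = jax.tree_util.tree_map(lambda g: g.mean(axis=0), per_sample)
    g_bar, _ = ravel_pytree(mean_tree)
    return g_bar

def in_Q_kappa(theta, theta_t, g_bar, kappa):
    """Membership test for Q^t_kappa: |cos(theta - theta_t, g_bar_t)| >= kappa."""
    diff_tree = jax.tree_util.tree_map(lambda a, b: a - b, theta, theta_t)
    delta, _ = ravel_pytree(diff_tree)
    cos = jnp.vdot(delta, g_bar) / (jnp.linalg.norm(delta) * jnp.linalg.norm(g_bar))
    return jnp.abs(cos) >= kappa
```

Tracking $\|\bar{g}_t\|_2$ along training is exactly the check suggested by the RSC condition established next, which requires $\|\bar{g}_t\|_2^2 > \frac{c_2}{c_1\sqrt{m}}$.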
We focus on establishing RSC w.r.t. the tuple (Bt, θt), where BSpecρ,ρ1 (θ0) becomes the feasible set for the optimization and B Euc ρ2 (θt) is a Euclidean ball around the current iterate. Assumption 3 (Loss function). The loss ℓi, i ∈ [n], is (i) strongly convex, i.e., ℓ′′i ≥ a > 0 and (ii) smooth, i.e., ℓ′′i ≤ b. Assumption 3 is satisfied by commonly used loss functions such as square loss, where a = b = 2. We state the RSC result for square loss; the result for other losses and proofs of all technical results in this section are in Appendix B. Theorem 5.1 (RSC for Square Loss). For square loss, under Assumptions 1 and 2, with probability at least (1− 2(L+1)m ), ∀θ ′ ∈ Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt) with θt ∈ B Spec ρ,ρ1 (θ0), L(θ′) ≥ L(θt) + ⟨θ′ − θt,∇θL(θt)⟩+ αt 2 ∥θ′ − θt∥22 , with αt = c1 ∥ḡt∥ 2 2 − c2√ m , (10) where ḡt = 1n ∑n i=1 ∇θf(θt;xi), c1 = 2κ2 and c2 = 2cH(2ϱρ2 + √ cρ1,γ), with cH as in Theorem 4.1, ϱ as in Lemma 4.1, and cρ1,γ as in Lemma 4.2. Consequently, L satisfies RSC w.r.t. (Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt), θt) whenever αt > 0. Remark 5.1. The RSC condition αt > 0 is satisfied at iteration t as long as ∥ḡt∥22 > c2c1√m where c1, c2 are exactly specified in Theorem 5.1. Indeed, if γ (and so σ1 and ρ) is chosen according to the desirable operating regimes (see Remark 4.1), ρ1 = O(poly(L)) and ρ2 = O(poly(L)), then we can use the bounds from Lemma 4.2 and obtain that the RSC condition is satisfied when ∥ḡt∥22 > O(poly(L))√ m . The condition is arguably mild, does not need the NTK condition λmin(Kntk(·; θt)) > 0, and is expected to hold till convergence (see Remark 5.3). Moreover, it is a local condition at step t and has no dependence on being ”near initialization” in the sense of θt ∈ BEucρ (θ0) for ρ = O(1) as in (Liu et al., 2020; 2022). For the convergence analysis, we also need to establish a smoothness property of the total loss. Theorem 5.2 (Local Smoothness for Square Loss). For square loss, under Assumptions 1 and 2, with probability at least (1− 2(L+1)m ), ∀θ, θ ′ ∈ BSpecρ,ρ1 (θ0), L(θ′) ≤ L(θ) + ⟨θ′ − θ,∇θL(θ)⟩+ β 2 ∥θ′ − θ∥22 , with β = 2ϱ2 + 2cH √ cρ1,γ√ m , (11) with cH as in Theorem 4.1, ϱ as in Lemma 4.1, and cρ1,γ as in Lemma 4.2. Consequently, L is locally β-smooth. Moreover, if γ (and so σ1 and ρ) is chosen according to the desirable operating regimes (see Remark 4.1) and ρ1 = O(poly(L)), then β = O(poly(L)). Remark 5.2. Similar to the case of the standard strong convexity and smoothness, the RSC and smoothness parameters respectively in Theorems 5.1 and 5.2 satisfy αt < β. To see this note that αt < 2κ 2∥ḡt∥22 ≤ 2ϱ2 ≤ β, where the second inequality follows since κ ≤ 1, and ∥ḡt∥22 ≤ ϱ2 using Lemma 4.1. Next, we show that the RSC condition w.r.t. the tuple (Bt, θt) implies a restricted Polyak-Łojasiewicz (RPL) condition w.r.t. the tuple (Bt, θt), unlike standard PL which holds without restrictions (Karimi et al., 2016). Lemma 5.1 (RSC ⇒ RPL). Let Bt := Qtκ ∩BSpecρ,ρ1 (θ0) ∩B Euc ρ2 (θt). In the setting of Theorem 5.1, if αt > 0, then the tuple (Bt, θt) satisfies the Restricted Polyak-Łojasiewicz (RPL) condition, i.e., L(θt)− inf θ∈Bt L(θ) ≤ 1 2αt ∥∇θL(θt)∥22 , (12) with probability at least (1− 2(L+1)m ). For the rest of the convergence analysis, we make the following assumption where T can be viewed as the stopping time so the convergence analysis holds given the assumptions are satisfied. Assumption 4 (Iterates’ conditions). For iterates {θt}t=0,1,...,T : (A4.1) αt > 0; (A4.2) θt ∈ BSpecρ,ρ1 (θ0). Remark 5.3 (Assumption (A4.1)). 
From Remark 5.1, (A4.1) is satisfied as long as ∥ḡt∥22 > c2c1√m where c1, c2 are as in Theorem 5.1, which is arguably a mild condition. In Section 6 we will present some empirical findings that show that this condition on ∥ḡt∥22 behaves well empirically. We now consider the particular case of gradient descent (GD) for the iterates: θt+1 = θt − ηt∇L(θt), where ηt is chosen so that θt+1 ∈ BSpecρ,ρ1 (θ0) and ρ2 is chosen so that θt+1 ∈ B Euc ρ2 (θt), which are sufficient for the analysis of Theorem 5.1 — we specify suitable choices in the sequel (see Remark 5.4). Given RPL w.r.t. (Bt, θt), gradient descent leads to a strict decrease of loss in Bt. Lemma 5.2 (Local Loss Reduction in Bt). Let αt, β be as in Theorems 5.1 and 5.2 respectively, and Bt := Qtκ ∩ BSpecρ,ρ1 (θ0) ∩ B Euc ρ2 (θt). Consider Assumptions 1, 2, and 4, and gradient descent with step size ηt = ωtβ , ωt ∈ (0, 2). Then, for any θt+1 ∈ arginfθ∈Bt L(θ), we have with probability at least (1− 2(L+1)m ), L(θt+1)− L(θt+1) ≤ ( 1− αtωt β (2− ωt) ) (L(θt)− L(θt+1)) . (13) Building on Lemma 5.2, we show that GD in fact leads to a geometric decrease in the loss relative to the minimum value of L(·) in the set BSpecρ,ρ1 (θ0). Theorem 5.3 (Global Loss Reduction in BSpecρ,ρ1 (θ0)). Let αt, β be as in Theorems 5.1 and 5.2 respectively, and Bt := Qtκ ∩ BSpecρ,ρ1 (θ0) ∩ B Euc ρ2 (θt). Let θ ∗ ∈ arginfθ∈BSpecρ,ρ1 (θ0) L(θ), θt+1 ∈ arginfθ∈Bt L(θ), and γt := L(θt+1)−L(θ∗) L(θt)−L(θ∗) . Let αt, β be as in Theorems 5.1 and 5.2 respectively, and Bt := Qtκ ∩ BSpecρ,ρ1 (θ0) ∩ B Euc ρ2 (θt). Consider Assumptions 1, 2, and 4, and gradient descent with step size ηt = ωtβ , ωt ∈ (0, 2). Then, with probability at least (1− 2(L+1) m ), we have we have γt ∈ [0, 1) and L(θt+1)− L(θ∗) ≤ ( 1− αtωt β (1− γt)(2− ωt) ) (L(θt)− L(θ∗)) . (14) As long as the conditions in Theorem 5.3 are kept across iterations, there will be a geometric decrease in loss. For Assumption 4, we have discussed (A4.1) in Remark 5.1, and we discuss (A4.2) next. Remark 5.4 (Assumption (A4.2)). Consider we run gradient descent iterations until some stopping time T > 0. Given radius ρ < √ m , Assumption (A4.2) θt ∈ BSpecρ,ρ1 (θ0), t = 0, . . . , T , can be verified empirically. Alternatively, we can choose suitable step sizes ηt to ensure the property using the geometric convergence from Theorem 5.3. Assume that our goal is to get L(θT )− L(θ∗) ≤ ϵ. Then, with χT := mint∈[T ] αtωtβ (1 − γt)(2 − ωt), Assumption (A4.1) along with Remark 5.2 ensures χT < 1. Then, it suffices to have T = ⌈log(L(θ0)−L(θ ∗) ϵ )/ log 1 1−χT ⌉ = Θ(log 1 ϵ ). Then, to ensure θt ∈ BSpecρ,ρ1 (θ0), t ∈ [T ], in the case of the square loss, since ∥∇L(θt)∥2 ≤ c for some constant c (see Corollary 4.1), it suffices to have ηt ≤ min{ρ,ρ1}Θ(log 1ϵ ) . Moreover, we point out that having ρ2 ≥ ηtc ensures ∥θt+1 − θt∥2 ≤ ρ2 ⇒ θt+1 ∈ BEucρ2 (θt), which in this case can be guaranteed if ρ2 ≥ min{ρ,ρ1}Θ(log 1ϵ ) . The argument above is informal, but illustrates that Assumption (A4.1) along with suitable constant step sizes ηt would ensure (A4.2). Thus, Assumption (A4.1), which ensures the RSC condition, is the main assumption behind the analysis. The conditions in Assumption 4 (see Remarks 5.1 and 5.4) along with Theorem 5.3 imply that the RSC based convergence analysis holds for a much larger layerwise spectral radius norm ball BSpecρ,ρ1 (θ0) with any radius ρ < √ m and ρ1 = O(poly(L)). Remark 5.5 (RSC and NTK). 
In the context of square loss, the NTK condition for geometric convergence needs λmin(Kntk(·; θt)) ≥ c0 > 0 for every t, i.e., uniformly bounded away from 0 by a constant c0 > 0. The NTK condition can also be written as inf v:∥v∥2=1 ∥∥∥∥∥ n∑ i=1 vi∇θf(θt;xi) ∥∥∥∥∥ 2 2 ≥ c0 > 0 . (15) In contrast, the proposed RSC condition (Theorem 5.1) needs∥∥∥∥∥ 1n n∑ i=1 ∇θf(θt;xi) ∥∥∥∥∥ 2 2 ≥ c̄0√ m , (16) where m is the width and c̄0 = c2c1 where c1, c2 are constants defined in Theorem 5.1. As a quadratic form on the NTK, the RSC condition can be viewed as using a specific v in (15), i.e., vi = 1√n for i ∈ [n], since the RSC condition is ∥∥∥∑ni=1 1√n∇θf(θt;xi)∥∥∥22 ≥ c̄0n√m . For m = Ω(n2), the RSC condition is more general since NTK ⇒ RSC, but the converse is not necessarily true. Remark 5.6 (RSC covers different settings than NTK). The NTK condition may be violated in certain settings, e.g., ∇θf(θt;xi), i = 1, . . . , n are linearly dependent, xi ≈ xj for some i ̸= j, layer widths are small ml < n, etc., but the optimization may work in practice. The RSC condition provides a way to analyze convergence in such settings. The RSC condition gets violated when 1 n ∑n i=1 ∇θf(θt;xi) ≈ 0, which does not seem to happen in practice (see Section 6), and future work will focus on understanding the phenomena. Finally, note that it is possible to construct a set of gradient vectors which satisfy the NTK condition but violates the RSC condition. Our perspective is to view the NTK and the RSC as two different sufficient conditions and geometric convergence of gradient descent (GD) is guaranteed as long as one of them is satisfied in any step. 6 RSC CONDITION: EXPERIMENTAL RESULTS In this section, we present experimental results verifying the RSC condition∥∥ 1 n ∑n i=1 ∇θf(θt;xi) ∥∥2 2 = Ω ( poly(L)√ m ) , t = 1, . . . , T , on standard benchmarks: CIFAR- 10, MNIST, and Fashion-MNIST. For simplicity, as before, we use ḡt = 1n ∑n i=1 ∇θf(θt;xi). In Figure 1(a), we consider CIFAR-10 and show the trajectory of ∥ḡt∥2 over iterations t, for different values of the network width m. For any width, the value of ∥ḡt∥2 stabilizes to a constant value over iterations, empirically validating the RSC condition ∥ḡt∥22 = Ω(poly(L)/ √ m). Interestingly, the smallest value of ∥ḡt∥2 seems to increase with the width. To study the width dependence further, in Figure 1(b), we plot mint∈[T ] ∥ḡt∥2 as a function of width m for several values of the width. The plot shows that mint∈[T ] ∥ḡt∥2 increases steadily with m illustrating that the RSC condition is empirically satisfied more comfortably for wider networks. In Figure 1(c) and (d), we show similar plots for MNIST and Fashion-MNIST illustrating the same phenomena of mint∈[T ] ∥ḡt∥2 increasing with m. For the experiments, the network architecture we used had 3-layer fully connected neural network with tanh activation function. The training algorithm is gradient descent (GD) width constant learning rate, chosen appropriately to keep the training in NTK regime. Since we are using GD, we use 512 randomly chosen training points for the experiments. The stopping criteria is either training loss < 10−3 or number of iterations larger than 3000. 7 CONCLUSIONS In this paper, we revisit deep learning optimization for feedforward models with smooth activations, and make two technical contributions. First, we bound the spectral norm of the Hessian over a large layerwise spectral norm radius ball, highlighting the role of initialization in such analysis. 
Second, we introduce a new approach to showing geometric convergence in deep learning optimization using restricted strong convexity (RSC). Our analysis sheds considerably new light on deep learning optimization problems, underscores the importance of initialization variance, and introduces a RSC based alternative to the prevailing NTK based analysis, which may fuel future work. ACKNOWLEDGMENTS AB is grateful for support from the National Science Foundation (NSF) through awards IIS 21-31335, OAC 21-30835, DBI 20-21898, as well as a C3.ai research award. MB and LZ are grateful for support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning2 through awards DMS-2031883 and 814639 as well as NSF IIS-1815697 and the TILOS institute (NSF CCF-2112665). A SPECTRAL NORM OF THE HESSIAN We establish the main theorem from Section 4 in this Appendix. Theorem 4.1 (Hessian Spectral Norm Bound). Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least (1− 2(L+1)m ), for any xi, i ∈ [n], we have∥∥∇2θf(θ;xi)∥∥2 ≤ cH√m , (5) with cH = O(L5(1 + γ6L)(1 + ρ1)) where γ := σ1 + ρ√m . A.1 ANALYSIS OUTLINE Our analysis follows that of Liu et al. (2020) and sharpens the analysis to get better dependence on the depth L of the neural network. We start by defining the following quantities: Q∞(f) := max 1≤l≤L {∥∥∥∥ ∂f∂α(l) ∥∥∥∥ ∞ } , ∂f ∂α(l) ∈ Rm , (17) Q2(f) := max 1≤l≤L {∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥ 2 } , w(l) := vec(W (l)) , ∂α(l) ∂w(l) ∈ Rm×m 2 , (18) Q2,2,1(f) := max 1≤l1<l2<l3≤L {∥∥∥∥ ∂2α(l1) ∂w(l1) 2 ∥∥∥∥ 2,2,1 , ∥∥∥∥ ∂α(l1)∂w(l1) ∥∥∥∥ 2 ∥∥∥∥ ∂2α(l2)∂α(l2−1)∂w(l2) ∥∥∥∥ 2,2,1 , (19) ∥∥∥∥ ∂α(l1)∂w(l1) ∥∥∥∥ 2 ∥∥∥∥ ∂α(l2)∂w(l2) ∥∥∥∥ 2 ∥∥∥∥ ∂2α(l3) ∂α(l3−1) 2 ∥∥∥∥ 2,2,1 } (20) where for an order-3 tensor T ∈ Rd1×d2×d3 we define the (2, 2, 1)−norm as follows, ∥T∥2,2,1 := sup ∥x∥2=∥z∥2=1 d3∑ k=1 ∣∣∣∣∣∣ d1∑ i=1 d2∑ j=1 Tijkxizj ∣∣∣∣∣∣ , x ∈ Rd1 , z ∈ Rd2 . (21) We will also use the notation W (L+1) := v. A key result established in Liu et al. (2020) provides an upper bound to the spectral norm of the Hessian: Theorem 4.2 (Liu et al. (2020), Theorem 3.1). Under Assumptions 1, assuming there is δ such that∥∥∥ ∂α(l)∂α(l−1) ∥∥∥2 ≤ δ, with C1 ≤ L2δ2L + LδL + L and C2 ≤ LδL, we have∥∥∇2θf(θ;x)∥∥2 ≤ 2C1Q2,2,1(f)Q∞(f) + 2√mC2Q2(f) , (8) In order to prove Theorem 4.1, we prove that Theorem 4.2 holds with high-probability where • δ = γ follows from Lemma A.3, • Q2(f) = O(L(1 + γL)) follows from Lemma A.4, • Q2,2,1(f) = O(L3(1 + γ3L)) follows from Lemma A.4 and Lemma A.5, and • Q∞(f) = O ( (1+γL)(1+ρ1)√ m ) follows from Lemma A.7 , while also establishing precise constants to get a proper form for the constant cH in Theorem 4.1. As a result, cH ≤ O(L 5(1+γ6L)(1+ρ1)√ m ). A.2 SPECTRAL NORMS OF W (l) AND L2 NORMS OF α(l) We start by bounding the spectral norm of the layer-wise matrices at initialization. Lemma A.1. Consider any l ∈ [L]. If the parameters are initialized as w(l)0,ij ∼ N (0, σ20) where σ0 = σ1 2(1+ √ log m 2m ) as in Assumption 2, then with probability at least ( 1− 2m ) , we have ∥W (l)0 ∥2 ≤ σ1 √ m . (22) Proof. For a (ml ×ml−1) random matrix W (l)0 with i.i.d. entries w (l) 0,ij ∈ N (0, σ20), with probability at least (1− 2 exp(−t2/2σ20)), the largest singular value of W0 is bounded by σmax(W (ℓ) 0 ) ≤ σ0( √ ml + √ ml−1) + t . 
(23) This concentration result can be easily derived as follows: notice that W0 = σ0W̄ (ℓ) 0 , where w̄ (ℓ) 0,ij ∼ N(0, 1), thus we can use the expectation E[∥W0∥(ℓ)2 ] = σ0E[ ∥∥W̄0∥∥(ℓ)2 ] = σ0(√mℓ + √mℓ−1) from Gordon’s Theorem for Gaussian matrices (Vershynin, 2012, Theorem 5.32) in the Gaussian concentration result for Lipschitz functions (Vershynin, 2012, Proposition 3.4) considering that B 7→ ∥σ0B∥2 is a σ0-Lipschitz function when the matrix B is treated as a vector. Let us choose t = σ0 √ 2 logm so that (23) holds with probability at least (1− 2m ). Then, to obtain (22), Case 1: l = 1. With m0 = d and m1 = m, ∥W (1)0 ∥2 ≤ σ0( √ d+ √ m+ √ 2 logm) ≤ σ0(2 √ m+ √ 2 logm) since we are in the over-parameterized regime m ≥ d. Case 2: 2 ≤ l ≤ L. With ml = ml−1 = m, ∥W (l)0 ∥2 ≤ σ0(2 √ m+ √ 2 logm) . Now, using σ0 = σ1 2(1+ √ log m 2m ) in both cases completes the proof. Next we bound the spectral norm of layerwise matrices. Proposition A.1. Under Assumptions 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2m ) , ∥W (l)∥2 ≤ ( σ1 + ρ√ m )√ m , l ∈ [L]. Proof. By triangle inequality, for l ∈ [L], ∥W (l)∥2 ≤ ∥W (l)0 ∥2 + ∥W (l) −W (l) 0 ∥2 (a) ≤ σ1 √ m+ ρ , where (a) follows from Lemma A.1. This completes the proof. Next, we show that the output α(l) of layer l has an L2 norm bounded by O( √ m). Lemma A.2. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2lm ) , we have ∥α(l)∥2 ≤ √ m ( σ1 + ρ√ m )l + √ m l∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)| = ( γl + |ϕ(0)| l∑ i=1 γi−1 ) √ m . Proof. Following Allen-Zhu et al. (2019); Liu et al. (2020), we prove the result by recursion. First, recall that since ∥x∥22 = d, we have ∥α(0)∥2 = √ d. Then, since m0 = d and ϕ is 1-Lipschitz,∥∥∥∥ϕ( 1√dW (1)α(0) )∥∥∥∥ 2 − ∥ϕ(0)∥2 ≤ ∥∥∥∥ϕ( 1√dW (1)α(0) ) − ϕ(0) ∥∥∥∥ 2 ≤ ∥∥∥∥ 1√dW (1)α(0) ∥∥∥∥ 2 , so that ∥α(1)∥2 = ∥∥∥∥ϕ( 1√dW (1)α(0) )∥∥∥∥ 2 ≤ ∥∥∥∥ 1√dW (1)α(0) ∥∥∥∥ 2 + ∥ϕ(0)∥2 ≤ 1√ d ∥W (1)∥2∥α(0)∥2 + |ϕ(0)| √ m ≤ ( σ1 + ρ√ m )√ m+ |ϕ(0)| √ m , where we used Proposition A.1 in the last inequality, which holds with probability at least 1− 2m . For the inductive step, we assume that for some l − 1, we have ∥α(l−1)∥2 ≤ √ m ( σ1 + ρ√ m )l−1 + √ m l−1∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)|, which holds with the probability at least 1− 2(l−1)m . Since ϕ is 1-Lipschitz, for layer l, we have∥∥∥∥ϕ( 1√mW (l)α(l−1) )∥∥∥∥ 2 − ∥ϕ(0)∥2 ≤ ∥∥∥∥ϕ( 1√mW (l)α(l−1) ) − ϕ(0) ∥∥∥∥ 2 ≤ ∥∥∥∥ 1√mW (l)α(l−1) ∥∥∥∥ 2 , so that ∥α(l)∥2 = ∥∥∥∥ϕ( 1√mW (l)α(l−1) )∥∥∥∥ 2 ≤ ∥∥∥∥ 1√mW (l)α(l−1) ∥∥∥∥ 2 + ∥ϕ(0)∥2 ≤ 1√ m ∥W (l)∥2∥α(l−1)∥2 + √ m|ϕ(0)| (a) ≤ ( σ1 + ρ√ m ) ∥α(l−1)∥2 + √ m|ϕ(0)| (b) = √ m ( σ1 + ρ√ m )l + √ m l∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)|, where (a) follows from Proposition A.1 and (b) from the inductive step. Since we have used Proposition A.1 l times, after a union bound, our result would hold with probability at least 1− 2lm . This completes the proof. A.3 SPECTRAL NORMS OF ∂α (l) ∂w(l) AND ∂α (l) ∂α(l−1) Recall that in our setup, the layerwise outputs and pre-activations are respectively given by: α(l) = ϕ ( α̃(l) ) , α̃(l) := 1 √ ml−1 W (l)α(l−1) . (24) Lemma A.3. Consider any l ∈ {2, . . . , L}. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2m ) , ∥∥∥∥ ∂α(l)∂α(l−1) ∥∥∥∥2 2 ≤ ( σ1 + ρ√ m )2 = γ2 . (25) Proof. By definition, we have [ ∂α(l) ∂α(l−1) ] i,j = 1√ m ϕ′(α̃ (l) i )W (l) ij . 
(26) Since ∥A∥2 = sup∥v∥2=1 ∥Av∥2, so that ∥A∥ 2 2 = sup∥v∥2=1 ∑ i⟨ai,v⟩2, we have that for 2 ≤ l ≤ L, ∥∥∥∥ ∂α(l)∂α(l−1) ∥∥∥∥2 2 = sup ∥v∥2=1 1 m m∑ i=1 ϕ′(α̃(l)i ) m∑ j=1 W (l) ij vj 2 (a) ≤ sup ∥v∥2=1 1 m ∥W (l)v∥22 = 1 m ∥W (l)∥22 (b) ≤ γ2 , where (a) follows from ϕ being 1-Lipschitz by Assumption 1 and (b) from Proposition A.1. This completes the proof. Lemma A.4. Consider any l ∈ [L]. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least ( 1− 2lm ) ,∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥2 2 ≤ 1 m [ √ m ( σ1 + ρ√ m )l−1 + √ m l−1∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)| ]2 = ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 . (27) Proof. Note that the parameter vector w(l) = vec(W (l)) and can be indexed with j ∈ [m] and j′ ∈ [d] when l = 1 and j′ ∈ [m] when l ≥ 2. Then, we have[ ∂α(l) ∂w(l) ] i,jj′ = [ ∂α(l) ∂W (l) ] i,jj′ = 1√ m ϕ′(α̃ (l) i )α (l−1) j′ 1[i=j] . (28) For l ∈ {2, . . . , L}, noting that ∂α (l) ∂w(l) ∈ Rm×m2 and ∥V ∥F = ∥vec(V )∥2 for any matrix V , we have∥∥∥∥ ∂α(l)∂w(l) ∥∥∥∥2 2 = sup ∥V ∥F=1 1 m m∑ i=1 ϕ′(α̃(l)i ) m∑ j,j′=1 α (l−1) j′ 1[i=j]Vjj′ 2 ≤ sup ∥V ∥F=1 1 m ∥V α(l−1)∥22 ≤ 1 m sup ∥V ∥F=1 ∥V ∥22∥α(l−1)∥22 (a) ≤ 1 m ∥α(l−1)∥22 (b) ≤ 1 m [ √ m ( σ1 + ρ√ m )l−1 + √ m l−1∑ i=1 ( σ1 + ρ√ m )i−1 |ϕ(0)| ]2 = ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 where (a) follows from ∥V ∥22 ≤ ∥V ∥ 2 F for any matrix V , and (b) from Lemma A.2. The l = 1 case follows in a similar manner:∥∥∥∥ ∂α(1)∂w(1) ∥∥∥∥2 2 ≤ 1 d ∥α(0)∥22 = 1 d ∥x∥22 = 1 which satisfies the form for l = 1. That completes the proof. A.4 (2, 2, 1)-NORMS OF ORDER 3 TENSORS Lemma A.5. Under Assumptions 1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), each of the following inequalities hold with probability at least ( 1− 2lm ) ,∥∥∥∥ ∂2α(l)(∂α(l−1))2 ∥∥∥∥ 2,2,1 ≤ βϕγ2, (29) ∥∥∥∥ ∂2α(l)∂α(l−1)∂W (l) ∥∥∥∥ 2,2,1 ≤ βϕ 2 γ2 +(γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2+ 1, (30) for l = 2, . . . , L; and ∥∥∥∥ ∂2α(l)(∂W (l))2 ∥∥∥∥ 2,2,1 ≤ βϕ ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 , (31) for l ∈ [L]. Proof. For the inequality (29), note that from (26) we obtain ( ∂2α(l) (∂α(l−1))2 ) i,j,k = 1 mϕ ′′(α̃ (l) i )W (l) ik W (l) ij , and so∥∥∥∥ ∂2α(l)(∂α(l−1))2 ∥∥∥∥ 2,2,1 = sup ∥v1∥2=∥v2∥2=1 1 m m∑ i=1 ∣∣∣ϕ′′(α̃(l)i )(W (l)v1)i(W (l)v2)i∣∣∣ ≤ sup ∥v1∥2=∥v2∥2=1 1 m βϕ m∑ i=1 ∣∣∣(W (l)v1)i(W (l)v2)i∣∣∣ (a) ≤ sup ∥v1∥2=∥v2∥2=1 1 2m βϕ m∑ i=1 (W (l)v1) 2 i + (W (l)v2) 2 i ≤ 1 2m βϕ sup ∥v1∥2=∥v2∥2=1 (∥W (l)v1∥22 + ∥W (l)v2∥22) ≤ 1 2m βϕ(∥W (l)∥22 + ∥W (l)∥22) (b) ≤ βϕ(σ1 + ρ/ √ m)2 = βϕγ 2, (32) where (a) follows from 2ab ≤ a2 + b2 for a, b ∈ R, and (b) from Proposition A.1, with probability at least 1− 2m . For the inequality (30), carefully following the chain rule in (28) we obtain( ∂2α(l) ∂α(l−1)∂W (l) ) i,jj′,k = 1 m ϕ′′(α̃ (l) i )W (l) ik α (l−1) j′ 1[j=i] + 1√ m ϕ′(α̃ (l) i )1[i=j]1[j′=k]. 
Then, we have ∥∥∥∥ ∂2α(l)∂α(l−1)∂W (l) ∥∥∥∥ 2,2,1 = sup ∥v1∥2=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ m∑ k=1 m∑ j=1 m∑ j′=1 ( 1 m ϕ′′(α̃ (l) i )W (l) ik α (l−1) j′ 1[j=i] + 1√ m ϕ′(α̃ (l) i )1[i=j]1[j′=k] ) v1,kV2,jj′ ∣∣∣∣ = sup ∥v1∥2=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ 1m m∑ j′=1 ϕ′′(α̃ (l) i )α (l−1) j′ V2,ij′ ( m∑ k=1 W (l) ik v1,k ) + 1√ m m∑ k=1 ϕ′(α̃ (l) i )v1,kV2,ik ∣∣∣∣∣ ≤ sup ∥v1∥2=∥V2∥F=1 1 m βϕ m∑ i=1 ∣∣∣(W (l)v1)i(V2α(l−1))i∣∣∣+ 1√ m m∑ i=1 m∑ k=1 |v1,kV2,ik| ≤ sup ∥v1∥2=∥v2∥F=1 1 2m βϕ m∑ i=1 (W (l)v1) 2 i + (V2α (l−1))2i + 1√ m m∑ i=1 ∥v1∥2 ∥∥V2,i,:∥∥2 = sup ∥v1∥2=∥V2∥F=1 1 2m βϕ( ∥∥∥W (l)v1∥∥∥2 2 + ∥∥∥V2α(l−1)∥∥∥2 2 ) + 1√ m m∑ i=1 ∥∥V2,i,:∥∥2 (a) ≤ 1 2m βϕ( ∥∥∥W (l)∥∥∥2 2 + ∥∥∥α(l−1)∥∥∥2 2 ) + ∥V2∥F (b) ≤ βϕ 2 γ2 +(γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2+ 1 where (a) follows from ∥∥V2α(l−1)∥∥2 ≤ ∥V2∥2 ∥∥αl−1∥∥ ≤ ∥V2∥F ∥∥αl−1∥∥2 = ∥∥αl−1∥∥2 and∑m i=1 ∥∥V2,i,:∥∥2 ≤ √m√∑mi=1 ∥∥V2,i,:∥∥22, and (b) follows from Proposition A.1 and Lemma A.2, with altogether holds with probability at least 1− 2lm . For the last inequality (31), we start with the analysis for l ≥ 2. Carefully following the chain rule in (28) we obtain ( ∂2α(l) (∂W (l))2 ) i,jj′,kk′ = 1 m ϕ′′(α̃ (l) i )α (l−1) k′ α (l−1) j′ 1[j=i]1[k=i]. Then, we have∥∥∥∥ ∂2α(l)(∂W (l))2 ∥∥∥∥ 2,2,1 = sup ∥V1∥F=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ m∑ j,j′=1 m∑ k,k′=1 ( 1 m ϕ′′(α̃ (l) i )α (l−1) k′ α (l−1) j′ 1[j=i]1[k=i]V1,jj′V2,kk′ )∣∣∣∣∣∣ = sup ∥V1∥F=∥V2∥F=1 m∑ i=1 ∣∣∣∣∣∣ϕ ′′(α̃ (l) i ) m m∑ j′=1 ( α (l−1) j′ V1,ij′ ) m∑ k′=1 ( α (l−1) k′ V2,ik′ )∣∣∣∣∣∣ ≤ sup ∥V1∥F=∥V2∥F=1 1 m βϕ m∑ i=1 ∣∣∣(V1α(l−1))i(V2α(l−1))i∣∣∣ ≤ sup ∥V1∥F=∥v2∥F=1 1 2m βϕ m∑ i=1 (V2α (l−1))2i + (V2α (l−1))2i = sup ∥V1∥F=∥V2∥F=1 1 2m βϕ( ∥∥∥V2α(l−1)∥∥∥2 2 + ∥∥∥V2α(l−1)∥∥∥2 2 ) ≤ 1 2m βϕ( ∥∥∥α(l−1)∥∥∥2 2 + ∥∥∥α(l−1)∥∥∥2 2 ) ≤ βϕ ( γl−1 + |ϕ(0)| l−1∑ i=1 γi−1 )2 , which holds with probability at least 1 − 2(l−1)m . For the case l = 1, it is easy to show that( ∂2α(1) (∂W (1))2 ) i,jj′,kk′ = 1dϕ ′′(α̃ (1) i )xk′xj′1[j=i]1[k=i] and so ∥∥∥ ∂2α(1)(∂W (1))2 ∥∥∥2,2,1 ≤ βϕ. This completes the proof. A.5 L∞ NORM OF ∂f∂α(l) Let b(l) := ∂f ∂α(l) ∈ Rm for any l ∈ [L]. Let b(l)0 denote b(l) at initialization. By a direct calculation, we have b(l) = ∂f ∂α(l) = ( L∏ l′=l+1 ∂α(l) ∂α(l−1) ) ∂f ∂α(L) = ( L∏ l′=l+1 1√ m (W (l ′))⊤D(l ′) ) 1√ m v , where D(l ′) is a diagonal matrix of the gradient of activations, i.e., D(l ′) ii = ϕ ′(α̃ (l′) i ). Note that we also have the following recursion: b(l) = ∂f ∂α(l) = ∂α(l+1) ∂α(l) ∂f ∂α(l+1) = 1√ m (W (l+1))⊤D(l+1)b(l+1) . Lemma A.6. Consider any l ∈ [L]. Under Assumptions1 and 2, for θ ∈ BSpecρ,ρ1 (θ0), with probability at least 1− 2(L−l+1)m , ∥b(l)∥2 ≤ 1√ m ( σ1 + ρ√ m )L−l (1 + ρ1) (33) and ∥b(l)0 ∥2 ≤ σL−l1√ m ≤ γ L−l √ m . (34) Proof. First, note that ∥∥b(L)∥∥ 2 = 1√ m ∥v∥2 ≤ 1√ m (∥v0∥2 + ∥v − v0∥2) ≤ 1√ m (1 + ρ1), where the inequality follows from from Proposition A.1. Now, for the inductive step, assume ∥∥b(l)∥∥ 2 ≤( σ1 + ρ√ m )L−l 1√ m (1 + ρ1) with probability at least 1− 2lm . Then,∥∥∥b(l−1)∥∥∥ 2 = ∥∥∥∥ ∂α(l)∂α(l−1)b(l) ∥∥∥∥ 2 ≤ ∥∥∥∥ ∂α(l)∂α(l−1) ∥∥∥∥ 2 ∥∥∥b(l)∥∥∥ 2 ≤ ( σ1 + ρ√ m )( σ1 + ρ√ m )L−l 1√ m (1 + ρ1) = ( σ1 + ρ√ m )L−l+1 1√ m (1 + ρ1) where the last inequality follows from Lemma A.3 with probability at least 1− 2m (l + 1). Since we use Proposition A.1 once at layer L and then Lemma A.3 (L− l) times at layer l, then we have that everything holds altogether with probability at least 1− 2m (L− l + 1). We have finished the proof by induction. Lemma A.7. Consider any l ∈ [L]. 
Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $1 - \frac{2(L-l)}{m}$,
$$\big\|b^{(l)}\big\|_\infty \le \frac{\gamma^{L-l}}{\sqrt{m}}(1+\rho_1). \qquad (35)$$
Proof. For any $l \in [L]$, by definition the $i$-th component of $b^{(l)}$, i.e., $b^{(l)}_i$, takes the form $b^{(l)}_i = \frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\frac{\partial f}{\partial \alpha^{(L)}} = \frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\frac{1}{\sqrt{m}}v$. Then, with $W^{(l)}_{:,i}$ denoting the $i$-th column of the matrix $W^{(l)}$,
$$\bigg\|\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\bigg\|_2 \overset{(a)}{=} \bigg\|\frac{\phi'(\tilde{\alpha}^{(l)}_i)}{\sqrt{m}}\,(W^{(l)}_{:,i})^\top \prod_{l'=l+2}^{L}\bigg(\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\bigg)\bigg\|_2 \overset{(b)}{\le} \frac{1}{\sqrt{m}}\big\|W^{(l)}_{:,i}\big\|_2 \prod_{l'=l+2}^{L}\bigg\|\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\bigg\|_2 \overset{(c)}{\le} \frac{1}{\sqrt{m}}\big\|W^{(l)}_{:,i}\big\|_2\,\gamma^{L-l-1} \overset{(d)}{\le} \gamma\cdot\gamma^{L-l-1} = \gamma^{L-l} \qquad (36)$$
where (a) follows from $\frac{\partial \alpha^{(l+1)}}{\partial \alpha^{(l)}_i} = \frac{1}{\sqrt{m}}\phi'(\tilde{\alpha}^{(l)}_i)(W^{(l)}_{:,i})^\top$, (b) from $\phi$ being 1-Lipschitz, (c) from Lemma A.3, and (d) from $\|W^{(l)}_{:,i}\|_2 \le \|W^{(l)}\|_2$ and Proposition A.1, which altogether holds with probability $1 - \frac{2(L-l)}{m}$. Therefore, for every $i \in [m]$,
$$\big|b^{(l)}_i\big| \le \bigg|\frac{1}{\sqrt{m}}\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}v\bigg| \le \frac{1}{\sqrt{m}}\bigg\|\frac{\partial \alpha^{(L)}}{\partial \alpha^{(l)}_i}\bigg\|_2 \|v\|_2 \le \frac{1}{\sqrt{m}}\gamma^{L-l}(1+\rho_1),$$
where the last inequality follows from (36) and $\|v\|_2 \le \|v_0\|_2 + \|v - v_0\|_2 \le 1 + \rho_1$. This completes the proof.

A.6 USEFUL BOUNDS

Lemma 4.1 (Predictor gradient bounds). Under Assumptions 1 and 2, for $\theta \in B^{\mathrm{Spec}}_{\rho,\rho_1}(\theta_0)$, with probability at least $\big(1 - \frac{2(L+1)}{m}\big)$, we have
$$\|\nabla_\theta f(\theta;x)\|_2 \le \varrho \quad \text{and} \quad \|\nabla_x f(\theta;x)\|_2 \le \frac{\gamma^L}{\sqrt{m}}(1+\rho_1), \qquad (9)$$
with $\varrho^2 = (h(L+1))^2 + \frac{1}{m}(1+\rho_1)^2\sum_{l=1}^{L}(h(l))^2\gamma^{2(L-l)}$, $\gamma = \sigma_1 + \frac{\rho}{\sqrt{m}}$, $h(l) = \gamma^{l-1} + |\phi(0)|\sum_{i=1}^{l-1}\gamma^{i-1}$.

Proof. We first prove the bound on the gradient with respect to the weights. Using the chain rule,
$$\frac{\partial f}{\partial w^{(l)}} = \frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\prod_{l'=l+1}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\frac{\partial f}{\partial \alpha^{(L)}}$$
and so
$$\bigg\|\frac{\partial f}{\partial w^{(l)}}\bigg\|_2^2 \le \bigg\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\bigg\|_2^2 \bigg\|\prod_{l'=l+1}^{L}\frac{\partial \alpha^{(l')}}{\partial \alpha^{(l'-1)}}\frac{\partial f}{\partial \alpha^{(L)}}\bigg\|_2^2 \overset{(a)}{\le} \bigg\|\frac{\partial \alpha^{(l)}}{\partial w^{(l)}}\bigg\|_2^2\,\gamma^{2(L-l)}\cdot\frac{1}{m}(1+\rho_1)^2 \overset{(b)}{\le} \Big(\gamma^{l-1} + |\phi$$
1. What is the main contribution of the paper regarding convergence rates for gradient descent on overparameterized neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works like Liu et al.?
3. Do you have any concerns or questions about the assumptions made in the paper, such as Assumption 2 or Assumption A4.1?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper analyzes the convergence of gradient descent on overparameterized neural networks with smooth activation functions. The authors show that the Hessian of the neural network has a small spectral norm within a special ball around the initialization, and this ball is allowed to be significantly larger than those allowed by prior works, such as those that use the NTK framework. The authors then show that the square loss is strongly convex within a restricted set around the initialization (under certain assumptions). This, along with the spectral norm bound (which implies smoothness of the loss function), gives a geometric convergence rate for gradient descent.

Strengths And Weaknesses
Strengths:
- This paper considerably improves the results in Liu et al., with polynomial factors wherever the prior work had exponential factors.
- The use of RSC matches empirical observations that the Hessians of neural networks have many flat directions with eigenvalues equal to 0.
- The gradient descent iterates are allowed to be in a much larger ball when compared to prior work.

Weaknesses:
- The flow of the paper is a little confusing. The authors first show an upper bound on the spectral norm of the Hessian (in an appropriate ball around the initialization), and then in the second paragraph of page 2, say that this will be used to show Restricted Strong Convexity. This doesn't make much sense when phrased this way, since RSC would require a lower bound on the smallest eigenvalue of the Hessian within the restricted set. My understanding of the paper is that the Hessian spectral bound has nothing to do with RSC, and is instead used for showing smoothness of the loss function, as in Theorem 5.2.
- In Theorem 4.1, the role of σ1 is not clear. What stops you from choosing an extremely small σ1? It seems like this would give a good value for γ. Assumption 2 requires it to be bounded away from 0, but its significance in Theorem 4.1 is unclear.
- I'm a little concerned about the connections between the Lipschitz properties of the network (via the Lipschitz activations in Assumption 1) and the Restricted Strong Convexity properties. These two are conflicting properties, and the choice of ρ, ρ1 would dictate whether the Lipschitz bound in Assumption 1 can be satisfied. Any clarification on this point would be appreciated.
- The geometric convergence in Theorem 5.3 should also come with the disclaimer that the loss cannot be made arbitrarily small. In order for A4.1 to hold, the average gradient needs to be sufficiently large, which implies that the algorithm cannot be arbitrarily close to the minimum (in which case the gradient would be arbitrarily close to zero). This would also contradict the claim in Remark 5.4, where the authors say that there exist parameters that can drive the loss to an arbitrary ϵ.
- Assumption A4.1 should appear before Theorem 5.1. Currently it is buried in the last sentence of Theorem 5.1.

Clarifications:
- This is not a complaint, but perhaps the choice of σ0 can be compared with common initialization schemes, such as Xavier initialization.
- The line below Assumption 3 says that cross-entropy is strongly convex. Is this true? (The loss in logistic regression, for example, is certainly not strongly convex.) If your results in Theorem 5.1 are for the square loss, why mention cross-entropy at all?
Minor comments:
- Paragraph above Lemma 4.1: "to established" -> "to establish".

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper has some issues with clarity (as pointed out in the weaknesses section above), but overall I found it clear enough.
Originality: This paper uses a considerably finer analysis than Liu et al., and obtains interesting results that are strictly better than the prior work. The work also establishes poly-time guarantees without the NTK assumption.
ICLR
Title Attacking Binarized Neural Networks Abstract Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.1 1 INTRODUCTION The ability to fool machine learning models by making small changes to their input severely limits their potential for safe use in many real-world scenarios. Example vulnerabilities include a seemingly innocuous audio broadcast that is interpreted by a speech recognition model in a smartphone, with the intent to trigger an e-transfer, as well as pictures or identity documents that are automatically tagged as someone other than the real individual. The two most common threat models when evaluating the security of a system are the black-box and white-box assumptions, which represent varying degrees of information that an adversary may possess. In a black-box threat model, an adversary has similar abilities to a normal user that interacts with a system by providing inputs and observing the corresponding outputs. Under this threat model, an adversary generally does not know details of the model architecture or dataset used to train the model. Of course, an adversary is free to assume that a convolutional architecture was likely used if the input domain is images, or a recurrent model for speech or text. In a white-box threat model, an adversary has complete access to the model architecture and parameters. In the case of neural networks, white-box attacks frequently rely on gradient information to craft especially strong adversarial examples, where strong means that the example is very close to the original input as defined by some distance norm (e.g. L0–number of features modified, L2–mean squared distance), yet is very likely to cause the model to yield the incorrect output. For both threat types, targeted attacks where a model is made to fail in a specific way (e.g. causing a handwritten ‘7’ look like a ‘3’) represents a stronger attack than simple misclassification. The problem with deploying machine learning systems that are secured in a traditional sense, is that adversarial examples have been shown to generalize well between models with different source and target architectures (Szegedy et al., 2013; Papernot et al., 2017b; Tramèr et al., 2017). This means that a secured model can be compromised in an approximately white-box setting by training and attacking a substitute model that approximates the decision boundary of the model under attack (Papernot et al., 2017b). 
Thus, to make strong conclusions about the robustness of a machine learning model to adversarial attacks, both threat models should be considered.¹

¹Source code available at https://github.com/AngusG/cleverhans-attacking-bnns

Tangent to research on defences against adversarial attacks, significant progress has been made towards training very low-precision deep neural networks to accuracy levels that are competitive with full-precision models (Courbariaux & Bengio, 2016; Zhou et al., 2016; Tang et al., 2017). The current motivation for extreme quantization is the ability to deploy these models under hardware resource constraints, acceleration, or reduced power consumption. Ideally, 32× compression is possible by using 1-bit to represent single-precision floating point parameters. By similarly quantizing activations, we can reduce run-time memory consumption as well. These savings enable large scale deployment of neural networks on the billions of existing embedded devices. Very low-precision models were designed with deployment in mind, and may be responsible for making critical decisions in embedded systems, all subject to reverse engineering and a diverse set of real world attacks. With much at stake in applications like autonomous navigation, robotics, and network infrastructure, understanding how very low-precision neural networks behave in adversarial settings is essential. To that end, we make the following contributions:
• To the best of our knowledge, we are the first to formally evaluate and interpret the robustness of binary neural networks (BNNs) to adversarial attacks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky, 2009) datasets.
• We compare and contrast the properties of low-precision neural networks that confer adversarial robustness to previously proposed defense strategies. We then combine these properties to propose an optimal defense strategy.
• We attempt to generalize and make recommendations regarding the suitability of low-precision neural networks against various classes of attacks (e.g. single step vs. iterative).

2 BACKGROUND
Since the initial disclosure of adversarial examples by Szegedy et al. (2013) and Biggio et al. (2013), many defense strategies have been proposed and subsequently defeated. It is generally accepted that strategies for mitigating the impact of these examples still lag behind state of the art attacks, which are capable of producing adversarial examples that are indistinguishable from unmodified inputs as perceived by humans. In general, there are two approaches to defending against adversarial examples: reactive–detecting the presence of adversarial examples, such as through some notion of confidence-based outlier detection. On the other hand, a proactive approach aims to improve the robustness of the underlying model, which may involve adding an extra class to which malicious inputs should be assigned (Papernot & McDaniel, 2017). The latter approach is important for building reliable systems where a sensible decision must be made at all times. In this work, we focus solely on the proactive approach.

To define adversarial examples, we require some measurement of distance that can be computed between perturbed inputs and naturally occurring inputs. In the visual domain, it is convenient if the metric approximates human perceptual similarity, but is not required.
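As a concrete reference for the distance metrics enumerated next, here is a small sketch (ours, not from the paper) computing the three Lp measurements between a clean input and its perturbed version:

```python
import jax.numpy as jnp

def attack_distances(x, x_adv):
    """Distances between a clean input x and a perturbed input x_adv."""
    delta = (x_adv - x).ravel()
    return {
        "L0": jnp.sum(delta != 0),         # number of features modified
        "L2": jnp.linalg.norm(delta),      # Euclidean size of the perturbation
        "Linf": jnp.max(jnp.abs(delta)),   # largest change to any single feature
    }
```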
We evaluate at least one attack that is cast in terms of each respective distance metric, and leave discussion of the optimal distance metric to future work. The most compelling explanation for the existence of adversarial examples proposed to date is the linearity hypothesis (Goodfellow et al., 2015). The elementary operations used at each layer of a neural network, matrix dot products and convolutions, are fundamentally too linear. Furthermore, the non-linearity applied at each layer is usually itself either piecewise linear (e.g. ReLU), or we have specifically encouraged the network through initialization or regularization to have small weights and activations such that its units (e.g. sigmoid, tanh) operate in their linear regions. By adding noise to inputs that is highly correlated with the sign of the model parameters, a large swing in activation can be induced. Additionally, the magnitude by which this noise must be scaled to have this effect tends to diminish as the input dimensionality grows. This piecewise linearity also makes neural networks easy to attack using the gradient of the output with respect to the input, and consequently, the resulting incorrect predictions are made with high confidence. Fortunately, we are reminded that the universal approximation theorem suggests that, given sufficient capacity, a neural network should at least be able to represent the type of function that resists adversarial examples (Goodfellow et al., 2015). The most successful defense mechanism to date, adversarial training, is based on this premise, and attempts to learn such a function. The fast gradient sign method (FGSM) is one such procedure for crafting this damaging noise, and is still used today despite not being state-of-the-art in the white-box setting, as it is straightforward to compute and yields examples that transfer well between models (Goodfellow et al., 2015). The linearity hypothesis was one of the main reasons for initially considering binarized neural networks as a natural defense against adversarial examples. Not only are they highly regularized by default through severely quantized weights, but they appear to be more non-linear and discontinuous than conventional deep neural networks (DNNs). Additionally, we suspect that the same characteristics that make them challenging to train make them difficult to attack with an iterative procedure. At the same time, assumptions regarding the information required by an effective adversary have become more and more relaxed, to the extent that black-box attacks can be especially damaging with just a small set of labeled input-output pairs (Papernot et al., 2017b). Perhaps the most striking feature of adversarial examples is how well they generalize between models with different architectures trained on different datasets (Goodfellow et al., 2015; Papernot et al., 2017b; Kurakin et al., 2017a). It was shown by Kurakin et al. (2017b) that 2/3 of adversarial ImageNet examples survive various camera and perspective transformations after being printed on paper and subsequently photographed and classified by a mobile phone. The most successful black-box attacks have the secured model (the Oracle) assign labels to a set of real or synthetic inputs, which can be used to train a substitute model that mimics the Oracle’s decision boundary (Papernot et al., 2017b).
A single-step attack, such as FGSM, can then be used on the smooth substitute model to generate examples that transfer, without having access to the original training data, architecture, or training procedure used by the Oracle. Papernot et al. (2017b) showed they are able to compromise machine learning models 80% of the time on small datasets like MNIST using various shallow MLP-based substitute models. There is not a particularly high correlation between test accuracy and transferability of adversarial examples; therefore, despite not attaining great results on the original MNIST task, a simple substitute learns enough to compromise the Oracle. This technique was shown to overcome gradient masking approaches, such as models that either obscure or lack gradient information, like k-nearest neighbors or decision trees. With strong adversarial training of the model to be defended, attacks generated using the substitute model do not transfer as well. Therefore, to be compelling, BNNs should be able to handle training with large ε while maintaining competitive test accuracy on clean inputs relative to full-precision. The strongest white-box attacks all use an iterative procedure; however, the resulting examples do not transfer as well as those from single-step methods (Goodfellow et al., 2015). An iterative attack using the Adam optimizer was proposed by Carlini & Wagner (2017) that outperforms other expensive optimization-based approaches (Szegedy et al., 2013), the Jacobian-based saliency map attack (JSMA) (Papernot et al., 2015), and Deepfool (Moosavi-Dezfooli et al., 2015) in terms of the three Lp norms previously used as adversarial example distance metrics in the literature. We have made our best attempt to use state-of-the-art attacks in our experiments.
3 EXPERIMENTS
In Figure 1, we depict the quantization scheme applied to the base convolutional neural network provided in the CleverHans library tutorials (Papernot et al., 2017a). In the first layer, we retain weights and activations in single-precision floating point. Weights in hidden layers are binarized either deterministically or stochastically, as in Courbariaux & Bengio (2016), and activations were always binarized deterministically. Unlike in Courbariaux & Bengio (2016), we stochastically quantize weights at test time as a possible defense against iterative attacks. Under the stochastic binarization scheme, weights are sampled once per forward pass from a Bernoulli distribution with probability given by passing the real-valued weight through the hard sigmoid function from Courbariaux & Bengio (2016). Lastly, we map the Bernoulli samples ∈ {0, 1} to ±1 by multiplying by 2 and subtracting 1.² We do not find that this significantly slows down training with TensorFlow (Abadi et al., 2015) on a modern GPU, but these networks take between 3–4× as many epochs as a deterministically quantized binary network to converge. We use the straight-through estimator (STE) to back-propagate gradients through the quantization step (Bengio et al., 2013). We optionally insert a small (e.g. 1e-2) tunable scalar after the ReLU in hidden layers to compensate for an increase in the L1 norm of the activations due to binarization. Tang et al. (2017) also used this approach to reach similar accuracy gains as those conferred by the more expensive XNOR-Net channel-wise normalization scheme (Rastegari et al., 2016).
²In TensorFlow this can be accomplished with: 2 * Bernoulli(probs=tf.clip_by_value((x + 1.) / 2., 0., 1.)).sample() - 1
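Footnote 2 gives the one-line Bernoulli sampling; a slightly fuller TensorFlow 1.x sketch of the quantizers described above (ours, paraphrasing the scheme rather than reproducing the authors' code) follows. Drawing a uniform sample and comparing it against the hard-sigmoid probability has the same effect as sampling the Bernoulli directly, and the identity-plus-stop_gradient trick implements the straight-through estimator:

    import tensorflow as tf

    def hard_sigmoid(w):
        # Probability of binarizing a real-valued weight to +1.
        return tf.clip_by_value((w + 1.) / 2., 0., 1.)

    def binarize_stochastic(w):
        # One Bernoulli sample per forward pass, mapped from {0, 1} to {-1, +1}.
        p = hard_sigmoid(w)
        b = tf.cast(tf.random_uniform(tf.shape(w)) < p, w.dtype)
        wb = 2. * b - 1.
        # Straight-through estimator: the forward pass sees wb, while the
        # backward pass treats the quantizer as the identity so gradients
        # flow to the accumulated real-valued weights.
        return w + tf.stop_gradient(wb - w)

    def binarize_deterministic(a):
        # Used for activations; note tf.sign maps exact zeros to 0, which a
        # production implementation would break toward +1.
        ab = tf.sign(a)
        return a + tf.stop_gradient(ab - a)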
Convolution kernels were initialized from a truncated normal distribution with σ = 0.2 for accumulating full-precision weight updates, and were quantized to ±1 in the forward pass. Batch normalization was applied before quantizing activations to ensure they were centered around zero (Ioffe & Szegedy, 2015). We report test error rates for these models on MNIST (LeCun & Cortes, 1998) with varying capacity in Table 6 of Appendix A. Capacity is denoted by the number of kernels in the first layer, K_Layer1. All subsequent layers had exactly double this number of kernels. Models were trained for 15 epochs unless indicated otherwise. In general, models with full-precision weights and activations under-fit the naturally occurring data less than a binary equivalent, with error rates of approximately 1% and 2%, respectively. With the addition of the small learned scaling factor, the binary models converge to approximately the same error rate as the full-precision model on MNIST and CIFAR-10. We experiment with three different types of adversarial training, depending on the combination of dataset and attack: FGSM with fixed ε, FGSM with ε sampled from a truncated normal distribution as in Kurakin et al. (2017a), and projected gradient descent (PGD) (Madry et al., 2017), which is the state-of-the-art adversarial training procedure for MNIST. We do not necessarily pair all training methods against all attacks. Unless otherwise noted, the model’s own best prediction is used as the true label to minimize in adversarial training, to prevent the label leaking effect (Kurakin et al., 2017a). We first attempt to fool our binarized networks with single-step attacks in a white-box setting, and progressively scale up to stronger state-of-the-art attacks. All experiments were conducted by seeding the TensorFlow random number generator with the value 1234.
3.1 WHITE-BOX ATTACKS
All experiments were conducted in TensorFlow, and used either v2.0.0 of CleverHans (Papernot et al., 2017a) or Foolbox, a Python toolbox for creating adversarial examples (Rauber et al., 2017). All attacks were clipped to the anticipated input range during adversarial training and evaluation. For single-step attacks, we fix the magnitude ε of the perturbation and attack the whole test set, then report accuracy on the new test set. The general procedure for iterative attacks is to fix the step size per iteration (or learning rate) and the number of iterations. We then report accuracy on the perturbed test set after this many iterations while keeping other hyper-parameters constant.
3.1.1 FAST GRADIENT SIGN METHOD
The FGSM is a simple but effective single-step attack first introduced in Goodfellow et al. (2015), and defined in eq. (1). The attack uses a linear approximation of the loss used to train the model: the gradient of this loss with respect to the input is thresholded by taking its sign, scaled by a uniform constant ε, and added to, or subtracted from, the input, depending on whether we wish to minimize the current class or move in the direction of a target class:

    x_adv = x + ε · sign(∇_x J(θ, x, y))    (1)

To confer robustness to more than one value of ε that an adversary may use to attack, the adversarial training procedure from Kurakin et al. (2017a) proposes to sample a unique ε for each training example from a truncated normal distribution. We set the standard deviation to σ = ceil(ε_max · 255/2).
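Equation (1) translates into a few lines of TensorFlow 1.x. The sketch below is ours, assuming x is the input tensor, logits the model output, and y a one-hot label tensor:

    import tensorflow as tf

    def fgsm(x, logits, y, eps, clip_min=0., clip_max=1.):
        # Untargeted FGSM per eq. (1): step of size eps in the direction of
        # the sign of the gradient of the training loss w.r.t. the input.
        loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits)
        grad, = tf.gradients(loss, x)
        x_adv = tf.stop_gradient(x + eps * tf.sign(grad))
        # Clip to the anticipated input range, as done for all attacks here.
        return tf.clip_by_value(x_adv, clip_min, clip_max)

For a targeted variant, the loss would be computed against the target label and the scaled sign subtracted from the input instead of added.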
We consider up to ε_max = 0.3, as this is a common upper limit for an L∞ norm perturbation that is not easily perceived by humans, and corresponds to a 30% change in pixel intensity for an arbitrary number of pixels. In Table 1, it can be observed that a plain binary network without adversarial training (B) achieves the best robustness to FGSM, with nearly 90% accuracy for ε = 0.1 for the highest capacity model. We postpone a formal explanation of this outlier to the discussion. Our results for large ε agree with observations made by Madry et al. (2017), who found FGSM to be suboptimal for training as it yields a limited set of adversarial examples. We suspect that the reason neither scaled nor unscaled binary models performed well when trained with an adversary and tested on larger values of ε is that, by the time adversarial training was introduced at epoch 10, both had entered a state of decreased learning. Our binary weight implementation makes updates to real-valued weights during training, which are binarized in the forward pass. The real-valued weights tend to polarize as the model converges, resulting in fewer sign changes. Regularization schemes that actually encourage the underlying real-valued weights to polarize around ±1 have been proposed (Tang et al., 2017), but we do not find this to be particularly helpful after sweeping a range of settings for the regularization constant λ. Regardless, in this case, the binary models did not benefit from adversarial training to the same extent that the full-precision models did. We find that adversarial training with binary models is somewhat of a balancing act. If a strong adversary is introduced to the model too early, it may fail to converge for natural inputs. If introduced too late, it may be difficult to bring the model back into its malleable state, where it is willing to flip the sign of its weights. Despite this challenge, the scaled binary model (C+) (see Figure 1 for the location of the optional scalar) reaped significant benefits from adversarial training, and its accuracy was on par with the full-precision model for ε = 0.1. To investigate the low performance observed against large ε in Table 1, models A and C were trained from scratch with 40 iterations of PGD (Madry et al., 2017). Table 2 shows the result of this new training and a subsequent FGSM attack performed identically to that of Table 1. A similar trend was found in Tables 1 and 2, where the lowest capacity models struggle to become robust against large ε. Once the scaled binary model had sufficient capacity, it actually slightly outperformed its full-precision equivalent for all values of ε. With this, we have demonstrated that not only can BNNs achieve competitive accuracy on clean inputs with significantly fewer resources, but they can also allocate excess capacity in response to state-of-the-art adversaries.
3.1.2 CARLINI-WAGNER ATTACK (CARLINI & WAGNER, 2017)
The Carlini-Wagner L2 attack (CWL2) is an iterative process guided by an optimizer such as Adam that produces strong adversarial examples by simultaneously minimizing distortion and manipulating the logits per the attack goal. We use the implementation from CleverHans (Papernot et al., 2017a) and show results in Table 3 and Figure 2. Only binary models are shown in Table 3 because all but two full-precision models had zero accuracy after running CWL2 for 100 iterations. The best full-precision model was A256+ with 1.8±0.9% accuracy.
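For reference, invoking this attack through a CleverHans v2-style interface looks roughly as follows. This is a hypothetical sketch, not the exact configuration used here: model is assumed to be a CleverHans model wrapper, sess an active TensorFlow session, and the keyword names reflect our reading of that release and may differ in other versions:

    # Hypothetical invocation sketch; hyper-parameters are illustrative only.
    from cleverhans.attacks import CarliniWagnerL2

    cw = CarliniWagnerL2(model, sess=sess)
    x_adv = cw.generate_np(x_test,
                           max_iterations=100,  # also evaluated at 1000 below
                           batch_size=128,
                           clip_min=0., clip_max=1.)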
We note that the stochastically quantized binary models with scaling to prevent gradient masking (‘S’ prefix) underfit somewhat on the training set, and had test error rates of 8±1%, 5±2%, and 3±1% for each of S64, S128, and S256, averaged over four runs. For S256, this test error can be compared with an unscaled binary model, which only achieves 22±3% accuracy with gradient masking compared to 46±3% without. In Figure 2, it can be observed that binary and full-precision models perform somewhat similarly for the first few iterations of the CWL2 attack, but beyond 10–20 iterations, the accuracy of full-precision models drops off quickly, regardless of having performed adversarial training. We note that PGD, defined with respect to the L∞ norm, makes no claim of increasing robustness to L2 attacks such as CWL2. Interestingly, it can be seen that the binary model benefited considerably from adversarial training when evaluated at 10 to 100 attack iterations, while the full-precision model did not. These benefits eventually disappear to within the margin of random error after continuing to 1000 iterations, as recommended by Carlini & Wagner (2017). At this point, both B and B+ had accuracy of 19±3%, by which time the full-precision models had long flatlined at zero. Meanwhile, S64 maintained 38±3% accuracy after 1000 iterations, nearly double that of the deterministically quantized models. Running these attacks to 1000 iterations was two orders of magnitude more time-consuming than training these models from scratch (without PGD training); therefore, we believe this targeted attack represents a fairly substantial level of effort on behalf of the adversary.
3.2 BLACK-BOX ATTACKS
We run the substitute model training procedure from Papernot et al. (2017b) using CleverHans v2.0.0, for both the MNIST and CIFAR-10 datasets, with and without FGSM adversarial training. As a substitute model, we use a two-layer MLP with 200 hidden units and ReLU activations. The substitute is trained on 150 images withheld from the test set, and augmented by perturbing the images in the direction of maximal variability of the substitute model, as defined by the Jacobian. Six epochs of data augmentation with λ = 0.1 were used in combination with 10 substitute model training epochs after each augmentation step. The Oracle was again trained for 15 epochs for MNIST, and 20 epochs for CIFAR-10. Results for the black-box experiment on the MNIST dataset are shown in Table 4. Full-precision networks had a moderate advantage over the undefended binary models B and C. Only the highest capacity full-precision model benefited from FGSM adversarial training, while the scaled binary model benefited regardless of capacity. There was a small positive relationship between accuracy and capacity for both A and C when trained with PGD, and there was almost no loss in accuracy in this setting after binarization. PGD was more effective than stochasticity here, as it leads to learning a better decision boundary rather than confusing an adversary with dynamic gradient information.
4 DISCUSSION
We suspect that plain BNNs implement two different kinds of gradient masking. We discovered the first by tracking the L1 norm of the hidden layer activations and unscaled logits during training. BNNs operate with larger range and variance than ‘normal’ networks, which can be explained by virtue of convolving inputs with weights of greater magnitude (±1) compared with the typically small values taken by full-precision weights and activations.
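This kind of tracking needs only a summary op per tensor. A sketch (ours, not the paper's code), assuming hidden_acts is a list of hidden-layer activation tensors and logits the unscaled output:

    import tensorflow as tf

    def l1_norm_summaries(hidden_acts, logits):
        # Mean (over the batch) L1 norm of each hidden activation tensor
        # and of the unscaled logits, for monitoring during training.
        def mean_l1(t):
            flat = tf.reshape(t, [tf.shape(t)[0], -1])
            return tf.reduce_mean(tf.reduce_sum(tf.abs(flat), axis=1))
        summaries = [tf.summary.scalar('l1_act_%d' % i, mean_l1(h))
                     for i, h in enumerate(hidden_acts)]
        summaries.append(tf.summary.scalar('l1_logits', mean_l1(logits)))
        return tf.summary.merge(summaries)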
For our 64-kernel CNN, the logits were about 4× larger than those of the scaled or full-precision networks. This is analogous to the more complex defensive distillation procedure, in which the model to be secured is trained with soft labels generated by a teacher model. When training the teacher, a softmax temperature T ≫ 1 is used. The distilled model is trained on the labels assigned by the teacher using the same T. At test time, the model is deployed with T = 1, which causes the logits to explode with respect to their learned values. The logits saturate the softmax function and cause gradients to vanish, leading FGSM and JSMA to fail at a higher rate. However, this defense is defeated with a close enough guess for T, or via a black-box attack (Carlini & Wagner, 2017). The second type of gradient masking is less easily overcome, and has to do with gradients being inherently discontinuous and non-smooth, as seen in Figure 3 of Appendix B. We believe that this effect is what gives scaled BNNs an advantage over full-precision models with respect to targeted attacks. Even more importantly, through a regularization effect, the decision boundary for the MLP with binary units (Figure 3) better represents the actual function to be learned, and is less susceptible to adversarial examples. But why does gradient masking have a disproportionate effect when attacking compared with training on clean inputs? Models ‘A’ and ‘B’ were trained to within 1.2% test accuracy, while ‘B’ had improvements of 9.0% and 29.5% on the JSMA and CWL2 attacks respectively, corresponding to 8× and 25× differences in accuracy for adversarial vs. clean inputs. For JSMA, the performance gap can be attributed to the sub-optimality of the attack, as it uses logits rather than softmax probabilities. Furthermore, to achieve its L0 goal, pairs of individual pixels are manipulated, which is a noisy process in a binarized model. The success of model ‘S’ with stochastically quantized weights in its third convolutional layer against iterative attacks is more easily explained. Adversarial examples are not random noise, and do not occur in random directions. In fact, neural networks are extremely robust to large amounts of benign noise. An iterative attack that attempts to fool our stochastically quantized model faces a unique model at every step, with unique gradients. Thus, the direction that minimizes the probability of the true class in the first iteration is unlikely to be the same in the second. An iterative attack making n steps is essentially attacking an ensemble of n models. By making a series of small random steps, the adversary is sent on the equivalent of a wild goose chase and has a difficult time making progress in any particularly relevant direction to cause an adversarial example.
5 CONCLUSION
We have shown that for binarized neural networks, difficulty in training leads to difficulty when attacking. Although we did not observe a substantial improvement in robustness to single-step attacks through binarization, by introducing stochasticity we have reduced the impact of the strongest attacks. Stochastic quantization is clearly far more computationally and memory efficient than a traditional ensemble of neural networks, and could be run entirely on a micro-controller with a pseudo-random number generator. Our adversarial accuracy on MNIST against the best white-box attack (CWL2) is 71±2% (S64+), compared with 1.8±0.9% for the best full-precision model (A256+).
Black-box results were competitive between binary and full-precision models on MNIST, and binary models were slightly more robust for CIFAR-10, which we attribute to their improved regularization. Beyond their favourable speed and resource usage, we have demonstrated another benefit of deploying binary neural networks in industrial settings. Future work will consider other types of low-precision models as well as other adversarial attack methods.
ACKNOWLEDGMENTS
The authors wish to acknowledge the financial support of NSERC, CFI and CIFAR. The authors also acknowledge hardware support from NVIDIA and Compute Canada. We thank Brittany Reiche for helpful edits and suggestions that improved the clarity of our manuscript.
A CLEAN TEST ERROR RATES
B MLP TOY AND PROBLEM
We reproduce the toy problem in Papernot et al. (2015) of learning the two-input logical AND function with a simple MLP having two neurons in each layer. The only difference between our experiment and the original is that we train a 3-hidden-layer MLP (as opposed to 2 layers) with the Adam optimizer for 1k epochs, with a learning rate of 0.1. We use 3 layers since this is the smallest number of layers where the middle one can be quantized without directly touching the input or output, which would adversely impact learning. Here, a “quantized” layer means that its weights and activations are thresholded to +1 and -1, and a straight-through estimator (Bengio et al., 2013) is used to backpropagate gradients for learning. All configurations in the AND experiment learn a reasonable decision boundary; however, the MLPs with a single quantized hidden layer had highly non-linear forward gradients, as can be seen in Figure 3(d). As training progresses, the forward derivative was highly dynamic and took on a variety of different shapes with sharp edges and peaks. When the MLP was allowed more capacity by doubling the number of hidden units (see Figure 4), the forward derivative was almost entirely destroyed. If one were to use this information to construct a saliency map, only two regions would be proposed (with poor directional information), and once exhausted there would be no further choices more insightful than random guessing.
C VISUALIZING LOGITS WITH SCALING FACTORS
In Figure 5 we compare the logits of full-precision and binary networks under varying degrees of FGSM perturbation. We noticed that for softmax temperatures T between 0.6–0.7, the direction in which increasing the perturbation causes an adversarial example flips. We observe no similar effect for full-precision models. Additionally, the full-precision logits respond to scaling in an approximately linear manner, whereas there is very little change in the logits for the binary case apart from the 180-degree flip. We used values of ε in the range of actual attacks conducted in the paper; however, the piecewise-linear effect from Goodfellow et al. (2015) is still present for ε with large absolute value.
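The saturation effect behind this temperature sensitivity can be illustrated in a few lines of NumPy (ours; the logit values are illustrative and not taken from Figure 5): large-magnitude, BNN-style logits push the softmax into its flat region at T = 1, shrinking the input gradient that attacks like FGSM rely on.

    import numpy as np

    def softmax(z):
        z = z - np.max(z)  # shift for numerical stability
        e = np.exp(z)
        return e / np.sum(e)

    logits = np.array([12.0, -3.0])  # illustrative large-magnitude logits
    for T in (0.5, 0.6, 0.7, 1.0, 2.0):
        p = softmax(logits / T)
        # Diagonal of the softmax Jacobian w.r.t. the raw logits: p0*(1-p0)/T.
        dp0_dz0 = p[0] * (1.0 - p[0]) / T
        print('T=%.1f  p=%s  dp0/dz0=%.2e' % (T, np.round(p, 4), dp0_dz0))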
1. What is the main contribution of the paper regarding adversarial attacks on neural networks?
2. What are the strengths and weaknesses of the proposed approach in terms of robustness, training, and deployment?
3. How does the paper compare binary neural networks (BNNs) and normal networks (NNs) in terms of their vulnerabilities to adversarial attacks?
4. What are the limitations of the experimental results, especially in terms of dataset choice and attack varieties?
5. How might the findings of this paper impact future research on adversarial attacks and defenses in deep learning?
Review
Review This paper starts by gently going over the concept of adversarial attacks on neural networks (black box vs white box, reactive vs proactive, transfer of attacks, linearity hypothesis), as well as low-precision nets and their deployment advantages. Adversarial examples are introduced as a norm-measurable deviation from natural inputs to a system. We are reminded of adversarial training, and of the fact that binarized nets are highly non-linear due to the nature of their weights and activations. This paper then proposes to examine the robustness of binarized neural networks to adversarial attacks on MNIST and CIFAR-10. The quantization scheme used here is full-precision Conv2D -> ReLU -> BNorm -> sign -> binary Conv2D -> ReLU -> Scalar -> BNorm -> sign, but sign is really done with the stochastic quantization method of Courbariaux et al., even at test time (in order to make it more robust).
What the experimental results show:
- High capacity BNNs are usually more robust to white-box attacks than normal networks, probably because the gradient information that an adversary would use becomes very poor as training progresses.
- BNNs are harder to properly train with adversarial examples because of the polarized weight distribution that they induce.
- Against black-box attacks, it seems there is little difference between NNs and BNNs.
Some comments and questions:
- In figure 1 I'm not sure what "Scalar" refers to, and it is not explained in the paper (nor could I find it explained in Papernot et al. 2017a).
- Do you adopt the "Shift based Batch Normalizing Transform" of Courbariaux et al? If not, why?
- It might be worth at least _quickly_ explaining what the 'Carlini-Wagner L2 from CleverHans' is rather than simply offering a citation with no explanation. Idem for 'smooth substitute model black-box misclassification attack'. We often assume our readers know most of what we know, but I find this is often not the case and can discourage the many newcomers to our field.
- "Running these attacks to 1000 iterations [...], therefore we believe this targeted attack represents a fairly substantial level of effort on behalf of the adversary." While true for us researchers, computational difficulty will not be a criterion to stop for state actors or multinational tech companies, unless it can be proven that e.g. the number of iterations needs to grow exponentially (or in some other unreasonable way) in order to get reliable attacks.
- "MLP with binary units 3 better", 'as in Fig.' is missing before '3', or something of the sort.
- You say "We postpone a formal explanation of this outlier for the discussion." but really you're explaining it in the next paragraph (unless there's also another explanation I'm missing). Training BNNs with adversarial examples is hard.
- You compare stochastic BNNs with deterministic NNs, but not with stochastic NNs. What do you think would happen? Some of the arguments that you make in favour of BNNs could also maybe be applied to stochastic NNs.
My opinions on this paper:
- Novelty: it certainly seems to be the first time someone has tackled BNNs and adversarial examples.
- Relevance: BNNs can be a huge deal when deploying applications, so it makes sense to study their vulnerabilities.
- Ease of understanding: To me the paper was mostly easy to understand; yet, considering there is no page limit in this conference, I would have buffed up the appendix, e.g. to include more details about the attacks used and how various hyperparameters affect things.
- Clarity: I feel like some details are lacking, which would hinder reproducing and extending the work presented here. Mostly, it isn't always clear why the procedures and hyperparameters were chosen (with respect to the model being a BNN).
- Method: I'm concerned by the use of MNIST to study such a problem. MNIST is almost linearly separable, has few examples, and given the current computational landscape, much better alternatives are available (SVHN for example if you wish to stay in the digits domain). Concerning black-box attacks, it seems that BNNs are less beneficial in a way; trying more types of attacks and/or delving a bit deeper into that would have been nice. The CIFAR-10 results are barely discussed.
Overall I think this paper is interesting and relevant to ICLR. It could have stronger results both in terms of the datasets used and the variety of attacks tested, as well as some more details concerning how to perform adversarial training with BNNs (or why that's not a good idea).
1. What is the focus of the paper regarding low-precision neural networks? 2. What are the strengths of the proposed approach, particularly in terms of robustness and efficiency? 3. What are the weaknesses of the paper, especially regarding the experiments and comparisons with other works? 4. Do you have any concerns or suggestions regarding the analysis and discussion of the results? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review 1) Summary This paper proposes a study on the robustness of one class of low-precision neural networks - binarized neural networks (BNNs) - against adversarial attacks. Specifically, the authors show that these low-precision networks are not just efficient in terms of memory consumption and forward computation, but also more immune to adversarial attacks than their high-precision counterparts. In experiments, they show the advantage of BNNs by conducting experiments based on black-box and white-box adversarial attacks without the need to artificially mask gradients. 2) Pros: + Introduced, studied, and supported the novel idea that BNNs are robust to adversarial attacks. + Showed that BNNs are robust to the Fast Gradient Sign Method (FGSM) and Carlini-Wagner attacks in white-box adversarial attacks by presenting evidence that BNNs either outperform or perform similar to the high-precision baseline against the attacks. + Insightful analysis and discussion of the advantages of using BNNs against adversarial attacks. 3) Cons: Missing full-precision model trained with PGD in section 3.2: The authors mention that the full-precision model would also likely improve with PGD training, but do not have the numbers. It would be useful to have such numbers to make a better evaluation of the BNN performance in the black-box attack setting. Additional comments: Can the authors provide additional analysis on why BNNs perform worse than full-precision networks against black-box adversarial attacks? This could be insightful information that this paper could provide if possible. 4) Conclusion: Overall, this paper provides insightful information about BNNs that shows the additional benefit of using them besides lower memory consumption and efficient computation. This paper shows that the architecture used for BNNs makes them less susceptible to known white-box adversarial attack techniques.
ICLR
Title Attacking Binarized Neural Networks Abstract Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.1 1 INTRODUCTION The ability to fool machine learning models by making small changes to their input severely limits their potential for safe use in many real-world scenarios. Example vulnerabilities include a seemingly innocuous audio broadcast that is interpreted by a speech recognition model in a smartphone, with the intent to trigger an e-transfer, as well as pictures or identity documents that are automatically tagged as someone other than the real individual. The two most common threat models when evaluating the security of a system are the black-box and white-box assumptions, which represent varying degrees of information that an adversary may possess. In a black-box threat model, an adversary has similar abilities to a normal user that interacts with a system by providing inputs and observing the corresponding outputs. Under this threat model, an adversary generally does not know details of the model architecture or dataset used to train the model. Of course, an adversary is free to assume that a convolutional architecture was likely used if the input domain is images, or a recurrent model for speech or text. In a white-box threat model, an adversary has complete access to the model architecture and parameters. In the case of neural networks, white-box attacks frequently rely on gradient information to craft especially strong adversarial examples, where strong means that the example is very close to the original input as defined by some distance norm (e.g. L0–number of features modified, L2–mean squared distance), yet is very likely to cause the model to yield the incorrect output. For both threat types, targeted attacks where a model is made to fail in a specific way (e.g. causing a handwritten ‘7’ to look like a ‘3’) represent a stronger attack than simple misclassification. The problem with deploying machine learning systems that are secured in a traditional sense is that adversarial examples have been shown to generalize well between models with different source and target architectures (Szegedy et al., 2013; Papernot et al., 2017b; Tramèr et al., 2017). This means that a secured model can be compromised in an approximately white-box setting by training and attacking a substitute model that approximates the decision boundary of the model under attack (Papernot et al., 2017b).
Thus, to make strong conclusions about the robustness of a machine learning model to adversarial attacks, both threat models should be considered. 1Source code available at https://github.com/AngusG/cleverhans-attacking-bnns Tangent to research on defences against adversarial attacks, significant progress has been made towards training very low-precision deep neural networks to accuracy levels that are competitive with full-precision models (Courbariaux & Bengio, 2016; Zhou et al., 2016; Tang et al., 2017). The current motivation for extreme quantization is the ability to deploy these models under hardware resource constraints, acceleration, or reduced power consumption. Ideally, 32× compression is possible by using 1 bit to represent single-precision floating point parameters. By similarly quantizing activations, we can reduce run-time memory consumption as well. These savings enable large scale deployment of neural networks on the billions of existing embedded devices. Very low-precision models were designed with deployment in mind, and may be responsible for making critical decisions in embedded systems, all subject to reverse engineering and a diverse set of real world attacks. With much at stake in applications like autonomous navigation, robotics, and network infrastructure, understanding how very low-precision neural networks behave in adversarial settings is essential. To that end, we make the following contributions: • To the best of our knowledge, we are the first to formally evaluate and interpret the robustness of binary neural networks (BNNs) to adversarial attacks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky, 2009) datasets. • We compare and contrast the properties of low-precision neural networks that confer adversarial robustness to previously proposed defense strategies. We then combine these properties to propose an optimal defense strategy. • We attempt to generalize and make recommendations regarding the suitability of low-precision neural networks against various classes of attacks (e.g. single step vs. iterative). 2 BACKGROUND Since the initial disclosure of adversarial examples by Szegedy et al. (2013) and Biggio et al. (2013), many defense strategies have been proposed and subsequently defeated. It is generally accepted that strategies for mitigating the impact of these examples still lag behind state of the art attacks, which are capable of producing adversarial examples that are indistinguishable from unmodified inputs as perceived by humans. In general, there are two approaches to defending against adversarial examples: a reactive approach detects the presence of adversarial examples, for example through some notion of confidence-based outlier detection, while a proactive approach aims to improve the robustness of the underlying model, which may involve adding an extra class to which malicious inputs should be assigned (Papernot & McDaniel, 2017). The latter approach is important for building reliable systems where a sensible decision must be made at all times. In this work, we focus solely on the proactive approach. To define adversarial examples, we require some measurement of distance that can be computed between perturbed inputs and naturally occurring inputs. In the visual domain, it is convenient if the metric approximates human perceptual similarity, though this is not required.
Various Lp norms have been used in the literature: L0–number of features modified, L2–mean squared distance, L∞–limited only by the maximum perturbation applied to any feature. We evaluate at least one attack that is cast in terms of each respective distance metric, and leave discussion of the optimal distance metric to future work. The most compelling explanation for the existence of adversarial examples proposed to date is the linearity hypothesis (Goodfellow et al., 2015). The elementary operators, matrix dot-products and convolutions, used at each layer of a neural network are fundamentally too linear. Furthermore, the non-linearity applied at each layer is usually itself either piecewise linear (e.g. ReLU), or we have specifically encouraged the network through initialization or regularization to have small weights and activations such that its units (e.g. sigmoid, tanh) operate in their linear regions. By adding noise to inputs which is highly correlated with the sign of the model parameters, a large swing in activation can be induced. Additionally, the magnitude by which this noise must be scaled to have this effect tends to diminish as the input dimensionality grows. This piecewise linearity also makes neural networks easy to attack using the gradient of the output with respect to the input, and consequently, the resulting incorrect predictions are made with high confidence. Fortunately, we are reminded that the universal approximation theorem suggests that given sufficient capacity, a neural network should at least be able to represent the type of function that resists adversarial examples (Goodfellow et al., 2015). The most successful defense mechanism to date, adversarial training, is based on this premise, and attempts to learn such a function. The fast gradient sign method (FGSM) is one such procedure for crafting this damaging noise, and is still used today despite not being state-of-the-art in the white-box setting, as it is straightforward to compute, and yields examples that transfer well between models (Goodfellow et al., 2015). The linearity hypothesis was one of the main reasons for initially considering binarized neural networks as a natural defense against adversarial examples. Not only are they highly regularized by default through severely quantized weights, but they appear to be more non-linear and discontinuous than conventional deep neural networks (DNNs). Additionally, we suspect that the same characteristics making them challenging to train make them difficult to attack with an iterative procedure. At the same time, assumptions regarding the information required by an effective adversary have become more and more relaxed, to the extent that black-box attacks can be especially damaging with just a small set of labeled input-output pairs (Papernot et al., 2017b). Perhaps the most striking feature of adversarial examples is how well they generalize between models with different architectures while trained on different datasets (Goodfellow et al., 2015; Papernot et al., 2017b; Kurakin et al., 2017a). It was shown by Kurakin et al. (2017b) that 2/3 of adversarial ImageNet examples survive various camera and perspective transformations after being printed on paper and subsequently photographed and classified by a mobile phone. The most successful black-box attacks have the secured model (Oracle) assign labels to a set of real or synthetic inputs, which can be used to train a substitute model that mimics the Oracle’s decision boundary (Papernot et al., 2017b).
A single step attack, such as FGSM, can be used on the smooth substitute model to generate examples that transfer, without having access to the original training data, architecture, or training procedure used by the Oracle. Papernot et al. (2017b) showed they are able to compromise machine learning models 80% of the time on small datasets like MNIST using various shallow MLP-based substitute models. There is not a particularly high correlation between test accuracy and transferability of adversarial examples; therefore, despite not attaining great results on the original MNIST task, a simple substitute learns enough to compromise the Oracle. This technique was shown to overcome gradient masking approaches, such as in the case with models that either obscure or have no gradient information, such as k-nearest neighbors or decision trees. With strong adversarial training of the model to be defended, attacks generated using the substitute model do not transfer as well. Therefore, to be compelling, BNNs should be able to handle training with large ε while maintaining competitive test accuracy on clean inputs relative to full-precision. The strongest white-box attacks all use an iterative procedure; however, the resulting examples do not transfer as well as single step methods (Goodfellow et al., 2015). An iterative attack using the Adam optimizer was proposed by Carlini & Wagner (2017) that outperforms other expensive optimization-based approaches (Szegedy et al., 2013), the Jacobian-based saliency map attack (JSMA) (Papernot et al., 2015), and Deepfool (Moosavi-Dezfooli et al., 2015) in terms of three Lp norms previously used as adversarial example distance metrics in the literature. We have made our best attempt to use state-of-the-art attacks in our experiments. 3 EXPERIMENTS In Figure 1, we depict the quantization scheme applied to the base convolutional neural network provided in the CleverHans library tutorials (Papernot et al., 2017a). In the first layer, we retain weights and activations in single-precision floating point. Weights in hidden layers are binarized either deterministically or stochastically, as in Courbariaux & Bengio (2016), and activations were always binarized deterministically. Unlike in Courbariaux & Bengio (2016), we stochastically quantize weights at test time as a possible defense against iterative attacks. Under the stochastic binarization scheme, weights are sampled once per forward pass from a Bernoulli distribution with probability given by passing the real valued weight through the hard sigmoid function from Courbariaux & Bengio (2016). Lastly, we map the Bernoulli samples ∈ [0, 1] to ±1 by multiplying by 2 and subtracting 1.² We do not find that this significantly slows down training with TensorFlow (Abadi et al., 2015) on a modern GPU, but these networks take between 3–4× as many epochs as a deterministically quantized binary network to converge. We use the straight through estimator (STE) to back-propagate gradients through the quantization step (Bengio et al., 2013). We optionally insert a small (e.g. 1e-2) tunable scalar after the ReLU in hidden layers, to compensate for an increase in the L1 norm of the activations due to binarization. Tang et al. (2017) also used this approach to reach similar accuracy gains as those conferred by the more expensive XNOR-Net channel-wise normalization scheme (Rastegari et al., 2016).
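To make the stochastic binarization scheme concrete, the following is a minimal PyTorch sketch of the forward pass, assuming the hard sigmoid and straight-through estimator described above; it is our illustration, not the authors' TensorFlow code, and all names are ours.

import torch

def hard_sigmoid(x):
    # Hard sigmoid from Courbariaux & Bengio (2016): clip((x + 1) / 2, 0, 1).
    return torch.clamp((x + 1.0) / 2.0, 0.0, 1.0)

def stochastic_binarize(w):
    # Sample each weight from a Bernoulli with P(+1) = hard_sigmoid(w),
    # then map the samples in {0, 1} to {-1, +1}.
    wb = 2.0 * torch.bernoulli(hard_sigmoid(w)) - 1.0
    # Straight-through estimator: the forward pass uses the binarized wb,
    # while the backward pass treats the quantizer as the identity so that
    # gradients reach the underlying real-valued weights w.
    return w + (wb - w).detach()

Because the weights are resampled on every forward pass, repeatedly calling stochastic_binarize at test time yields the implicit ensemble behavior discussed in Section 4.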
Convolution kernels were initialized from a truncated normal distribution with σ=0.2 for accumulating full-precision weight updates, and were quantized to ±1 in the forward pass. Batch normalization was applied before quantizing activations to ensure they were centered around zero (Ioffe & Szegedy, 2015). We report test error rates for these models on MNIST (LeCun & Cortes, 1998) with varying capacity in Table 6 of Appendix A. Capacity is denoted by the number of kernels in the first layer, KLayer1. All subsequent layers had exactly double this number of kernels. Models were trained for 15 epochs unless indicated otherwise. In general, models with full-precision weights and activations under-fit the naturally occurring data less than a binary equivalent, with error rates of approximately 1% and 2%, respectively. With the addition of the small learned scaling factor, the binary models converge to approximately the same error rate as the full-precision model on MNIST and CIFAR-10. We experiment with three different types of adversarial training, depending on the combination of dataset and attack: FGSM with fixed ε, FGSM with ε sampled from a truncated normal distribution as in Kurakin et al. (2017a), and projected gradient descent (PGD) (Madry et al., 2017), which is the state-of-the-art adversarial training procedure for MNIST. We do not necessarily pair all training methods against all attacks. The model’s own best prediction is used as the true label to minimize in adversarial training, unless otherwise noted, to prevent the label leaking effect (Kurakin et al., 2017a). We first attempt to fool our binarized networks with single step attacks in a white-box setting, and progressively scale up to stronger state-of-the-art attacks. All experiments were conducted by seeding the TensorFlow random number generator with the value 1234. 3.1 WHITE-BOX ATTACKS All experiments were conducted in TensorFlow, and used either v2.0.0 of CleverHans (Papernot et al., 2017a), or Foolbox, a Python toolbox for creating adversarial examples (Rauber et al., 2017). All attacks were clipped to the anticipated input range during adversarial training and evaluation. For single step attacks, we fix the magnitude of the perturbation and attack the whole test set, then report accuracy on the new test set. The general procedure for iterative attacks is to fix the step size per iteration or learning rate, and the number of iterations. We then report accuracy on the perturbed test set after this many iterations while keeping other hyper-parameters constant. 3.1.1 FAST GRADIENT SIGN METHOD The FGSM is a simple but effective single step attack first introduced in Goodfellow et al. (2015), and defined in eq (1). The attack linearly approximates the gradient of the loss used to train the model with respect to the input. The gradient is thresholded by taking its sign, scaled by a uniform constant ε, and added to, or subtracted from, the input, depending on whether we wish to minimize the current class, or move in the direction of a target class: x_adv = x + ε × sign(∇x J(θ, x, y)) (1) ²In TensorFlow this can be accomplished with: 2 * Bernoulli(probs=tf.clip_by_value((x + 1.) / 2., 0., 1.)).sample() - 1 To confer robustness to more than one value of ε with which an adversary may attack, the adversarial training procedure from Kurakin et al. (2017a) proposes to sample a unique ε for each training example from a truncated normal distribution. We set the standard deviation to σ = ceil(εmax · 255/2).
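As a concrete reading of eq (1), the following is a minimal PyTorch sketch of an untargeted FGSM attack; it is our illustration rather than the CleverHans implementation used in the paper, and it assumes inputs in [0, 1] so the clipping matches the anticipated input range mentioned above.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # x_adv = x + eps * sign(grad_x J(theta, x, y)), eq (1).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # Clip to the anticipated input range, as done for all attacks here.
    return x_adv.clamp(0.0, 1.0).detach()

For a targeted variant, the signed gradient of the loss toward the target class would instead be subtracted from the input.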
We consider up to εmax = 0.3, as this is a common upper limit for an L∞ norm perturbation that is not easily perceived by humans, and corresponds to a 30% change in pixel intensity for an arbitrary number of pixels. In Table 1, it can be observed that a plain binary network without adversarial training (B) achieves the best robustness to FGSM, with nearly 90% accuracy for ε = 0.1 for the highest capacity model. We postpone a formal explanation of this outlier for the discussion. Our results for large ε agree with observations made by Madry et al. (2017) where they found FGSM to be suboptimal for training as it yields a limited set of adversarial examples. We suspect that the reason neither scaled nor unscaled binary models performed well when trained with an adversary and tested on larger values of ε is because by the time adversarial training was introduced at epoch 10, both had entered into a state of decreased learning. Our binary weight implementation makes updates to real valued weights during training, which are binarized in the forward pass. The real valued weights tend to polarize as the model converges, resulting in fewer sign changes. Regularization schemes actually encouraging the underlying real valued weights to polarize around ±1 have been proposed (Tang et al., 2017), but we do not find this to be particularly helpful after sweeping a range of settings for the regularization constant λ. Regardless, in this case, the binary models did not benefit from adversarial training to the same extent that the full-precision models did. We find that adversarial training with binary models is somewhat of a balancing act. If a strong adversary is introduced to the model too early, it may fail to converge for natural inputs. If introduced too late, it may be difficult to bring the model back into its malleable state, where it is willing to flip the sign of its weights. Despite this challenge, the scaled binary model (C+) (see Figure 1 for the location of the optional scalar) reaped significant benefits from adversarial training and its accuracy was on par with the full-precision model for ε = 0.1. To investigate the low performance observed against large ε in Table 1, models A and C were trained from scratch with 40 iterations of PGD (Madry et al., 2017). Table 2 shows the result of this new training and subsequent FGSM attack performed identically to that of Table 1. A similar trend was found in Tables 1 and 2, where the lowest capacity models struggle to become robust against large ε. Once the scaled binary model had sufficient capacity, it actually slightly outperforms its full-precision equivalent for all values of ε. With this, we have demonstrated that not only can BNNs achieve competitive accuracy on clean inputs with significantly fewer resources, but they can also allocate excess capacity in response to state-of-the-art adversaries. 3.1.2 CARLINI-WAGNER ATTACK (CARLINI & WAGNER, 2017) The Carlini-Wagner L2 attack (CWL2) is an iterative process guided by an optimizer such as Adam, that produces strong adversarial examples by simultaneously minimizing distortion and manipulating the logits per the attack goal. We use the implementation from CleverHans (Papernot et al., 2017a) and show results in Table 3 and Figure 2. Only binary models are shown in Table 3 because all but two full-precision models had zero accuracy after running CWL2 for 100 iterations. The best full-precision model was A256+ with 1.8±0.9% accuracy.
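For intuition about what the CWL2 optimization is doing, the following is a simplified untargeted sketch in PyTorch; the real attack additionally uses a tanh change of variables and a binary search over the constant c (Carlini & Wagner, 2017), both omitted here, and all names are ours.

import torch

def cw_l2(model, x, y, c=1.0, steps=100, lr=0.01):
    # Jointly minimize L2 distortion and a logit-margin term that is
    # positive while the model still predicts the true class y.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        logits = model(x_adv)
        true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        other = logits.scatter(1, y.unsqueeze(1), float('-inf')).max(dim=1).values
        margin = torch.clamp(true - other, min=0.0)
        loss = (delta.flatten(1).norm(dim=1) ** 2 + c * margin).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0.0, 1.0).detach()

Each step queries the model once (the paper runs up to 1000 iterations), which is why attacking the stochastically quantized models effectively attacks a fresh ensemble member per step.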
We note that the stochastically quantized binary models with scaling to prevent gradient masking (‘S’ prefix) underfit somewhat on the training set, and had test error rates of 8±1%, 5±2%, and 3±1% for S64, S128, and S256, respectively, averaged over four runs. For S256, this test error can be compared with an unscaled binary model which only achieves 22±3% accuracy with gradient masking compared to 46±3% without. In Figure 2, it can be observed that binary and full-precision models perform somewhat similarly for the first few iterations of the CWL2 attack, but beyond 10–20 iterations, the accuracy of full-precision models drops off quickly, regardless of having performed adversarial training. We note that PGD, defined with respect to the L∞ norm, makes no claim of increasing robustness to L2 attacks, such as CWL2. Interestingly, it can be seen that the binary model benefited from adversarial training considerably when evaluated at 10 to 100 attack iterations, while the full-precision model did not. These benefits eventually disappear to within the margin of random error after continuing to 1000 iterations, as recommended by Carlini & Wagner (2017). At this point, both B and B+ had accuracy of 19±3%, by which time the full-precision models had long flatlined at zero. Meanwhile, S64 maintained 38±3% accuracy after 1000 iterations, nearly double that of the deterministically quantized models. Running these attacks to 1000 iterations was two orders of magnitude more time-consuming than training these models from scratch (without PGD training); therefore we believe this targeted attack represents a fairly substantial level of effort on behalf of the adversary. 3.2 BLACK-BOX ATTACKS We run the substitute model training procedure from Papernot et al. (2017b) using CleverHans v2.0.0, for both MNIST and CIFAR-10 datasets with and without FGSM adversarial training. As a substitute model, we use a two-layer MLP with 200 hidden units and ReLU activations. The substitute is trained on 150 images withheld from the test set, and augmented by perturbing the images in the direction of maximal variability of the substitute model, as defined by the Jacobian. Six epochs of data augmentation with λ = 0.1 were used in combination with 10 substitute model training epochs after each augmentation step. The oracle was again trained for 15 epochs for MNIST, and 20 epochs for CIFAR-10. Results for the black-box experiment on the MNIST dataset are shown in Table 4. Full-precision networks had a moderate advantage over undefended binary models B and C. Only the highest capacity full-precision model benefited from FGSM adversarial training, while the scaled binary model benefited regardless of capacity. There was a small positive relationship between accuracy and capacity for both A and C when trained with PGD, and there was almost no loss in accuracy in this setting after binarization. PGD was more effective than stochasticity here as it leads to learning a more optimal decision boundary, rather than confusing an adversary with dynamic gradient information. 4 DISCUSSION We suspect that plain BNNs implement two different kinds of gradient masking. We discovered the first by tracking the L1 norm of the hidden layer activations and unscaled logits. BNNs operate with larger range and variance than ‘normal’ networks, which can be explained by virtue of convolving inputs with weights of greater magnitude (±1) compared with the typically small values taken by full-precision weights and activations.
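The Jacobian-based dataset augmentation used to train the substitute can be sketched as follows; this is our minimal PyTorch illustration of the step from Papernot et al. (2017b), with placeholder names, not the CleverHans code used in the experiments.

import torch

def jacobian_augment(substitute, x, oracle_labels, lam=0.1):
    # Perturb each held-out image in the direction that most changes the
    # substitute's score for the label assigned by the oracle, then add the
    # new points to the substitute's training set (to be labeled by the oracle).
    x = x.clone().detach().requires_grad_(True)
    scores = substitute(x).gather(1, oracle_labels.unsqueeze(1)).sum()
    scores.backward()
    x_new = (x + lam * x.grad.sign()).detach()
    return torch.cat([x.detach(), x_new], dim=0)

Here lam plays the role of the λ = 0.1 above, and each augmentation epoch doubles the size of the substitute's training set.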
For our 64 kernel CNN, the logits were about 4× larger than those of the scaled or full-precision networks. This is analogous to the more complex defensive distillation procedure in which the model to be secured is trained with soft-labels generated by a teacher model. When training the teacher, a softmax temperature T ≫ 1 is used. The distilled model is trained on the labels assigned by the teacher and using the same T. At test time, the model is deployed with T = 1, which causes the logits to explode with respect to their learned values. The logits saturate the softmax function and cause gradients to vanish, leading FGSM and JSMA to fail at a higher rate. However, this defense is defeated with a close enough guess for T, or via a black-box attack (Carlini & Wagner, 2017). The second type of gradient masking is less easily overcome, and has to do with gradients being inherently discontinuous and non-smooth, as seen in Figure 3 of Appendix B. We believe that this effect is what gives scaled BNNs an advantage over full-precision with respect to targeted attacks. Even more importantly, through a regularization effect, the decision boundary for the MLP with binary units (Figure 3) better represents the actual function to be learned, and is less susceptible to adversarial examples. But why does gradient masking have a disproportionate effect when attacking compared with training on clean inputs? Models ‘A’ and ‘B’ were trained to within 1.2% test accuracy, while ‘B’ had improvements of 9.0% and 29.5% on JSMA and CWL2 attacks, corresponding to 8× and 25× differences in accuracy between adversarial and clean inputs, respectively. For JSMA, the performance gap can be attributed to the sub-optimality of the attack, as it uses logits rather than softmax probabilities. Furthermore, to achieve its L0 goal, pairs of individual pixels are manipulated, which is a noisy process in a binarized model. The success of model ‘S’ with stochastically quantized weights in its third convolutional layer against iterative attacks is more easily explained. Adversarial examples are not random noise, and do not occur in random directions. In fact, neural networks are extremely robust to large amounts of benign noise. An iterative attack that attempts to fool our stochastically quantized model faces a unique model at every step, with unique gradients. Thus, the direction that minimizes the probability of the true class in the first iteration is unlikely to be the same in the second. An iterative attack making n steps is essentially attacking an ensemble of n models. By making a series of small random steps, the adversary is sent on the equivalent of a wild goose chase and has a difficult time making progress in any particularly relevant direction to cause an adversarial example. 5 CONCLUSION We have shown that for binarized neural networks, difficulty in training leads to difficulty when attacking. Although we did not observe a substantial improvement in robustness to single step attacks through binarization, by introducing stochasticity we have reduced the impact of the strongest attacks. Stochastic quantization is clearly far more computationally and memory efficient than a traditional ensemble of neural networks, and could be run entirely on a micro-controller with a pseudo random number generator. Our adversarial accuracy on MNIST against the best white-box attack (CWL2) is 71±2% (S64+), compared with 1.8±0.9% for the best full-precision model (A256+).
Black-box results were competitive between binary and full-precision on MNIST, and binary models were slightly more robust for CIFAR-10, which we attribute to their improved regularization. Beyond their favourable speed and resource usage, we have demonstrated another benefit of deploying binary neural networks in industrial settings. Future work will consider other types of low-precision models as well as other adversarial attack methods. ACKNOWLEDGMENTS The authors wish to acknowledge the financial support of NSERC, CFI and CIFAR. The authors also acknowledge hardware support from NVIDIA and Compute Canada. We thank Brittany Reiche for helpful edits and suggestions that improved the clarity of our manuscript. A CLEAN TEST ERROR RATES B MLP TOY PROBLEM We reproduce the toy problem in Papernot et al. (2015) of learning the two-input logical AND function with a simple MLP having two neurons in each layer. The only difference between our experiment and the original is that we train a 3-hidden-layer MLP (as opposed to 2-layers) with the Adam optimizer for 1k epochs, with a learning rate of 0.1. We use 3 layers since this is the smallest number of layers where the middle one can be quantized without directly touching the input or output, which would adversely impact learning. Here, a “quantized” layer means that its weights and activations are thresholded to +1 and -1, and a straight through estimator (Bengio et al., 2013) is used to backpropagate gradients for learning. All configurations in the AND experiment learn a reasonable decision boundary; however, the MLPs with a single quantized hidden layer had highly non-linear forward gradients, as can be seen in Figure 3(d). As training progresses, the forward derivative was highly dynamic and took on a variety of different shapes with sharp edges and peaks. When the MLP was allowed more capacity by doubling the number of hidden units (see Figure 4), the forward derivative was almost entirely destroyed. If one were to use this information to construct a saliency map, only two regions would be proposed (with poor directional information), and once exhausted there would be no further choices more insightful than random guessing. C VISUALIZING LOGITS WITH SCALING FACTORS In Figure 5 we compare the logits of full-precision and binary networks under varying degrees of FGSM perturbation. We noticed that for softmax temperature T between 0.6–0.7, the direction in which increasing the perturbation causes an adversarial example flips. We observe no similar effect for full-precision models. Additionally, the full-precision logits respond to scaling in an approximately linear manner, whereas there is very little change in logits for the binary case apart from the 180 degree flip. We used values of ε in the range of actual attacks conducted in the paper; however, the piecewise linear effect from Goodfellow et al. (2015) is still present for ε with large absolute value.
1. What is the main contribution of the paper in terms of robustness in deep learning? 2. What are the strengths and weaknesses of the proposed approach in terms of its ability to handle higher-dimensional inputs? 3. How does the reviewer assess the validity and significance of the experimental results presented in the paper? 4. Are there any concerns regarding the correlation between precision reduction and robustness in the context of the paper's findings?
Review
Review This work presents an empirical study demonstrating that binarized networks are more robust to adversarial examples. The authors follow the stochastic binarization procedure proposed by Courbariaux et al. The robustness is tested with various attacks, such as the fast gradient sign method and the projected gradient method, on MNIST and CIFAR. The experimental results validate the main claims of the paper on some datasets. While reducing the precision can intuitively improve the robustness, it remains unclear whether this method would work on higher dimensional inputs such as ImageNet. Indeed: (1) state-of-the-art architectures on ImageNet, such as residual networks, are known to be very fragile to precision reduction. Therefore, reducing the precision can also reduce the robustness, as it is positively correlated with accuracy. (2) Compressing reduces the size of the hypothesis space explored. Therefore, larger models may be needed to make this method work for higher dimensional inputs. The paper is well written overall and the main idea is simple and elegant. I am less convinced by the experiments.
ICLR
Title Enforcing physics-based algebraic constraints for inference of PDE models on unstructured grids Abstract The lack of relevant physical constraints in data-driven models of physical systems, such as neural network parameterized partial differential equations (PDEs), might lead to unrealistic modeling outcomes. A majority of approaches to solving this problem are based on forcing a model to satisfy a set of equations representing physical constraints. Currently available approaches can enforce a very limited set of constraints and are applicable only to uniform spatial grids. We propose a method for enforcing general pointwise, differential and integral constraints on unstructured spatial grids. Our method is based on representing a model’s output in terms of a function approximation and enforcing constraints on that approximation. We demonstrate wide applicability and strong performance of our approach in data-driven learning of dynamical PDE systems and distributions of physical fields. 1 INTRODUCTION Multiple works have shown the capability of neural networks to solve complex physical problems and learn the behavior of physical systems from data. Examples include learning and solving ordinary differential equations (ODEs) [6], partial differential equations (PDEs) [28; 20] and rigid body dynamics [31; 5]. Purely data-driven models are typically not forced to satisfy physical constraints of the system that generated the data. This might lead to unrealistic predictions that violate some known properties of the underlying physical system. Incorporation of relevant constraints makes better use of the available data and makes predictions more physically plausible. The field dealing with physics-constrained learning is diverse and offers many approaches to adding constraints to models. We refer the reader to many reviews for details [30; 3; 36; 19]. The approach we consider in this work is based on forcing a model to satisfy algebraic constraints represented by a set of equalities and inequalities. This is the most commonly used approach; it can represent a wide range of constraints and has been shown to work well in many cases [18; 17; 25]. However, while many constraints can be represented algebraically, it is not always clear how to evaluate and enforce them. Currently available approaches to enforcing algebraic constraints are limited to uniform grids and have a very narrow range of constraints they can enforce (e.g. only pointwise, or specific differential constraints); see Section 5 for details of related work. Such approaches can be readily applied to models based on convolutional neural networks (CNNs) but cannot be extended to recently developed models based on graph neural networks (GNNs) [33; 27; 15] and other models working on unstructured grids. We propose a much more general method which can enforce pointwise, differential and integral constraints on unstructured spatial grids, and we demonstrate its applicability in learning of PDE-driven dynamical systems and distributions of physical fields. The method is based on using a model’s output at the nodes of a grid to construct an interpolant and applying constraints directly to that interpolant (Section 3). Code and data will be made publicly available. 2 BACKGROUND PDE-driven dynamical systems. Many physical systems can be described in terms of PDEs. Such systems are defined on a bounded domain on which they evolve over time.
We consider continuous dynamical systems with state u(t,x) ∈ R^p that evolves over time t ∈ R≥0 and spatial locations x ∈ Ω ⊂ R^D. For physical systems, D is typically limited to {1, 2, 3}, although our method will work with any value of D. We assume the system is governed by an unknown PDE ∂u(t,x)/∂t = F(x, u(t,x), ∇x u(t,x), ∇²x u(t,x), ...) (1) which describes the temporal evolution of the system in terms of the locations x, state u and its first and higher-order partial derivatives w.r.t. x. The goal of a data-driven PDE model is to learn the dynamics F from data. Data for learning F is collected by measuring the state of the system at observation locations (x1, . . . , xN) over increasing time points (t0, . . . , tM). This results in a dataset (y(t0), . . . , y(tM)), where y(ti) = (u(ti, x1), . . . , u(ti, xN)) is a collection of observations. The dataset is used to train the model to predict (y(t1), . . . , y(tM)) starting from the initial state y(t0). Training is typically done by minimizing an average loss between the model’s predictions u(t) and the data y(t). PDE models differ in the restrictions they impose on time points (temporal grid) and observation locations (spatial grid). Some models require both grids to be uniform [23], other models relax these requirements and allow arbitrary spatial [27] and spatio-temporal grids [15]. We build our algebraic constraints method using the model from [15] as the most general one. The model is based on application of the method of lines [32] to Equation 1, which results in a system of ODEs u̇(t) := (du(t,x1)/dt, . . . , du(t,xN)/dt)ᵀ ≈ (Fθ(x1, xN(1), u1, uN(1)), . . . , Fθ(xN, xN(N), uN, uN(N)))ᵀ (2) which approximates the solution of Equation 1 at the observation locations xi using their neighboring points N(i), where xN(i) and uN(i) are the neighbors’ positions and states respectively, and ui is u(t,xi). The approximate solution converges to the true solution as N increases. The true dynamics F is approximated by a parametric model Fθ whose parameters θ are learned by minimizing the difference between the model’s predictions u(t) = u(0) + ∫₀ᵗ u̇(τ) dτ (3) and the data y(t). The integral in Equation 3 is solved using a numerical ODE solver. In [15], the function Fθ was represented by a graph neural network (GNN) which takes states and locations at an observation point i and its neighboring points N(i). The observation points are connected into a grid using Delaunay triangulation, which naturally defines N(i) as the set of points connected to the point i. However, Fθ can be represented by other models and a different neighbor selection criterion can be used. The model parameters θ are learned by minimizing the MSE between y(t) and u(t): Ldata = (1/M) Σ_{i=1}^{M} ‖u(ti) − y(ti)‖²₂. (4) The gradient of Ldata w.r.t. θ is evaluated using the adjoint method as shown in [7]. Generative Adversarial Networks One of the tasks that we consider is learning distributions of physical fields. For that purpose we utilize generative adversarial networks (GANs). A GAN is a generative model consisting of a generator and a discriminator [12]. The generator, G, learns to transform a random variable Z ∼ pZ over a latent space Z to the data space Y in such a way that the discriminator, D, cannot tell the difference between samples generated by G and samples from the data distribution pdata. Both G and D are learned by solving the following minimax problem: min_G max_D V(G, D) = E_{Y∼pdata}[log D(Y)] + E_{Z∼pZ}[log(1 − D(G(Z)))]. (5)
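A minimal sketch of Equations 2–3 in PyTorch follows, assuming the torchdiffeq package for the differentiable ODE solve (odeint_adjoint would give the adjoint method of [7]); the MLP over mean-aggregated neighbor states is our simplification of the GNN used in [15], and all names are ours.

import torch
from torchdiffeq import odeint

class NodeDynamics(torch.nn.Module):
    # F_theta of Equation 2, simplified: du_i/dt is predicted from the state
    # of node i and a mean over its neighbors' states.
    def __init__(self, p, adj, hidden=64):
        super().__init__()
        self.register_buffer("adj", adj)  # (N, N) row-normalized adjacency
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * p, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, p))

    def forward(self, t, u):
        neigh = self.adj @ u              # mean neighbor state per node
        return self.net(torch.cat([u, neigh], dim=-1))

# u0: (N, p) initial state, ts: (M,) observation times, y: (M, N, p) data.
# u_pred = odeint(dynamics, u0, ts)      # Equation 3
# loss = ((u_pred - y) ** 2).mean()      # Equation 4 (MSE)

The adjacency would come from the Delaunay triangulation of the observation points, e.g. via scipy.spatial.Delaunay.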
Solution of this problem exists and is unique, with the optimal generator perfectly mimicking the data distribution [12]. 3 METHODS In this section we present an approach to evaluating pointwise, differential and integral constraints on unstructured grids. Then, we demonstrate how this approach can be used to enforce arbitrary soft and linear hard constraints. 3.1 EVALUATING CONSTRAINTS ON UNSTRUCTURED GRIDS We assume the data y(t) is available at observation points (x1, . . . , xN) and time points (t1, . . . , tM) and that a model makes predictions u(t) at these points. We assume the predictions to be evaluations of an unknown underlying function. Since the underlying function is unknown, we cannot impose constraints on it directly. Instead, we approximate it by an interpolant u_f(t,x) and impose constraints on u_f(t,x) (Figure 1). The approximation is constructed from u(t) by placing a basis function at each xi and representing u_f(t,x) as u_f(t,x) = Σ_{j=1}^{N} α_j(t) φ_j(x), (6) where φ_j is a scalar basis function at x_j and α_j ∈ R^p. The coefficients α_j(t) are obtained from u(t) (see Section 3.4). Next, we show how to evaluate constraints on u_f(t,x) using basic building blocks. To avoid cluttered notation, we consider equality constraints and assume u(t,x), x ∈ R. Generalization to inequality constraints, vector fields and higher spatial dimensions is straightforward. Pointwise constraints. Consider points z = (z1, . . . , zK) in Ω on which a pointwise constraint h(u_f(t, z_i)) = 0 should be evaluated. Assume the function h : R → R is representable in terms of a finite number of functions γ_m(u_f(t, z_i)) : R → R indexed by m. For example, should the constraint be h(u_f) = 3u_f + u_f² = 0, then we would define γ_1(u_f) = u_f, γ_2(u_f) = u_f² and h(u_f) = 3·γ_1(u_f) + γ_2(u_f) = 0. Then, h can be evaluated by evaluating each γ_m as γ_m(u_f(t, z_i)) = γ_m(Σ_{j=1}^{N} α_j(t) φ_j(z_i)) = γ_m(Φ_{i,·} α(t)), (7) where α(t) = (α_1(t), . . . , α_N(t))ᵀ, Φ is a K-by-N matrix with elements Φ_{i,j} = φ_j(z_i), and Φ_{i,·} is the i’th row of Φ. Differential constraints. Consider the same setup as before, but now h(u_f(t, z_i)) = 0 consists of differential operators and is representable in terms of a finite number of functions ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q : R → R indexed by m, where the derivative order q could be different for each m. For example, should the constraint be h(u_f) = 3u_f + u_f · ∂u_f²/∂x = 0, then we would define γ_1(u_f) = u_f, γ_2(u_f) = u_f² and h(u_f) = 3·γ_1(u_f) + γ_1(u_f)·∂γ_2(u_f)/∂z = 0. Then, h can be evaluated by evaluating each ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q using the generalization of the chain rule (Appendix A), which contains only two types of terms. The first type of terms, dγ_m/du_f, . . . , d^q γ_m/du_f^q, can be evaluated using automatic differentiation, while the second type of terms, ∂u_f/∂z_i, . . . , ∂^q u_f/∂z_i^q, can be evaluated as ∂^q u_f/∂z_i^q = Σ_{j=1}^{N} α_j(t) ∂^q φ_j(z_i)/∂z_i^q = Φ^(q)_{i,·} α(t), (8) where Φ^(q)_{i,j} = ∂^q φ_j(z_i)/∂z_i^q. Mixed partial derivatives can be handled in a similar way (Appendix A). Integral constraints. Consider the same setup as before, but with h(u_f(t,x)) = ∫_Ω τ(u_f(t,x)) dx = 0, where the function τ : R → R is representable in terms of functions γ_m(u_f(t, z_i)) : R → R similarly to the pointwise constraints. Then, ∫_Ω τ(u_f(t,x)) dx can be evaluated using a numerical integration technique, e.g.
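A minimal NumPy sketch of Equations 6–7 for a 1D piecewise-linear basis follows; building Φ once lets both u_f and pointwise constraints be evaluated as matrix products. All names are ours.

import numpy as np

def hat(z, left, center, right):
    # One piecewise-linear "hat" basis function: 1 at center, 0 outside.
    y = np.zeros_like(z, dtype=float)
    if center > left:
        m = (z >= left) & (z <= center)
        y[m] = (z[m] - left) / (center - left)
    if right > center:
        m = (z >= center) & (z <= right)
        y[m] = (right - z[m]) / (right - center)
    return y

def pwl_phi(nodes, z):
    # Phi[i, j] = phi_j(z_i), Equation 7; boundary hats are one-sided.
    N = len(nodes)
    cols = [hat(z, nodes[max(j - 1, 0)], nodes[j], nodes[min(j + 1, N - 1)])
            for j in range(N)]
    return np.stack(cols, axis=1)

nodes = np.linspace(0.0, 1.0, 11)       # observation points x_1..x_N
alpha = np.sin(2 * np.pi * nodes)       # for the PWL basis, alpha(t) = u(t)
z = np.linspace(0.0, 1.0, 50)           # constraint points z_1..z_K
u_f = pwl_phi(nodes, z) @ alpha         # u_f(t, z_i) = Phi alpha(t)
h = 3 * u_f + u_f ** 2                  # the example pointwise constraint h(u_f)

A derivative matrix Φ^(1) for Equation 8 is obtained the same way, using the piecewise-constant slopes of the hats.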
midpoint rule, Gaussian quadrature or Monte-Carlo integration, as ∫_Ω τ(u_f(t,x)) dx ≈ Σ_{i=1}^{K} τ(u_f(t, z_i)) μ_i, (9) where K is the number of integration points, μ_i are integration coefficients which depend on the grid and integration method, and τ(u_f(t, z_i)) is evaluated as in Equation 7. 3.2 SOFT CONSTRAINTS Soft constraints are implemented by minimizing the following loss: Ldata + λ r(h(u_f)), where λ ∈ R and Ldata is defined as in Equation 4. We set r(h(u_f)) = (1/KM) Σ_{i=1}^{K} Σ_{j=1}^{M} h(u_f(t_j, z_i))² for pointwise and differential constraints and r(h(u_f)) = (1/M) Σ_{j=1}^{M} h(u_f(t_j, x))² for integral constraints. 3.3 HARD CONSTRAINTS Our method implements hard constraints by projecting the interpolant u_f(t,x) to a subset of functions which satisfy the required constraints. Namely, if u_f(t,x) does not satisfy constraints g and h, it is projected to a subset of functions which satisfy the constraints by solving the following optimization problem: min_{û_f ∈ V_φ} ‖u_f − û_f‖²_{L²} s.t. h(û_f) = 0, g(û_f) ≤ 0, (10) where the projection is denoted by û_f(t,x) and V_φ is spanned by the basis functions. Using the basis representations u_f(t,x) = Σ_{i=1}^{N} α_i(t) φ_i(x) and û_f(t,x) = Σ_{i=1}^{N} β_i(t) φ_i(x), we can rewrite the optimization problem (10) as min_{β(t) ∈ R^N} (α(t) − β(t))ᵀ Φ̂ (α(t) − β(t)) s.t. h(û_f) = 0, g(û_f) ≤ 0, (11) where β(t) = (β_1(t), . . . , β_N(t))ᵀ and Φ̂_{i,j} = ∫_Ω φ_i(x) φ_j(x) dx. To train the model end-to-end, the problem (11) should be differentiable. Agrawal et al. [1] proposed differentiable convex optimization, which could be used in this case if the problem (11) could be expressed in a DPP-compliant way (see [1]). To do that, we restrict ourselves to constraints that can be expressed as an equality or inequality between Aβ(t) and b, where A is a constant matrix and b is a constant vector. This formulation admits pointwise, differential and integral constraints on untransformed u_f. The objective function is convex since its Hessian is positive-semidefinite, i.e. for any v ∈ R^N, vᵀ Φ̂ v = Σ_{i,j=1}^{N} v_i v_j Φ̂_{i,j} = Σ_{i,j=1}^{N} ⟨v_i φ_i, v_j φ_j⟩_{L²} = ⟨Σ_{i=1}^{N} v_i φ_i, Σ_{j=1}^{N} v_j φ_j⟩_{L²} ≥ 0. (12) This allows us to solve the problem (11) and differentiate its solution β*(t) w.r.t. α(t). The model parameters are found by minimizing the following loss function: Ldata + λ Lproj, where λ ∈ R and Ldata is defined as in Equation 4 but with u(ti) replaced by û(ti) = (û_f(ti, x1), . . . , û_f(ti, xN)). We set Lproj = (1/NM) Σ_{i=1}^{N} Σ_{j=1}^{M} ‖u_f(t_j, x_i) − û_f(t_j, x_i)‖²₂. The second term makes the optimization procedure prefer models that predict u_f close to the feasible set of the problem (11). We note that the proposed approach is currently limited to small-scale problems due to existing computational bottlenecks in the implementation of differentiable convex optimization [1]. 3.4 BASIS FUNCTIONS Selecting an appropriate basis is crucial for the efficiency and applicability of the proposed method. Ideally, the basis should allow efficient construction of u_f(t,x) from u(t), contain no tunable parameters, and lead to sparse matrices Φ, Φ^(q) and Φ̂. We consider bases from two families: Lagrange basis functions and radial basis functions (RBFs). Lagrange basis functions do not have tunable parameters and have compact support, which leads to sparse Φ, Φ^(q) and Φ̂. For the piecewise linear basis, the interpolant u_f(t,x) can be constructed directly from the predictions by setting α(t) = u(t). However, constructing u_f(t,x) for a higher order basis, e.g.
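A minimal sketch of the differentiable projection (11) follows, assuming the cvxpylayers package of Agrawal et al. [1]; the single linear equality Aβ = b stands in for the constraints, and writing the objective through a Cholesky factor of Φ̂ keeps it DPP-compliant. All names and the example constraint are ours.

import cvxpy as cp
import numpy as np
import torch
from cvxpylayers.torch import CvxpyLayer

N = 20
Phi_hat = np.eye(N)              # stands in for the Gram matrix of the basis
L = np.linalg.cholesky(Phi_hat)  # Phi_hat = L L^T (may need regularization)
A = np.ones((1, N))              # e.g. a discretized integral (mass) constraint
b = np.array([1.0])

beta = cp.Variable(N)
alpha = cp.Parameter(N)
# (alpha - beta)^T Phi_hat (alpha - beta) = ||L^T (alpha - beta)||_2^2
problem = cp.Problem(cp.Minimize(cp.sum_squares(L.T @ (alpha - beta))),
                     [A @ beta == b])
layer = CvxpyLayer(problem, parameters=[alpha], variables=[beta])

alpha_t = torch.randn(N, requires_grad=True)  # model's basis coefficients
beta_star, = layer(alpha_t)                   # projection, differentiable in alpha

Gradients of a loss on beta_star flow back through the projection to alpha_t, which is what permits end-to-end training with hard constraints.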
piecewise quadratic, requires the model to make predictions not only at the observation points, but also at some extra points where the data is not available. In Section 4 we demonstrate one approach to solving this problem. After extending the state u(t) by predictions at the extra nodes, the coefficients α(t) can be evaluated similarly to the piecewise linear basis. In this work we use piecewise linear (PWL) and piecewise quadratic (PWQ) bases. Examples of PWL basis functions are shown in Figure 2. Radial basis functions have a wider range of properties. Some RBFs have tunable parameters, some don’t. The matrices Φ, Φ^(q) and Φ̂ evaluated with RBFs are typically dense, but RBFs with compact support exist (e.g. the bump function). The interpolant u_f(t,x) can be constructed by evaluating α(t) = K⁻¹ u(t), where K⁻¹ is the inverse of the interpolation matrix of the given RBF and K_{ij} = φ(‖x_i − x_j‖), where φ is an RBF and x_i, x_j are observation locations. In this work we use the cubic RBF basis, i.e. φ(r) = r³. We use PyTorch [26] to handle sparse matrices and to evaluate K⁻¹ u(t) in a differentiable way. 4 EXPERIMENTS In the following experiments we use the relative error between the data y(t) and model predictions u(t), defined as ‖y(t) − u(t)‖₂ / ‖y(t)‖₂, and consider only soft constraints. We present an experiment with hard constraints implemented as shown in Section 3.3 in Appendix D. Data generation is described in Appendix B. Training, testing and modeling details are in Appendix C. All experiments were run on a single NVIDIA Quadro P5000 GPU. All error bars represent one standard deviation of the results over five random seeds. 4.1 REPLACING EXISTING METHODS In this experiment we take existing models which incorporate physics-based constraints in training and replace their constraint enforcing approaches with ours. We consider two works. First, [37], which trains a GAN to produce divergence-free vector fields using a zero-divergence constraint. Second, [10], which predicts warping fields driving the evolution of sea surface temperature by observing snapshots of the temperature over time while enforcing gradient and divergence constraints on the warping fields (see Appendix C for more details). Both models work on uniform grids, which allows them to evaluate constraints using finite differences. For comparison, we replace finite differences with our method and observe how it changes the models’ performance. In both cases we use the PWL basis. For [37] we track the mean divergence and discriminator loss. Results of the original approach are as follows: mean divergence 0.079 and discriminator loss 0.091. With our method the mean divergence was 0.014 and the discriminator loss was 0.088. Both approaches result in similar discriminator losses but our approach produces a smaller mean divergence (smaller is better). Our method increased the runtime per epoch by 6%. For [10] we track the total, divergence and smoothness losses which, with the original approach, were 0.139, 8.4·10⁻⁵ and 1.51·10⁻⁴, respectively. With our approach the losses were 0.139, 8.3·10⁻⁵ and 1.51·10⁻⁴, respectively. Both methods produce very similar results. Our method increased the runtime per epoch by 30%. Overall, replacing existing constraint enforcing approaches by ours on data from uniform grids resulted in comparable model performance, except for runtime, which was slightly increased.
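The RBF construction α(t) = K⁻¹u(t) can be sketched in a few lines of PyTorch; this is our illustration of the cubic-RBF case, not the authors' code.

import torch

def cubic_rbf_interpolate(x_nodes, u, x_query):
    # K_ij = ||x_i - x_j||^3 (cubic RBF), alpha = K^{-1} u, then
    # u_f(x_query) = Phi alpha with Phi_ij = ||z_i - x_j||^3.
    K = torch.cdist(x_nodes, x_nodes) ** 3
    alpha = torch.linalg.solve(K, u)     # differentiable w.r.t. u
    Phi = torch.cdist(x_query, x_nodes) ** 3
    return Phi @ alpha

# x_nodes: (N, D) observation locations, u: (N, p) predictions,
# x_query: (K, D) points where the interpolant or a constraint is evaluated.

Because torch.linalg.solve is differentiable, constraint penalties evaluated on the interpolant propagate gradients back to the model predictions u.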
4.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT We start with the 1D Cahn-Hilliard equation ∂u/∂t = ε²∇²(u(1−u)² − u²(1−u) − ε²∇²u) (13) which is known to conserve the state u, i.e. h(u) = ∫_Ω u(t, x) dx − C = 0 at all time points, where C = ∫_Ω u(0, x) dx is a constant. This is an example of a conservation law; conservation laws are abundant in nature and are an important class of constraints that data-driven models of physical systems should satisfy. Conservation laws can be expressed in differential and integral forms, and this experiment demonstrates how the integral form can be enforced. The constraint is evaluated using the midpoint rule as shown in the previous section, with a single γ_1 being the identity function. We use PWL, PWQ and cubic RBF bases and compare the results to an unconstrained model. For training we use 30, 60 and 120 simulations, while the test set consists of 60 simulations. Simulations in the training/test data last for 0.0015/0.0030 seconds and contain 50/100 uniformly spaced time points. The full spatial grid consists of 101 uniformly spaced nodes. We randomly sample 50%, 75% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. An example of a spatial grid with 50% of nodes is shown in Figure 3. We evaluate the constraint on a uniform grid with 200 nodes placed on top of the original grid. To learn the dynamics of the system we use the model from [15] (Section 2). We found that using a GNN produced poor results. For that reason we represented the function Fθ with a multilayer perceptron (MLP) which updates the state of each node based on the states of all other nodes in the grid (results for a GNN are in Appendix E). The MLP contains two hidden layers with Leaky ReLU nonlinearities. The number of hidden neurons was set to the number of nodes in the grid. The coefficients α(t) for the PWL and cubic bases can be evaluated directly from the model predictions at the grid nodes. But the PWQ basis requires extra predictions to be available between the nodes. This is problematic since there is no data at these points to guide the model’s predictions. To solve this problem we introduce a small MLP which is applied to consecutive pairs of nodes. The MLP takes the states at both nodes and the distance between them as the input and estimates the state at the midpoint between the two nodes. The MLP is trained jointly with the main model and uses only the constraint-related loss term during training. For testing, we construct the interpolant u_f(t,x) using the thin plate spline basis (φ(r) = r² log r) and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and different bases and avoids biasing or overfitting to bases used for training. Figure 4 shows results of the experiment. We observe that changing the node fraction does not significantly affect the relative errors but has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show similar or better performance than the unconstrained model. Among all bases, the cubic basis consistently results in lower relative errors and constraint violations. However, the simpler PWL basis often performs on par with the cubic basis, especially on denser spatial grids.
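The conservation penalty used here can be sketched as follows, assuming interpolant values at midpoint-rule integration points (Equation 9 with τ the identity); this is our minimal PyTorch illustration, with placeholder names.

import torch

def conservation_penalty(u_f_mid, mu, C):
    # u_f_mid: (M, K) interpolant values at the K midpoints for M time points;
    # mu: (K,) midpoint-rule integration coefficients; C: conserved total.
    mass = (u_f_mid * mu).sum(dim=1)   # approximates the integral of u_f over Omega
    return ((mass - C) ** 2).mean()     # r(h(u_f)) of Section 3.2, averaged over time

# total_loss = data_mse + lam * conservation_penalty(u_f_mid, mu, C)

For a uniform evaluation grid the coefficients mu are simply the constant cell width.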
We also observe that coarsening of the grid increases the constraint violation gap between constrained and unconstrained models and that this gap seems to not close as we increase the amount of training data. The PWQ basis performs rather poorly on fine grids, which is likely due to a suboptimal approach to evaluating the state at the extra nodes. A better approach could consider not only pairs of points but also larger neighborhoods. Nonetheless, the PWQ basis achieves good performance on coarse grids, which shows that piecewise bases of higher order could potentially be used to enforce constraints. This would allow scaling to grids with a large number of nodes due to the sparsity of the constraint matrices and efficient evaluation of α. 4.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT We impose constraints on a 2D system governed by the heat equation ∂u/∂t = ∇²u for which the generated initial conditions (ICs) are monotone in one direction. Since the ICs are monotone, the state u remains monotone at all time points as well. We enforce the monotonicity constraint as ∂u/∂x ≥ 0. The constraint is evaluated as shown in the previous section, with γ_1 being the identity function. For training we use 15, 30 and 90 simulations, while the test set consists of 120 simulations. Simulations in the training/test data last for 0.1/0.2 seconds and contain 21/41 uniformly spaced time points. The full spatial grid consists of 1087 nodes. We randomly sample 33%, 66% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. The spatial grid with 100% of nodes is shown in Figure 5. The constraint is evaluated at the nodes of a uniform 51 × 51 grid placed on top of the original grid. To learn the dynamics of the system we use the model from [15] directly, with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. During testing, we use predictions of the models to construct an interpolant u_f(t,x) using the thin plate spline basis and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and different bases. Figure 7 shows results of the experiment. We observe that changing the node fraction equally increases relative errors of all models and has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show slightly higher or comparable relative errors but noticeably lower constraint violations than the unconstrained model. The cubic and PWL bases perform equally well in this case. Similarly to the experiment in the previous section, we observe that coarsening of the grid introduces a larger constraint violation gap between constrained and unconstrained models and that this gap seems to not close as we increase the amount of training data. Figure 6 shows the qualitative difference between predictions of constrained and unconstrained models. It can be noted that predictions from the constrained model have noticeably smoother contours, thus making the field more monotone in the horizontal direction. 4.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS We demonstrate the effect of adding constraints to a GAN when learning distributions of physical fields on unstructured grids. We use Wasserstein GAN (WGAN) [2] as a more stable variant of a GAN. We use MLPs as the generator and discriminator.
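The monotonicity penalty can be written directly in terms of the derivative matrix Φ^(1) of Equation 8; a minimal PyTorch sketch, with names of our choosing, follows.

import torch

def monotonicity_penalty(Phi_dx, alpha):
    # Phi_dx: (K, N) matrix with entries d(phi_j)/dx at the constraint points
    # (the matrix Phi^(1) of Equation 8); alpha: (N,) basis coefficients.
    du_dx = Phi_dx @ alpha
    # Hinge-style penalty: only violations of du/dx >= 0 contribute.
    return torch.relu(-du_dx).pow(2).mean()

Averaging this penalty over time points and adding it to the data loss with weight λ gives the soft-constrained objective of Section 3.2.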
Unconstrained and constrained models are trained for 1.2M iterations. Constraints are enabled only after 600k iterations. Constrained models are trained similarly to the unconstrained ones but with a modified generator loss defined as L_G + λ ln(1 + L_C), where L_G is the standard generator loss and L_C is the constraint-based loss. We define L_C as the mean value of h(u)², where h is a constraint evaluated at the centroid of each cell in the grid.

4.4.1 ZERO-DIVERGENCE FIELDS

Divergence-free vector fields are often encountered in solutions of fluid dynamics problems. The divergence-free constraint on a vector field u(x,y) = (u₁(x,y), u₂(x,y))^T is defined as h(u) = ∂u₁/∂x + ∂u₂/∂y = 0. The constraint is enforced using the PWL basis. We generated a dataset with 10k divergence-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely divergence-free but have a small residual divergence due to discretization errors.

Figure 9a shows that there is a clear difference in the quality of the samples generated by the unconstrained and constrained models. Samples from the constrained model are smoother and more similar to the data. The quantitative comparison of the samples presented in Figure 8 shows that the constrained model generates fields that have a much lower constraint violation and a divergence distribution very similar to that of the data.

4.4.2 ZERO-LAPLACIAN FIELDS

Fields with zero Laplacian represent solutions to some PDEs, for example the steady-state heat equation. The zero-Laplacian constraint on a scalar field u(x,y) is defined as h(u) = ∂²u/∂x² + ∂²u/∂y² = 0. The constraint is enforced using the cubic basis, as the PWL basis has zero second derivatives everywhere. We generated a dataset with 10k Laplacian-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely Laplacian-free due to discretization errors.

Results of the experiment are shown in Figures 9b and 8. Similarly to the divergence-free case, the visual quality of the fields generated by the constrained model is significantly better than for the unconstrained model. The quantitative comparison of the samples presented in Figure 8 shows that the constrained model generates fields that have a much lower constraint violation and a Laplacian distribution very similar to that of the data.

5 RELATED WORK

Soft constraints. Soft constraints are widely used due to being relatively easy to implement. Examples include lake temperature prediction [18; 16], traffic simulation [22], and fluid and climate modeling [11; 10; 4], where constraints are evaluated pointwise or using finite differences.

Hard constraints. Approaches to implementing hard constraints are diverse and can be categorized as processing the output of an unconstrained model [4; 17; 24; 34] and designing a model that produces feasible predictions by default [23; 25; 14; 9; 13; 8; 38].

Constrained PDE models. Current approaches to enforcing soft [10; 11] and hard [21; 25; 17] constraints are limited to specific types of constraints and spatial grids. For example, [25; 17] implement only hard differential constraints, and both are limited to uniform grids. Uniform grids allow constraints to be evaluated efficiently, e.g. using finite differences [10; 21; 25] or the fast Fourier transform [17], but assuming that the data lies on a uniform grid might be limiting.
Constrained GANs. Works such as [37; 17] showed how physics-based constraints benefit training and the quality of generated samples, but they are also limited to uniform grids.

6 CONCLUSION

We presented a general approach to enforcing algebraic constraints on unstructured grids and showed how it can be used to enforce soft and hard constraints. We demonstrated the applicability of the approach to learning PDE-driven dynamical systems and distributions of physical fields. We considered two families of basis functions for constructing the interpolant and showed how Lagrange basis functions of order higher than one can be used. Our method allows us to drop the unrealistic assumption about the uniformity of spatial grids and shows promising results on various tasks.

REPRODUCIBILITY STATEMENT

All details required to reproduce the experiments are provided in Section 4 and the Appendices. Code and data used to run the experiments will be made publicly available after the review process.

A GENERALIZED CHAIN RULE AND HANDLING MIXED PARTIAL DERIVATIVES

Let y = g(x₁, ..., x_n) with all arguments being either identical, distinct or grouped. Then, partial derivatives of f(y) can be evaluated using Faà di Bruno’s formula

\frac{\partial^n f(y)}{\partial x_1 \cdots \partial x_n} = \sum_{\pi \in \Pi} f^{(|\pi|)}(y) \prod_{B \in \pi} \frac{\partial^{|B|} y}{\prod_{j \in B} \partial x_j},

where Π is the set of all partitions of the set {1, ..., n}, B runs through the elements of the partition π, f^{(m)} denotes the m’th derivative, and |·| is cardinality. The formula consists of two types of terms: f^{(|\pi|)}(y), which can be evaluated using automatic differentiation, and \frac{\partial^{|B|} y}{\prod_{j \in B} \partial x_j}, which can be evaluated as shown in Equation 8. In the case that all x₁, ..., x_n are identical, the mixed derivative \frac{\partial^n f(y)}{\partial x_1 \cdots \partial x_n} reduces to \frac{\partial^n f(y)}{\partial x_1^n}.

B DATA GENERATION

In all cases we run the simulation on a fine grid and then interpolate the results to a coarser grid, represented as the "full grid" in the experiments.

B.1 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

Training and testing data were obtained by solving

\frac{\partial u}{\partial t} = 2\nabla^2\left(u(1-u)^2 - u^2(1-u) - \epsilon^2 \nabla^2 u\right)    (14)

on a unit interval with periodic boundary conditions and ε = 0.04. The domain was represented by a uniform grid with 100 nodes and the time step was set to 1.0e-6 sec. The initial conditions u₀(x) were generated as follows

\tilde{u}_0(x) = \sum_{i=1}^{10} \left(\lambda_i \cos(2\pi i (x - s)) + \gamma_i \sin(2\pi i (x - s))\right) + \frac{\lambda_0}{2},    (15)

u_0(x) = \frac{\tilde{u}_0(x) - \min \tilde{u}_0(x)}{\max \tilde{u}_0(x) - \min \tilde{u}_0(x)},    (16)

where λ_i, γ_i ∼ Unif(−1, 1) and s ∼ Unif(0, 1). Examples of the simulations are shown in Figure 10.

B.2 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

Training and testing data were obtained by solving

\frac{\partial u}{\partial t} = D \nabla^2 u    (17)

on a unit square with zero Neumann boundary conditions and D = 0.2. The domain was represented by an unstructured grid with 2971 nodes and the time step was set to 0.001 sec. The initial conditions u₀(x) were generated as

f(x) = \sum_{i=0}^{6} \omega_i x^i,    (18)

g(y) = \frac{1}{2} \sum_{i=1}^{3} \left(\lambda_i \cos(2\pi i (y + s)) + \gamma_i \sin(2\pi i (y + s))\right) + \frac{\lambda_0}{2},    (19)

\tilde{u}_0(x, y) = f(x) + g(y),    (20)

u_0(x, y) = \frac{\tilde{u}_0(x, y) - \min \tilde{u}_0(x, y)}{\max \tilde{u}_0(x, y) - \min \tilde{u}_0(x, y)},    (21)

where ω_i ∼ Unif(0.1, 1.1) and λ_i, γ_i, s ∼ Unif(−1, 1). Examples of the simulations are shown in Figure 11.

B.3 GAN WITH A DIVERGENCE CONSTRAINT

The data was generated by sampling random velocity fields and then projecting them onto the space of divergence-free fields. The procedure was as follows.
First, a random velocity field u₀(x, y) was generated on a unit square by generating each component i as

\tilde{u}_{0i}(x, y) = \sum_{k,l=-N}^{N} \lambda_{kl} \cos(kx + ly) + \gamma_{kl} \sin(kx + ly),    (22)

u_{0i}(x, y) = 6 \times \left( \frac{\tilde{u}_{0i}(x, y) - \min \tilde{u}_{0i}(x, y)}{\max \tilde{u}_{0i}(x, y) - \min \tilde{u}_{0i}(x, y)} - 0.5 \right),    (23)

where N = 10 and λ_{kl}, γ_{kl} ∼ N(0, 1). Then, the divergence-free component of u₀(x, y), denoted by u₀*(x, y), was extracted using the projection method: solving ∇·u₀ = ∇²φ for φ and then evaluating u₀*(x, y) = u₀(x, y) − ∇φ. Finally, the data was scaled to [−1, 1].

B.4 GAN WITH A LAPLACIAN CONSTRAINT

The data was generated by solving

\nabla^2 u = 0    (24)

on a unit square with Dirichlet boundary conditions. The domain was represented by an unstructured grid with 2971 nodes. The boundary conditions were obtained by generating random functions u₀(x) and using their boundary values. The functions u₀(x) were generated as

u_0(x, y) = \sum_{k,l=-N}^{N} \lambda_{kl} \cos(kx + ly) + \gamma_{kl} \sin(kx + ly),    (25)

where N = 5 and λ_{kl}, γ_{kl} ∼ N(0, 1). The data was then scaled to [0, 1].

C MODELS, TRAINING AND TESTING

C.1 REPLACING EXISTING METHODS

For our comparisons we considered experiments from two works. Next, we provide some details about these experiments.

The first experiment was taken from [37], Section 3.2. The experiment shows how soft physics-based constraints affect the predictions of a GAN learning a distribution of divergence-free fields. The data is generated on a uniform grid, which allows divergence to be evaluated using finite differences. The constraint is enforced through an extra loss term which penalizes violation of the constraint. The performance metric used is the Frobenius norm of the divergence averaged over all fields in a batch. For training we used the code provided by the authors with the original parameters. We replaced the finite differences in the constraint evaluation function with our method.

The second experiment was taken from [10]. This work deals with the task of predicting sea surface temperatures at future times given snapshots of the temperature at current and previous times. The model proposed by the authors accomplishes this task by taking a sequence of surface temperatures at times t_{i−k}, ..., t_i and predicting the underlying motion field, which is then used to predict the temperature at time t_{i+1}. Insights about the physical properties of the motion field were used to constrain the model’s predictions. Constraints are imposed on the divergence, magnitude and gradients of the motion field. The data is generated on a uniform grid, which allows the constraints to be evaluated using finite differences. The constraints are enforced through extra loss terms which penalize violation of the constraints. The performance metrics used are the MSE between the data and model predictions, the smoothness loss and the divergence loss. For training we used the code provided by the authors with the original parameters. We replaced the finite differences in the constraint evaluation function with our method.

C.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

In all experiments with the Cahn-Hilliard equation we represent the dynamics function Fθ by an MLP with 2 hidden layers and LeakyReLU nonlinearities (negative slope 0.2). The number of neurons in each layer was set to the number of nodes in the spatial grid on which the model was trained. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5.
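A minimal sketch of this forward simulation using torchdiffeq is given below. The MLP and the sizes (51 nodes, i.e. 50% of the 101-node grid from Section 4.2) are illustrative placeholders rather than the authors' exact code.

```python
import torch
from torchdiffeq import odeint

class Dynamics(torch.nn.Module):
    """All-to-all MLP for du/dt = F_theta(u), as described in Section 4.2."""
    def __init__(self, n):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n, n), torch.nn.LeakyReLU(0.2),
            torch.nn.Linear(n, n), torch.nn.LeakyReLU(0.2),
            torch.nn.Linear(n, n))

    def forward(self, t, u):
        return self.net(u)

f_theta = Dynamics(51)
u0 = torch.rand(51)                      # state at the grid nodes at t0 (placeholder)
t = torch.linspace(0.0, 0.0015, 50)      # 50 uniformly spaced time points
u = odeint(f_theta, u0, t, rtol=1e-5, atol=1e-5, method='adaptive_heun')
loss = ((u - torch.rand(50, 51)) ** 2).mean()   # MSE against (placeholder) data y(t)
```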
All models were trained for 1500 epochs using the Rprop optimizer [29] with the learning rate set to 1.0 × 10⁻⁶ and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 2.

C.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

In all experiments with the heat equation we represent the dynamics function Fθ by a GNN with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5. All models were trained for 750 epochs using the Rprop optimizer [29] with the learning rate set to 1.0 × 10⁻⁶ and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 0.1.

C.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

In both cases we used identical architectures and training processes for the constrained and unconstrained models. Both models were trained for 1.2M iterations using the same random seed. Constraints in the constrained model were enabled only after 600k iterations. The base distribution was set to a 128-dimensional isotropic standard normal. Models were trained using the RMSProp optimizer [35] with the batch size and learning rate set to 64 and 0.00001, respectively. The discriminator’s weights were clipped to [−0.01, 0.01].

C.4.1 ZERO-DIVERGENCE FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 2010 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 2010, respectively, and a final hyperbolic tangent nonlinearity applied to the output. We set λ = 0.2.

C.4.2 ZERO-LAPLACIAN FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 1086 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 1086, respectively, and a sigmoid function applied to the output. We set λ = 0.0075.

D CAHN-HILLIARD EQUATION WITH HARD INTEGRAL CONSTRAINTS

Here we demonstrate how the approach to enforcing hard constraints described in Section 3.3 can be used to enforce integral constraints on a nonuniform grid. We use the same setup as in Section 4.2 with 30 training simulations and 50% of the nodes in the grid. We compare three models: an unconstrained model, a model with a soft constraint and a model with a hard constraint. We use the PWL basis during training and testing. Table 1 shows that the relative errors of all three models are practically identical but, as Figure 12 demonstrates, the constraint violations differ significantly. We see that the constraint violations of the model with the hard constraint are zero at all time points, as expected.
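One way to realize the differentiable projection of Section 3.3 is with the cvxpylayers library accompanying [1]. The sketch below is a hypothetical illustration, not the authors' implementation: the Gram matrix, constraint matrix A and right-hand side b are placeholders, and the objective is rewritten so that the parameter enters only linearly, which keeps the problem DPP-compliant (it equals the L2 projection objective up to a β-independent constant).

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

N = 51
Gram = torch.eye(N)                 # placeholder for the Gram matrix of the basis
A = torch.ones(1, N) / N            # placeholder constraint: A beta = b (integral form)
b = torch.tensor([0.5])

beta = cp.Variable(N)
c = cp.Parameter(N)                 # carries Gram @ alpha(t), the only per-call quantity
objective = cp.Minimize(cp.quad_form(beta, Gram.numpy()) - 2 * c @ beta)
problem = cp.Problem(objective, [A.numpy() @ beta == b.numpy()])
layer = CvxpyLayer(problem, parameters=[c], variables=[beta])

alpha = torch.rand(N, requires_grad=True)    # unprojected coefficients from the model
(beta_star,) = layer(Gram @ alpha)           # projected coefficients, differentiable w.r.t. alpha
```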
Being able to produce predictions that satisfy some constraints exactly might be very useful for some applications. However, as we mention in Section 3.3, this approach to enforcing hard constraints is currently limited to systems with a relatively small number of nodes and is significantly slower than models with soft constraints. We report training times in Table 1.

E LEARNING CAHN-HILLIARD EQUATION WITH GNNS

We use the same setup as in Section 4.2 with 75% of the nodes in the grid and train the models for 1500 epochs. Instead of an MLP we use a GNN with messaging and aggregation networks being MLPs with two hidden layers of size 64 and LeakyReLU nonlinearities (negative slope 0.2). For each node, the GNN evaluates the output as

\frac{du_i}{dt} = \gamma\left( \frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} \phi\left(u_i^{proj}, u_j^{proj}, x_{ij}^{proj}\right),\; u_i^{proj} \right),    (26)

where u_i^{proj} and u_j^{proj} are linear projections of the state at nodes i and j, and x_{ij}^{proj} is a linear projection of the pair consisting of the distance between nodes i and j and a unit vector pointing from node i to node j. All projections have dimension 16. We compare constrained and unconstrained models. We use the PWL, PWQ and cubic RBF bases. Results of the experiment are shown in Figure 13. The figure shows that the relative errors and constraint violations of all models are significantly higher than for the MLP-based models.

F EXTRA FIGURES
1. What is the main contribution of the paper regarding incorporating constraints into learnable PDE models?
2. What are the strengths of the proposed method, particularly in representing the PDE solution in a basis?
3. Do you have any questions regarding the experimental section, such as the choice of benchmarks, the lack of baselines, or the representation of error bars in figures?
4. Are there any minor issues in the notation or explanations in the paper that could be clarified?
Summary Of The Paper Review
Summary Of The Paper

The authors present a method to incorporate constraints into the output of learnable PDE models. They cover pointwise, differential, and integral constraints. They achieve this by representing the PDE solution in a basis, as is common for variational methods (e.g. the pseudo-spectral and finite element methods). The neural network outputs interpolation coefficients for each time step into the future. To implement the constraints, they have two methods: 'soft constraints', whereby a constraint-breaking penalty is applied to the neural network output, and 'hard constraints', whereby the output of the network is projected onto constraint-satisfying solutions. This latter method is achieved by solving a convex programme. The authors test on a variety of benchmarks, demonstrating that their method works, although it is hard to tell from the experimental section whether these results are significant.

Review

Advantages
- The submission is well written and clear up until the experimental section.
- The experiments show that the addition of constraints helps to learn a model that does not violate said constraints.
- The central method is sound and simple enough to be reimplemented. That said, I think some experimental details are missing and I could not reimplement the experiments; since code will be released, this is not such a big issue.

Queries
- Equation 1: can the dynamics F also explicitly depend on t? As written, it does not.
- Equation 7: I'm confused about the dimensionality of \alpha_j. In Equation 6, it is a p-dimensional quantity, because u(t, x) \in R^p. Is it the case that u(t, x) is now scalar-valued, since h: R \to R and \gamma_m: R \to R? This should be made explicit in the text.
- Equation 10: what is g?
- Section 4.1: I think it would be more helpful to the reader if the experimental setups from [37] and [10] were described more completely in the main text. It is difficult to gauge from what is written what exactly is going on and whether your results are meaningful.
- Section 4.2: Could you provide intuition for why the MLP would work better than the GNN? I would hazard a guess that the GNN does not have a large enough receptive field. Since you only use a single layer of message passing from what I see in the appendix, the receptive field of an output neuron is the 1-step neighborhood due to the Delaunay triangulation. By contrast, the MLP receives the full domain as its receptive field.
- There seems to be a lack of baselines in the experimental section after Section 4.1. I am not entirely sure why this is. Given that it is mentioned in the related work that there is a rich literature on incorporating constraints, both hard and soft, into learnable PDE models, I would expect to have seen some in the experiments.
- What do the error bars represent in Figures 4 and 7?

Minor notes
- Equation 2: the ordering of x and t in du(x, t)/dt does not align with elsewhere in the submission, where you have used u(t, x).
- Equation 2: Please be more specific about what the notation F_\theta(x_i, x_N(i), ...) means. For instance, what is x_N(i) or u_N(i)? My understanding is that you have replaced the differential operator F with a local parametrised operator F_\theta over neighbourhoods N(i). The notation and underlying assumptions here should be explained more precisely.
ICLR
Title Enforcing physics-based algebraic constraints for inference of PDE models on unstructured grids

Abstract The lack of relevant physical constraints in data-driven models of physical systems, such as neural-network-parameterized partial differential equations (PDEs), might lead to unrealistic modeling outcomes. The majority of approaches to solving this problem are based on forcing a model to satisfy a set of equations representing physical constraints. Currently available approaches can enforce only a very limited set of constraints and are applicable only to uniform spatial grids. We propose a method for enforcing general pointwise, differential and integral constraints on unstructured spatial grids. Our method is based on representing a model’s output in terms of a function approximation and enforcing constraints on that approximation. We demonstrate the wide applicability and strong performance of our approach in data-driven learning of dynamical PDE systems and distributions of physical fields.

1 INTRODUCTION

Multiple works have shown the capability of neural networks to solve complex physical problems and learn the behavior of physical systems from data. Examples include learning and solving ordinary differential equations (ODEs) [6], partial differential equations (PDEs) [28; 20] and rigid body dynamics [31; 5]. Purely data-driven models are typically not forced to satisfy the physical constraints of the system that generated the data. This might lead to unrealistic predictions that violate known properties of the underlying physical system. Incorporating relevant constraints makes better use of the available data and renders predictions more physically plausible.

The field of physics-constrained learning is diverse and offers many approaches to adding constraints to models; we refer the reader to several reviews for details [30; 3; 36; 19]. The approach we consider in this work is based on forcing a model to satisfy algebraic constraints represented by a set of equalities and inequalities. This is the most commonly used approach; it can represent a wide range of constraints and has been shown to work well in many cases [18; 17; 25]. However, while many constraints can be represented algebraically, it is not always clear how to evaluate and enforce them. Currently available approaches to enforcing algebraic constraints are limited to uniform grids and can enforce only a narrow range of constraints (e.g. only pointwise, or specific differential, constraints); see Section 5 for details of related work. Such approaches can be readily applied to models based on convolutional neural networks (CNNs) but cannot be extended to recently developed models based on graph neural networks (GNNs) [33; 27; 15] and other models working on unstructured grids.

We propose a much more general method which enforces pointwise, differential and integral constraints on unstructured spatial grids, and we demonstrate its applicability in learning PDE-driven dynamical systems and distributions of physical fields. The method is based on using a model’s output at the nodes of a grid to construct an interpolant and applying constraints directly to that interpolant (Section 3). Code and data will be made publicly available.

2 BACKGROUND

PDE-driven dynamical systems. Many physical systems can be described in terms of PDEs. Such systems are defined on a bounded domain on which they evolve over time.
We consider continuous dynamical systems with state u(t,x) ∈ R^p that evolves over time t ∈ R≥0 and spatial locations x ∈ Ω ⊂ R^D. For physical systems, D is typically limited to {1, 2, 3}, although our method works with any value of D. We assume the system is governed by an unknown PDE

\frac{\partial u(t,\mathbf{x})}{\partial t} = F\left(\mathbf{x},\, u(t,\mathbf{x}),\, \nabla_{\mathbf{x}} u(t,\mathbf{x}),\, \nabla^2_{\mathbf{x}} u(t,\mathbf{x}),\, \ldots\right)    (1)

which describes the temporal evolution of the system in terms of the locations x, the state u and its first- and higher-order partial derivatives w.r.t. x. The goal of a data-driven PDE model is to learn the dynamics F from data.

Data for learning F is collected by measuring the state of the system at observation locations (x₁, ..., x_N) over increasing time points (t₀, ..., t_M). This results in a dataset (y(t₀), ..., y(t_M)), where y(t_i) = (u(t_i, x₁), ..., u(t_i, x_N)) is a collection of observations. The dataset is used to train the model to predict (y(t₁), ..., y(t_M)) starting from the initial state y(t₀). Training is typically done by minimizing an average loss between the model’s predictions u(t) and the data y(t).

PDE models differ in the restrictions they impose on the time points (temporal grid) and observation locations (spatial grid). Some models require both grids to be uniform [23]; other models relax these requirements and allow arbitrary spatial [27] and spatio-temporal grids [15]. We build our algebraic constraint method on the model from [15], as it is the most general. The model is based on applying the method of lines [32] to Equation 1, which results in a system of ODEs

\dot{u}(t) := \begin{pmatrix} \frac{du(t,\mathbf{x}_1)}{dt} \\ \vdots \\ \frac{du(t,\mathbf{x}_N)}{dt} \end{pmatrix} \approx \begin{pmatrix} F_\theta(\mathbf{x}_1, \mathbf{x}_{\mathcal{N}(1)}, u_1, u_{\mathcal{N}(1)}) \\ \vdots \\ F_\theta(\mathbf{x}_N, \mathbf{x}_{\mathcal{N}(N)}, u_N, u_{\mathcal{N}(N)}) \end{pmatrix}    (2)

which approximates the solution of Equation 1 at the observation locations x_i using their neighboring points N(i), where x_{N(i)} and u_{N(i)} are the neighbors’ positions and states, respectively, and u_i is u(t, x_i). The approximate solution converges to the true solution as N increases. The true dynamics F is approximated by a parametric model Fθ whose parameters θ are learned by minimizing the difference between the model’s predictions

u(t) = u(0) + \int_0^t \dot{u}(\tau)\, d\tau    (3)

and the data y(t). The integral in Equation 3 is solved using a numerical ODE solver. In [15], the function Fθ was represented by a graph neural network (GNN) which takes the states and locations at an observation point i and its neighboring points N(i). The observation points are connected into a grid using Delaunay triangulation, which allows N(i) to be naturally defined as the set of points connected to point i. However, Fθ can be represented by other models, and a different neighbor selection criterion can be used. The model parameters θ are learned by minimizing the MSE between y(t) and u(t)

L_{data} = \frac{1}{M} \sum_{i=1}^{M} \left\| u(t_i) - y(t_i) \right\|_2^2.    (4)

The gradient of L_data w.r.t. θ is evaluated using the adjoint method as shown in [7].

Generative Adversarial Networks. One of the tasks that we consider is learning distributions of physical fields. For that purpose we utilize generative adversarial networks (GANs). A GAN is a generative model consisting of a generator and a discriminator [12]. The generator, G, learns to transform a random variable Z ∼ p_Z over a latent space Z to the data space Y in such a way that the discriminator, D, cannot tell the difference between samples generated by G and samples from the data distribution p_data. Both G and D are learned by solving the following minimax problem

\min_G \max_D V(G, D) = \mathbb{E}_{Y \sim p_{data}}\left[\log D(Y)\right] + \mathbb{E}_{Z \sim p_Z}\left[\log\left(1 - D(G(Z))\right)\right].    (5)

A solution to this problem exists and is unique, with the optimal generator perfectly mimicking the data distribution [12].
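To make the notation of Equation 2 concrete before moving on to the constraint machinery, below is a minimal sketch of a per-node dynamics F_θ(x_i, x_N(i), u_i, u_N(i)). This is our own illustration, not the code of [15]: the mean aggregation over edges stands in for the message passing used there, and all sizes are assumptions.

```python
import torch

class LocalDynamics(torch.nn.Module):
    """du_i/dt from node i's state/position and those of its Delaunay neighbours N(i)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.msg = torch.nn.Sequential(torch.nn.Linear(6, hidden), torch.nn.Tanh(),
                                       torch.nn.Linear(hidden, hidden))
        self.agg = torch.nn.Sequential(torch.nn.Linear(hidden + 1, hidden), torch.nn.Tanh(),
                                       torch.nn.Linear(hidden, 1))

    def forward(self, x, u, edges):
        # x: (N, 2) positions, u: (N,) states, edges: (2, E) index pairs (i, j), j in N(i)
        i, j = edges
        m = self.msg(torch.cat([x[i], x[j], u[i, None], u[j, None]], dim=-1))
        agg = torch.zeros(u.shape[0], m.shape[-1]).index_add_(0, i, m)
        deg = torch.zeros(u.shape[0]).index_add_(0, i, torch.ones_like(i, dtype=u.dtype))
        agg = agg / deg.clamp(min=1.0)[:, None]                    # mean over N(i)
        return self.agg(torch.cat([agg, u[:, None]], dim=-1)).squeeze(-1)  # du_i/dt, Eq. (2)
```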
3 METHODS

In this section we present an approach to evaluating pointwise, differential and integral constraints on unstructured grids. Then, we demonstrate how this approach can be used to enforce arbitrary soft and linear hard constraints.

3.1 EVALUATING CONSTRAINTS ON UNSTRUCTURED GRIDS

We assume the data y(t) is available at observation points (x₁, ..., x_N) and time points (t₁, ..., t_M) and that a model makes predictions u(t) at these points. We assume the predictions to be evaluations of an unknown underlying function. Since the underlying function is unknown, we cannot impose constraints on it directly. Instead, we approximate it by an interpolant u_f(t,x) and impose constraints on u_f(t,x) (Figure 1). The approximation is constructed from u(t) by placing a basis function at each x_i and representing u_f(t,x) as

u_f(t,\mathbf{x}) = \sum_{j=1}^{N} \alpha_j(t)\, \phi_j(\mathbf{x}),    (6)

where φ_j is a scalar basis function at x_j and α_j ∈ R^p. The coefficients α_j(t) are obtained from u(t) (see Section 3.4). Next, we show how to evaluate constraints on u_f(t,x) using basic building blocks. To avoid cluttered notation, we consider equality constraints and assume u(t,x), x ∈ R. Generalization to inequality constraints, vector fields and higher spatial dimensions is straightforward.

Pointwise constraints. Consider points z = (z₁, ..., z_K) in Ω at which a pointwise constraint h(u_f(t, z_i)) = 0 should be evaluated. Assume the function h : R → R is representable in terms of a finite number of functions γ_m(u_f(t, z_i)) : R → R indexed by m. For example, should the constraint be h(u_f) = 3u_f + u_f² = 0, then we would define γ₁(u_f) = u_f, γ₂(u_f) = u_f² and h(u_f) = 3·γ₁(u_f) + γ₂(u_f) = 0. Then, h can be evaluated by evaluating each γ_m as

\gamma_m(u_f(t, z_i)) = \gamma_m\left( \sum_{j=1}^{N} \alpha_j(t)\, \phi_j(z_i) \right) = \gamma_m\left( \Phi_{i,\cdot}\, \alpha(t) \right),    (7)

where α(t) = (α₁(t), ..., α_N(t))^T, Φ is a K-by-N matrix with elements Φ_{i,j} = φ_j(z_i), and Φ_{i,·} is the i’th row of Φ.

Differential constraints. Consider the same setup as before, but now h(u_f(t, z_i)) = 0 contains differential operators and is representable in terms of a finite number of functions ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q : R → R indexed by m, where the derivative order q may differ for each m. For example, should the constraint be h(u_f) = 3u_f + u_f · ∂u_f²/∂x = 0, then we would define γ₁(u_f) = u_f, γ₂(u_f) = u_f² and h(u_f) = 3·γ₁(u_f) + γ₁(u_f)·∂γ₂(u_f)/∂z = 0. Then, h can be evaluated by evaluating each ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q using the generalization of the chain rule (Appendix A), which contains only two types of terms. The first type, dγ_m/du_f, ..., d^q γ_m/du_f^q, can be evaluated using automatic differentiation, while the second type, ∂u_f/∂z_i, ..., ∂^q u_f/∂z_i^q, can be evaluated as

\frac{\partial^q u_f}{\partial z_i^q} = \sum_{j=1}^{N} \alpha_j(t)\, \frac{\partial^q \phi_j(z_i)}{\partial z_i^q} = \Phi^{(q)}_{i,\cdot}\, \alpha(t),    (8)

where \Phi^{(q)}_{i,j} = \partial^q \phi_j(z_i) / \partial z_i^q. Mixed partial derivatives can be handled in a similar way (Appendix A).
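In code, Equations (7) and (8) reduce to matrix-vector products with precomputed basis matrices. The sketch below is illustrative (ours, with placeholder tensors), showing a pointwise evaluation and a first-order differential constraint turned into a soft hinge penalty.

```python
import torch

# Phi[i, j]  = phi_j(z_i); Phi1[i, j] = d phi_j / dz evaluated at z_i.
# Both matrices depend only on the grid and basis, so they are built once.
Phi = torch.rand(200, 101)      # placeholder values
Phi1 = torch.rand(200, 101)
alpha = torch.rand(101, requires_grad=True)   # coefficients alpha(t) from the model

u_at_z = Phi @ alpha            # u_f(t, z_i); Eq. (7) with gamma_1 the identity
du_at_z = Phi1 @ alpha          # d u_f / dz at z_i; Eq. (8) with q = 1

# e.g. a soft inequality constraint d u_f / dz >= 0 as a hinge penalty:
penalty = torch.relu(-du_at_z).pow(2).mean()
penalty.backward()              # gradients flow back to alpha, and hence to the model
```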
Integral constraints. Consider the same setup as before but with h(u_f(t,x)) = \int_\Omega \tau(u_f(t,x))\,dx = 0, where the function τ : R → R is representable in terms of functions γ_m(u_f(t, z_i)) : R → R similarly to the pointwise constraints. Then, \int_\Omega \tau(u_f(t,x))\,dx can be evaluated using a numerical integration technique, e.g. the midpoint rule, Gaussian quadrature or Monte Carlo integration, as

\int_\Omega \tau(u_f(t,\mathbf{x}))\, d\mathbf{x} \approx \sum_{i=1}^{K} \tau(u_f(t, z_i))\, \mu_i,    (9)

where K is the number of integration points, μ_i are integration coefficients which depend on the grid and integration method, and τ(u_f(t, z_i)) is evaluated as in Equation 7.

3.2 SOFT CONSTRAINTS

Soft constraints are implemented by minimizing the loss L_data + λ r(h(u_f)), where λ ∈ R and L_data is defined as in Equation 4. We set

r(h(u_f)) = \frac{1}{KM} \sum_{i=1}^{K} \sum_{j=1}^{M} h(u_f(t_j, z_i))^2

for pointwise and differential constraints, and

r(h(u_f)) = \frac{1}{M} \sum_{j=1}^{M} h(u_f(t_j, \mathbf{x}))^2

for integral constraints.

3.3 HARD CONSTRAINTS

Our method allows hard constraints to be implemented by projecting the interpolant u_f(t,x) onto a subset of functions which satisfy the required constraints. Namely, if u_f(t,x) does not satisfy constraints g and h, it is projected onto the feasible subset by solving the following optimization problem

\min_{\hat{u}_f \in V_\phi} \| u_f - \hat{u}_f \|^2_{L_2} \quad \text{s.t.} \quad h(\hat{u}_f) = 0,\; g(\hat{u}_f) \le 0,    (10)

where the projection is denoted by û_f(t,x) and V_φ is spanned by the basis functions. Using the basis representations u_f(t,x) = \sum_{i=1}^{N} α_i(t)φ_i(x) and û_f(t,x) = \sum_{i=1}^{N} β_i(t)φ_i(x), we can rewrite the optimization problem (10) as

\min_{\beta(t) \in \mathbb{R}^N} (\alpha(t) - \beta(t))^T \hat{\Phi}\, (\alpha(t) - \beta(t)) \quad \text{s.t.} \quad h(\hat{u}_f) = 0,\; g(\hat{u}_f) \le 0,    (11)

where β(t) = (β₁(t), ..., β_N(t))^T and \hat{\Phi}_{i,j} = \int_\Omega \phi_i(\mathbf{x}) \phi_j(\mathbf{x})\, d\mathbf{x}. To train the model end-to-end, problem (11) should be differentiable. Agrawal et al. [1] proposed differentiable convex optimization, which can be used in this case if problem (11) can be expressed in a DPP-compliant way (see [1]). To do that, we restrict ourselves to constraints that can be expressed as an equality or inequality between Aβ(t) and b, where A is a constant matrix and b is a constant vector. This formulation admits pointwise, differential and integral constraints on the untransformed u_f. The objective function is convex since its Hessian is positive semidefinite, i.e. for any v ∈ R^N

v^T \hat{\Phi} v = \sum_{i,j=1}^{N} v_i v_j \hat{\Phi}_{i,j} = \sum_{i,j=1}^{N} \langle v_i \phi_i, v_j \phi_j \rangle_{L_2} = \left\langle \sum_{i=1}^{N} v_i \phi_i,\, \sum_{j=1}^{N} v_j \phi_j \right\rangle_{L_2} \ge 0.    (12)

This allows us to solve problem (11) and differentiate its solution β*(t) w.r.t. α(t). The model parameters are found by minimizing the loss function L_data + λ L_proj, where λ ∈ R and L_data is defined as in Equation 4 but with u(t_i) replaced by û(t_i) = (û_f(t_i, x₁), ..., û_f(t_i, x_N)). We set

L_{proj} = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \| u_f(t_j, \mathbf{x}_i) - \hat{u}_f(t_j, \mathbf{x}_i) \|_2^2.

The second term makes the optimization procedure prefer models that predict u_f close to the feasible set of problem (11). We note that the proposed approach is currently limited to small-scale problems due to existing computational bottlenecks in the implementation of differentiable convex optimization [1].

3.4 BASIS FUNCTIONS

Selecting an appropriate basis is crucial for the efficiency and applicability of the proposed method. Ideally, the basis should allow efficient construction of u_f(t,x) from u(t), contain no tunable parameters, and lead to sparse matrices Φ, Φ^{(q)} and \hat{Φ}. We consider bases from two families: Lagrange basis functions and radial basis functions (RBFs).

Lagrange basis functions do not have tunable parameters and have compact support, which leads to sparse Φ, Φ^{(q)} and \hat{Φ}. For the piecewise linear basis, the interpolant u_f(t,x) can be constructed directly from the predictions by setting α(t) = u(t). However, constructing u_f(t,x) for a higher-order basis, e.g.
piecewise quadratic, requires the model to make predictions not only at the observation points, but also at some extra points where no data is available. In Section 4 we demonstrate one approach to solving this problem. After extending the state u(t) by predictions at the extra nodes, the coefficients α(t) can be evaluated similarly to the piecewise linear basis. In this work we use the piecewise linear (PWL) and piecewise quadratic (PWQ) bases. Examples of PWL basis functions are shown in Figure 2.

Radial basis functions have a wider range of properties. Some RBFs have tunable parameters, some don’t. The matrices Φ, Φ^{(q)} and \hat{Φ} evaluated with RBFs are typically dense, but RBFs with compact support exist (e.g. the bump function). The interpolant u_f(t,x) can be constructed by evaluating α(t) = K⁻¹ u(t), where K⁻¹ is the inverse of the interpolation matrix of the given RBF, K_{ij} = φ(‖x_i − x_j‖), with φ an RBF and x_i, x_j observation locations. In this work we use the cubic RBF basis, i.e. φ(r) = r³. We use PyTorch [26] to handle sparse matrices and to evaluate K⁻¹ u(t) in a differentiable way.

4 EXPERIMENTS

In the following experiments we use the relative error between the data y(t) and model predictions u(t), defined as ‖y(t) − u(t)‖₂ / ‖y(t)‖₂, and consider only soft constraints. We present an experiment with hard constraints implemented as shown in Section 3.3 in Appendix D. Data generation is described in Appendix B. Training, testing and modeling details are in Appendix C. All experiments were run on a single NVIDIA Quadro P5000 GPU. All error bars represent one standard deviation of the results over five random seeds.

4.1 REPLACING EXISTING METHODS

In this experiment we take existing models which incorporate physics-based constraints in training and replace their constraint enforcing approaches with ours. We consider two works. First, [37], which trains a GAN to produce divergence-free vector fields using a zero-divergence constraint. Second, [10], which predicts warping fields driving the evolution of sea surface temperature by observing snapshots of the temperature over time while enforcing gradient and divergence constraints on the warping fields (see Appendix C for more details). Both models work on uniform grids, which allows them to evaluate constraints using finite differences. For comparison, we replace the finite differences with our method and observe how this changes the models’ performance. In both cases we use the PWL basis.

For [37] we track the mean divergence and discriminator loss. Results of the original approach are as follows: mean divergence 0.079 and discriminator loss 0.091. With our method the mean divergence was 0.014 and the discriminator loss was 0.088. Both approaches result in similar discriminator losses, but our approach produces a smaller mean divergence (smaller is better). Our method increased the runtime per epoch by 6%. For [10] we track the total, divergence and smoothness losses which, with the original approach, were 0.139, 8.4 × 10⁻⁵ and 1.51 × 10⁻⁴, respectively. With our approach the losses were 0.139, 8.3 × 10⁻⁵ and 1.51 × 10⁻⁴, respectively. Both methods produce very similar results. Our method increased the runtime per epoch by 30%. Overall, replacing existing constraint enforcing approaches with ours on data from uniform grids resulted in comparable model performance, except for the runtime, which was slightly increased.
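To illustrate what "replacing finite differences" amounts to in practice, the sketch below shows how a finite-difference divergence penalty like the one in [37] can be swapped for the basis evaluation of Section 3.1. Dx and Dy stand for the derivative matrices Φ^{(1)} for ∂/∂x and ∂/∂y at the constraint evaluation points; all tensors are placeholders rather than the actual matrices (with a PWL basis, α(t) = u(t), so the node values serve directly as coefficients).

```python
import torch

Dx = torch.rand(500, 1050)   # placeholder derivative matrices; in practice sparse,
Dy = torch.rand(500, 1050)   # built once from the PWL basis on the unstructured grid

def divergence_penalty(u1, u2):
    # u1, u2: (batch, n_nodes) components of a generated vector field at the grid nodes
    div = u1 @ Dx.T + u2 @ Dy.T          # du1/dx + du2/dy at the evaluation points
    return div.pow(2).mean()             # soft zero-divergence penalty
```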
4.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

We start with the 1D Cahn-Hilliard equation

\frac{\partial u}{\partial t} = 2\nabla^2\left(u(1-u)^2 - u^2(1-u) - \epsilon^2 \nabla^2 u\right)    (13)

which is known to conserve the state u, i.e. h(u) = \int_\Omega u(t,x)\,dx - C = 0 at all time points, where C = \int_\Omega u(0,x)\,dx is a constant. This is an example of a conservation law; such laws are abundant in nature and are an important class of constraints that data-driven models of physical systems should satisfy. Conservation laws can be expressed in differential and integral forms, and this experiment demonstrates how the integral form can be enforced. The constraint is evaluated using the midpoint rule as shown in the previous section, with a single γ₁ being the identity function. We use the PWL, PWQ and cubic RBF bases and compare the results to an unconstrained model.

For training we use 30, 60 and 120 simulations, while the test set consists of 60 simulations. Simulations in the training/test data last for 0.0015/0.0030 seconds and contain 50/100 uniformly spaced time points. The full spatial grid consists of 101 uniformly spaced nodes. We randomly sample 50%, 75% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. An example of a spatial grid with 50% of the nodes is shown in Figure 3. We evaluate the constraint on a uniform grid with 200 nodes placed on top of the original grid.

To learn the dynamics of the system we use the model from [15] (Section 2). We found that using a GNN produced poor results. For that reason we represented the function Fθ with a multilayer perceptron (MLP) which updates the state of each node based on the states of all other nodes in the grid (results for a GNN are in Appendix E). The MLP contains two hidden layers with LeakyReLU nonlinearities. The number of hidden neurons was set to the number of nodes in the grid.

The coefficients α(t) for the PWL and cubic bases can be evaluated directly from the model predictions at the grid nodes, but the PWQ basis requires extra predictions to be available between the nodes. This is problematic since there is no data at these points to guide the model’s predictions. To solve this problem we introduce a small MLP which is applied to consecutive pairs of nodes; a sketch is given after this section's results. The MLP takes the states at both nodes and the distance between them as input and estimates the state at the midpoint between the two nodes. The MLP is trained jointly with the main model and uses only the constraint-related loss term during training.

For testing, we construct the interpolant u_f(t,x) using the thin plate spline basis (φ(r) = r² log r) and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and the different bases, and avoids biasing or overfitting to the bases used for training.

Figure 4 shows the results of the experiment. We observe that changing the node fraction does not significantly affect the relative errors but has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show similar or better performance than the unconstrained model. Among all bases, the cubic basis consistently results in lower relative errors and constraint violations. However, the simpler PWL basis often performs on par with the cubic basis, especially on denser spatial grids.
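The auxiliary midpoint network referenced above might look as follows. This is an illustrative sketch; the hidden size and the toy grid are our assumptions rather than reported values.

```python
import torch

class MidpointNet(torch.nn.Module):
    """Predicts the state at the midpoint between two consecutive nodes, supplying
    the extra coefficients required by the piecewise quadratic (PWQ) basis."""
    def __init__(self, hidden=32):      # hidden size is illustrative
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.LeakyReLU(0.2),
            torch.nn.Linear(hidden, 1))

    def forward(self, u_left, u_right, dist):
        # inputs: states at the two nodes and the distance between them
        return self.net(torch.stack([u_left, u_right, dist], dim=-1)).squeeze(-1)

midnet = MidpointNet()
u = torch.rand(51)                       # model predictions at the grid nodes
x = torch.sort(torch.rand(51)).values    # irregular 1D grid (placeholder)
u_mid = midnet(u[:-1], u[1:], x[1:] - x[:-1])   # one midpoint state per node pair
```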
We also observe that coarsening of the grid increases the constraint violation gap between constrained and unconstrained models and that this gap does not seem to close as we increase the amount of training data. The PWQ basis performs rather poorly on fine grids, which is likely due to a suboptimal approach to evaluating the state at the extra nodes. A better approach could consider not only pairs of points but also larger neighborhoods. Nonetheless, the PWQ basis achieves good performance on coarse grids, which shows that piecewise bases of order higher than one could potentially be used to enforce constraints. This would allow scaling to grids with a large number of nodes due to the sparsity of the constraint matrices and the efficient evaluation of α.

4.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

We impose constraints on a 2D system governed by the heat equation ∂u/∂t = ∇²u for which the generated initial conditions (ICs) are monotone in one direction. Since the ICs are monotone, the state u remains monotone at all time points as well. We enforce the monotonicity constraint as ∂u/∂x ≥ 0. The constraint is evaluated as shown in the previous section, with γ₁ being the identity function.

For training we use 15, 30 and 90 simulations, while the test set consists of 120 simulations. Simulations in the training/test data last for 0.1/0.2 seconds and contain 21/41 uniformly spaced time points. The full spatial grid consists of 1087 nodes. We randomly sample 33%, 66% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. The spatial grid with 100% of the nodes is shown in Figure 5. The constraint is evaluated at the nodes of a uniform 51 × 51 grid placed on top of the original grid.

To learn the dynamics of the system we use the model from [15] directly, with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. During testing, we use the predictions of the models to construct an interpolant u_f(t,x) using the thin plate spline basis and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and the different bases.

Figure 7 shows the results of the experiment. We observe that changing the node fraction increases the relative errors of all models equally and has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show slightly higher or comparable relative errors but noticeably lower constraint violations than the unconstrained model. The cubic and PWL bases perform equally well in this case. Similarly to the experiment in the previous section, we observe that coarsening of the grid introduces a larger constraint violation gap between constrained and unconstrained models and that this gap does not seem to close as we increase the amount of training data. Figure 6 shows the qualitative difference between the predictions of constrained and unconstrained models. It can be noted that predictions from the constrained model have noticeably smoother contours, making the field more monotone in the horizontal direction.

4.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

We demonstrate the effect of adding constraints to a GAN when learning distributions of physical fields on unstructured grids. We use the Wasserstein GAN (WGAN) [2] as a more stable variant of a GAN. We use MLPs as the generator and discriminator.
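For reference, an illustrative WGAN critic update with weight clipping is sketched below, following the training details given in Appendix C.4 (RMSProp, learning rate 1e-5, clipping to [−0.01, 0.01]). The tiny networks are stand-ins; Appendices C.4.1 and C.4.2 give the real layer sizes.

```python
import torch

generator = torch.nn.Sequential(torch.nn.Linear(128, 2010), torch.nn.Tanh())  # stand-in
discriminator = torch.nn.Sequential(torch.nn.Linear(2010, 1))                 # stand-in
opt_d = torch.optim.RMSprop(discriminator.parameters(), lr=1e-5)

def critic_step(real):
    z = torch.randn(real.shape[0], 128)            # 128-dim latent, per Appendix C.4
    loss_d = discriminator(generator(z).detach()).mean() - discriminator(real).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in discriminator.parameters():           # weight clipping of WGAN [2]
        p.data.clamp_(-0.01, 0.01)
    return loss_d.item()
```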
Unconstrained and constrained models are trained for 1.2M iterations. Constraints are enabled only after 600k iterations. Constrained models are trained similarly to the unconstrained ones but with a modified generator loss defined as L_G + λ ln(1 + L_C), where L_G is the standard generator loss and L_C is the constraint-based loss. We define L_C as the mean value of h(u)², where h is a constraint evaluated at the centroid of each cell in the grid.

4.4.1 ZERO-DIVERGENCE FIELDS

Divergence-free vector fields are often encountered in solutions of fluid dynamics problems. The divergence-free constraint on a vector field u(x,y) = (u₁(x,y), u₂(x,y))^T is defined as h(u) = ∂u₁/∂x + ∂u₂/∂y = 0. The constraint is enforced using the PWL basis. We generated a dataset with 10k divergence-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely divergence-free but have a small residual divergence due to discretization errors.

Figure 9a shows that there is a clear difference in the quality of the samples generated by the unconstrained and constrained models. Samples from the constrained model are smoother and more similar to the data. The quantitative comparison of the samples presented in Figure 8 shows that the constrained model generates fields that have a much lower constraint violation and a divergence distribution very similar to that of the data.

4.4.2 ZERO-LAPLACIAN FIELDS

Fields with zero Laplacian represent solutions to some PDEs, for example the steady-state heat equation. The zero-Laplacian constraint on a scalar field u(x,y) is defined as h(u) = ∂²u/∂x² + ∂²u/∂y² = 0. The constraint is enforced using the cubic basis, as the PWL basis has zero second derivatives everywhere. We generated a dataset with 10k Laplacian-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely Laplacian-free due to discretization errors.

Results of the experiment are shown in Figures 9b and 8. Similarly to the divergence-free case, the visual quality of the fields generated by the constrained model is significantly better than for the unconstrained model. The quantitative comparison of the samples presented in Figure 8 shows that the constrained model generates fields that have a much lower constraint violation and a Laplacian distribution very similar to that of the data.

5 RELATED WORK

Soft constraints. Soft constraints are widely used due to being relatively easy to implement. Examples include lake temperature prediction [18; 16], traffic simulation [22], and fluid and climate modeling [11; 10; 4], where constraints are evaluated pointwise or using finite differences.

Hard constraints. Approaches to implementing hard constraints are diverse and can be categorized as processing the output of an unconstrained model [4; 17; 24; 34] and designing a model that produces feasible predictions by default [23; 25; 14; 9; 13; 8; 38].

Constrained PDE models. Current approaches to enforcing soft [10; 11] and hard [21; 25; 17] constraints are limited to specific types of constraints and spatial grids. For example, [25; 17] implement only hard differential constraints, and both are limited to uniform grids. Uniform grids allow constraints to be evaluated efficiently, e.g. using finite differences [10; 21; 25] or the fast Fourier transform [17], but assuming that the data lies on a uniform grid might be limiting.
Constrained GANs. Works such as [37; 17] showed how physics-based constraints benefit training and the quality of generated samples, but they are also limited to uniform grids.

6 CONCLUSION

We presented a general approach to enforcing algebraic constraints on unstructured grids and showed how it can be used to enforce soft and hard constraints. We demonstrated the applicability of the approach to learning PDE-driven dynamical systems and distributions of physical fields. We considered two families of basis functions for constructing the interpolant and showed how Lagrange basis functions of order higher than one can be used. Our method allows us to drop the unrealistic assumption about the uniformity of spatial grids and shows promising results on various tasks.

REPRODUCIBILITY STATEMENT

All details required to reproduce the experiments are provided in Section 4 and the Appendices. Code and data used to run the experiments will be made publicly available after the review process.

A GENERALIZED CHAIN RULE AND HANDLING MIXED PARTIAL DERIVATIVES

Let y = g(x₁, ..., x_n) with all arguments being either identical, distinct or grouped. Then, partial derivatives of f(y) can be evaluated using Faà di Bruno’s formula

\frac{\partial^n f(y)}{\partial x_1 \cdots \partial x_n} = \sum_{\pi \in \Pi} f^{(|\pi|)}(y) \prod_{B \in \pi} \frac{\partial^{|B|} y}{\prod_{j \in B} \partial x_j},

where Π is the set of all partitions of the set {1, ..., n}, B runs through the elements of the partition π, f^{(m)} denotes the m’th derivative, and |·| is cardinality. The formula consists of two types of terms: f^{(|\pi|)}(y), which can be evaluated using automatic differentiation, and \frac{\partial^{|B|} y}{\prod_{j \in B} \partial x_j}, which can be evaluated as shown in Equation 8. In the case that all x₁, ..., x_n are identical, the mixed derivative \frac{\partial^n f(y)}{\partial x_1 \cdots \partial x_n} reduces to \frac{\partial^n f(y)}{\partial x_1^n}.

B DATA GENERATION

In all cases we run the simulation on a fine grid and then interpolate the results to a coarser grid, represented as the "full grid" in the experiments.

B.1 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

Training and testing data were obtained by solving

\frac{\partial u}{\partial t} = 2\nabla^2\left(u(1-u)^2 - u^2(1-u) - \epsilon^2 \nabla^2 u\right)    (14)

on a unit interval with periodic boundary conditions and ε = 0.04. The domain was represented by a uniform grid with 100 nodes and the time step was set to 1.0e-6 sec. The initial conditions u₀(x) were generated as follows

\tilde{u}_0(x) = \sum_{i=1}^{10} \left(\lambda_i \cos(2\pi i (x - s)) + \gamma_i \sin(2\pi i (x - s))\right) + \frac{\lambda_0}{2},    (15)

u_0(x) = \frac{\tilde{u}_0(x) - \min \tilde{u}_0(x)}{\max \tilde{u}_0(x) - \min \tilde{u}_0(x)},    (16)

where λ_i, γ_i ∼ Unif(−1, 1) and s ∼ Unif(0, 1). Examples of the simulations are shown in Figure 10.

B.2 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

Training and testing data were obtained by solving

\frac{\partial u}{\partial t} = D \nabla^2 u    (17)

on a unit square with zero Neumann boundary conditions and D = 0.2. The domain was represented by an unstructured grid with 2971 nodes and the time step was set to 0.001 sec. The initial conditions u₀(x) were generated as

f(x) = \sum_{i=0}^{6} \omega_i x^i,    (18)

g(y) = \frac{1}{2} \sum_{i=1}^{3} \left(\lambda_i \cos(2\pi i (y + s)) + \gamma_i \sin(2\pi i (y + s))\right) + \frac{\lambda_0}{2},    (19)

\tilde{u}_0(x, y) = f(x) + g(y),    (20)

u_0(x, y) = \frac{\tilde{u}_0(x, y) - \min \tilde{u}_0(x, y)}{\max \tilde{u}_0(x, y) - \min \tilde{u}_0(x, y)},    (21)

where ω_i ∼ Unif(0.1, 1.1) and λ_i, γ_i, s ∼ Unif(−1, 1). Examples of the simulations are shown in Figure 11.

B.3 GAN WITH A DIVERGENCE CONSTRAINT

The data was generated by sampling random velocity fields and then projecting them onto the space of divergence-free fields. The procedure was as follows.
First, a random velocity field u₀(x, y) was generated on a unit square by generating each component i as

\tilde{u}_{0i}(x, y) = \sum_{k,l=-N}^{N} \lambda_{kl} \cos(kx + ly) + \gamma_{kl} \sin(kx + ly),    (22)

u_{0i}(x, y) = 6 \times \left( \frac{\tilde{u}_{0i}(x, y) - \min \tilde{u}_{0i}(x, y)}{\max \tilde{u}_{0i}(x, y) - \min \tilde{u}_{0i}(x, y)} - 0.5 \right),    (23)

where N = 10 and λ_{kl}, γ_{kl} ∼ N(0, 1). Then, the divergence-free component of u₀(x, y), denoted by u₀*(x, y), was extracted using the projection method: solving ∇·u₀ = ∇²φ for φ and then evaluating u₀*(x, y) = u₀(x, y) − ∇φ. Finally, the data was scaled to [−1, 1].

B.4 GAN WITH A LAPLACIAN CONSTRAINT

The data was generated by solving

\nabla^2 u = 0    (24)

on a unit square with Dirichlet boundary conditions. The domain was represented by an unstructured grid with 2971 nodes. The boundary conditions were obtained by generating random functions u₀(x) and using their boundary values. The functions u₀(x) were generated as

u_0(x, y) = \sum_{k,l=-N}^{N} \lambda_{kl} \cos(kx + ly) + \gamma_{kl} \sin(kx + ly),    (25)

where N = 5 and λ_{kl}, γ_{kl} ∼ N(0, 1). The data was then scaled to [0, 1].

C MODELS, TRAINING AND TESTING

C.1 REPLACING EXISTING METHODS

For our comparisons we considered experiments from two works. Next, we provide some details about these experiments.

The first experiment was taken from [37], Section 3.2. The experiment shows how soft physics-based constraints affect the predictions of a GAN learning a distribution of divergence-free fields. The data is generated on a uniform grid, which allows divergence to be evaluated using finite differences. The constraint is enforced through an extra loss term which penalizes violation of the constraint. The performance metric used is the Frobenius norm of the divergence averaged over all fields in a batch. For training we used the code provided by the authors with the original parameters. We replaced the finite differences in the constraint evaluation function with our method.

The second experiment was taken from [10]. This work deals with the task of predicting sea surface temperatures at future times given snapshots of the temperature at current and previous times. The model proposed by the authors accomplishes this task by taking a sequence of surface temperatures at times t_{i−k}, ..., t_i and predicting the underlying motion field, which is then used to predict the temperature at time t_{i+1}. Insights about the physical properties of the motion field were used to constrain the model’s predictions. Constraints are imposed on the divergence, magnitude and gradients of the motion field. The data is generated on a uniform grid, which allows the constraints to be evaluated using finite differences. The constraints are enforced through extra loss terms which penalize violation of the constraints. The performance metrics used are the MSE between the data and model predictions, the smoothness loss and the divergence loss. For training we used the code provided by the authors with the original parameters. We replaced the finite differences in the constraint evaluation function with our method.

C.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

In all experiments with the Cahn-Hilliard equation we represent the dynamics function Fθ by an MLP with 2 hidden layers and LeakyReLU nonlinearities (negative slope 0.2). The number of neurons in each layer was set to the number of nodes in the spatial grid on which the model was trained. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5.
All models were trained for 1500 epochs using the Rprop optimizer [29] with the learning rate set to 1.0 × 10⁻⁶ and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 2.

C.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

In all experiments with the heat equation we represent the dynamics function Fθ by a GNN with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5. All models were trained for 750 epochs using the Rprop optimizer [29] with the learning rate set to 1.0 × 10⁻⁶ and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 0.1.

C.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

In both cases we used identical architectures and training processes for the constrained and unconstrained models. Both models were trained for 1.2M iterations using the same random seed. Constraints in the constrained model were enabled only after 600k iterations. The base distribution was set to a 128-dimensional isotropic standard normal. Models were trained using the RMSProp optimizer [35] with the batch size and learning rate set to 64 and 0.00001, respectively. The discriminator’s weights were clipped to [−0.01, 0.01].

C.4.1 ZERO-DIVERGENCE FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 2010 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 2010, respectively, and a final hyperbolic tangent nonlinearity applied to the output. We set λ = 0.2.

C.4.2 ZERO-LAPLACIAN FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 1086 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 1086, respectively, and a sigmoid function applied to the output. We set λ = 0.0075.

D CAHN-HILLIARD EQUATION WITH HARD INTEGRAL CONSTRAINTS

Here we demonstrate how the approach to enforcing hard constraints described in Section 3.3 can be used to enforce integral constraints on a nonuniform grid. We use the same setup as in Section 4.2 with 30 training simulations and 50% of the nodes in the grid. We compare three models: an unconstrained model, a model with a soft constraint and a model with a hard constraint. We use the PWL basis during training and testing. Table 1 shows that the relative errors of all three models are practically identical but, as Figure 12 demonstrates, the constraint violations differ significantly. We see that the constraint violations of the model with the hard constraint are zero at all time points, as expected.
Being able to produce predictions that satisfy some constraints exactly might be very useful for some applications. However, as we mention in Section 3.3, this approach to enforcing hard constraints is currently limited to systems with a relatively small number of nodes and is significantly slower than models with soft constraints. We report training times in Table 1.

E LEARNING CAHN-HILLIARD EQUATION WITH GNNS

We use the same setup as in Section 4.2 with 75% of the nodes in the grid and train the models for 1500 epochs. Instead of an MLP we use a GNN with messaging and aggregation networks being MLPs with two hidden layers of size 64 and LeakyReLU nonlinearities (negative slope 0.2). For each node, the GNN evaluates the output as

\frac{du_i}{dt} = \gamma\left( \frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} \phi\left(u_i^{proj}, u_j^{proj}, x_{ij}^{proj}\right),\; u_i^{proj} \right),    (26)

where u_i^{proj} and u_j^{proj} are linear projections of the state at nodes i and j, and x_{ij}^{proj} is a linear projection of the pair consisting of the distance between nodes i and j and a unit vector pointing from node i to node j. All projections have dimension 16. We compare constrained and unconstrained models. We use the PWL, PWQ and cubic RBF bases. Results of the experiment are shown in Figure 13. The figure shows that the relative errors and constraint violations of all models are significantly higher than for the MLP-based models.

F EXTRA FIGURES
1. What are the main contributions and novel aspects introduced by the paper regarding soft and hard constraints in learned models of PDEs on unstructured meshes?
2. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
3. Do you have any questions regarding the implementation of the constraints in the experiments, the training of the differentiable hard constraint solver, or the split of constraints into gamma_n terms?
4. How does the reviewer evaluate the significance and advantages of using unstructured grids and providing a formalism for solving hard constraints?
5. What is the reviewer's opinion on the suitability of the presented mitigation techniques for quadratic basis functions (PWQ)?
6. How do the results of the paper compare to other methods, such as PDEs on regular grids with physics-based losses or Hamiltonian methods?
7. Why does the reviewer think that the constrained samples look 'more similar' to the data in Figs. 6 and 9?
8. What is the reason behind the failure of the GNN in Section 4.2, according to the reviewer?
Summary Of The Paper Review
Summary Of The Paper
The paper introduces and studies several variants of introducing soft and hard constraints into learned models of PDEs on unstructured meshes.

Review
In general, the paper is easy to read. Many details remain vague though, e.g., the implementation of the constraints in the experiments, how to train the differentiable hard constraint solver, how exactly constraints are split into gamma_n terms (and why, as this seems unnecessary), and what a thin plate spline is and why it is a good basis for evaluating constraints. Sections 3.1 and 3.2 could have been framed better by noting that this is simply a finite element formulation of constraints, and that the differential/integral formulations directly correspond to FE differentials/integrals.

As the paper mentions in the introduction, using soft constraints (often on regular grids) in training is an extremely common practice in physics prediction papers. Often, this is not referred to as constraints but as "physics-based"/auxiliary losses (e.g. adding a loss on divergence when modeling incompressible flow), but it amounts to the exact same thing. This paper differentiates itself from these methods by a) using unstructured grids, and b) providing a formalism for solving hard constraints. However, the paper doesn't make a particularly strong case for these choices. First, the results are either on regular grids (4.1), or an irregular meshing of a square domain, but with very uniform edge lengths (4.2-4.3). None of these demonstrate any advantage of irregular meshing, and I expect all examples could have been much more easily solved on regular grids with standard CNN-based methods. Second, hard constraints are presented as one of the contributions, and their description spans quite a bit of the method section. However, as the authors state themselves, their method for solving hard constraints is only feasible for very small systems, and is hence not used for any of the paper's main results (only for a minor example in the appendix). Similarly, the paper introduces mitigation techniques to make quadratic basis functions (PWQ) work, which turn out to perform strictly worse than a simple linear basis in all settings.

And finally, I'm a bit unsure what to take away from the results. It's a bit of an obvious finding that putting an auxiliary loss on a certain quantity (e.g. divergence) will produce results with lower values for this quantity. It would have been much more interesting to show examples that actually highlight the contributions of the paper -- i.e. ones that aren't easily achieved with corresponding loss terms on regular grids. Also, this paper really needs comparisons to other methods, e.g. PDEs on regular grids with physics-based losses, or perhaps comparing constraints on energy to Hamiltonian methods, etc.

Other comments:
- On the regular grid in 4.1, wouldn't the used PWL basis be exactly equivalent to FD?
- While there seems to be a bit less noise, I'm not sure that the constrained results look 'more similar' to the data in Figs. 6 and 9.
- Why does the GNN in 4.2 not work? GNNs generally work very well for such PDEs. An MLP is likely a poor choice for learning local PDE dynamics, particularly when trained on small datasets.
ICLR
Title
Enforcing physics-based algebraic constraints for inference of PDE models on unstructured grids

Abstract
The lack of relevant physical constraints in data-driven models of physical systems, such as neural network parameterized partial differential equations (PDEs), might lead to unrealistic modeling outcomes. A majority of approaches to solving this problem are based on forcing a model to satisfy a set of equations representing physical constraints. Currently available approaches can enforce a very limited set of constraints and are applicable only to uniform spatial grids. We propose a method for enforcing general pointwise, differential and integral constraints on unstructured spatial grids. Our method is based on representing a model’s output in terms of a function approximation and enforcing constraints on that approximation. We demonstrate the wide applicability and strong performance of our approach in data-driven learning of dynamical PDE systems and distributions of physical fields.

1 INTRODUCTION

Multiple works have shown the capability of neural networks to solve complex physical problems and learn the behavior of physical systems from data. Examples include learning and solving ordinary differential equations (ODEs) [6], partial differential equations (PDEs) [28; 20] and rigid body dynamics [31; 5]. Purely data-driven models are typically not forced to satisfy physical constraints of the system that generated the data. This might lead to unrealistic predictions that violate known properties of the underlying physical system. Incorporating relevant constraints makes better use of the available data and makes predictions more physically plausible. The field dealing with physics-constrained learning is diverse and offers many approaches to adding constraints to models. We refer the reader to several reviews for details [30; 3; 36; 19]. The approach we consider in this work is based on forcing a model to satisfy algebraic constraints represented by a set of equalities and inequalities. This is the most commonly used approach; it can represent a wide range of constraints and has been shown to work well in many cases [18; 17; 25]. However, while many constraints can be represented algebraically, it is not always clear how to evaluate and enforce them. Currently available approaches to enforcing algebraic constraints are limited to uniform grids and support a very narrow range of constraints (e.g. only pointwise, or specific differential constraints); see Section 5 for details of related work. Such approaches can be readily applied to models based on convolutional neural networks (CNNs) but cannot be extended to recently developed models based on graph neural networks (GNNs) [33; 27; 15] and other models working on unstructured grids. We propose a much more general method which allows enforcing pointwise, differential and integral constraints on unstructured spatial grids, and demonstrate its applicability in learning PDE-driven dynamical systems and distributions of physical fields. The method is based on using a model’s output at the nodes of a grid to construct an interpolant and applying constraints directly to that interpolant (Section 3). Code and data will be made publicly available.

2 BACKGROUND

PDE-driven dynamical systems. Many physical systems can be described in terms of PDEs. Such systems are defined on a bounded domain on which they evolve over time.
We consider continuous dynamical systems with state u(t, x) ∈ R^p that evolves over time t ∈ R≥0 and spatial locations x ∈ Ω ⊂ R^D. For physical systems, D is typically limited to {1, 2, 3}, although our method works with any value of D. We assume the system is governed by an unknown PDE

\frac{\partial u(t,\mathbf{x})}{\partial t} = F\big(\mathbf{x},\, u(t,\mathbf{x}),\, \nabla_{\mathbf{x}} u(t,\mathbf{x}),\, \nabla^2_{\mathbf{x}} u(t,\mathbf{x}),\, \ldots\big) \quad (1)

which describes the temporal evolution of the system in terms of the locations x, the state u and its first- and higher-order partial derivatives w.r.t. x. The goal of a data-driven PDE model is to learn the dynamics F from data. Data for learning F is collected by measuring the state of the system at observation locations (x_1, . . . , x_N) over increasing time points (t_0, . . . , t_M). This results in a dataset (y(t_0), . . . , y(t_M)), where y(t_i) = (u(t_i, x_1), . . . , u(t_i, x_N)) is a collection of observations. The dataset is used to train the model to predict (y(t_1), . . . , y(t_M)) starting from the initial state y(t_0). Training is typically done by minimizing an average loss between the model’s predictions u(t) and the data y(t). PDE models differ in the restrictions they impose on time points (temporal grid) and observation locations (spatial grid). Some models require both grids to be uniform [23]; other models relax these requirements and allow arbitrary spatial [27] and spatio-temporal grids [15]. We build our algebraic constraints method on the model from [15] as the most general one. The model is based on applying the method of lines [32] to Equation 1, which results in a system of ODEs

\dot{u}(t) := \begin{pmatrix} \frac{du(t,\mathbf{x}_1)}{dt} \\ \vdots \\ \frac{du(t,\mathbf{x}_N)}{dt} \end{pmatrix} \approx \begin{pmatrix} F_\theta(\mathbf{x}_1, \mathbf{x}_{\mathcal{N}(1)}, u_1, u_{\mathcal{N}(1)}) \\ \vdots \\ F_\theta(\mathbf{x}_N, \mathbf{x}_{\mathcal{N}(N)}, u_N, u_{\mathcal{N}(N)}) \end{pmatrix} \quad (2)

which approximates the solution of Equation 1 at the observation locations x_i using their neighboring points N(i), where x_{N(i)} and u_{N(i)} are the neighbors’ positions and states respectively, and u_i is u(t, x_i). The approximate solution converges to the true solution as N increases. The true dynamics F is approximated by a parametric model F_θ whose parameters θ are learned by minimizing the difference between the model’s predictions

u(t) = u(0) + \int_0^t \dot{u}(\tau)\, d\tau \quad (3)

and the data y(t). The integral in Equation 3 is solved using a numerical ODE solver. In [15], the function F_θ was represented by a graph neural network (GNN) which takes states and locations at an observation point i and its neighboring points N(i). The observation points are connected into a grid using Delaunay triangulation, which allows to naturally define N(i) as the set of points connected to the point i. However, F_θ can be represented by other models and a different neighbor selection criterion can be used. The model parameters θ are learned by minimizing the MSE between y(t) and u(t):

L_{\mathrm{data}} = \frac{1}{M} \sum_{i=1}^{M} \lVert u(t_i) - y(t_i) \rVert_2^2 \quad (4)

The gradient of L_data w.r.t. θ is evaluated using the adjoint method as shown in [7].

Generative Adversarial Networks. One of the tasks that we consider is learning distributions of physical fields. For that purpose we utilize generative adversarial networks (GANs). A GAN is a generative model consisting of a generator and a discriminator [12]. The generator, G, learns to transform a random variable Z ∼ p_Z over a latent space Z to the data space Y in such a way that the discriminator, D, cannot tell the difference between samples generated by G and samples from the data distribution p_data. Both G and D are learned by solving the following minimax problem:

\min_G \max_D V(G, D) = \mathbb{E}_{Y \sim p_{\mathrm{data}}}[\log D(Y)] + \mathbb{E}_{Z \sim p_Z}[\log(1 - D(G(Z)))] \quad (5)
A solution of this problem exists and is unique, with the optimal generator perfectly mimicking the data distribution [12].

3 METHODS

In this section we present an approach to evaluating pointwise, differential and integral constraints on unstructured grids. Then, we demonstrate how this approach can be used to enforce arbitrary soft and linear hard constraints.

3.1 EVALUATING CONSTRAINTS ON UNSTRUCTURED GRIDS

We assume the data y(t) is available at observation points (x_1, . . . , x_N) and time points (t_1, . . . , t_M) and that a model makes predictions u(t) at these points. We assume the predictions to be evaluations of an unknown underlying function. Since the underlying function is unknown, we cannot impose constraints on it directly. Instead, we approximate it by an interpolant u_f(t, x) and impose constraints on u_f(t, x) (Figure 1). The approximation is constructed from u(t) by placing a basis function at each x_i and representing u_f(t, x) as

u_f(t,\mathbf{x}) = \sum_{j=1}^{N} \alpha_j(t)\, \phi_j(\mathbf{x}) \quad (6)

where φ_j is a scalar basis function at x_j and α_j ∈ R^p. The coefficients α_j(t) are obtained from u(t) (see Section 3.4). Next, we show how to evaluate constraints on u_f(t, x) using basic building blocks. To avoid cluttered notation, we consider equality constraints and assume u(t, x), x ∈ R. Generalization to inequality constraints, vector fields and higher spatial dimensions is straightforward.

Pointwise constraints. Consider points z = (z_1, . . . , z_K) in Ω at which a pointwise constraint h(u_f(t, z_i)) = 0 should be evaluated. Assume the function h : R → R is representable in terms of a finite number of functions γ_m(u_f(t, z_i)) : R → R indexed by m. For example, should the constraint be h(u_f) = 3u_f + u_f^2 = 0, then we would define γ_1(u_f) = u_f, γ_2(u_f) = u_f^2 and h(u_f) = 3 · γ_1(u_f) + γ_2(u_f) = 0. Then, h can be evaluated by evaluating each γ_m as

\gamma_m(u_f(t, z_i)) = \gamma_m\Big(\sum_{j=1}^{N} \alpha_j(t)\, \phi_j(z_i)\Big) = \gamma_m\big(\Phi_{i,\cdot}\, \alpha(t)\big) \quad (7)

where α(t) = (α_1(t), . . . , α_N(t))^T, Φ is a K-by-N matrix with elements Φ_{i,j} = φ_j(z_i), and Φ_{i,·} is the i’th row of Φ.

Differential constraints. Consider the same setup as before, but now h(u_f(t, z_i)) = 0 consists of differential operators and is representable in terms of a finite number of functions ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q : R → R indexed by m, where the derivative order q can be different for each m. For example, should the constraint be h(u_f) = 3u_f + u_f · ∂u_f^2/∂x = 0, then we would define γ_1(u_f) = u_f, γ_2(u_f) = u_f^2 and h(u_f) = 3 · γ_1(u_f) + γ_1(u_f) · ∂γ_2(u_f)/∂z = 0. Then, h can be evaluated by evaluating each ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q using the generalization of the chain rule (Appendix A), which contains only two types of terms. The first type of terms, dγ_m/du_f, . . . , d^q γ_m/du_f^q, can be evaluated using automatic differentiation, while the second type of terms, ∂u_f/∂z_i, . . . , ∂^q u_f/∂z_i^q, can be evaluated as

\frac{\partial^q u_f}{\partial z_i^q} = \sum_{j=1}^{N} \alpha_j(t)\, \frac{\partial^q \phi_j(z_i)}{\partial z_i^q} = \Phi^{(q)}_{i,\cdot}\, \alpha(t) \quad (8)

where Φ^{(q)}_{i,j} = ∂^q φ_j(z_i) / ∂z_i^q. Mixed partial derivatives can be handled in a similar way (Appendix A).

Integral constraints. Consider the same setup as before but with h(u_f(t, x)) = ∫_Ω τ(u_f(t, x)) dx = 0, where the function τ : R → R is representable in terms of functions γ_m(u_f(t, z_i)) : R → R similarly to the pointwise constraints. Then, ∫_Ω τ(u_f(t, x)) dx can be evaluated using a numerical integration technique, e.g.
the midpoint rule, Gaussian quadrature or Monte-Carlo integration, as

\int_\Omega \tau(u_f(t,\mathbf{x}))\, d\mathbf{x} \approx \sum_{i=1}^{K} \tau(u_f(t, z_i))\, \mu_i \quad (9)

where K is the number of integration points, μ_i are integration coefficients which depend on the grid and integration method, and τ(u_f(t, z_i)) is evaluated as in Equation 7.

3.2 SOFT CONSTRAINTS

Soft constraints are implemented by minimizing the loss L_data + λ r(h(u_f)), where λ ∈ R and L_data is defined as in Equation 4. We set r(h(u_f)) = \frac{1}{KM}\sum_{i=1}^{K}\sum_{j=1}^{M} h(u_f(t_j, z_i))^2 for pointwise and differential constraints, and r(h(u_f)) = \frac{1}{M}\sum_{j=1}^{M} h(u_f(t_j, \mathbf{x}))^2 for integral constraints.

3.3 HARD CONSTRAINTS

Our method allows implementing hard constraints by projecting the interpolant u_f(t, x) onto a subset of functions which satisfy the required constraints. Namely, if u_f(t, x) does not satisfy constraints g and h, it is projected onto the feasible subset by solving the following optimization problem

\min_{\hat{u}_f \in V_\phi} \lVert u_f - \hat{u}_f \rVert^2_{L^2} \quad \text{s.t.} \quad h(\hat{u}_f) = 0,\; g(\hat{u}_f) \le 0 \quad (10)

where the projection is denoted by \hat{u}_f(t, x) and V_φ is the space spanned by the basis functions. Using the basis representations u_f(t,\mathbf{x}) = \sum_{i=1}^{N} \alpha_i(t)\phi_i(\mathbf{x}) and \hat{u}_f(t,\mathbf{x}) = \sum_{i=1}^{N} \beta_i(t)\phi_i(\mathbf{x}), we can rewrite the optimization problem (10) as

\min_{\beta(t) \in \mathbb{R}^N} (\alpha(t) - \beta(t))^T \hat{\Phi}\, (\alpha(t) - \beta(t)) \quad \text{s.t.} \quad h(\hat{u}_f) = 0,\; g(\hat{u}_f) \le 0 \quad (11)

where β(t) = (β_1(t), . . . , β_N(t))^T and \hat{\Phi}_{i,j} = \int_\Omega \phi_i(\mathbf{x})\phi_j(\mathbf{x})\, d\mathbf{x}. To train the model end-to-end, problem (11) should be differentiable. Agrawal et al. [1] proposed differentiable convex optimization, which can be used here if problem (11) can be expressed in a DPP-compliant way (see [1]). To do that, we restrict ourselves to constraints that can be expressed as an equality or inequality between Aβ(t) and b, where A is a constant matrix and b is a constant vector. This formulation admits pointwise, differential and integral constraints on untransformed u_f. The objective function is convex since its Hessian is positive-semidefinite, i.e. for any v ∈ R^N

v^T \hat{\Phi} v = \sum_{i,j=1}^{N} v_i v_j \hat{\Phi}_{i,j} = \sum_{i,j=1}^{N} \langle v_i\phi_i, v_j\phi_j \rangle_{L^2} = \Big\langle \sum_{i=1}^{N} v_i\phi_i,\, \sum_{j=1}^{N} v_j\phi_j \Big\rangle_{L^2} \ge 0 \quad (12)

This allows solving problem (11) and differentiating its solution β*(t) w.r.t. α(t). The model parameters are found by minimizing the loss L_data + λ L_proj, where λ ∈ R and L_data is defined as in Equation 4 but with u(t_i) replaced by \hat{u}(t_i) = (\hat{u}_f(t_i, \mathbf{x}_1), . . . , \hat{u}_f(t_i, \mathbf{x}_N)). We set L_proj = \frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M} \lVert u_f(t_j, \mathbf{x}_i) - \hat{u}_f(t_j, \mathbf{x}_i) \rVert_2^2. The second term makes the optimization procedure prefer models that predict u_f close to the feasible set of problem (11). We note that the proposed approach is currently limited to small-scale problems due to existing computational bottlenecks in the implementation of differentiable convex optimization [1].

3.4 BASIS FUNCTIONS

Selecting an appropriate basis is crucial for the efficiency and applicability of the proposed method. Ideally, the basis should allow efficient construction of u_f(t, x) from u(t), contain no tunable parameters, and lead to sparse matrices Φ, Φ^(q) and \hat{Φ}. We consider bases from two families: Lagrange basis functions and radial basis functions (RBFs). Lagrange basis functions do not have tunable parameters and have compact support, which leads to sparse Φ, Φ^(q) and \hat{Φ}. For the piecewise linear basis, the interpolant u_f(t, x) can be constructed directly from the predictions by setting α(t) = u(t). However, constructing u_f(t, x) for a higher-order basis, e.g.
piecewise quadratic, requires the model to make predictions not only at the observation points but also at some extra points where data is not available. In Section 4 we demonstrate one approach to solving this problem. After extending the state u(t) by predictions at the extra nodes, the coefficients α(t) can be evaluated similarly to the piecewise linear basis. In this work we use the piecewise linear (PWL) and piecewise quadratic (PWQ) bases. Examples of PWL basis functions are shown in Figure 2. Radial basis functions have a wider range of properties. Some RBFs have tunable parameters, some do not. The matrices Φ, Φ^(q) and \hat{Φ} evaluated with RBFs are typically dense, but RBFs with compact support exist (e.g. the bump function). The interpolant u_f(t, x) can be constructed by evaluating α(t) = K^{-1} u(t), where K^{-1} is the inverse of the interpolation matrix of the given RBF, K_{ij} = φ(‖x_i − x_j‖), φ is an RBF and x_i, x_j are observation locations. In this work we use the cubic RBF basis, i.e. φ(r) = r^3. We use PyTorch [26] to handle sparse matrices and to evaluate K^{-1} u(t) in a differentiable way.

4 EXPERIMENTS

In the following experiments we use the relative error between the data y(t) and model predictions u(t), defined as ‖y(t) − u(t)‖_2 / ‖y(t)‖_2, and consider only soft constraints. We present an experiment with hard constraints implemented as shown in Section 3.3 in Appendix D. Data generation is described in Appendix B. Training, testing and modeling details are in Appendix C. All experiments were run on a single NVIDIA Quadro P5000 GPU. All error bars represent one standard deviation of the results over five random seeds.

4.1 REPLACING EXISTING METHODS

In this experiment we take existing models which incorporate physics-based constraints in training and replace their constraint-enforcing approaches with ours. We consider two works. First, [37], which trains a GAN to produce divergence-free vector fields using a zero-divergence constraint. Second, [10], which predicts warping fields driving the evolution of sea surface temperature by observing snapshots of the temperature over time while enforcing gradient and divergence constraints on the warping fields (see Appendix C for more details). Both models work on uniform grids, which allows them to evaluate constraints using finite differences. For comparison, we replace finite differences with our method and observe how it changes the models’ performance. In both cases we use the PWL basis. For [37] we track the mean divergence and discriminator loss. Results of the original approach are as follows: mean divergence 0.079 and discriminator loss 0.091. With our method, the mean divergence was 0.014 and the discriminator loss was 0.088. Both approaches result in similar discriminator losses, but our approach produces a smaller mean divergence (smaller is better). Our method increased the runtime per epoch by 6%. For [10] we track the total, divergence and smoothness losses which, with the original approach, were 0.139, 8.4 · 10−5 and 1.51 · 10−4, respectively. With our approach the losses were 0.139, 8.3 · 10−5 and 1.51 · 10−4, respectively. Both methods produce very similar results. Our method increased the runtime per epoch by 30%. Overall, replacing existing constraint-enforcing approaches with ours on data from uniform grids resulted in comparable model performance, except for runtime, which was slightly increased.
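To make Sections 3.1-3.2 concrete, here is a sketch of the PWL (hat-function) matrices Φ and Φ^(1) on a 1D grid and the resulting soft penalty for the monotonicity constraint ∂u_f/∂x ≥ 0 of Section 4.3. This is an illustrative 1D reduction with our own naming, not the authors' implementation:

```python
import torch

def pwl_basis_matrices(nodes, z):
    """Hat-function basis on sorted 1D nodes: Phi[i, j] = phi_j(z_i) and
    Phi1[i, j] = dphi_j/dx(z_i), as used in Eqs. (7)-(8)."""
    N, K = len(nodes), len(z)
    Phi = torch.zeros(K, N)
    Phi1 = torch.zeros(K, N)
    # Locate the element [nodes[idx], nodes[idx + 1]] containing each z_i.
    idx = torch.clamp(torch.searchsorted(nodes, z, right=True) - 1, 0, N - 2)
    h = nodes[idx + 1] - nodes[idx]          # element widths
    t = (z - nodes[idx]) / h                 # local coordinate in [0, 1]
    rows = torch.arange(K)
    Phi[rows, idx] = 1 - t
    Phi[rows, idx + 1] = t
    Phi1[rows, idx] = -1 / h
    Phi1[rows, idx + 1] = 1 / h
    return Phi, Phi1

# Soft penalty for h = -du_f/dx <= 0 at the evaluation points (Sec. 3.2);
# with the PWL basis, alpha equals the model predictions u at the nodes.
nodes = torch.sort(torch.rand(50)).values
z = torch.linspace(0.05, 0.95, 200)
Phi, Phi1 = pwl_basis_matrices(nodes, z)
u = torch.randn(50, requires_grad=True)      # stand-in for model output
penalty = torch.relu(-Phi1 @ u).pow(2).mean()
```

Because the basis is fixed, Φ and Φ^(1) are built once per grid; during training the penalty costs one sparse matrix-vector product per time point.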
4.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

We start with the 1D Cahn-Hilliard equation

\frac{\partial u}{\partial t} = \epsilon^2 \nabla^2\big(u(1-u)^2 - u^2(1-u) - \epsilon^2 \nabla^2 u\big) \quad (13)

which is known to conserve the state u, i.e. h(u) = \int_\Omega u(t,x)\, dx - C = 0 at all time points, where C = \int_\Omega u(0,x)\, dx is a constant. This is an example of a conservation law; such laws are abundant in nature and are an important class of constraints that data-driven models of physical systems should satisfy. Conservation laws can be expressed in differential and integral forms, and this experiment demonstrates how the integral form can be enforced. The constraint is evaluated using the midpoint rule as shown in the previous section, with a single γ_1 being the identity function. We use the PWL, PWQ and cubic RBF bases and compare the results to an unconstrained model. For training we use 30, 60 and 120 simulations, while the test set consists of 60 simulations. Simulations in the training/test data last for 0.0015/0.0030 seconds and contain 50/100 uniformly spaced time points. The full spatial grid consists of 101 uniformly spaced nodes. We randomly sample 50%, 75% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. An example of a spatial grid with 50% of the nodes is shown in Figure 3. We evaluate the constraint on a uniform grid with 200 nodes placed on top of the original grid. To learn the dynamics of the system we use the model from [15] (Section 2). We found that using a GNN produced poor results. For that reason we represented the function F_θ with a multilayer perceptron (MLP) which updates the state of each node based on the states of all other nodes in the grid (results for a GNN are in Appendix E). The MLP contains two hidden layers with LeakyReLU nonlinearities. The number of hidden neurons was set to the number of nodes in the grid. The coefficients α(t) for the PWL and cubic bases can be evaluated directly from the model predictions at the grid nodes. But the PWQ basis requires extra predictions to be available between the nodes. This is problematic since there is no data at these points to guide the model’s predictions. To solve this problem we introduce a small MLP which is applied to consecutive pairs of nodes. The MLP takes the states at both nodes and the distance between them as the input and estimates the state at the midpoint between the two nodes. The MLP is trained jointly with the main model and uses only the constraint-related loss term during training. For testing, we construct the interpolant u_f(t, x) using the thin plate spline basis (φ(r) = r^2 log r) and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and the different bases and avoids biasing or overfitting to the bases used for training. Figure 4 shows the results of the experiment. We observe that changing the node fraction does not significantly affect the relative errors but has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show similar or better performance than the unconstrained model. Among all bases, the cubic basis consistently results in lower relative errors and constraint violations. However, the simpler PWL basis often performs on par with the cubic basis, especially on denser spatial grids.
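As an implementation note, with a fixed basis the conservation penalty of this section reduces to a single matrix-vector product per time point. A minimal sketch with our own naming, where mu are either midpoint-rule weights composed with Φ or, for the PWL basis, the exact basis integrals ∫φ_j dx:

```python
import torch

def conservation_penalty(u_traj, mu, C):
    """Soft integral constraint of Sec. 4.2: penalize deviations of
    integral(u_f(t, x) dx) from its initial value C at every time point.

    u_traj: (M, N) predicted states over M time points (PWL: alpha = u)
    mu:     (N,) quadrature weights on the (possibly nonuniform) grid
    C:      scalar, integral of the initial state
    """
    integrals = u_traj @ mu                  # Eq. (9) with tau = identity
    return ((integrals - C) ** 2).mean()     # r(h(u_f)) of Sec. 3.2
```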
We also observe that coarsening the grid increases the constraint-violation gap between constrained and unconstrained models and that this gap does not seem to close as we increase the amount of training data. The PWQ basis performs rather poorly on fine grids, which is likely due to a suboptimal approach to evaluating the state at the extra nodes. A better approach could consider not only pairs of points but also larger neighborhoods. Nonetheless, the PWQ basis achieves good performance on coarse grids, which shows that piecewise bases of higher order could potentially be used to enforce constraints. This would allow scaling to grids with a large number of nodes due to the sparsity of the constraint matrices and efficient evaluation of α.

4.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

We impose constraints on a 2D system governed by the heat equation ∂u/∂t = ∇²u for which the generated initial conditions (ICs) are monotone in one direction. Since the ICs are monotone, the state u remains monotone at all time points as well. We enforce the monotonicity constraint as ∂u/∂x ≥ 0. The constraint is evaluated as shown in the previous section, with γ_1 being the identity function. For training we use 15, 30 and 90 simulations, while the test set consists of 120 simulations. Simulations in the training/test data last for 0.1/0.2 seconds and contain 21/41 uniformly spaced time points. The full spatial grid consists of 1087 nodes. We randomly sample 33%, 66% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. The spatial grid with 100% of the nodes is shown in Figure 5. The constraint is evaluated at the nodes of a uniform 51 × 51 grid placed on top of the original grid. To learn the dynamics of the system we use the model from [15] directly, with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. During testing, we use the predictions of the models to construct an interpolant u_f(t, x) using the thin plate spline basis and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and the different bases. Figure 7 shows the results of the experiment. We observe that changing the node fraction equally increases the relative errors of all models and has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show slightly higher or comparable relative errors but noticeably lower constraint violations than the unconstrained model. The cubic and PWL bases perform equally well in this case. Similarly to the experiment in the previous section, we observe that coarsening the grid introduces a larger constraint-violation gap between constrained and unconstrained models and that this gap does not seem to close as we increase the amount of training data. Figure 6 shows the qualitative difference between predictions of constrained and unconstrained models. It can be noted that predictions from the constrained model have noticeably smoother contours, making the field more monotone in the horizontal direction.

4.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

We demonstrate the effect of adding constraints to a GAN when learning distributions of physical fields on unstructured grids. We use the Wasserstein GAN (WGAN) [2] as a more stable variant of a GAN. We use MLPs as the generator and discriminator.
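Before the quantitative results, a minimal sketch of the constrained generator update used in this section; the loss form L_G + λ ln(1 + L_C) is defined in the next paragraph, and constraint_fn stands in for the Φ-matrix divergence or Laplacian evaluation (the names are ours):

```python
import torch

def generator_step(G, D, z, constraint_fn, lam, opt_G):
    """One constrained WGAN generator update (Sec. 4.4).
    constraint_fn maps a batch of generated fields to constraint residuals h(u)."""
    fake = G(z)
    loss_G = -D(fake).mean()                    # standard WGAN generator loss
    L_C = constraint_fn(fake).pow(2).mean()     # mean squared constraint residual
    loss = loss_G + lam * torch.log1p(L_C)      # L_G + lam * ln(1 + L_C)
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```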
Unconstrained and constrained models are trained for 1.2M iterations. Constraints are enabled only after 600k iterations. Constrained models are trained similarly to the unconstrained ones but with a modified generator loss defined as L_G + λ ln(1 + L_C), where L_G is the standard generator loss and L_C is the constraint-based loss. We define L_C as the mean value of h(u)^2, where h is the constraint evaluated at the centroid of each cell in the grid.

4.4.1 ZERO-DIVERGENCE FIELDS

Divergence-free vector fields are often encountered in solutions of fluid dynamics problems. The divergence-free constraint on a vector field u(x, y) = (u_1(x, y), u_2(x, y))^T is defined as h(u) = ∂u_1/∂x + ∂u_2/∂y = 0. The constraint is enforced using the PWL basis. We generated a dataset with 10k divergence-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely divergence-free but have a small residual divergence due to discretization errors. Figure 9a shows that there is a clear difference in the quality of the samples generated by the unconstrained and constrained models. Samples from the constrained model are smoother and more similar to the data. Quantitative comparison of the samples presented in Figure 8 shows that the constrained model generates fields that have a much lower constraint violation and a divergence distribution very similar to that of the data.

4.4.2 ZERO-LAPLACIAN FIELDS

Fields with zero Laplacian represent solutions to some PDEs, for example the steady-state heat equation. The zero-Laplacian constraint on a scalar field u(x, y) is defined as h(u) = ∂²u/∂x² + ∂²u/∂y² = 0. The constraint is enforced using the cubic basis, as the PWL basis has zero second derivatives everywhere. We generated a dataset with 10k Laplacian-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely Laplacian-free due to discretization errors. Results of the experiment are shown in Figures 9b and 8. Similarly to the divergence-free case, the visual quality of the fields generated by the constrained model is significantly better than for the unconstrained model. Quantitative comparison of the samples presented in Figure 8 shows that the constrained model generates fields that have a much lower constraint violation and a Laplacian distribution very similar to that of the data.

5 RELATED WORK

Soft constraints. Soft constraints are widely used due to being relatively easy to implement. Examples include lake temperature prediction [18; 16], traffic simulation [22], and fluid and climate modeling [11; 10; 4], where constraints are evaluated pointwise or using finite differences.

Hard constraints. Approaches to implementing hard constraints are diverse and can be categorized as processing the output of an unconstrained model [4; 17; 24; 34] and designing a model that produces feasible predictions by default [23; 25; 14; 9; 13; 8; 38].

Constrained PDE models. Current approaches to enforcing soft [10; 11] and hard [21; 25; 17] constraints are limited to specific types of constraints and spatial grids. For example, [25; 17] implement only hard differential constraints and both are limited to uniform grids. Uniform grids allow constraints to be evaluated efficiently, e.g. using finite differences [10; 21; 25] or the fast Fourier transform [17], but assuming that the data lies on a uniform grid might be limiting.
Constrained GANs. Works such as [37; 17] showed how physics-based constraints benefit training and the quality of the generated samples, but they are also limited to uniform grids.

6 CONCLUSION

We presented a general approach to enforcing algebraic constraints on unstructured grids and showed how it can be used to enforce soft and hard constraints. We demonstrated the applicability of the approach to learning PDE-driven dynamical systems and distributions of physical fields. We considered two families of basis functions for constructing the interpolant and showed how Lagrange basis functions of order higher than one can be used. Our method allows dropping the unrealistic assumption about the uniformity of spatial grids and shows promising results on various tasks.

REPRODUCIBILITY STATEMENT

All details required to reproduce the experiments are provided in Section 4 and the Appendices. Code and data used to run the experiments will be made publicly available after the review process.

A GENERALIZED CHAIN RULE AND HANDLING MIXED PARTIAL DERIVATIVES

Let y = g(x_1, . . . , x_n) with all arguments being either identical, distinct or grouped. Then, partial derivatives of f(y) can be evaluated using the Faà di Bruno formula

\frac{\partial^n f(y)}{\partial x_1 \cdots \partial x_n} = \sum_{\pi \in \Pi} f^{(|\pi|)}(y) \prod_{B \in \pi} \frac{\partial^{|B|} y}{\prod_{j \in B} \partial x_j}

where Π is the set of all partitions of the set {1, . . . , n}, B runs through the elements of the partition π, f^{(m)} denotes the m’th derivative, and |·| is cardinality. The formula consists of two types of terms: f^{(|π|)}(y), which can be evaluated using automatic differentiation, and \partial^{|B|} y / \prod_{j \in B} \partial x_j, which can be evaluated as shown in Equation 8. In the case that all x_1, . . . , x_n are identical, the mixed derivative \partial^n f(y) / (\partial x_1 \cdots \partial x_n) reduces to \partial^n f(y) / \partial x_1^n.

B DATA GENERATION

In all cases we run the simulation on a fine grid and then interpolate the results to a coarser grid represented as the "full grid" in the experiments.

B.1 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

Training and testing data was obtained by solving

\frac{\partial u}{\partial t} = \epsilon^2 \nabla^2\big(u(1-u)^2 - u^2(1-u) - \epsilon^2 \nabla^2 u\big) \quad (14)

on a unit interval with periodic boundary conditions and ε = 0.04. The domain was represented by a uniform grid with 100 nodes and the time step was set to 1.0e-6 sec. The initial conditions u_0(x) were generated as follows:

\tilde{u}_0(x) = \sum_{i=1}^{10} \big(\lambda_i \cos((x-s)2\pi) + \gamma_i \sin((x-s)2\pi)\big) + \frac{\lambda_0}{2} \quad (15)

u_0(x) = \frac{\tilde{u}_0(x) - \min \tilde{u}_0(x)}{\max \tilde{u}_0(x) - \min \tilde{u}_0(x)} \quad (16)

where λ_i, γ_i ∼ Unif(−1, 1) and s ∼ Unif(0, 1). Examples of the simulations are shown in Figure 10.

B.2 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

Training and testing data was obtained by solving

\frac{\partial u}{\partial t} = D \nabla^2 u \quad (17)

on a unit square with zero Neumann boundary conditions and D = 0.2. The domain was represented by an unstructured grid with 2971 nodes and the time step was set to 0.001 sec. The initial conditions u_0(x) were generated as

f(x) = \sum_{i=0}^{6} \omega_i x^i \quad (18)

g(y) = \frac{1}{2} \sum_{i=1}^{3} \big(\lambda_i \cos((y+s)2\pi) + \gamma_i \sin((y+s)2\pi)\big) + \frac{\lambda_0}{2} \quad (19)

\tilde{u}_0(x,y) = f(x) + g(y) \quad (20)

u_0(x,y) = \frac{\tilde{u}_0(x,y) - \min \tilde{u}_0(x,y)}{\max \tilde{u}_0(x,y) - \min \tilde{u}_0(x,y)} \quad (21)

where ω_i ∼ Unif(0.1, 1.1) and λ_i, γ_i, s ∼ Unif(−1, 1). Examples of the simulations are shown in Figure 11.

B.3 GAN WITH A DIVERGENCE CONSTRAINT

The data was generated by sampling random velocity fields and then projecting them onto the space of divergence-free fields. The procedure was as follows.
First, a random velocity field u_0(x, y) was generated on a unit square by generating each component i as

\tilde{u}_{0i}(x,y) = \sum_{k,l=-N}^{N} \lambda_{kl} \cos(kx + ly) + \gamma_{kl} \sin(kx + ly) \quad (22)

u_{0i}(x,y) = 6 \times \left( \frac{\tilde{u}_{0i}(x,y) - \min \tilde{u}_{0i}(x,y)}{\max \tilde{u}_{0i}(x,y) - \min \tilde{u}_{0i}(x,y)} - 0.5 \right) \quad (23)

where N = 10 and λ_{kl}, γ_{kl} ∼ N(0, 1). Then, the divergence-free component of u_0(x, y), denoted by u*_0(x, y), was extracted using the projection method: solving ∇ · u_0 = ∇²φ for φ and then evaluating u*_0(x, y) = u_0(x, y) − ∇φ. Finally, the data was scaled to [−1, 1].

B.4 GAN WITH A LAPLACIAN CONSTRAINT

The data was generated by solving

\nabla^2 u = 0 \quad (24)

on a unit square with Dirichlet boundary conditions. The domain was represented by an unstructured grid with 2971 nodes. The boundary conditions were generated by sampling random functions u_0(x) and using their boundary values as the boundary conditions. The functions u_0(x) were generated as

u_0(x,y) = \sum_{k,l=-N}^{N} \lambda_{kl} \cos(kx + ly) + \gamma_{kl} \sin(kx + ly) \quad (25)

where N = 5 and λ_{kl}, γ_{kl} ∼ N(0, 1). The data was then scaled to [0, 1].

C MODELS, TRAINING AND TESTING

C.1 REPLACING EXISTING METHODS

For our comparisons we considered experiments from two works. Next, we provide some details about these experiments. The first experiment was taken from [37], Section 3.2. The experiment shows how soft physics-based constraints affect the predictions of a GAN learning a distribution of divergence-free fields. The data is generated on a uniform grid, which allows divergence to be evaluated using finite differences. The constraint is enforced through an extra loss term which penalizes violations of the constraint. The performance metric used is the Frobenius norm of the divergence averaged over all fields in a batch. For training we used code provided by the authors with the original parameters. We replaced finite differences in the constraint evaluation function with our method. The second experiment was taken from [10]. This work deals with the task of predicting sea surface temperatures at future times given snapshots of the temperature over current and previous times. The model proposed by the authors accomplishes this task by taking a sequence of surface temperatures at times t_{i−k}, . . . , t_i and predicting the underlying motion field, which is then used to predict the temperature at time t_{i+1}. Insights about the physical properties of the motion field were used to constrain the model’s predictions. Constraints are imposed on the divergence, magnitude and gradients of the motion field. The data is generated on a uniform grid, which allows the constraints to be evaluated using finite differences. The constraints are enforced through extra loss terms which penalize violations of the constraints. The performance metrics used are the MSE between the data and model predictions, the smoothness loss and the divergence loss. For training we used code provided by the authors with the original parameters. We replaced finite differences in the constraint evaluation function with our method.

C.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

In all experiments with the Cahn-Hilliard equation we represent the dynamics function Fθ by an MLP with 2 hidden layers and LeakyReLU nonlinearities (negative slope 0.2). The number of neurons in each layer was set to the number of nodes in the spatial grid on which the model was trained. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5.
All models were trained for 1500 epochs using the Rprop optimizer [29] with the learning rate set to 1.0 · 10−6 and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 2.

C.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

In all experiments with the heat equation we represent the dynamics function Fθ by a GNN with the messaging and aggregation networks being MLPs with a single hidden layer of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5. All models were trained for 750 epochs using the Rprop optimizer [29] with the learning rate set to 1.0 · 10−6 and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 0.1.

C.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

In both cases we used identical architectures and training processes for the constrained and unconstrained models. Both models were trained for 1.2M iterations using the same random seed. Constraints in the constrained model were enabled only after 600k iterations. The base distribution was set to a 128-dimensional isotropic standard normal. Models were trained using the RMSProp optimizer [35] with the batch size and learning rate set to 64 and 0.00001, respectively. The discriminator’s weights were clipped to [−0.01, 0.01].

C.4.1 ZERO-DIVERGENCE FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 2010 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 2010, respectively, and a final hyperbolic tangent nonlinearity applied to the output. We set λ = 0.2.

C.4.2 ZERO-LAPLACIAN FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 1086 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 1086, respectively, and a sigmoid function applied to the output. We set λ = 0.0075.

D CAHN-HILLIARD EQUATION WITH HARD INTEGRAL CONSTRAINTS

Here we demonstrate how the approach to enforcing hard constraints described in Section 3.3 can be used to enforce integral constraints on a nonuniform grid. We use the same setup as in Section 4.2 with 30 training simulations and 50% of the nodes in the grid. We compare three models: an unconstrained model, a model with a soft constraint and a model with a hard constraint. We use the PWL basis during training and testing. Table 1 shows that the relative errors of the three models are practically the same but, as Figure 12 demonstrates, their constraint violations differ significantly. We see that the constraint violations of the model with the hard constraint are zero at all time points, as expected.
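For general linear hard constraints Aβ = b, the projection of Eq. (11) can be set up with the differentiable convex optimization layers of [1], for which the cvxpylayers package is the reference implementation. A sketch, where the DPP-compliant reformulation via a Cholesky factor and the function naming are ours:

```python
import cvxpy as cp
import numpy as np
from cvxpylayers.torch import CvxpyLayer

def make_projection_layer(Phi_hat, A, b):
    """Differentiable projection of Eq. (11) for linear hard constraints
    A beta = b. Phi_hat, A, b are numpy constants; the returned layer maps
    predicted coefficients alpha (a torch tensor) to projected beta*."""
    N = Phi_hat.shape[0]
    # (a - b)^T Phi_hat (a - b) = ||L^T (a - b)||^2 with Phi_hat = L L^T;
    # requires Phi_hat to be positive definite (true for the PWL Gram matrix).
    L = np.linalg.cholesky(Phi_hat)
    beta = cp.Variable(N)
    alpha = cp.Parameter(N)
    objective = cp.Minimize(cp.sum_squares(L.T @ (alpha - beta)))
    problem = cp.Problem(objective, [A @ beta == b])
    return CvxpyLayer(problem, parameters=[alpha], variables=[beta])

# Usage: beta_star, = layer(alpha)   # gradients flow through the projection
```

Solving one such quadratic program per time point is what makes the hard-constraint variant slow for large N, as reported in Table 1.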
Being able to produce predictions that satisfy some constraints exactly might be very useful in some applications. However, as we mention in Section 3.3, this approach to enforcing hard constraints is currently limited to systems with a relatively small number of nodes and is significantly slower than models with soft constraints. We report training times in Table 1.

E LEARNING CAHN-HILLIARD EQUATION WITH GNNS

We use the same setup as in Section 4.2 with 75% of the nodes in the grid but trained the models for 1500 epochs. Instead of an MLP we use a GNN with the messaging and aggregation networks being MLPs with two hidden layers of size 64 and LeakyReLU nonlinearities (negative slope 0.2). For each node, the GNN evaluates the output as

\frac{du_i}{dt} = \gamma\Big(\frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} \varphi\big(u_i^{\mathrm{proj}}, u_j^{\mathrm{proj}}, x_{ij}^{\mathrm{proj}}\big),\; u_i^{\mathrm{proj}}\Big) \quad (26)

where u_i^proj and u_j^proj are linear projections of the state at nodes i and j, and x_ij^proj is a linear projection of the pair consisting of the distance between nodes i and j and a unit vector pointing from node i to node j. All projections have dimension 16. We compare constrained and unconstrained models. We use the PWL, PWQ and cubic RBF bases. Results of the experiment are shown in Figure 13. The figure shows that the relative errors and constraint violations of all models are significantly higher than for the MLP-based models.

F EXTRA FIGURES
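A compact sketch of the training loop described throughout Appendix C: roll the dynamics out with torchdiffeq's adaptive Heun solver and minimize the MSE of Eq. (4) plus a soft-constraint term with Rprop. The function names are ours; dynamics must be an nn.Module so that the adjoint method can access its parameters:

```python
import torch
from torchdiffeq import odeint_adjoint as odeint

def train(dynamics, u0, t, y, n_epochs, penalty, lam):
    """dynamics(t, u) returns du/dt at the grid nodes (Eq. 2);
    penalty(u_traj) returns the soft-constraint term r(h(u_f)) of Sec. 3.2."""
    opt = torch.optim.Rprop(dynamics.parameters(), lr=1e-6)
    for _ in range(n_epochs):
        # Predictions of Eq. (3), solved with an adaptive Heun scheme.
        u_traj = odeint(dynamics, u0, t, method='adaptive_heun',
                        rtol=1e-5, atol=1e-5)
        loss = ((u_traj - y) ** 2).mean() + lam * penalty(u_traj)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return dynamics
```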
1. What is the main contribution of the paper regarding enforcing constraints in statistical models?
2. What are the strengths and weaknesses of the proposed two-fold method?
3. How does the interpolant function u_f approximate the original problem, and what conditions are required for its accuracy?
4. Why do piecewise cubic approximations not require extra conditions compared to piecewise quadratic ones?
5. How does the generator depend on time in the GAN experiments, and how is the interpolation handled when it is PWL?
6. How can the paper improve its presentation and clarity, specifically regarding the interpolation scheme and the estimation of α and Φ?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a two-fold method to enforce constraints of different natures (differential, integral, ...) on a statistical model learned from physical data. The constraints are not enforced directly on the model but rather on "interpolant" functions that aim at completing the original model in between the observed grid points.

Review
The task addressed by the authors is crucial in machine learning, since purely data-driven approaches blatantly fail at conserving several physical quantities such as momentum or energy. In particular, the enforcement of algebraic constraints (whether differential or integral) on a learned model is a necessary path to learn from/for physical data. The method can be summarized as follows: learn the model F_θ that fits the data constraints (regression, GAN, etc.), hence the solution at "grid points" u. Then fit a simple function of separate space and time, u_f(t, x) = Σ_j α_j(t) φ_j(x) (linear, quadratic, etc.), to interpolate between the predictions; specific constraints can be enforced on this interpolant, thereby a fortiori constraining the learned u. The experiments seem to support the authors' choices. The experiments are well conducted and present various use cases useful for the whole community. However, the main weakness of the paper is the clarity of the presented interpolation method -- mainly, how it is computed and its link to the learned solution u(t). Also, the estimation of φ is only very lightly discussed.

Questions:
- How well can the interpolant u_f approximate the original solution to the problem? Indeed, I guess this is related to the convergence of a numerical scheme: if the space is well covered (i.e. the discrete step size goes down to 0), can we expect to recover a good function u_f using such a prior?
- Why do piecewise cubic approximations not require extra conditions (do you use Hermite polynomials and continuity of the derivative) compared to piecewise quadratic ones? I suggest the authors include a thorough discussion on this topic in the appendices (with the conditions imposed on such local polynomials).
- For the GAN experiments, does the generator depend on time? If not, then the interpolation simply concerns the φ part?
- How can a second-order differential constraint be enforced when the interpolation is PWL?

Remarks on the presentation: Despite interesting ideas, there is room for improving the presentation and the clarity of the paper.
a) Since the main focus of the experiments concerns "soft constraints", I suggest the authors develop the section treating the soft-constraint components (perhaps including an algorithm to show practical use) and how constraining u_f constrains the learned u (or more specifically how the losses on u_f impact the θ of F_θ), by detailing explicitly the link between the interpolation and the learned solution.
b) The key point of the paper lies in the interpolation scheme; however, in my opinion the paper lacks the details needed for an in-depth understanding of the estimation method for the parameters α_j and φ. Since this point is central to the paper, I highly recommend including thorough details on the estimation of α and Φ, at least in the appendices.

Minor: For clarity, I suggest writing the learned solution as u_θ instead of u, to avoid possible confusion between u, the solution to (1), and u, the learned numerical solution at the nodes x_1, . . . , x_N. Should t be a variable of F_θ in Eq. 2? Does u_1 = u(x_1)? Eq. 6: if there are N points (x_1, . . . , x_N), should the sum stop at N − 1?
P.6 “with a multiplayer perceptron”
ICLR
Title
Enforcing physics-based algebraic constraints for inference of PDE models on unstructured grids

Abstract
The lack of relevant physical constraints in data-driven models of physical systems, such as neural network parameterized partial differential equations (PDEs), might lead to unrealistic modeling outcomes. A majority of approaches to solving this problem are based on forcing a model to satisfy a set of equations representing physical constraints. Currently available approaches can enforce a very limited set of constraints and are applicable only to uniform spatial grids. We propose a method for enforcing general pointwise, differential and integral constraints on unstructured spatial grids. Our method is based on representing a model’s output in terms of a function approximation and enforcing constraints on that approximation. We demonstrate the wide applicability and strong performance of our approach in data-driven learning of dynamical PDE systems and distributions of physical fields.

1 INTRODUCTION

Multiple works have shown the capability of neural networks to solve complex physical problems and learn the behavior of physical systems from data. Examples include learning and solving ordinary differential equations (ODEs) [6], partial differential equations (PDEs) [28; 20] and rigid body dynamics [31; 5]. Purely data-driven models are typically not forced to satisfy physical constraints of the system that generated the data. This might lead to unrealistic predictions that violate known properties of the underlying physical system. Incorporating relevant constraints makes better use of the available data and makes predictions more physically plausible. The field dealing with physics-constrained learning is diverse and offers many approaches to adding constraints to models. We refer the reader to several reviews for details [30; 3; 36; 19]. The approach we consider in this work is based on forcing a model to satisfy algebraic constraints represented by a set of equalities and inequalities. This is the most commonly used approach; it can represent a wide range of constraints and has been shown to work well in many cases [18; 17; 25]. However, while many constraints can be represented algebraically, it is not always clear how to evaluate and enforce them. Currently available approaches to enforcing algebraic constraints are limited to uniform grids and support a very narrow range of constraints (e.g. only pointwise, or specific differential constraints); see Section 5 for details of related work. Such approaches can be readily applied to models based on convolutional neural networks (CNNs) but cannot be extended to recently developed models based on graph neural networks (GNNs) [33; 27; 15] and other models working on unstructured grids. We propose a much more general method which allows enforcing pointwise, differential and integral constraints on unstructured spatial grids, and demonstrate its applicability in learning PDE-driven dynamical systems and distributions of physical fields. The method is based on using a model’s output at the nodes of a grid to construct an interpolant and applying constraints directly to that interpolant (Section 3). Code and data will be made publicly available.

2 BACKGROUND

PDE-driven dynamical systems. Many physical systems can be described in terms of PDEs. Such systems are defined on a bounded domain on which they evolve over time.
We consider continuous dynamical systems with state u(t, x) ∈ R^p that evolves over time t ∈ R≥0 and spatial locations x ∈ Ω ⊂ R^D. For physical systems, D is typically limited to {1, 2, 3}, although our method works with any value of D. We assume the system is governed by an unknown PDE

\frac{\partial u(t,\mathbf{x})}{\partial t} = F\big(\mathbf{x},\, u(t,\mathbf{x}),\, \nabla_{\mathbf{x}} u(t,\mathbf{x}),\, \nabla^2_{\mathbf{x}} u(t,\mathbf{x}),\, \ldots\big) \quad (1)

which describes the temporal evolution of the system in terms of the locations x, the state u and its first- and higher-order partial derivatives w.r.t. x. The goal of a data-driven PDE model is to learn the dynamics F from data. Data for learning F is collected by measuring the state of the system at observation locations (x_1, . . . , x_N) over increasing time points (t_0, . . . , t_M). This results in a dataset (y(t_0), . . . , y(t_M)), where y(t_i) = (u(t_i, x_1), . . . , u(t_i, x_N)) is a collection of observations. The dataset is used to train the model to predict (y(t_1), . . . , y(t_M)) starting from the initial state y(t_0). Training is typically done by minimizing an average loss between the model’s predictions u(t) and the data y(t). PDE models differ in the restrictions they impose on time points (temporal grid) and observation locations (spatial grid). Some models require both grids to be uniform [23]; other models relax these requirements and allow arbitrary spatial [27] and spatio-temporal grids [15]. We build our algebraic constraints method on the model from [15] as the most general one. The model is based on applying the method of lines [32] to Equation 1, which results in a system of ODEs

\dot{u}(t) := \begin{pmatrix} \frac{du(t,\mathbf{x}_1)}{dt} \\ \vdots \\ \frac{du(t,\mathbf{x}_N)}{dt} \end{pmatrix} \approx \begin{pmatrix} F_\theta(\mathbf{x}_1, \mathbf{x}_{\mathcal{N}(1)}, u_1, u_{\mathcal{N}(1)}) \\ \vdots \\ F_\theta(\mathbf{x}_N, \mathbf{x}_{\mathcal{N}(N)}, u_N, u_{\mathcal{N}(N)}) \end{pmatrix} \quad (2)

which approximates the solution of Equation 1 at the observation locations x_i using their neighboring points N(i), where x_{N(i)} and u_{N(i)} are the neighbors’ positions and states respectively, and u_i is u(t, x_i). The approximate solution converges to the true solution as N increases. The true dynamics F is approximated by a parametric model F_θ whose parameters θ are learned by minimizing the difference between the model’s predictions

u(t) = u(0) + \int_0^t \dot{u}(\tau)\, d\tau \quad (3)

and the data y(t). The integral in Equation 3 is solved using a numerical ODE solver. In [15], the function F_θ was represented by a graph neural network (GNN) which takes states and locations at an observation point i and its neighboring points N(i). The observation points are connected into a grid using Delaunay triangulation, which allows to naturally define N(i) as the set of points connected to the point i. However, F_θ can be represented by other models and a different neighbor selection criterion can be used. The model parameters θ are learned by minimizing the MSE between y(t) and u(t):

L_{\mathrm{data}} = \frac{1}{M} \sum_{i=1}^{M} \lVert u(t_i) - y(t_i) \rVert_2^2 \quad (4)

The gradient of L_data w.r.t. θ is evaluated using the adjoint method as shown in [7].

Generative Adversarial Networks. One of the tasks that we consider is learning distributions of physical fields. For that purpose we utilize generative adversarial networks (GANs). A GAN is a generative model consisting of a generator and a discriminator [12]. The generator, G, learns to transform a random variable Z ∼ p_Z over a latent space Z to the data space Y in such a way that the discriminator, D, cannot tell the difference between samples generated by G and samples from the data distribution p_data. Both G and D are learned by solving the following minimax problem:

\min_G \max_D V(G, D) = \mathbb{E}_{Y \sim p_{\mathrm{data}}}[\log D(Y)] + \mathbb{E}_{Z \sim p_Z}[\log(1 - D(G(Z)))] \quad (5)
A solution of this problem exists and is unique, with the optimal generator perfectly mimicking the data distribution [12].

3 METHODS

In this section we present an approach to evaluating pointwise, differential and integral constraints on unstructured grids. Then, we demonstrate how this approach can be used to enforce arbitrary soft and linear hard constraints.

3.1 EVALUATING CONSTRAINTS ON UNSTRUCTURED GRIDS

We assume the data y(t) is available at observation points (x_1, . . . , x_N) and time points (t_1, . . . , t_M) and that a model makes predictions u(t) at these points. We assume the predictions to be evaluations of an unknown underlying function. Since the underlying function is unknown, we cannot impose constraints on it directly. Instead, we approximate it by an interpolant u_f(t, x) and impose constraints on u_f(t, x) (Figure 1). The approximation is constructed from u(t) by placing a basis function at each x_i and representing u_f(t, x) as

u_f(t,\mathbf{x}) = \sum_{j=1}^{N} \alpha_j(t)\, \phi_j(\mathbf{x}) \quad (6)

where φ_j is a scalar basis function at x_j and α_j ∈ R^p. The coefficients α_j(t) are obtained from u(t) (see Section 3.4). Next, we show how to evaluate constraints on u_f(t, x) using basic building blocks. To avoid cluttered notation, we consider equality constraints and assume u(t, x), x ∈ R. Generalization to inequality constraints, vector fields and higher spatial dimensions is straightforward.

Pointwise constraints. Consider points z = (z_1, . . . , z_K) in Ω at which a pointwise constraint h(u_f(t, z_i)) = 0 should be evaluated. Assume the function h : R → R is representable in terms of a finite number of functions γ_m(u_f(t, z_i)) : R → R indexed by m. For example, should the constraint be h(u_f) = 3u_f + u_f^2 = 0, then we would define γ_1(u_f) = u_f, γ_2(u_f) = u_f^2 and h(u_f) = 3 · γ_1(u_f) + γ_2(u_f) = 0. Then, h can be evaluated by evaluating each γ_m as

\gamma_m(u_f(t, z_i)) = \gamma_m\Big(\sum_{j=1}^{N} \alpha_j(t)\, \phi_j(z_i)\Big) = \gamma_m\big(\Phi_{i,\cdot}\, \alpha(t)\big) \quad (7)

where α(t) = (α_1(t), . . . , α_N(t))^T, Φ is a K-by-N matrix with elements Φ_{i,j} = φ_j(z_i), and Φ_{i,·} is the i’th row of Φ.

Differential constraints. Consider the same setup as before, but now h(u_f(t, z_i)) = 0 consists of differential operators and is representable in terms of a finite number of functions ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q : R → R indexed by m, where the derivative order q can be different for each m. For example, should the constraint be h(u_f) = 3u_f + u_f · ∂u_f^2/∂x = 0, then we would define γ_1(u_f) = u_f, γ_2(u_f) = u_f^2 and h(u_f) = 3 · γ_1(u_f) + γ_1(u_f) · ∂γ_2(u_f)/∂z = 0. Then, h can be evaluated by evaluating each ∂^q γ_m(u_f(t, z_i)) / ∂z_i^q using the generalization of the chain rule (Appendix A), which contains only two types of terms. The first type of terms, dγ_m/du_f, . . . , d^q γ_m/du_f^q, can be evaluated using automatic differentiation, while the second type of terms, ∂u_f/∂z_i, . . . , ∂^q u_f/∂z_i^q, can be evaluated as

\frac{\partial^q u_f}{\partial z_i^q} = \sum_{j=1}^{N} \alpha_j(t)\, \frac{\partial^q \phi_j(z_i)}{\partial z_i^q} = \Phi^{(q)}_{i,\cdot}\, \alpha(t) \quad (8)

where Φ^{(q)}_{i,j} = ∂^q φ_j(z_i) / ∂z_i^q. Mixed partial derivatives can be handled in a similar way (Appendix A).

Integral constraints. Consider the same setup as before but with h(u_f(t, x)) = ∫_Ω τ(u_f(t, x)) dx = 0, where the function τ : R → R is representable in terms of functions γ_m(u_f(t, z_i)) : R → R similarly to the pointwise constraints. Then, ∫_Ω τ(u_f(t, x)) dx can be evaluated using a numerical integration technique, e.g.
Integral constraints. Consider the same setup as before but with h(u_f(t,x)) = ∫_Ω τ(u_f(t,x)) dx = 0, where the function τ : R → R is representable in terms of functions γ_m(u_f(t,z_i)) : R → R, similarly to the pointwise constraints. Then ∫_Ω τ(u_f(t,x)) dx can be evaluated using a numerical integration technique, e.g. the midpoint rule, Gaussian quadrature or Monte Carlo integration, as

∫_Ω τ(u_f(t,x)) dx ≈ Σ_{i=1}^K τ(u_f(t,z_i)) μ_i  (9)

where K is the number of integration points, μ_i are integration coefficients which depend on the grid and integration method, and τ(u_f(t,z_i)) is evaluated as in Equation 7.

3.2 SOFT CONSTRAINTS

Soft constraints are implemented by minimizing the loss L_data + λ r(h(u_f)), where λ ∈ R and L_data is defined as in Equation 4. We set r(h(u_f)) = (1/KM) Σ_{i=1}^K Σ_{j=1}^M h(u_f(t_j,z_i))² for pointwise and differential constraints, and r(h(u_f)) = (1/M) Σ_{j=1}^M h(u_f(t_j,x))² for integral constraints.

3.3 HARD CONSTRAINTS

Our method allows us to implement hard constraints by projecting the interpolant u_f(t,x) onto a subset of functions which satisfy the required constraints. Namely, if u_f(t,x) does not satisfy constraints g and h, it is projected onto a subset of functions which satisfy the constraints by solving the following optimization problem

min_{û_f ∈ V_φ} ‖u_f − û_f‖²_{L²}  s.t.  h(û_f) = 0,  g(û_f) ≤ 0  (10)

where the projection is denoted by û_f(t,x) and V_φ is spanned by the basis functions. Using the basis representations u_f(t,x) = Σ_{i=1}^N α_i(t) φ_i(x) and û_f(t,x) = Σ_{i=1}^N β_i(t) φ_i(x), we can rewrite the optimization problem (10) as

min_{β(t) ∈ R^N} (α(t) − β(t))^T Φ̂ (α(t) − β(t))  s.t.  h(û_f) = 0,  g(û_f) ≤ 0  (11)

where β(t) = (β_1(t), ..., β_N(t))^T and Φ̂_{i,j} = ∫_Ω φ_i(x) φ_j(x) dx. To train the model end-to-end, the problem (11) should be differentiable. Agrawal et al. [1] proposed differentiable convex optimization, which can be used here if the problem (11) can be expressed in a DPP-compliant way (see [1]). To do that, we restrict ourselves to constraints that can be expressed as an equality or inequality between Aβ(t) and b, where A is a constant matrix and b is a constant vector. This formulation admits pointwise, differential and integral constraints on untransformed u_f. The objective function is convex since its Hessian is positive semidefinite, i.e., for any v ∈ R^N

v^T Φ̂ v = Σ_{i,j=1}^N v_i v_j Φ̂_{i,j} = Σ_{i,j=1}^N ⟨v_i φ_i, v_j φ_j⟩_{L²} = ⟨Σ_{i=1}^N v_i φ_i, Σ_{j=1}^N v_j φ_j⟩_{L²} ≥ 0  (12)

This allows us to solve the problem (11) and differentiate its solution β*(t) w.r.t. α(t). The model parameters are found by minimizing the loss function L_data + λ L_proj, where λ ∈ R and L_data is defined as in Equation 4 but with u(t_i) replaced by û(t_i) = (û_f(t_i,x_1), ..., û_f(t_i,x_N)). We set L_proj = (1/NM) Σ_{i=1}^N Σ_{j=1}^M ‖u_f(t_j,x_i) − û_f(t_j,x_i)‖²_2. The second term makes the optimization procedure prefer models that predict u_f close to the feasible set of the problem (11). We note that the proposed approach is currently limited to small-scale problems due to existing computational bottlenecks in the implementation of differentiable convex optimization [1].
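The projection (11) can be expressed with the differentiable convex optimization layers of Agrawal et al. [1] (the cvxpylayers package). A minimal sketch for a single hard linear equality constraint Aβ = b, assuming for simplicity an identity mass matrix Φ̂ (e.g., an orthonormal basis); the shapes and the particular constraint are placeholders.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

N = 11
A = torch.ones(1, N)        # example constraint matrix: sum of coefficients is fixed
b = torch.tensor([1.0])

beta = cp.Variable(N)
alpha_p = cp.Parameter(N)
# Problem (11) with identity mass matrix: nearest feasible coefficients in L2.
problem = cp.Problem(cp.Minimize(cp.sum_squares(beta - alpha_p)),
                     [A.numpy() @ beta == b.numpy()])
project = CvxpyLayer(problem, parameters=[alpha_p], variables=[beta])

alpha = torch.randn(N, requires_grad=True)  # coefficients predicted by the model
beta_star, = project(alpha)                 # projected coefficients, differentiable
loss = (beta_star ** 2).sum()
loss.backward()                             # gradients flow through the projection
```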
3.4 BASIS FUNCTIONS

Selecting an appropriate basis is crucial for the efficiency and applicability of the proposed method. Ideally, the basis should allow efficient construction of u_f(t,x) from u(t), contain no tunable parameters, and lead to sparse matrices Φ, Φ^{(q)} and Φ̂. We consider bases from two families: Lagrange basis functions and radial basis functions (RBFs).

Lagrange basis functions do not have tunable parameters and have compact support, which leads to sparse Φ, Φ^{(q)} and Φ̂. For the piecewise linear basis, the interpolant u_f(t,x) can be constructed directly from the predictions by setting α(t) = u(t). However, constructing u_f(t,x) for a higher-order basis, e.g. piecewise quadratic, requires the model to make predictions not only at the observation points, but also at some extra points where no data is available. In Section 4 we demonstrate one approach to solving this problem. After extending the state u(t) by predictions at the extra nodes, the coefficients α(t) can be evaluated in the same way as for the piecewise linear basis. In this work we use piecewise linear (PWL) and piecewise quadratic (PWQ) bases. Examples of PWL basis functions are shown in Figure 2.

Radial basis functions have a wider range of properties. Some RBFs have tunable parameters, some do not. The matrices Φ, Φ^{(q)} and Φ̂ evaluated with RBFs are typically dense, but RBFs with compact support exist (e.g. the bump function). The interpolant u_f(t,x) can be constructed by evaluating α(t) = K⁻¹u(t), where K⁻¹ is the inverse of the interpolation matrix of the given RBF, K_{ij} = φ(‖x_i − x_j‖), φ is an RBF, and x_i, x_j are observation locations. In this work we use the cubic RBF basis, i.e. φ(r) = r³. We use PyTorch [26] to handle sparse matrices and to evaluate K⁻¹u(t) in a differentiable way.
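A minimal sketch of the cubic RBF interpolation described above; torch.linalg.solve is differentiable, so gradients flow from the interpolant back to the predictions u(t). The random grid and data are placeholders.

```python
import torch

x = torch.rand(30, 2, dtype=torch.float64)                       # observation locations
u = torch.randn(30, dtype=torch.float64, requires_grad=True)     # predictions u(t) at the nodes

r = torch.cdist(x, x)                   # pairwise distances ||x_i - x_j||
K = r ** 3                              # cubic RBF interpolation matrix, K_ij = phi(r)
alpha = torch.linalg.solve(K, u)        # alpha(t) = K^{-1} u(t), differentiable w.r.t. u


def u_f(z):
    """Interpolant u_f(t, z) = sum_j alpha_j * phi(||z - x_j||) at query points z."""
    return (torch.cdist(z, x) ** 3) @ alpha


# The interpolant reproduces the data at the nodes (up to solver accuracy).
print(torch.allclose(u_f(x), u, atol=1e-8))
```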
4 EXPERIMENTS

In the following experiments we use the relative error between the data y(t) and model predictions u(t), defined as ‖y(t) − u(t)‖_2 / ‖y(t)‖_2, and consider only soft constraints. We present an experiment with hard constraints, implemented as shown in Section 3.3, in Appendix D. Data generation is described in Appendix B. Training, testing and modeling details are in Appendix C. All experiments were run on a single NVIDIA Quadro P5000 GPU. All error bars represent one standard deviation of the results over five random seeds.

4.1 REPLACING EXISTING METHODS

In this experiment we take existing models which incorporate physics-based constraints in training and replace their constraint-enforcing approaches with ours. We consider two works. First, [37], which trains a GAN to produce divergence-free vector fields using a zero-divergence constraint. Second, [10], which predicts warping fields driving the evolution of sea surface temperature by observing snapshots of the temperature over time, while enforcing gradient and divergence constraints on the warping fields (see Appendix C for more details). Both models work on uniform grids, which allows them to evaluate constraints using finite differences. For comparison, we replace finite differences with our method and observe how this changes the models' performance. In both cases we use the PWL basis.

For [37] we track the mean divergence and the discriminator loss. The original approach gives a mean divergence of 0.079 and a discriminator loss of 0.091. With our method, the mean divergence was 0.014 and the discriminator loss was 0.088. Both approaches result in similar discriminator losses, but our approach produces a smaller mean divergence (smaller is better). Our method increased the runtime per epoch by 6%. For [10] we track the total, divergence and smoothness losses which, with the original approach, were 0.139, 8.4·10⁻⁵ and 1.51·10⁻⁴, respectively. With our approach the losses were 0.139, 8.3·10⁻⁵ and 1.51·10⁻⁴, respectively. Both methods produce very similar results. Our method increased the runtime per epoch by 30%. Overall, replacing existing constraint-enforcing approaches with ours on data from uniform grids resulted in comparable model performance, at the cost of a slightly increased runtime.

4.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

We start with the 1D Cahn-Hilliard equation

∂u/∂t = ε²∇²(u(1−u)² − u²(1−u) − ε²∇²u)  (13)

which is known to conserve the state u, i.e. h(u) = ∫_Ω u(t,x) dx − C = 0 at all time points, where C = ∫_Ω u(0,x) dx is a constant. This is an example of a conservation law; such laws are abundant in nature and form an important class of constraints that data-driven models of physical systems should satisfy. Conservation laws can be expressed in differential and integral forms, and this experiment demonstrates how the integral form can be enforced. The constraint is evaluated using the midpoint rule as shown in the previous section, with a single γ_1 being the identity function. We use the PWL, PWQ and cubic RBF bases and compare the results to an unconstrained model.

For training we use 30, 60 and 120 simulations, while the test set consists of 60 simulations. Simulations in the training/test data last for 0.0015/0.0030 seconds and contain 50/100 uniformly spaced time points. The full spatial grid consists of 101 uniformly spaced nodes. We randomly sample 50%, 75% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. An example of a spatial grid with 50% of the nodes is shown in Figure 3. We evaluate the constraint on a uniform grid with 200 nodes placed on top of the original grid.

To learn the dynamics of the system we use the model from [15] (Section 2). We found that using a GNN produced poor results. For that reason we represented the function F_θ with a multilayer perceptron (MLP) which updates the state of each node based on the states of all other nodes in the grid (results for a GNN are in Appendix E). The MLP contains two hidden layers with Leaky ReLU nonlinearities. The number of hidden neurons was set to the number of nodes in the grid. The coefficients α(t) for the PWL and cubic bases can be evaluated directly from the model predictions at the grid nodes. The PWQ basis, however, requires extra predictions to be available between the nodes. This is problematic since there is no data at these points to guide the model's predictions. To solve this problem we introduce a small MLP which is applied to consecutive pairs of nodes. The MLP takes the states at both nodes and the distance between them as input and estimates the state at the midpoint between the two nodes. The MLP is trained jointly with the main model and uses only the constraint-related loss term during training. For testing, we construct the interpolant u_f(t,x) using the thin plate spline basis (φ(r) = r² log r) and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and the different bases, and avoids biasing or overfitting to the bases used for training.

Figure 4 shows the results of the experiment. We observe that changing the node fraction does not significantly affect the relative errors but has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show similar or better performance than the unconstrained model. Among all bases, the cubic basis consistently results in lower relative errors and constraint violations. However, the simpler PWL basis often performs on par with the cubic basis, especially on denser spatial grids. We also observe that coarsening the grid increases the constraint violation gap between constrained and unconstrained models, and that this gap does not appear to close as we increase the amount of training data. The PWQ basis performs rather poorly on fine grids, which is likely due to a suboptimal approach to evaluating the state at the extra nodes. A better approach could consider not only pairs of points but also larger neighborhoods. Nonetheless, the PWQ basis achieves good performance on coarse grids, which shows that piecewise bases of higher order could potentially be used to enforce constraints. This would allow scaling to grids with a large number of nodes due to the sparsity of the constraint matrices and the efficient evaluation of α.
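As a concrete illustration of the constraint used in this section, the following sketch assembles the midpoint-rule estimate of Equation 9 and the soft penalty of Section 3.2; `u_f_vals` stands for the interpolant evaluated at the integration points (e.g., via the Φ matrix of Section 3.1), and the numbers are placeholders.

```python
import torch

K = 200
z = torch.linspace(0.0, 1.0, K)                # integration points on the unit interval
mu = torch.full((K,), 1.0 / K)                 # midpoint-rule weights on a uniform grid

u_f_vals = torch.rand(50, K, requires_grad=True)  # interpolant at (t_j, z_i), placeholder
C = 0.42                                          # conserved quantity, C = integral of u(0, x)

integral = u_f_vals @ mu       # Equation 9: approximate integral at each time point
h = integral - C               # constraint residual h(u_f) per time point
penalty = (h ** 2).mean()      # soft-constraint term r(h(u_f)) added to L_data
```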
4.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

We impose constraints on a 2D system governed by the heat equation ∂u/∂t = ∇²u, for which the generated initial conditions (ICs) are monotone in one direction. Since the ICs are monotone, the state u remains monotone at all time points as well. We enforce the monotonicity constraint as ∂u/∂x ≥ 0. The constraint is evaluated as shown in the previous section, with γ_1 being the identity function.

For training we use 15, 30 and 90 simulations, while the test set consists of 120 simulations. Simulations in the training/test data last for 0.1/0.2 seconds and contain 21/41 uniformly spaced time points. The full spatial grid consists of 1087 nodes. We randomly sample 33%, 66% and 100% of the nodes and train/test on the resulting (irregular) spatial grid. Training and testing are done with identical spatial grids. The spatial grid with 100% of the nodes is shown in Figure 5. The constraint is evaluated at the nodes of a uniform 51 × 51 grid placed on top of the original grid. To learn the dynamics of the system we use the model from [15] directly, with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. During testing, we use the predictions of the models to construct an interpolant u_f(t,x) using the thin plate spline basis and evaluate the constraint on that interpolant. This allows a fair comparison between the unconstrained model and the different bases.

Figure 7 shows the results of the experiment. We observe that changing the node fraction increases the relative errors of all models equally and has a noticeable effect on constraint violations, especially for the unconstrained model. Constrained models tend to show slightly higher or comparable relative errors but noticeably lower constraint violations than the unconstrained model. The cubic and PWL bases perform equally well in this case. Similarly to the experiment in the previous section, we observe that coarsening the grid introduces a larger constraint violation gap between constrained and unconstrained models, and that this gap does not appear to close as we increase the amount of training data. Figure 6 shows the qualitative difference between predictions of the constrained and unconstrained models. Predictions from the constrained model have noticeably smoother contours, making the field more monotone in the horizontal direction.
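For the inequality constraint ∂u/∂x ≥ 0, the soft penalty should only count negative slopes. A minimal sketch, where `du_dx` stands for the interpolant's x-derivative at the evaluation points, computed via Φ^{(1)} as in Section 3.1; the shapes are placeholders.

```python
import torch

du_dx = torch.randn(41, 2601, requires_grad=True)  # d u_f / dx at (t_j, z_i) on a 51x51 grid
violation = torch.relu(-du_dx)                     # positive only where the constraint is broken
penalty = (violation ** 2).mean()                  # soft penalty added to L_data
```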
4.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

We demonstrate the effect of adding constraints to a GAN when learning distributions of physical fields on unstructured grids. We use the Wasserstein GAN (WGAN) [2] as a more stable variant of a GAN, with MLPs as the generator and discriminator. Unconstrained and constrained models are trained for 1.2M iterations. Constraints are enabled only after 600k iterations. Constrained models are trained similarly to the unconstrained ones but with a modified generator loss defined as L_G + λ ln(1 + L_C), where L_G is the standard generator loss and L_C is the constraint-based loss. We define L_C as the mean value of h(u)², where h is a constraint evaluated at the centroid of each cell in the grid.

4.4.1 ZERO-DIVERGENCE FIELDS

Divergence-free vector fields are often encountered in solutions of fluid dynamics problems. The divergence-free constraint on a vector field u(x,y) = (u_1(x,y), u_2(x,y))^T is defined as h(u) = ∂u_1/∂x + ∂u_2/∂y = 0. The constraint is enforced using the PWL basis. We generated a dataset with 10k divergence-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely divergence-free but have small residual divergence due to discretization errors. Figure 9a shows a clear difference in the quality of the samples generated by the unconstrained and constrained models. Samples from the constrained model are smoother and more similar to the data. The quantitative comparison in Figure 8 shows that the constrained model generates fields with much lower constraint violation and a divergence distribution very similar to that of the data.

4.4.2 ZERO-LAPLACIAN FIELDS

Fields with zero Laplacian represent solutions to some PDEs, for example the steady-state heat equation. The zero-Laplacian constraint on a scalar field u(x,y) is defined as h(u) = ∂²u/∂x² + ∂²u/∂y² = 0. The constraint is enforced using the cubic basis, as the PWL basis has zero second derivatives everywhere. We generated a dataset with 10k Laplacian-free fields on an unstructured grid with 1050 nodes (Figure 14) and used a WGAN to learn a distribution over such fields. Note that the generated fields are not entirely Laplacian-free due to discretization errors. Results of the experiment are shown in Figures 9b and 8. Similarly to the divergence-free case, the visual quality of the fields generated by the constrained model is significantly better than that of the unconstrained model. The quantitative comparison in Figure 8 shows that the constrained model generates fields with much lower constraint violation and a Laplacian distribution very similar to that of the data.
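A sketch of the constrained generator objective L_G + λ ln(1 + L_C) used above. The constraint residual at the cell centroids is abstracted into a linear operator `D_op`, which in our setting would be assembled from the PWL basis derivatives; the generator, discriminator and shapes are placeholders.

```python
import torch


def constrained_generator_loss(disc, gen, z, D_op, lam=0.2):
    """WGAN generator loss with the soft constraint of Section 4.4 (lam as in C.4.1)."""
    fields = gen(z)                 # generated fields, (batch, field_dim)
    L_G = -disc(fields).mean()      # standard WGAN generator loss
    h = fields @ D_op.T             # constraint residual at cell centroids (linear in the field)
    L_C = (h ** 2).mean()           # mean squared constraint residual
    return L_G + lam * torch.log1p(L_C)
```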
5 RELATED WORK

Soft constraints. Soft constraints are widely used because they are relatively easy to implement. Examples include lake temperature prediction [18; 16], traffic simulation [22], and fluid and climate modeling [11; 10; 4], where constraints are evaluated pointwise or using finite differences.

Hard constraints. Approaches to implementing hard constraints are diverse and can be categorized as post-processing the output of an unconstrained model [4; 17; 24; 34] or designing a model that produces feasible predictions by default [23; 25; 14; 9; 13; 8; 38].

Constrained PDE models. Current approaches to enforcing soft [10; 11] and hard [21; 25; 17] constraints are limited to specific types of constraints and spatial grids. For example, [25; 17] implement only hard differential constraints, and both are limited to uniform grids. Uniform grids allow constraints to be evaluated efficiently, e.g. using finite differences [10; 21; 25] or the fast Fourier transform [17], but the assumption that the data lies on a uniform grid can be limiting.

Constrained GANs. Works such as [37; 17] showed how physics-based constraints benefit training and the quality of generated samples, but they are also limited to uniform grids.

6 CONCLUSION

We presented a general approach to enforcing algebraic constraints on unstructured grids and showed how it can be used to enforce soft and hard constraints. We demonstrated the applicability of the approach to learning PDE-driven dynamical systems and distributions of physical fields. We considered two families of basis functions for constructing the interpolant and showed how Lagrange basis functions of order higher than one can be used. Our method allows us to drop the unrealistic assumption of uniform spatial grids and shows promising results on various tasks.

REPRODUCIBILITY STATEMENT

All details required to reproduce the experiments are provided in Section 4 and the Appendices. Code and data used to run the experiments will be made publicly available after the review process.

A GENERALIZED CHAIN RULE AND HANDLING MIXED PARTIAL DERIVATIVES

Let y = g(x_1, ..., x_n) with all arguments being either identical, distinct or grouped. Then, partial derivatives of f(y) can be evaluated using Faà di Bruno's formula

∂^n f(y) / (∂x_1 ··· ∂x_n) = Σ_{π∈Π} f^{(|π|)}(y) Π_{B∈π} ∂^{|B|} y / Π_{j∈B} ∂x_j

where Π is the set of all partitions of the set {1, ..., n}, B runs through the elements of the partition π, f^{(m)} denotes the m'th derivative, and |·| is cardinality. The formula consists of two types of terms: f^{(|π|)}(y), which can be evaluated using automatic differentiation, and ∂^{|B|} y / Π_{j∈B} ∂x_j, which can be evaluated as shown in Equation 8. In the case that all x_1, ..., x_n are identical, the mixed derivative ∂^n f(y) / (∂x_1 ··· ∂x_n) reduces to ∂^n f(y) / ∂x_1^n.

B DATA GENERATION

In all cases we run simulations on a fine grid and then interpolate the results to a coarser grid, represented as the "full grid" in the experiments.

B.1 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

Training and testing data were obtained by solving

∂u/∂t = ε²∇²(u(1−u)² − u²(1−u) − ε²∇²u)  (14)

on a unit interval with periodic boundary conditions and ε = 0.04. The domain was represented by a uniform grid with 100 nodes and the time step was set to 1.0e-6 sec. The initial conditions u_0(x) were generated as

ũ_0(x) = Σ_{i=1}^{10} (λ_i cos(2πi(x−s)) + γ_i sin(2πi(x−s))) + λ_0/2  (15)
u_0(x) = (ũ_0(x) − min ũ_0(x)) / (max ũ_0(x) − min ũ_0(x))  (16)

where λ_i, γ_i ∼ Unif(−1, 1) and s ∼ Unif(0, 1). Examples of the simulations are shown in Figure 10.

B.2 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

Training and testing data were obtained by solving

∂u/∂t = D∇²u  (17)

on a unit square with zero Neumann boundary conditions and D = 0.2. The domain was represented by an unstructured grid with 2971 nodes and the time step was set to 0.001 sec. The initial conditions u_0(x) were generated as

f(x) = Σ_{i=0}^{6} ω_i x^i  (18)
g(y) = (1/2) Σ_{i=1}^{3} (λ_i cos(2πi(y+s)) + γ_i sin(2πi(y+s))) + λ_0/2  (19)
ũ_0(x,y) = f(x) + g(y)  (20)
u_0(x,y) = (ũ_0(x,y) − min ũ_0(x,y)) / (max ũ_0(x,y) − min ũ_0(x,y))  (21)

where ω_i ∼ Unif(0.1, 1.1) and λ_i, γ_i, s ∼ Unif(−1, 1). Examples of the simulations are shown in Figure 11.
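A sketch of the initial-condition generator of Equations 18–21. The extracted formulas do not show the frequency of each harmonic unambiguously, so the indexing by i inside the trigonometric terms is an assumption.

```python
import torch


def heat_ic(x, y):
    """Random initial condition in the spirit of Equations 18-21, scaled to [0, 1]."""
    omega = torch.rand(7) + 0.1                    # omega_i ~ Unif(0.1, 1.1)
    lam = torch.rand(4) * 2 - 1                    # lambda_i ~ Unif(-1, 1)
    gam = torch.rand(4) * 2 - 1                    # gamma_i  ~ Unif(-1, 1)
    s = torch.rand(1) * 2 - 1                      # shift s  ~ Unif(-1, 1)
    f = sum(omega[i] * x ** i for i in range(7))   # Equation 18: random polynomial in x
    g = 0.5 * sum(lam[i] * torch.cos(2 * torch.pi * i * (y + s))
                  + gam[i] * torch.sin(2 * torch.pi * i * (y + s))
                  for i in range(1, 4)) + lam[0] / 2  # Equation 19 (harmonic index assumed)
    u = f + g                                      # Equation 20
    return (u - u.min()) / (u.max() - u.min())     # Equation 21: normalize to [0, 1]
```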
B.3 GAN WITH A DIVERGENCE CONSTRAINT

The data was generated by sampling random velocity fields and then projecting them onto the space of divergence-free fields. The procedure was as follows. First, a random velocity field u_0(x,y) was generated on a unit square by generating each component i as

ũ_{0i}(x,y) = Σ_{k,l=−N}^{N} (λ_{kl} cos(kx+ly) + γ_{kl} sin(kx+ly))  (22)
u_{0i}(x,y) = 6 × ((ũ_{0i}(x,y) − min ũ_{0i}(x,y)) / (max ũ_{0i}(x,y) − min ũ_{0i}(x,y)) − 0.5)  (23)

where N = 10 and λ_{kl}, γ_{kl} ∼ N(0, 1). Then, the divergence-free component of u_0(x,y), denoted by u*_0(x,y), was extracted with the projection method by solving ∇·u_0 = ∇²φ for φ and then evaluating u*_0(x,y) = u_0(x,y) − ∇φ. Finally, the data was scaled to [−1, 1].

B.4 GAN WITH A LAPLACIAN CONSTRAINT

The data was generated by solving

∇²u = 0  (24)

on a unit square with Dirichlet boundary conditions. The domain was represented by an unstructured grid with 2971 nodes. The boundary conditions were obtained by generating random functions u_0(x) and using their boundary values. The functions u_0(x) were generated as

u_0(x,y) = Σ_{k,l=−N}^{N} (λ_{kl} cos(kx+ly) + γ_{kl} sin(kx+ly))  (25)

where N = 5 and λ_{kl}, γ_{kl} ∼ N(0, 1). The data was then scaled to [0, 1].

C MODELS, TRAINING AND TESTING

C.1 REPLACING EXISTING METHODS

For our comparisons we considered experiments from two works; we provide some details about these experiments next. The first experiment was taken from [37], Section 3.2. The experiment shows how soft physics-based constraints affect the predictions of a GAN learning a distribution of divergence-free fields. The data is generated on a uniform grid, which allows the divergence to be evaluated using finite differences. The constraint is enforced through an extra loss term which penalizes violations of the constraint. The performance metric used is the Frobenius norm of the divergence averaged over all fields in a batch. For training we used code provided by the authors with the original parameters. We replaced the finite differences in the constraint evaluation function with our method.

The second experiment was taken from [10]. This work deals with the task of predicting sea surface temperatures at future times given snapshots of the temperature at current and previous times. The model proposed by the authors accomplishes this task by taking a sequence of surface temperatures at times t_{i−k}, ..., t_i and predicting the underlying motion field, which is then used to predict the temperature at time t_{i+1}. Insights about the physical properties of the motion field were used to constrain the model's predictions. Constraints are imposed on the divergence, magnitude and gradients of the motion field. The data is generated on a uniform grid, which allows the constraints to be evaluated using finite differences. The constraints are enforced through extra loss terms which penalize violations of the constraints. The performance metrics used are the MSE between the data and model predictions, the smoothness loss and the divergence loss. For training we used code provided by the authors with the original parameters. We replaced the finite differences in the constraint evaluation function with our method.

C.2 CAHN-HILLIARD EQUATION WITH AN INTEGRAL CONSTRAINT

In all experiments with the Cahn-Hilliard equation we represent the dynamics function F_θ by an MLP with 2 hidden layers and LeakyReLU nonlinearities (negative slope 0.2). The number of neurons in each layer was set to the number of nodes in the spatial grid on which the model was trained. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5.
All models were trained for 1500 epochs using the Rprop optimizer [29] with the learning rate set to 1.0·10⁻⁶ and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 2.

C.3 HEAT EQUATION WITH A MONOTONICITY CONSTRAINT

In all experiments with the heat equation we represent the dynamics function F_θ by a GNN with the messaging and aggregation networks being MLPs with a single hidden layer consisting of 60 neurons with Tanh nonlinearities and input/output sizes of 4/40 and 41/1, respectively. The predictions u(t) were obtained by simulating the system forward in time using the adaptive Heun solver from the torchdiffeq package [6] with rtol and atol both set to 1.0e-5. All models were trained for 750 epochs using the Rprop optimizer [29] with the learning rate set to 1.0·10⁻⁶ and the batch size set to the number of simulations in the training set. Mean squared error was used as the loss function. Spatial and temporal grids in the testing data were the same as in the training data. We set λ = 0.1.

C.4 LEARNING DISTRIBUTIONS OF PHYSICAL FIELDS

In both cases we used identical architectures and training processes for the constrained and unconstrained models. Both models were trained for 1.2M iterations using the same random seed. Constraints in the constrained model were enabled only after 600k iterations. The base distribution was set to a 128-dimensional isotropic standard normal. Models were trained using the RMSProp optimizer [35] with the batch size and learning rate set to 64 and 0.00001, respectively. The discriminator's weights were clipped to [−0.01, 0.01].

C.4.1 ZERO-DIVERGENCE FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 2010 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 2010, respectively, and a final hyperbolic tangent nonlinearity applied to the output. We set λ = 0.2.

C.4.2 ZERO-LAPLACIAN FIELDS

We used MLPs as the discriminator and generator. The discriminator consisted of 3 hidden layers of sizes 1024-512-256 with LeakyReLU nonlinearities (negative slope 0.2) and input/output sizes of 1086 and 1, respectively. The generator consisted of 3 hidden layers of sizes 256-512-1024 with LeakyReLU nonlinearities (negative slope 0.2), input/output sizes of 128 and 1086, respectively, and a sigmoid function applied to the output. We set λ = 0.0075.

D CAHN-HILLIARD EQUATION WITH HARD INTEGRAL CONSTRAINTS

Here we demonstrate how the approach to enforcing hard constraints described in Section 3.3 can be used to enforce integral constraints on a nonuniform grid. We use the same setup as in Section 4.2 with 30 training simulations and 50% of the nodes in the grid. We compare three models: an unconstrained model, a model with a soft constraint and a model with a hard constraint. We use the PWL basis during training and testing. Table 1 shows that the relative errors of all three models are practically identical but, as Figure 12 demonstrates, the constraint violations differ significantly. Constraint violations of the model with the hard constraint are zero at all time points, as expected.
The ability to produce predictions that satisfy some constraints exactly can be very useful in some applications. However, as we mention in Section 3.3, this approach to enforcing hard constraints is currently limited to systems with a relatively small number of nodes and is significantly slower than models with soft constraints. We report training times in Table 1.

E LEARNING CAHN-HILLIARD EQUATION WITH GNNS

We use the same setup as in Section 4.2 with 75% of the nodes in the grid, but train the models for 1500 epochs. Instead of an MLP we use a GNN with messaging and aggregation networks being MLPs with two hidden layers of size 64 and LeakyReLU nonlinearities (negative slope 0.2). For each node, the GNN evaluates the output as

du_i/dt = γ((1/|N(i)|) Σ_{j∈N(i)} φ(u_i^proj, u_j^proj, x_ij^proj), u_i^proj)  (26)

where u_i^proj and u_j^proj are linear projections of the state at nodes i and j, and x_ij^proj is a linear projection of the pair consisting of the distance between nodes i and j and a unit vector pointing from node i to node j. All projections have dimension 16. We compare constrained and unconstrained models using the PWL, PWQ and cubic RBF bases. Results of the experiment are shown in Figure 13. The figure shows that the relative errors and constraint violations of all models are significantly higher than for the MLP-based models.

F EXTRA FIGURES
1. What is the main contribution of the paper regarding enforcing physical constraints in deep learning models? 2. What are the strengths and weaknesses of the proposed method, particularly in its application to unstructured grids? 3. How does the reviewer assess the clarity and organization of the paper's content, including the ordering of the test cases and the introduction? 4. What are the limitations of the paper compared to prior works, and how could they be addressed? 5. Are there any specific areas where the reviewer suggests improvements or additional details, such as providing more intuitive explanations or expanding the related work section?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a method to enforce physical constraints in deep learning models. It provides a nice summary of local, differential and integral constraints, and frames them in a Lagrangian setting. For a reason which I could not clearly follow, the paper focuses on GANs early on. This is unintuitive to me, as the physical constraints could nicely stand on their own. Review The paper presents a series of tests which follow a somewhat unintuitive order. First, a regular grid case is presented, which seems to be taken from previous work. Not much detail is given, and the test only confirms that the constraints give very similar results. This test seems like a pure debugging case, as it is based on Cartesian grids instead of the unstructured grids which the paper wants to focus on. The subsequent sections (4.2 ff) then provide examples with unstructured grids. While the first CH-case is very simplistic, the latter cases contain interesting setups and show some interesting results. E.g., I found it interesting to see how the physical constraints affect the smoothness of the zero-divergence fields. In terms of writing, I found the order and argumentation of the results sub-optimal, as mentioned above. The structured grid case seems mostly out of place. The introduction also sounds strange to me, with current work being "limited" to uniform grids. Previous works have admittedly focused on these, but the methods are still applicable - after all, this submission is applying the methods from uniform grids in a fairly straightforward manner. I would recommend rephrasing this. Also, stylistically, citations shouldn't be used as nouns, and the related work seems too brief to me (e.g., generative models for physical problems are missing).
ICLR
Title SNAS: stochastic neural architecture search Abstract We propose Stochastic Neural Architecture Search (SNAS), an economical end-to-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in the same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with a locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, and the architecture is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy from searching, which attention-based NAS can only match after parameter retraining, exhibiting the potential to stride towards efficient NAS on big datasets.

1 INTRODUCTION

The trend of automatically seeking state-of-the-art neural network architectures has been growing since Zoph & Le (2016), given the enormous effort needed in manual design. Normally, a Neural Architecture Search (NAS) pipeline comprises architecture sampling, parameter learning, architecture validation, credit assignment and search direction update. There are basically three existing frameworks for neural architecture search. Evolution-based NAS like NEAT (Stanley & Miikkulainen, 2002) employs an evolution algorithm to simultaneously optimize topology alongside parameters. However, it takes enormous computational power and cannot leverage the efficient gradient back-propagation in deep learning. To achieve performance on par with human-designed architectures, Real et al. (2018) takes 3150 GPU days for the whole evolution. Reinforcement-learning-based NAS is end-to-end for gradient back-propagation, among which the most efficient one, ENAS (Pham et al., 2018), learns optimal parameters and architectures together, just like NEAT. However, as NAS is modeled as a Markov Decision Process, credits are assigned to structural decisions with temporal-difference (TD) learning (Sutton et al., 1998), whose efficiency and interpretability suffer from delayed rewards (Arjona-Medina et al., 2018). To get rid of the architecture sampling process, DARTS (Liu et al., 2019) proposes deterministic attention on operations to analytically calculate the expectation at each layer. After the convergence of the parent network, it removes operations with relatively weak attention. Due to the pervasive non-linearity in neural operations, this introduces intractable bias to the loss function. This bias causes inconsistency between the performance of derived child networks and converged parent networks, making parameter retraining necessary. A more efficient, more interpretable and less biased framework is desired, especially for future full-fledged NAS solutions on large datasets.
In this work, we propose a novel, efficient and highly automated framework, Stochastic Neural Architecture Search (SNAS), that trains neural operation parameters and architecture distribution parameters in the same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. One of the key motivations of SNAS is to replace the feedback mechanism triggered by constant rewards in reinforcement-learning-based NAS with more efficient gradient feedback from a generic loss. We reformulate NAS with a new stochastic modeling to bypass the MDP assumption in reinforcement learning. To combine architecture sampling with the computational graph of an arbitrary differentiable loss, the search space is represented with a set of one-hot random variables from a fully factorizable joint distribution, multiplied as a mask to select operations in the graph. Sampling from this search space is made differentiable by relaxing the architecture distribution with the concrete distribution (Maddison et al., 2016). We name the gradients w.r.t. their parameters the search gradient. From a global view, we prove that SNAS optimizes the same objective as reinforcement-learning-based NAS, except that the training loss is used as the reward. Zooming in, we provide a policy gradient equivalent of this search gradient, showing how gradients from the loss of each sample are used to assign credits to structural decisions. By interpreting this credit assignment as Taylor Decomposition (Montavon et al., 2017a), we prove SNAS's efficiency over reinforcement-learning-based NAS. Additionally, seeing that existing methods (Liu et al., 2019) manually design the topology of child networks to avoid complex architectures, we propose a global resource constraint to automate it, augmenting the objective with feasibility concerns. This global constraint can be linearly decomposed over structural decisions, hence the proof of SNAS's efficiency still applies.

In our experiments, SNAS shows strong performance compared with DARTS and all other existing NAS methods in terms of test error, model complexity and searching resources. Specifically, SNAS discovers novel convolutional cells achieving 2.85±0.02% test error on CIFAR-10 with only 2.8M parameters, which is better than 3.00±0.14%-3.3M from 1st-order DARTS and 2.89%-4.6M from ENAS. It is also on par with 2.76±0.09%-3.3M from 2nd-order DARTS with fewer parameters. With a more aggressive resource constraint, SNAS discovers an even smaller model achieving 3.10±0.04% test error on CIFAR-10 with 2.3M parameters. During the architecture search process, SNAS obtains a validation accuracy of 88%, compared to around 70% for ENAS, in fewer epochs. When validating the derived child network on CIFAR-10 without fine-tuning, SNAS maintains the search validation accuracy, significantly outperforming the 54.66% of DARTS. These results validate our theory that SNAS is less biased than DARTS. The discovered cell achieves 27.3% top-1 error when transferred to ImageNet (mobile setting), which is comparable to 26.9% by 2nd-order DARTS.

2 METHODOLOGY

The main initiative of SNAS is to build an efficient and economical end-to-end learning system with as little compromise of the NAS pipeline as possible. In this section, we first describe how to sample from the search space for NAS in a cell, and how this motivates a stochastic reformulation for SNAS (Section 2.1). A new optimization objective is provided and the inconsistency of attention-based NAS is discussed. Then in Section 2.2, we introduce how this discrete search space is relaxed to be continuous to let gradients back-propagate through it. In Section 2.3, the search gradient of SNAS is connected to the policy gradient in reinforcement-learning-based NAS (Zoph & Le, 2016; Pham et al., 2018), interpreting SNAS's credit assignment with contribution analysis. Finally, we introduce in Section 2.4 how SNAS automates the topology search to reduce the complexity of child networks, as well as how it decomposes this global constraint in the context of credit assignment.
2.1 SEARCH SPACE AND ARCHITECTURE SAMPLING

Searching for the structure of a cell that is later stacked as a building block of a deep architecture is an ad hoc solution to trade off search efficiency and result optimality (Zoph et al., 2017; Liu et al., 2017a; Real et al., 2018; Pham et al., 2018; Liu et al., 2019). As shown in the left of Figure 1, the search space, i.e. a cell, is represented using a directed acyclic graph (DAG), which is called the parent graph. Nodes x_i in this DAG represent latent representations, whose dimensions are simply ignored to avoid abuse of notation. In convolutional networks, they are feature maps. Edges (i,j) represent information flows and possible operations O_{i,j} to be selected between two nodes x_i and x_j. To include the skip operation, nodes are enforced to be ordered, while edges only point from lower-indexed nodes to higher ones. Thus we have intermediate nodes

x_j = Σ_{i<j} Õ_{i,j}(x_i)  (1)

where Õ_{i,j} is the selected operation at edge (i,j). Analogous to ENAS, SNAS searches for the operations and topology of this cell at the same time. Rather than using two distributions, this is done by introducing a zero operation, as in DARTS. As in ENAS and DARTS, each cell is designed to have two inputs from the outputs of previous cells. The output of a cell is the concatenation of intermediate nodes. Thanks to the fact that the volume of structural decisions, which pick Õ_{i,j} for edge (i,j), is generally tractable in a cell, we represent it with a distribution p(Z). Multiplying each one-hot random variable Z_{i,j} with each edge (i,j) in the DAG, we obtain a child graph, whose intermediate nodes are

x_j = Σ_{i<j} Õ_{i,j}(x_i) = Σ_{i<j} Z_{i,j}^T O_{i,j}(x_i)  (2)

In terms of how to parameterize and factorize p(Z), SNAS is built upon the observation that NAS is a task with fully delayed rewards in a deterministic environment. That is, the feedback signal is only ready after the whole episode is done, and all state transition distributions are delta functions. Therefore, a Markov Decision Process assumption as in ENAS may not be necessary. In SNAS, we simply assume that p(Z) is fully factorizable, whose factors are parameterized with α and learnt along with the operation parameters θ. In Appendix A we connect the probability of a trajectory in the MDP of ENAS to this joint probability p(Z). Following the setting in Zoph & Le (2016), the objective of SNAS is also

E_{Z∼p_α(Z)}[R(Z)]  (3)

The difference is that rather than using a constant reward from the validation accuracy, we use the training/testing loss directly as the reward, R(Z) = L_θ(Z), so that the operation parameters and architecture parameters can be trained under one generic loss:

E_{Z∼p_α(Z)}[R(Z)] = E_{Z∼p_α(Z)}[L_θ(Z)]  (4)

The whole process of obtaining a Monte Carlo estimate of this objective is shown in Figure 1. An intuitive interpretation of this objective is to optimize the expected performance of architectures sampled with p(Z).
This differentiates SNAS from attention-based NAS like DARTS, which avoids the sampling process by taking the analytical expectation at each edge over all operations. In Appendix B we illustrate the inconsistency between DARTS's loss and this objective, explaining why it requires parameter fine-tuning or even retraining after architecture derivation. Resembling ENAS, SNAS does not have this constraint. We introduce in the next subsection how SNAS calculates gradients w.r.t. θ and α.

2.2 PARAMETER LEARNING FOR OPERATIONS AND ARCHITECTURES

Though the objective (4) could be optimized with a black-box gradient descent method as in Ranganath et al. (2014), it would suffer from the high variance of the likelihood ratio trick (Williams, 1992) and could not make use of the differentiable nature of L_θ(Z). Instead, we use the concrete distribution (Maddison et al., 2016) to relax the discrete architecture distribution to be continuous and differentiable with the reparameterization trick:

Z_{i,j}^k = f_{α_{i,j}}(G_{i,j}^k) = exp((log α_{i,j}^k + G_{i,j}^k)/λ) / Σ_{l=0}^n exp((log α_{i,j}^l + G_{i,j}^l)/λ)  (5)

where Z_{i,j} is the softened one-hot random variable for operation selection at edge (i,j), G_{i,j}^k = −log(−log(U_{i,j}^k)) is the k-th Gumbel random variable, and U_{i,j}^k is a uniform random variable. α_{i,j} is the architecture parameter, which could depend on the predecessors Z_{h,i} if p(Z_{i,j}) is a conditional probability. λ is the temperature of the softmax, which is steadily annealed to be close to zero in SNAS. In Maddison et al. (2016), it is proved that p(lim_{λ→0} Z_{i,j}^k = 1) = α_{i,j}^k / (Σ_{l=0}^n α_{i,j}^l), making this relaxation unbiased once converged. The full derivation of ∇E_{Z∼p_α(Z)}[L_θ(Z)] is given in Appendix C. Here, with the surrogate loss L for each sample, we provide its gradients w.r.t. x_j, θ_{i,j}^k and α_{i,j}^k:

∂L/∂x_j = Σ_{m>j} ∂L/∂x_m · Z_{j,m}^T ∂O_{j,m}(x_j)/∂x_j,
∂L/∂θ_{i,j}^k = ∂L/∂x_j · Z_{i,j}^k ∂O_{i,j}(x_i)/∂θ_{i,j}^k,
∂L/∂α_{i,j}^k = ∂L/∂x_j · O_{i,j}(x_i)^T (δ(k′−k) − Z_{i,j}) Z_{i,j}^k · 1/(λ α_{i,j}^k).  (6)

We name ∂L/∂α the search gradient, similar to the one in Wierstra et al. (2008), even though no policy gradient is involved. This renders SNAS a differentiable version of evolutionary-strategy-based NAS.
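A minimal sketch of a single SNAS edge. It uses PyTorch's built-in F.gumbel_softmax for the concrete relaxation of Equation 5; the candidate operations, sizes and the fixed temperature are placeholders (SNAS anneals λ towards zero during training).

```python
import torch
import torch.nn.functional as F

ops = torch.nn.ModuleList([
    torch.nn.Conv2d(16, 16, 3, padding=1),  # candidate operations O^k on one edge
    torch.nn.Conv2d(16, 16, 5, padding=2),
    torch.nn.Identity(),
])
log_alpha = torch.zeros(len(ops), requires_grad=True)  # architecture parameters of this edge


def edge_forward(x, temperature):
    z = F.gumbel_softmax(log_alpha, tau=temperature)       # softened one-hot Z (Equation 5)
    return sum(z[k] * op(x) for k, op in enumerate(ops))   # Z^T O(x) as in Equation 2


x = torch.randn(1, 16, 8, 8)
out = edge_forward(x, temperature=1.0)
out.sum().backward()   # gradients reach both operation weights and log_alpha
```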
2.3 CREDIT ASSIGNMENT

With the equivalence of p(Z) in SNAS and p(τ) in ENAS from Section 2.1 and the search gradient of SNAS from Section 2.2, we discuss in this subsection what credits SNAS's search gradients assign to each structural decision. Assigning credits to actions both temporally and laterally is an important topic in reinforcement learning (Precup, 2000; Schulman et al., 2015; Tucker et al., 2018; Xu et al., 2018). In ENAS, proximal policy optimization (PPO) (Schulman et al., 2017) is used to optimize the architecture policy, which distributes credits with TD learning and the generalized advantage estimator (GAE) (Schulman et al., 2015). However, as the reward of a NAS task is only obtainable after the architecture is finalized and the network is tested for accuracy, it is a task with delayed rewards. As proved by Arjona-Medina et al. (2018), TD learning is biased under reward delay and corrects the bias exponentially slowly. Different from ENAS, there is no MDP assumption in SNAS, but the reward function is made differentiable in terms of structural decisions. From Section 2.2 we can derive the expected search gradient for the architecture parameters at edge (i,j):

E_{Z∼p(Z)}[∂L/∂α_{i,j}^k] = E_{Z∼p(Z)}[∇_{α_{i,j}^k} log p(Z_{i,j}) [∂L/∂x_j · Õ_{i,j}(x_i)]_c]  (7)

where [·]_c emphasizes that · is a constant in the gradient calculation w.r.t. α. A full derivation is provided in Appendix D. Apparently, the search gradient is equivalent to a policy gradient for the distribution at this edge, whose credit is assigned as

R_{i,j} = −[∂L/∂x_j · Õ_{i,j}(x_i)]_c  (8)

From a decision-wise perspective, this reward can be interpreted as contribution analysis of L with Taylor Decomposition (Montavon et al., 2017a), which distributes importance scores among nodes in the same effective layer. Given the presence of skip connections, nodes may be involved in multiple effective layers, and the credits from these layers are integrated. This integrated credit of a node j is then distributed to the edges (i,j) pointing to it, weighted by Õ_{i,j}(x_i). Details are given in Appendix E. Thus no delayed reward exists for any structural decision; the credits assigned to it are valid from the beginning. This proves why SNAS is more efficient than ENAS. Laterally, at each edge, credits are distributed among the possible operations, adjusted by the random variables Z_{i,j}. At the beginning of training, Z_{i,j} is continuous and the operations share the credit, so training focuses mainly on the neural operation parameters. As the temperature goes down and Z_{i,j} becomes closer to one-hot, credits are given to the chosen operations, adjusting their probabilities of being sampled.

2.4 RESOURCE CONSTRAINT

Apart from training efficiency and validation accuracy, the forwarding time of the child network is another concern in NAS, as it determines the feasibility of deployment. In SNAS, this can be taken into account as a regularizer in the objective:

E_{Z∼p_α(Z)}[L_θ(Z) + ηC(Z)] = E_{Z∼p_α(Z)}[L_θ(Z)] + η E_{Z∼p_α(Z)}[C(Z)]  (9)

where C(Z) is the time cost of the child network associated with the random variables Z. Rather than directly estimating the forwarding time, three proxies from the literature (Gordon et al., 2018; Ma et al., 2018) can be used to approximately represent it: 1) the parameter size; 2) the number of floating-point operations (FLOPs); and 3) the memory access cost (MAC). Details about C(Z) in SNAS can be found in Appendix F. However, unlike L_θ(Z), C(Z) is not differentiable w.r.t. either θ or α. A natural question is whether efficient credit assignment from C(Z) can be done with a decomposition similar to the one introduced above, such that the proof of SNAS's efficiency still applies. The answer is positive, thanks to the fact that C(Z) is linear in all the one-hot random variables Z_{i,j}:

C(Z) = Σ_{i,j} C(Z_{i,j}) = Σ_{i,j} Z_{i,j}^T C(O_{i,j})  (10)

mainly because the size of the feature maps at each node does not depend on the structural decision. That is, the distribution at each edge (i,j) is optimized with a local penalty, which is a conservative decomposition of the global cost, consistent with the credit assignment principle in SNAS. In SNAS, p_α(Z) is fully factorizable, making it possible in principle to calculate E_{Z∼p_α}[C(Z)] analytically with the sum-product algorithm (Kschischang et al., 2001). As this expectation is still non-trivial to compute, we optimize the Monte Carlo estimate of the final form from the sum-product algorithm

E_{Z∼p_α}[C(Z)] = Σ_{i,j} E_{Z_{\i,j}∼p_α}[E_{Z_{i,j}∼p_α}[Z_{i,j}^T C(O_{i,j})]]  (11)

with policy gradients.
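A sketch of the Monte Carlo estimate of the resource term in Equation 11. The per-operation costs are placeholders; note that SNAS optimizes this term with policy gradients, whereas for brevity this sketch differentiates directly through the relaxed sample.

```python
import torch
import torch.nn.functional as F

log_alpha = torch.randn(14, 8, requires_grad=True)  # 14 edges x 8 candidate operations
op_cost = torch.rand(14, 8)                          # C(O_{i,j}): e.g. parameter count or FLOPs

z = F.gumbel_softmax(log_alpha, tau=0.5)   # sampled (relaxed) one-hot Z_{i,j} for every edge
cost = (z * op_cost).sum()                 # Monte Carlo estimate of Equation 11
(0.01 * cost).backward()                   # eta * C(Z) joins the training loss
```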
3 EXPERIMENTS

Following the pipeline in DARTS, our experiments consist of three stages. First, SNAS is applied to search for convolutional cells in a small parent network on CIFAR-10, and we choose the best cells based on their search validation accuracy. Then, a larger network is constructed by stacking the learned cells (child graphs) and retrained on CIFAR-10 to compare the performance of SNAS with other state-of-the-art methods. Finally, we show that the cells learned on CIFAR-10 are transferable to large datasets by evaluating their performance on ImageNet.

3.1 ARCHITECTURE SEARCH ON CIFAR-10

Motivation We apply SNAS to find convolutional cells on CIFAR-10 for image classification. Unlike DARTS, which evaluates the performance of child networks during the searching stage by training their snapshots from scratch, we directly take the search validation accuracy as the performance evaluation criterion. This evaluation method is valid in SNAS since the searching is unbiased from its objective, as introduced in Section 2.1.

Dataset The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) is a basic dataset for image classification, which consists of 50,000 training images and 10,000 testing images. Data transformation is achieved by the standard data pre-processing and augmentation techniques (see Appendix G.1).

Search Space Our setup follows DARTS, where convolutional cells (parent graphs) of 7 nodes are stacked multiple times to form a network. The input nodes, i.e. the first and second nodes, of cell k are set equal to the outputs of cell k−2 and cell k−1, respectively, with 1×1 convolutions inserted as necessary, and the output node is the depthwise concatenation of all the intermediate nodes. Reduction cells are located at 1/3 and 2/3 of the total depth of the network to reduce the spatial resolution of feature maps. Therefore the architecture distribution parameters are (α_normal, α_reduce), where α_normal is shared by all the normal cells and α_reduce is shared by all the reduction cells. Details about all the operations included are given in Appendix G.1.

Training Settings In the searching stage, we train a small network of 8 stacked cells (parent graphs) using SNAS with three levels of resource constraint for 150 epochs. This network size is determined to fit into a single GPU. Single-level optimization is employed to optimize θ and α over the same dataset, as opposed to the bilevel optimization employed by DARTS. The rest of the setup follows DARTS (Appendix G.1). The search takes 32 hours¹ on a single GPU².

Searching Process The normal and reduction cells learned on CIFAR-10 using SNAS with the mild resource constraint are shown in Figure 2. In Figure 3, we give the validation accuracy during the search of SNAS, DARTS and ENAS with 10 randomly generated seeds. Compared with ENAS, SNAS takes fewer epochs to converge to a higher validation accuracy. Though DARTS converges faster than SNAS, this accuracy is inconsistent with the child network. Table 1 presents a comparison of the validation accuracy at the end of the search and after architecture derivation without fine-tuning. While SNAS maintains its performance, there is a huge gap between the two in DARTS.

Figure 3: Search progress in validation accuracy from SNAS, DARTS and ENAS.
[Figure: bar chart comparing DARTS and SNAS over Normal cell, Reduction cell and Overall; numeric bar values omitted.]

This gap is caused by the extra architecture derivation step in DARTS, which consists of the following two steps. (1) Remove operations with relatively weak attention. As shown in Figure 4, the entropy of the architecture distribution (softmax) at each edge, i.e. H_{p_α}, is relatively high in DARTS, indicating its uncertainty in structural decisions.
Hence removing other operations from the continuous relaxation strongly affects the output of the network. (2) Remove relatively ambiguous edges. DARTS manually selects two inputs for each intermediate node, so the topology is inconsistent with that in the training stage, while SNAS employs architecture sampling and a resource regularizer to automatically induce sparsity. The phenomena shown in Figure 4 and Table 1 verify our claim that the searching process in SNAS is less biased from the objective, i.e. Equation (4), and could save the computational resources for parameter retraining when extended to NAS on large datasets.

Searching Results Three levels of resource constraint, mild, moderate and aggressive, are examined in SNAS. The mild resource constraint lies at the margin where the zero operation appears and drops edges in the child graphs, as shown in Figure 2. Interestingly, every node takes only two input edges, just as in the manually designed scheme of ENAS and DARTS. When the constraint level is increased to moderate, the reduction cell begins to discover structures similar to the normal cells, as shown in Appendix H. When a more aggressive resource constraint is added, the structure of the reduction cells is further sparsified. As shown in Figure 5, more edges are dropped, leaving only two, which leads to dropping some nodes, including the input node c_{k−1} and two intermediate nodes x_2 and x_3. Note that this child graph is a structure that ENAS and DARTS are not able to discover⁴.

³ Repetition for convolutional cells is not necessary since the optimization outcomes are not initialization-sensitive (Liu et al., 2019).
⁴ In the code from Liu et al. (2019), zero is omitted in child graph derivation as empirically it tends to learn the largest weight.

3.2 ARCHITECTURE EVALUATION ON CIFAR-10

Motivation In the searching stage, we follow the economical setup of DARTS and use only a single GPU, which constrains the parameter size of the child network. A conventional assumption in DARTS and ENAS⁵ is that the final search validation accuracy has exploited the parameter size, the ceiling of which can only be raised by allowing more parameters. For a fair comparison, we follow this assumption in the evaluation stage, stacking more cells (child graphs) to build a deeper network. This network is trained from scratch, as in DARTS and ENAS, to report the performance of the cells learned by SNAS on CIFAR-10.

Evaluation Settings A large network of 20 cells is trained from scratch for 600 epochs with batch size 96. Other hyperparameters remain the same as those for architecture search. Additional enhancements are listed in Appendix G.2. The training takes 1.5 days on a single GPU with our implementation in PyTorch.

Results The CIFAR-10 evaluation results are presented in Table 2. The test error of SNAS is on par with the state-of-the-art RL-based and evolution-based NAS while using three orders of magnitude fewer computational resources. Furthermore, with slightly longer wall-clock time, SNAS outperforms 1st-order DARTS and ENAS by discovering convolutional cells with both a smaller error rate and fewer parameters. It also achieves an error rate comparable to 2nd-order DARTS with fewer parameters. With a more aggressive resource constraint, SNAS can sparsify the architecture even further, distinguishing it from ENAS and DARTS, with only a slight drop in performance that is still on par with 1st-order DARTS. It is interesting to note that with the same single-level optimization, SNAS significantly outperforms DARTS.
Bilevel optimization could be regarded as a data-driven meta-learning method to resolve the bias proved above, but its own bias from the exact meta-learning objective remains unjustified, as it ignores the separate child-network derivation scheme.

3.3 ARCHITECTURE TRANSFERABILITY EVALUATION ON IMAGENET

Motivation Since real-world applications often involve much larger datasets than CIFAR-10, transferability is a crucial criterion for evaluating the potential of the learned cells (child graphs) (Zoph et al., 2017). To show whether the cells learned by SNAS on CIFAR-10 can be generalized to larger datasets, we apply the same cells evaluated in Section 3.2 to the classification task on ImageNet.

Dataset The mobile setting is adopted, where the size of the input images is 224 × 224 and the number of multiply-add operations in the model is restricted to be less than 600M.

⁵ As shown in the code publicly released by Pham et al. (2018).

Evaluation Settings We stack a network of 14 cells using the same cells designed by SNAS (mild constraint) and evaluated on CIFAR-10 (Section 3.2), and train it for 250 epochs with other hyperparameters following DARTS (see Appendix G.3). The training takes 12 days on a single GPU.

Results Table 3 presents the results of the evaluation on ImageNet and shows that the cell found by SNAS on CIFAR-10 can be successfully transferred to ImageNet. Notably, SNAS achieves test error competitive with the state-of-the-art RL-based NAS using three orders of magnitude fewer computational resources. With resource constraints added, SNAS can find smaller cell architectures that achieve performance competitive with DARTS.

4 RELATED WORKS

Improving the efficiency of NAS is a prerequisite for extending it to more complicated vision tasks like detection, as well as to larger datasets. In the complete pipeline of NAS, parameter learning is a time-consuming stage that attracts attention from the literature. Ideas for auxiliary mechanisms like performance prediction (Baker et al., 2017; Deng et al., 2017), iterative search (Liu et al., 2017a) and hypernetwork-generated weights (Brock et al., 2017) successfully accelerate NAS to a certain degree. Getting rid of these auxiliary mechanisms, ENAS (Pham et al., 2018) is the state-of-the-art NAS framework, proposing parameter sharing among all possible child graphs, which is followed by SNAS. In Section 2 we introduced SNAS's relation to ENAS in detail. Apart from ENAS, we are also inspired by Louizos et al. (2017) to use a continuous distribution for the structural decision at each edge and optimize it along with an l0 complexity regularizer. The most important motivation of SNAS is to leverage the gradient information in a generic differentiable loss to update the architecture distribution, which is shared by DARTS (Liu et al., 2019). In Section 2 and Appendix B we introduced SNAS's advantage over DARTS, which comes as a reward for maintaining the completeness of the NAS pipeline. The idea of making use of this gradient information to improve the learning efficiency of a stochastic model has been discussed in the literature on generative models (Gu et al., 2015; Maddison et al., 2016) and reinforcement learning (Schmidhuber, 1990; Arjona-Medina et al., 2018). But as far as we know, we are the first to combine the insights from these two fields to discuss possible efficiency improvements of NAS.

5 CONCLUSION

In this work, we presented SNAS, a novel and economical end-to-end neural architecture search framework.
The key contribution of SNAS is that by making use of gradient information from a generic differentiable loss, without sacrificing the completeness of the NAS pipeline, stochastic architecture search becomes more efficient. This improvement is proved by comparing the credit assigned by the search gradient with that of reinforcement-learning-based NAS. Augmented by a complexity regularizer, this search gradient trades off testing error and forwarding time. Experiments showed that SNAS searches well on CIFAR-10, and that its result can be transferred to ImageNet as well. As a more efficient and less biased framework, SNAS will serve as a possible candidate for full-fledged NAS on large datasets in the future.

B DIFFERENCE BETWEEN SNAS AND DARTS

We take a search space with three intermediate nodes as an example to exhibit the difference between SNAS and DARTS (Liu et al., 2019), as shown in Figure 6. This search space can be viewed as a unit search space whose properties generalize to larger spaces, since it contains nodes both in series and in parallel. The objective of a NAS task is

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)], \quad (16)$

where $p_\alpha(Z)$ is the distribution of architectures, which was previously solved with reinforcement learning. In both SNAS and DARTS, the reward function is made differentiable using the training/testing loss, $R(Z) = L_\theta(Z)$, such that the architecture learning can leverage the information in the gradients of this loss and be conducted together with the training of the operation parameters:

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]. \quad (17)$

As introduced in Appendix A, SNAS solves (16) with a novel type of factorization, without relying on the MDP assumption. Though the independence assumption between edges restricts the probability distribution, no bias is introduced. However, to avoid the sampling process and gradient back-propagation through discrete random variables, DARTS takes the analytical expectation at the input of each node over the operations at the incoming edges and optimizes a relaxed loss with deterministic gradients. Taking the cell in Figure 6 as a base case, the objective before this relaxation is

$\mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z_{j,l}^T O_{j,l}(Z_{i,j}^T O_{i,j}(x_i)) + Z_{j,m}^T O_{j,m}(Z_{i,j}^T O_{i,j}(x_i)))] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(\textstyle\sum_{m>j} Z_{j,m}^T O_{j,m}(Z_{i,j}^T O_{i,j}(x_i)))]. \quad (18)$

DARTS relaxes this objective to

$L_\theta(\textstyle\sum_{m>j}\mathbb{E}_{p_{\alpha_{j,m}}}[Z_{j,m}^T O_{j,m}(\mathbb{E}_{p_{\alpha_{i,j}}}[Z_{i,j}^T O_{i,j}(x_i)])]). \quad (19)$

Considering that the $O(x)$ are ReLU-Conv-BN stacks as in ENAS (Pham et al., 2018), which are nonlinear, this transformation introduces unbounded bias. Though this bias is not perceivable in training, where the complete graph, consistent with this loss, is used for accuracy validation, the derived graph is never validated during training. Hence the training is inconsistent with the true objective of maximizing the expected performance of derived architectures. After the architecture derivation introduced in DARTS, the performance falls enormously and the parameters need to be retrained.

C GRADIENTS IN SNAS

Figure 6(b) gives an illustration of a base three-intermediate-node unit in SNAS, where each edge has three operations (indexed by $k$) to choose from. In the search space of SNAS, intermediate nodes take input from all previous nodes. We have

$x_j = \textstyle\sum_{h<j} Z_{h,j}^T O_{h,j}(x_h) = Z_{i,j}^T O_{i,j}(x_i) + \textstyle\sum_{h<i} Z_{h,j}^T O_{h,j}(x_h). \quad (20)$

Let $\theta_{i,j}^k$ be the parameters in $O_{i,j}^k$; we have

$\frac{\partial x_j}{\partial \theta_{i,j}^k} = Z_{i,j}^T \frac{\partial O_{i,j}(x_i)}{\partial \theta_{i,j}^k}. \quad (21)$
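As a quick sanity check of eq. (21), the following minimal PyTorch sketch shows that with a one-hot $Z$ on an edge, only the selected operation's parameters receive a nonzero gradient. The toy 1x1-conv candidates are hypothetical stand-ins, not the paper's operation set.

```python
import torch
import torch.nn as nn

# Three toy candidate operations on one edge; the real SNAS operations
# are the ones listed in Appendix G.1.
ops = nn.ModuleList([nn.Conv2d(4, 4, 1) for _ in range(3)])
x = torch.randn(1, 4, 8, 8)

Z = torch.tensor([0.0, 1.0, 0.0])  # a one-hot structural decision Z_{i,j}
x_j = sum(Z[k] * op(x) for k, op in enumerate(ops))
x_j.sum().backward()

print([float(op.weight.grad.abs().sum()) for op in ops])
# e.g. [0.0, 11.7, 0.0]: gradient reaches only the chosen operation's
# parameters, matching Z^T dO/dtheta in eq. (21).
```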
As we use the concrete distribution here to make the sampling differentiable with the reparameterization trick:

$Z_{i,j}^k = f_{\alpha_{i,j}}(G_{i,j}^k) = \frac{\exp((\log\alpha_{i,j}^k + G_{i,j}^k)/\lambda)}{\sum_{l=0}^{n}\exp((\log\alpha_{i,j}^l + G_{i,j}^l)/\lambda)}, \quad (22)$

where $G_{i,j}^k = -\log(-\log(U_{i,j}^k))$ is the $k$th Gumbel random variable and $U_{i,j}^k$ is a uniform random variable, the gradient w.r.t. $\alpha_{i,j}$ is:

$\frac{\partial x_j}{\partial \alpha_{i,j}^k} = O_{i,j}^T(x_i)\,\frac{\partial f_{\alpha_{i,j}}(G_{i,j})}{\partial \alpha_{i,j}^k}. \quad (23)$

The partial derivative $\partial f_{\alpha_{i,j}}/\partial \alpha_{i,j}^k$ is

$\frac{\partial f_{\alpha_{i,j}}(G_{i,j})}{\partial \alpha_{i,j}^k} = \frac{\partial(\log\alpha_{i,j}^k + G_{i,j}^k)/\lambda}{\partial \alpha_{i,j}^k}\, f_{\alpha_{i,j}}(G_{i,j}^k)\,\big(\delta(k'-k) - f_{\alpha_{i,j}}(G_{i,j})\big) = \big(\delta(k'-k) - Z_{i,j}\big)\, Z_{i,j}^k\,\frac{1}{\lambda\alpha_{i,j}^k}. \quad (24)$

Substituting it back into (23), we obtain

$\frac{\partial x_j}{\partial \alpha_{i,j}^k} = O_{i,j}^T(x_i)\,\big(\delta(k'-k) - Z_{i,j}\big)\, Z_{i,j}^k\,\frac{1}{\lambda\alpha_{i,j}^k}. \quad (25)$

We can also derive $\partial x_m/\partial x_j$ for the chain-rule connection:

$\frac{\partial x_m}{\partial x_j} = Z_{j,m}^T\,\frac{\partial O_{j,m}(x_j)}{\partial x_j}. \quad (26)$

Thus the gradients from the surrogate loss $L$ to $x_j$, $\theta_{i,j}^k$ and $\alpha_{i,j}^k$ respectively are

$\frac{\partial L}{\partial x_j} = \sum_{m>j}\frac{\partial L}{\partial x_m} Z_{j,m}^T\frac{\partial O_{j,m}(x_j)}{\partial x_j}, \qquad \frac{\partial L}{\partial \theta_{i,j}^k} = \frac{\partial L}{\partial x_j} Z_{i,j}^k\frac{\partial O_{i,j}(x_i)}{\partial \theta_{i,j}^k}, \qquad \frac{\partial L}{\partial \alpha_{i,j}^k} = \frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\big(\delta(k'-k) - Z_{i,j}\big) Z_{i,j}^k\frac{1}{\lambda\alpha_{i,j}^k}. \quad (27)$

D CREDIT ASSIGNMENT FOR EQUIVALENT POLICY GRADIENT

From Appendix C we can see that the expected search gradient for the architecture parameters at each edge is:

$\mathbb{E}_{Z\sim p(Z)}\!\left[\frac{\partial L}{\partial \alpha_{i,j}^k}\right] = \mathbb{E}_{U\sim\mathrm{Uniform}}\!\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\frac{\partial f_{\alpha_{i,j}}(-\log(-\log(U_{i,j})))}{\partial \alpha_{i,j}^k}\right]$
$= \int_0^1 p(U_{i,j})\,\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\,\frac{\partial f_{\alpha_{i,j}}(-\log(-\log(U_{i,j})))}{\partial \alpha_{i,j}^k}\, dU_{i,j}$
$= \frac{\partial}{\partial \alpha_{i,j}^k}\int_0^1 p(U_{i,j})\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\right]_c f_{\alpha_{i,j}}(-\log(-\log(U_{i,j})))\, dU_{i,j}$
$= \frac{\partial}{\partial \alpha_{i,j}^k}\int p(Z_{i,j})\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\right]_c Z_{i,j}\, dZ_{i,j}$
$= \int p(Z_{i,j})\,\frac{\partial\log p(Z_{i,j})}{\partial \alpha_{i,j}^k}\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i) Z_{i,j}\right]_c dZ_{i,j}$
$= \mathbb{E}_{Z\sim p(Z)}\!\left[\nabla_{\alpha_{i,j}^k}\log p(Z_{i,j})\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i) Z_{i,j}\right]_c\right] = \mathbb{E}_{Z\sim p(Z)}\!\left[\nabla_{\alpha_{i,j}^k}\log p(Z_{i,j})\left[\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)\right]_c\right], \quad (28)$

where $[\cdot]_c$ denotes that $\cdot$ is a constant for the gradient calculation w.r.t. $\alpha$. Note that in this derivation we stop the gradient from successor nodes, with an independence assumption enforced in back-propagation.

E TAYLOR DECOMPOSITION FOR CONTRIBUTION ANALYSIS

With $d$ neurons (pixels) $x_i$ in the same layer of a deep neural network whose output is $f(x)$, Montavon et al. (2017a) decompose $f(x)$ as a sum of individual credits for the $x_i$. This decomposition is obtained by the first-order Taylor expansion of the function at some root point $\tilde{x}$ for which $f(\tilde{x}) = 0$:

$f(x) = \sum_{i=1}^{d} R_i(x) + O(xx^T), \quad (29)$

where the individual credits

$R_i(x) = \frac{\partial f}{\partial x_i}\Big|_{x=\tilde{x}}(x_i - \tilde{x}_i) \quad (30)$

are first-order terms and $O(xx^T)$ contains the higher-order information. When ReLU is chosen as the activation function, $O(xx^T)$ can be omitted (Montavon et al., 2017b). Thus one can always find a root point $\tilde{x} = \lim_{\epsilon\to 0}\epsilon x$ that incidentally lies in the same linear region as the point $x$, in which case the function can be written as

$f(x) = \sum_{i=1}^{d} R_i(x) = \sum_{i=1}^{d}\frac{\partial f}{\partial x_i} x_i. \quad (31)$

Noticing the similarity between (8) and (31), we try using Taylor Decomposition to interpret the credit assignment in SNAS. Given a sample $x^0$, one can iterate over all effective layers of the DAG and distribute credits from the network output $f$ among the nodes $x_j$ in each layer. In Figure 1 for example, DAG($Z^{(1)}$) has 2 effective layers, while DAG($Z^{(2)}$) has 3 effective layers. Given the presence of skip connections, nodes may be involved in multiple layers and thus obtain integrated credits

$\frac{\partial f}{\partial x_j} = \sum_{m>j}\frac{\partial f}{\partial x_m}\frac{\partial\tilde{O}_m(x_j)}{\partial x_j}, \quad (32)$

e.g. $x_1$ in DAG($Z^{(2)}$) integrates credits from $x_2$ and $x_3$. According to (1), multiple edges $(i,j)$ point to $j$, which decomposes (32) as:

$\hat{R}_{i,j} = \frac{\partial f}{\partial x_j}\tilde{O}_{i,j}(x_i). \quad (33)$
Adjusting the weight of this sample with $\partial L/\partial f$ and taking the optimization direction into account, we have

$R_{i,j} = -\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i). \quad (34)$

F CANDIDATES FOR LOCAL RESOURCE CONSTRAINTS

In the case of a convolutional layer, $H$, $W$ and $f$, $k$ correspond to the output spatial dimensions and the filter dimensions respectively, and we use $I$, $O$ to denote the numbers of input and output channels. Since group convolution is also adopted in this paper to reduce the computational complexity, $g$ is the number of groups. Thus, the parameter size and the number of floating-point operations (FLOPs) of a single convolutional layer are

$\text{parameter size} = \frac{fkIO}{g} \quad (35)$

$\text{FLOPs} = \frac{HWfkIO}{g} \quad (36)$

By assuming the computing device has enough cache to store the feature maps and the parameters, we can simplify the memory access cost (MAC) to be the sum of the memory accesses for the input/output feature maps and the kernel weights (Ma et al., 2018):

$\text{MAC} = HW(I+O) + \frac{fkIO}{g} \quad (37)$

In SNAS, because all the operations on a single edge share the same output spatial dimensions and input/output channels, the FLOPs of a convolutional operation are directly proportional to its parameter size. And although the memory access cost for the input/output feature maps, $HW(I+O)$, does not depend on the parameter size, both are positively correlated with the number of layers used in the operation, so we may say there is a positive correlation between MAC and the parameter size. Thus, when considering only the convolution operations, solely using the parameter size as the resource constraint is sufficient. However, in SNAS we also have the pooling operation and the skip connection, which are parameter-free. The resource criteria of a pooling operation and a skip connection are calculated as follows. FLOPs of pooling:

$\text{FLOPs} = HWfkIO \quad (38)$

FLOPs of the skip connection:

$\text{FLOPs} = 0 \quad (39)$

MAC of pooling and the skip connection:

$\text{MAC} = HW(I+O) \quad (40)$

We can see that the MAC is the same for pooling and the skip connection, since they access the same input/output feature maps; therefore, to distinguish between pooling and the skip connection, FLOPs need to be included in the resource constraint. Similarly, to distinguish between the skip connection and none (free, no operation), MAC also needs to be included. In conclusion, to construct a resource constraint which fully distinguishes the four types of operations, all three locally decomposable criteria, the parameter size, FLOPs and MAC, need to be combined.

G DETAILED SETTINGS OF EXPERIMENTS

G.1 ARCHITECTURE SEARCH ON CIFAR-10

Data Pre-processing and Augmentation Techniques We employ the following techniques in our experiments: centrally padding the training images to 40×40 and then randomly cropping them back to 32×32; randomly flipping the training images horizontally; normalizing the training and validation images by subtracting the channel mean and dividing by the channel standard deviation.

Implementation Details of Operations The operations include: 3×3 and 5×5 separable convolutions, 3×3 and 5×5 dilated separable convolutions, 3×3 max pooling, 3×3 average pooling, skip connection and the zero operation. All operations are of stride one (except those adjacent to the input nodes in the reduction cell, which are of stride two) and the convolved feature maps are padded to preserve their spatial resolution. Convolutions are applied in the order ReLU-Conv-BN, and the depthwise separable convolution is always applied twice (Zoph et al., 2017; Real et al., 2018; Liu et al., 2017a; 2019).
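A minimal PyTorch sketch of this candidate set follows. It is an illustration under the stride-one, resolution-preserving assumptions above, not the released implementation; in particular, the separable convolution is applied only once here for brevity.

```python
import torch.nn as nn

class Zero(nn.Module):
    # The zero operation: outputs zeros, letting SNAS drop an edge entirely.
    def forward(self, x):
        return x * 0.0

def sep_conv(C, kernel, dilation=1):
    # ReLU-Conv-BN with a depthwise conv followed by a pointwise conv;
    # padding preserves the spatial resolution at stride one.
    pad = dilation * (kernel - 1) // 2
    return nn.Sequential(
        nn.ReLU(),
        nn.Conv2d(C, C, kernel, padding=pad, dilation=dilation, groups=C, bias=False),
        nn.Conv2d(C, C, 1, bias=False),
        nn.BatchNorm2d(C),
    )

def candidate_ops(C):
    # The eight candidates per edge described in G.1 (a sketch).
    return nn.ModuleList([
        sep_conv(C, 3), sep_conv(C, 5),                          # separable convs
        sep_conv(C, 3, dilation=2), sep_conv(C, 5, dilation=2),  # dilated variants
        nn.MaxPool2d(3, stride=1, padding=1),
        nn.AvgPool2d(3, stride=1, padding=1),
        nn.Identity(),                                           # skip connection
        Zero(),
    ])
```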
Detailed Training Settings We follow the training settings in Liu et al. (2019). The neural operation parameters $\theta$ are optimized using momentum SGD, with initial learning rate $\eta_\theta = 0.025$ (annealed down to zero following a cosine schedule), momentum 0.9 and weight decay $3\times 10^{-4}$. The architecture distribution parameters $\alpha$ are optimized by Adam, with initial learning rate $\eta_\alpha = 3\times 10^{-4}$, momentum $\beta = (0.5, 0.999)$ and weight decay $10^{-3}$. The batch size employed is 64 and the initial number of channels is 16. (A sketch of this optimizer setup is given at the end of this appendix.)

G.2 ARCHITECTURE EVALUATION ON CIFAR-10

Additional Enhancement Techniques Following existing works (Zoph et al., 2017; Liu et al., 2017a; Pham et al., 2018; Real et al., 2018; Liu et al., 2019), we employ the following additional enhancements: cutout (DeVries & Taylor, 2017), path dropout with probability 0.2 (the same as DARTS in the code publicly released by its authors) and auxiliary towers with weight 0.4.

G.3 ARCHITECTURE TRANSFERABILITY EVALUATION ON IMAGENET

Detailed Training Settings The network is trained with batch size 128, weight decay $3\times 10^{-5}$ and initial SGD learning rate 0.1, which is decayed by a factor of 0.97 after each epoch. Auxiliary towers with weight 0.4 are adopted as additional enhancements.

H CELLS LEARNED BY SNAS WITH A MODERATE RESOURCE CONSTRAINT

[Figures: the normal and reduction cells learned by SNAS under a moderate resource constraint.]
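Referring back to the settings in G.1, here is a minimal sketch of the two optimizers. The `weight_parameters()` and `arch_parameters()` accessors on the parent-network module are assumed for illustration and are not from the paper or its released code.

```python
import torch

def build_optimizers(model, epochs=150):
    # theta: momentum SGD with a cosine schedule annealing the lr to zero (G.1).
    opt_theta = torch.optim.SGD(model.weight_parameters(), lr=0.025,
                                momentum=0.9, weight_decay=3e-4)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt_theta, T_max=epochs)
    # alpha: Adam with betas (0.5, 0.999) and weight decay 1e-3 (G.1).
    opt_alpha = torch.optim.Adam(model.arch_parameters(), lr=3e-4,
                                 betas=(0.5, 0.999), weight_decay=1e-3)
    return opt_theta, sched, opt_alpha
```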
1. What are the main contributions and improvements introduced by the paper in NAS? 2. What are the strengths of the paper regarding its experiments and comparisons? 3. Do you have any questions or concerns about the generation of child network structures? 4. How does the reviewer assess the factorization of p(Z) in SNAS, particularly in the context of NAS being a task with fully delayed rewards? 5. Why does the reviewer question the use of training/testing loss as a reward in the paper's approach compared to the previous method's use of a constant reward from validation accuracy?
Review
This work refines the NAS method for efficient neural architecture search. The paper brings new methods for gradient/reward updates and credit assignment.

Pros:
1. An improvement on the gradient calculation and reward back-propagation mechanism.
2. Good experimental results and fair comparisons.

Cons:
1. Missing details on how to use the gradient information to generate child network structures. In eq. 2, multiplying each one-hot random variable Z_{i,j} onto each edge (i, j) in the DAG yields a child graph whose intermediate nodes are x_j. However, it is still unclear how to generate the child graph. More details on generating the child network based on gradient information are expected.
2. In SNAS, p(Z) is assumed fully factorizable. Factors are parameterized with alpha and learnt along with the operation parameters theta. The factorization of p(Z) is based on the observation that NAS is a task with fully delayed rewards in a deterministic environment. That is, the feedback signal is only ready after the whole episode is done and all state transition distributions are delta functions. In eq. 3, the authors use the training/testing loss directly as the reward, while the previous method uses a constant reward from validation accuracy. It is unclear why using the training/testing loss can improve the results.
ICLR
Title: SNAS: stochastic neural architecture search

Abstract: We propose Stochastic Neural Architecture Search (SNAS), an economical end-to-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in the same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on the parameters of a joint distribution for the search space in a cell. To leverage the gradient information in a generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with a locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, and the architecture is also transferable to ImageNet. It is also shown that the child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting the potential to stride towards efficient NAS on big datasets.

1 INTRODUCTION

The trend of seeking state-of-the-art neural network architectures automatically has been growing since Zoph & Le (2016), given the enormous effort otherwise needed in scientific research. Normally, a Neural Architecture Search (NAS) pipeline comprises architecture sampling, parameter learning, architecture validation, credit assignment and search direction update. There are basically three existing frameworks for neural architecture search. Evolution-based NAS like NEAT (Stanley & Miikkulainen, 2002) employs an evolutionary algorithm to simultaneously optimize topology alongside parameters. However, it takes enormous computational power and cannot leverage the efficient gradient back-propagation in deep learning. To achieve state-of-the-art performance comparable to human-designed architectures, Real et al. (2018) take 3150 GPU days for the whole evolution. Reinforcement-learning-based NAS is end-to-end for gradient back-propagation, among which the most efficient one, ENAS (Pham et al., 2018), learns optimal parameters and architectures together, just like NEAT. However, as NAS is modeled as a Markov Decision Process, credits are assigned to structural decisions with temporal-difference (TD) learning (Sutton et al., 1998), whose efficiency and interpretability suffer from delayed rewards (Arjona-Medina et al., 2018). To get rid of the architecture sampling process, DARTS (Liu et al., 2019) proposes deterministic attention on operations to analytically calculate the expectation at each layer. After the convergence of the parent network, it removes the operations with relatively weak attention. Due to the pervasive non-linearity in neural operations, this introduces intractable bias to the loss function. This bias causes inconsistency between the performance of derived child networks and converged parent networks, and thus parameter retraining comes up as necessary. A more efficient, more interpretable and less biased framework is desired, especially for future full-fledged NAS solutions on large datasets.
In this work, we propose a novel, efficient and highly automated framework, Stochastic Neural Architecture Search (SNAS), that trains neural operation parameters and architecture distribution parameters in the same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. One of the key motivations of SNAS is to replace the feedback mechanism triggered by constant rewards in reinforcement-learning-based NAS with more efficient gradient feedback from a generic loss. We reformulate NAS with a new stochastic modeling to bypass the MDP assumption in reinforcement learning. To combine architecture sampling with the computational graph of an arbitrary differentiable loss, the search space is represented with a set of one-hot random variables from a fully factorizable joint distribution, multiplied as a mask to select operations in the graph. Sampling from this search space is made differentiable by relaxing the architecture distribution with the concrete distribution (Maddison et al., 2016). We name the gradients w.r.t. their parameters the search gradient. From a global view, we prove that SNAS optimizes the same objective as reinforcement-learning-based NAS, except that the training loss is used as the reward. Zooming in, we provide a policy-gradient equivalent of this search gradient, showing how gradients from the loss of each sample are used to assign credits to structural decisions. By interpreting this credit assignment as Taylor Decomposition (Montavon et al., 2017a), we prove SNAS's efficiency over reinforcement-learning-based NAS. Additionally, seeing that existing methods (Liu et al., 2019) manually design the topology in child networks to avoid overly complex architectures, we propose a global resource constraint to automate it, augmenting the objective with feasibility concerns. This global constraint can be linearly decomposed over structural decisions, hence the proof of SNAS's efficiency still applies.

In our experiments, SNAS shows strong performance compared with DARTS and all other existing NAS methods in terms of test error, model complexity and searching resources. Specifically, SNAS discovers novel convolutional cells achieving 2.85±0.02% test error on CIFAR-10 with only 2.8M parameters, which is better than 3.00±0.14% with 3.3M parameters from 1st-order DARTS and 2.89% with 4.6M parameters from ENAS. It is also on par with 2.76±0.09% with 3.3M parameters from 2nd-order DARTS, while using fewer parameters. With a more aggressive resource constraint, SNAS discovers an even smaller model achieving 3.10±0.04% test error on CIFAR-10 with 2.3M parameters. During the architecture search process, SNAS obtains a validation accuracy of 88%, compared to around 70% for ENAS, in fewer epochs. When validating the derived child network on CIFAR-10 without fine-tuning, SNAS maintains the search validation accuracy, significantly outperforming the 54.66% of DARTS. These results validate our theory that SNAS is less biased than DARTS. The discovered cell achieves 27.3% top-1 error when transferred to ImageNet (mobile setting), which is comparable to the 26.9% of 2nd-order DARTS.

2 METHODOLOGY

The main initiative of SNAS is to build an efficient and economical end-to-end learning system with as little compromise of the NAS pipeline as possible. In this section, we first describe how to sample from the search space for NAS in a cell, and how this motivates a stochastic reformulation for SNAS (Section 2.1). A new optimization objective is provided and the inconsistency of attention-based NAS is discussed.
Then in Section 2.2 we introduce how this discrete search space is relaxed to be continuous to let gradients back-propagate through. In Section 2.3, the search gradient of SNAS is connected to the policy gradient in reinforcement-learning-based NAS (Zoph & Le, 2016; Pham et al., 2018), interpreting SNAS's credit assignment with contribution analysis. At last, we introduce in Section 2.4 how SNAS automates the topology search to reduce the complexity of the child network, as well as how it decomposes this global constraint in the context of credit assignment.

2.1 SEARCH SPACE AND ARCHITECTURE SAMPLING

Searching for the structure of a cell that is later stacked as a building block for a deep architecture is an ad hoc solution to trade off search efficiency and result optimality (Zoph et al., 2017; Liu et al., 2017a; Real et al., 2018; Pham et al., 2018; Liu et al., 2019). As shown in the left of Figure 1, the search space, i.e. a cell, is represented using a directed acyclic graph (DAG), which is called the parent graph. Nodes $x_i$ in this DAG represent latent representations, whose dimensions are simply ignored to avoid abuse of notation. In convolutional networks, they are feature maps. Edges $(i,j)$ represent information flows and the possible operations $O_{i,j}$ to be selected between two nodes $x_i$ and $x_j$. To include the skip operation, nodes are enforced to be ordered, while edges only point from lower-indexed nodes to higher ones. Thus we have intermediate nodes

$x_j = \sum_{i<j}\tilde{O}_{i,j}(x_i), \quad (1)$

where $\tilde{O}_{i,j}$ is the selected operation at edge $(i,j)$. Analogous to ENAS, SNAS searches for the operations and the topology of this cell at the same time. Rather than using two distributions, this is done by introducing a zero operation, as in DARTS. Same as in ENAS and DARTS, each cell is designed to have two inputs from the outputs of the previous cells. The output of a cell is the concatenation of the intermediate nodes. Thanks to the fact that the volume of structural decisions, which pick $\tilde{O}_{i,j}$ for each edge $(i,j)$, is generally tractable in a cell, we represent it with a distribution $p(Z)$. Multiplying each one-hot random variable $Z_{i,j}$ onto each edge $(i,j)$ in the DAG, we obtain a child graph, whose intermediate nodes are

$x_j = \sum_{i<j}\tilde{O}_{i,j}(x_i) = \sum_{i<j} Z_{i,j}^T O_{i,j}(x_i). \quad (2)$

In terms of how to parameterize and factorize $p(Z)$, SNAS is built upon the observation that NAS is a task with fully delayed rewards in a deterministic environment. That is, the feedback signal is only ready after the whole episode is done, and all state transition distributions are delta functions. Therefore, a Markov Decision Process assumption as in ENAS may not be necessary. In SNAS, we simply assume that $p(Z)$ is fully factorizable, whose factors are parameterized with $\alpha$ and learnt along with the operation parameters $\theta$. In Appendix A we connect the probability of a trajectory in the MDP of ENAS with this joint probability $p(Z)$. Following the setting in Zoph & Le (2016), the objective of SNAS is also

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)]. \quad (3)$

The difference is that rather than using a constant reward from validation accuracy, we use the training/testing loss directly as the reward, $R(Z) = L_\theta(Z)$, such that the operation parameters and the architecture parameters can be trained under one generic loss:

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]. \quad (4)$

The whole process of obtaining a Monte Carlo estimate of this objective is shown in Figure 1. An intuitive interpretation of this objective is to optimize the expected performance of architectures sampled with $p(Z)$; a minimal sketch of this masked forward pass is given below.
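The following sketch of eqs. (1)-(2) assumes `ops[i][j]` holds the candidate operations on edge (i, j) and `Z[i][j]` a (softened) one-hot vector over them; both containers are hypothetical names for illustration only.

```python
import torch

def cell_forward(inputs, ops, Z, n_nodes):
    # inputs: the two input nodes taken from the previous cells.
    # Each intermediate node is x_j = sum_{i<j} Z_{i,j}^T O_{i,j}(x_i), eq. (2).
    xs = list(inputs)
    for j in range(len(inputs), len(inputs) + n_nodes):
        xs.append(sum(
            sum(Z[i][j][k] * op(xs[i]) for k, op in enumerate(ops[i][j]))
            for i in range(j)
        ))
    # The cell output is the concatenation of the intermediate nodes.
    return torch.cat(xs[len(inputs):], dim=1)
```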
This sampling-based objective differentiates SNAS from attention-based NAS like DARTS, which avoids the sampling process by taking the analytical expectation at each edge over all operations. In Appendix B we illustrate the inconsistency between DARTS's loss and this objective, explaining its necessity of parameter fine-tuning or even retraining after architecture derivation. Resembling ENAS, SNAS does not have this constraint. We introduce in the next subsection how SNAS calculates the gradients w.r.t. $\theta$ and $\alpha$.

2.2 PARAMETER LEARNING FOR OPERATIONS AND ARCHITECTURES

Though the objective (4) could be optimized with a black-box gradient descent method as in Ranganath et al. (2014), it would suffer from the high variance of the likelihood-ratio trick (Williams, 1992) and could not make use of the differentiable nature of $L_\theta(Z)$. Instead, we use the concrete distribution (Maddison et al., 2016) here to relax the discrete architecture distribution to be continuous and differentiable with the reparameterization trick:

$Z_{i,j}^k = f_{\alpha_{i,j}}(G_{i,j}^k) = \frac{\exp((\log\alpha_{i,j}^k + G_{i,j}^k)/\lambda)}{\sum_{l=0}^{n}\exp((\log\alpha_{i,j}^l + G_{i,j}^l)/\lambda)}, \quad (5)$

where $Z_{i,j}$ is the softened one-hot random variable for operation selection at edge $(i,j)$, $G_{i,j}^k = -\log(-\log(U_{i,j}^k))$ is the $k$th Gumbel random variable, and $U_{i,j}^k$ is a uniform random variable. $\alpha_{i,j}$ is the architecture parameter, which could depend on the predecessors $Z_{h,i}$ if $p(Z_{i,j})$ is a conditional probability. $\lambda$ is the temperature of the softmax, which is steadily annealed to be close to zero in SNAS. In Maddison et al. (2016) it is proved that $p(\lim_{\lambda\to 0} Z_{i,j}^k = 1) = \alpha_{i,j}^k/(\sum_{l=0}^{n}\alpha_{i,j}^l)$, making this relaxation unbiased once converged. The full derivation of $\nabla\mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]$ is given in Appendix C. Here, with the surrogate loss $L$ for each sample, we provide its gradients w.r.t. $x_j$, $\theta_{i,j}^k$ and $\alpha_{i,j}^k$:

$\frac{\partial L}{\partial x_j} = \sum_{m>j}\frac{\partial L}{\partial x_m} Z_{j,m}^T\frac{\partial O_{j,m}(x_j)}{\partial x_j}, \qquad \frac{\partial L}{\partial \theta_{i,j}^k} = \frac{\partial L}{\partial x_j} Z_{i,j}^k\frac{\partial O_{i,j}(x_i)}{\partial \theta_{i,j}^k}, \qquad \frac{\partial L}{\partial \alpha_{i,j}^k} = \frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\big(\delta(k'-k) - Z_{i,j}\big) Z_{i,j}^k\frac{1}{\lambda\alpha_{i,j}^k}. \quad (6)$

We name $\partial L/\partial\alpha$ the search gradient, similar to the one in Wierstra et al. (2008), even though no policy gradient is involved. This renders SNAS a differentiable version of evolutionary-strategy-based NAS.

2.3 CREDIT ASSIGNMENT

With the equivalence of $p(Z)$ in SNAS and $p(\tau)$ in ENAS from Section 2.1 and the search gradient of SNAS from Section 2.2, we discuss in this subsection what credits the SNAS search gradients assign to each structural decision. Assigning credits to actions both temporally and laterally is an important topic in reinforcement learning (Precup, 2000; Schulman et al., 2015; Tucker et al., 2018; Xu et al., 2018). In ENAS, proximal policy optimization (PPO) (Schulman et al., 2017) is used to optimize the architecture policy, which distributes credits with TD learning and the generalized advantage estimator (GAE) (Schulman et al., 2015). However, as the reward of the NAS task is only obtainable after the architecture is finalized and the network is tested for accuracy, it is a task with delayed rewards. As proved by Arjona-Medina et al. (2018), TD learning is biased under reward delay and corrects the bias exponentially slowly. Different from ENAS, there is no MDP assumption in SNAS, but the reward function is made differentiable in terms of structural decisions. From Section 2.2 we can derive the expected search gradient for the architecture parameters at edge $(i,j)$:

$\mathbb{E}_{Z\sim p(Z)}\!\left[\frac{\partial L}{\partial \alpha_{i,j}^k}\right] = \mathbb{E}_{Z\sim p(Z)}\!\left[\nabla_{\alpha_{i,j}^k}\log p(Z_{i,j})\left[\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)\right]_c\right], \quad (7)$

where $[\cdot]_c$ emphasizes that $\cdot$ is a constant for the gradient calculation w.r.t. $\alpha$. A full derivation is provided in Appendix D, and a sketch of the sampling in eq. (5) follows.
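A minimal sketch of the concrete-distribution sampling in eq. (5) for one edge; `log_alpha`, standing for $\log\alpha_{i,j}$, is an assumed tensor name.

```python
import torch

def sample_edge(log_alpha, temperature):
    # Z_{i,j} = softmax((log(alpha) + G) / lambda), with G = -log(-log U), eq. (5).
    u = torch.rand_like(log_alpha).clamp_min(1e-20)
    gumbel = -torch.log(-torch.log(u))
    return torch.softmax((log_alpha + gumbel) / temperature, dim=-1)

# As the temperature anneals toward zero, samples approach one-hot vectors
# while gradients still reach log_alpha through the reparameterization.
z = sample_edge(torch.zeros(8), temperature=1.0)  # 8 candidate ops, uniform alpha
```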
From eq. (7) it is apparent that the search gradient is equivalent to a policy gradient for the distribution at this edge, whose credit is assigned as

$R_{i,j} = -\left[\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)\right]_c. \quad (8)$

From a decision-wise perspective, this reward could be interpreted as a contribution analysis of $L$ with Taylor Decomposition (Montavon et al., 2017a), which distributes importance scores among the nodes in the same effective layer. Given the presence of skip connections, nodes may be involved in multiple effective layers, and the credits from these layers are integrated. The integrated credit of a node $j$ is then distributed to the edges $(i,j)$ pointing to it, weighted by $\tilde{O}_{i,j}(x_i)$. Details are given in Appendix E. Thus for each structural decision no delayed reward exists; the credits assigned to it are valid from the beginning. This proves why SNAS is more efficient than ENAS. Laterally, at each edge, credits are distributed among the possible operations, adjusted by the random variable $Z_{i,j}$. At the beginning of training, $Z_{i,j}$ is continuous and the operations share the credit, so the training mainly affects the neural operation parameters. As the temperature goes down and $Z_{i,j}$ becomes closer to one-hot, credits are given to the chosen operations, adjusting their probabilities of being sampled.

2.4 RESOURCE CONSTRAINT

Apart from training efficiency and validation accuracy, the forwarding time of the child network is another concern in NAS, in order for its feasible deployment. In SNAS, this can be taken into account as a regularizer in the objective:

$\mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z) + \eta C(Z)] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)] + \eta\,\mathbb{E}_{Z\sim p_\alpha(Z)}[C(Z)], \quad (9)$

where $C(Z)$ is the time cost of the child network associated with the random variables $Z$. Rather than directly estimating the forwarding time, there are three candidates from the literature (Gordon et al., 2018; Ma et al., 2018) that can be used to approximately represent it: 1) the parameter size; 2) the number of floating-point operations (FLOPs); and 3) the memory access cost (MAC). Details about $C(Z)$ in SNAS can be found in Appendix F. However, unlike $L_\theta(Z)$, $C(Z)$ is not differentiable w.r.t. either $\theta$ or $\alpha$. A natural question to ask is whether efficient credit assignment from $C(Z)$ could be done with a decomposition similar to the one introduced above, such that the proof of SNAS's efficiency still applies. The answer is positive, thanks to the fact that $C(Z)$ is linear in all the one-hot random variables $Z_{i,j}$:

$C(Z) = \sum_{i,j} C(Z_{i,j}) = \sum_{i,j} Z_{i,j}^T C(O_{i,j}), \quad (10)$

mainly because the size of the feature maps at each node does not depend on the structural decisions. That is, the distribution at each edge $(i,j)$ is optimized with a local penalty, which is a conservative decomposition of the global cost, consistent with the credit assignment principle in SNAS. In SNAS, $p_\alpha(Z)$ is fully factorizable, making it possible to calculate $\mathbb{E}_{Z\sim p_\alpha}[C(Z)]$ analytically with the sum-product algorithm (Kschischang et al., 2001). Unfortunately, this expectation is non-trivial to calculate exactly, so we optimize a Monte Carlo estimate of the final form from the sum-product algorithm

$\mathbb{E}_{Z\sim p_\alpha}[C(Z)] = \sum_{i,j}\mathbb{E}_{Z_{\backslash(i,j)}\sim p_\alpha}\big[\mathbb{E}_{Z_{i,j}\sim p_\alpha}[Z_{i,j}^T C(O_{i,j})]\big] \quad (11)$

with policy gradients.

3 EXPERIMENTS

Following the pipeline in DARTS, our experiments consist of three stages. First, SNAS is applied to search for convolutional cells in a small parent network on CIFAR-10 and we choose the best cells based on their search validation accuracy. Then, a larger network is constructed by stacking the learned cells (child graphs) and is retrained on CIFAR-10 to compare the performance of SNAS with other state-of-the-art methods.
Finally, we show that the cells learned on CIFAR-10 are transferable to large datasets by evaluating their performance on ImageNet.

3.1 ARCHITECTURE SEARCH ON CIFAR-10

Motivation We apply SNAS to find convolutional cells on CIFAR-10 for image classification. Unlike DARTS, which evaluates the performance of child networks during the searching stage by training their snapshots from scratch, we directly take the search validation accuracy as the performance evaluation criterion. This evaluation method is valid in SNAS since the search is unbiased from its objective, as introduced in Section 2.1.

Dataset The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) is a basic dataset for image classification, which consists of 50,000 training images and 10,000 testing images. Data transformation is achieved by the standard data pre-processing and augmentation techniques (see Appendix G.1).

Search Space Our setup follows DARTS, where convolutional cells (parent graphs) of 7 nodes are stacked multiple times to form a network. The input nodes, i.e. the first and second nodes, of cell $k$ are set equal to the outputs of cell $k-2$ and cell $k-1$, respectively, with 1×1 convolutions inserted as necessary, and the output node is the depthwise concatenation of all the intermediate nodes. Reduction cells are located at 1/3 and 2/3 of the total depth of the network to reduce the spatial resolution of feature maps. Therefore the architecture distribution parameters are $(\alpha_{normal}, \alpha_{reduce})$, where $\alpha_{normal}$ is shared by all the normal cells and $\alpha_{reduce}$ is shared by all the reduction cells. Details about all the included operations are given in Appendix G.1.

Training Settings In the searching stage, we train a small network stacked of 8 cells (parent graphs) using SNAS with three levels of resource constraint for 150 epochs. This network size is determined to fit into a single GPU. Single-level optimization is employed to optimize $\theta$ and $\alpha$ over the same dataset, as opposed to the bilevel optimization employed by DARTS. The rest of the setup follows DARTS (Appendix G.1). The search takes 32 hours (footnote 1) on a single GPU (footnote 2).

Searching Process The normal and reduction cells learned on CIFAR-10 using SNAS with a mild resource constraint are shown in Figure 2. In Figure 3, we give the validation accuracy during the search of SNAS, DARTS and ENAS with 10 randomly generated seeds. Compared with ENAS, SNAS takes fewer epochs to converge to a higher validation accuracy. Though DARTS converges faster than SNAS, its accuracy is inconsistent with that of the child network. Table 1 presents their comparison of the validation accuracy at the end of search and after architecture derivation without fine-tuning. While SNAS can maintain its performance, there is a huge gap between those two in DARTS.

Figure 3: Search progress in validation accuracy from SNAS, DARTS and ENAS.

[Figure: bar chart comparing DARTS and SNAS over the normal cell, the reduction cell and overall.]

This gap is caused by the extra architecture derivation step in DARTS, which consists of the following two steps. (1) Remove operations with relatively weak attention. As shown in Figure 4, the entropy of the architecture distribution (softmax) at each edge, i.e. $H_{p_\alpha}$, is relatively high in DARTS, indicating its uncertainty in structural decisions; a sketch of this per-edge entropy follows.
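A minimal sketch of the per-edge entropy $H_{p_\alpha}$ discussed above, assuming `alpha` is an [n_edges, n_ops] tensor of architecture logits (an assumed layout, not the paper's exact parameterization):

```python
import torch

def edge_entropy(alpha):
    # H_{p_alpha} per edge: low entropy means the operation choice on that
    # edge is near-deterministic, high entropy means it is still ambiguous.
    p = torch.softmax(alpha, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1)  # one value per edge
```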
1. What is the focus of the paper regarding NAS? 2. How does the proposed approach improve upon previous methods like ENAS and DARTS? 3. What are the strengths of the employed Gumbel random variables? 4. How does the resource constraint regularization contribute to the method's advantages? 5. Are there any concerns regarding the performance of the proposed method compared to existing techniques?
Review
This paper improves upon ENAS and DARTS by taking a differentiable approach to NAS and optimizing the objective across the distribution of child graphs. This technique allows for end-to-end architecture search while constraining resource usage and allowing parameter sharing by generating effective reusable child graphs. SNAS employs Gumbel random variables, which gives it better gradients and makes learning more robust compared to ENAS. The use of Gumbel variables also allows SNAS to directly optimize the NAS objective, which is an advantage over DARTS. The resource constraint regularization is interesting. Regularizing on the parameters that describe the architecture can help constrain resource usage during the forward pass. The proposed method is novel, but the main concern here is that there is no clear win over existing techniques in terms of performance. I can't see anywhere in the tables where you demonstrate a clear improvement over DARTS or ENAS. Furthermore, in your child network evaluation with CIFAR-10, you mention that the comparison is without fine-tuning. Do you think this might be contributing to the performance gap in DARTS?
ICLR
Title SNAS: stochastic neural architecture search Abstract We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets. 1 INTRODUCTION The trend to seek for state-of-the-art neural network architecture automatically has been growing since Zoph & Le (2016), given the enormous effort needed in scientific research. Normally, a Neural Architecture Search (NAS) pipeline comprises architecture sampling, parameter learning, architecture validation, credit assignment and search direction update. There are basically three existing frameworks for neural architecture search. Evolution-based NAS like NEAT (Stanley & Miikkulainen, 2002) employs evolution algorithm to simultaneously optimize topology alongside with parameters. However, it takes enormous computational power and could not leverage the efficient gradient back-propagation in deep learning. To achieve the state-of-the-art performance as human-designed architectures, Real et al. (2018) takes 3150 GPU days for the whole evolution. Reinforcement-learning-based NAS is end-to-end for gradient back-propagation, among which the most efficient one, ENAS (Pham et al., 2018) learns optimal parameters and architectures together just like NEAT. However, as NAS is modeled as a Markov Decision Process, credits are assigned to structural decisions with temporal-difference (TD) learning (Sutton et al., 1998), whose efficiency and interpretability suffer from delayed rewards (Arjona-Medina et al., 2018). To get rid of the architecture sampling process, DARTS (Liu et al., 2019) proposes deterministic attention on operations to analytically calculate expectation at each layer. After the convergence of the parent network, it removes operations with relatively weak attention. Due to the pervasive non-linearity in neural operations, it introduces untractable bias to the loss function. This bias causes inconsistency between the performance of derived child networks and converged parent networks, thus parameter retraining comes up as necessary. A more efficient, more interpretable and less biased framework is in desire, especially for future full-fledged NAS solutions on large datasets. 
In this work, we propose a novel, efficient and highly automated framework, Stochastic Neural Architecture Search (SNAS), that trains neural operation parameters and architecture distribution parameters in the same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. One of the key motivations of SNAS is to replace the feedback mechanism triggered by constant rewards in reinforcement-learning-based NAS with more efficient gradient feedback from a generic loss. We reformulate NAS with a new stochastic modeling to bypass the MDP assumption in reinforcement learning. To combine architecture sampling with the computational graph of an arbitrary differentiable loss, the search space is represented with a set of one-hot random variables from a fully factorizable joint distribution, multiplied as a mask to select operations in the graph. Sampling from this search space is made differentiable by relaxing the architecture distribution with the concrete distribution (Maddison et al., 2016). We name the gradients w.r.t. their parameters the search gradient. From a global view, we prove that SNAS optimizes the same objective as reinforcement-learning-based NAS, except that the training loss is used as the reward. Zooming in, we provide a policy-gradient equivalent of this search gradient, showing how gradients from the loss of each sample are used to assign credits to structural decisions. By interpreting this credit assignment as Taylor Decomposition (Montavon et al., 2017a), we prove SNAS's efficiency over reinforcement-learning-based NAS. Additionally, seeing that existing methods (Liu et al., 2019) manually design the topology in child networks to avoid complex architectures, we propose a global resource constraint to automate it, augmenting the objective with feasibility concerns. This global constraint can be linearly decomposed over structural decisions, hence the proof of SNAS's efficiency still applies. In our experiments, SNAS shows strong performance compared with DARTS and all other existing NAS methods in terms of test error, model complexity and search resources. Specifically, SNAS discovers novel convolutional cells achieving 2.85±0.02% test error on CIFAR-10 with only 2.8M parameters, which is better than 3.00±0.14% with 3.3M parameters from 1st-order DARTS and 2.89% with 4.6M parameters from ENAS. It is also on par with 2.76±0.09% with 3.3M parameters from 2nd-order DARTS, while using fewer parameters. With a more aggressive resource constraint, SNAS discovers an even smaller model achieving 3.10±0.04% test error on CIFAR-10 with 2.3M parameters. During the architecture search process, SNAS obtains a validation accuracy of 88%, compared to around 70% for ENAS, in fewer epochs. When validating the derived child network on CIFAR-10 without fine-tuning, SNAS maintains the search validation accuracy, significantly outperforming the 54.66% of DARTS. These results validate our theory that SNAS is less biased than DARTS. The discovered cell achieves 27.3% top-1 error when transferred to ImageNet (mobile setting), which is comparable to the 26.9% of 2nd-order DARTS. 2 METHODOLOGY The main initiative of SNAS is to build an efficient and economical end-to-end learning system with as little compromise of the NAS pipeline as possible. In this section, we first describe how to sample from the search space for NAS in a cell, and how this motivates a stochastic reformulation of SNAS (Section 2.1). A new optimization objective is provided, and the inconsistency of attention-based NAS is discussed.
Then in Section 2.2, we introduce how this discrete search space is relaxed to be continuous to let gradients back-propagate through. In Section 2.3, the search gradient of SNAS is connected to the policy gradient in reinforcement-learning-based NAS (Zoph & Le, 2016; Pham et al., 2018), interpreting SNAS's credit assignment with contribution analysis. At last, we introduce in Section 2.4 how SNAS automates the topology search to reduce the complexity of the child network, as well as how it decomposes this global constraint in the context of credit assignment.

2.1 SEARCH SPACE AND ARCHITECTURE SAMPLING Searching for the structure of a cell that is later stacked as a building block for a deep architecture is an ad hoc solution to trade off search efficiency and result optimality (Zoph et al., 2017; Liu et al., 2017a; Real et al., 2018; Pham et al., 2018; Liu et al., 2019). As shown in the left of Figure 1, the search space, i.e. a cell, is represented using a directed acyclic graph (DAG), which is called the parent graph. Nodes $x_i$ in this DAG represent latent representations, whose dimensions are simply ignored to avoid abuse of notation. In convolutional networks, they are feature maps. Edges $(i, j)$ represent information flows and possible operations $O_{i,j}$ to be selected between two nodes $x_i$ and $x_j$. To include the skip operation, nodes are enforced to be ordered, while edges only point from lower-indexed nodes to higher ones. Thus we have intermediate nodes

$x_j = \sum_{i<j} \tilde{O}_{i,j}(x_i)$, (1)

where $\tilde{O}_{i,j}$ is the selected operation at edge $(i, j)$. Analogous to ENAS, SNAS searches for operations and topology of this cell at the same time. Rather than using two distributions, this is done by introducing a zero operation, as in DARTS. Same as in ENAS and DARTS, each cell is designed to have two inputs from the outputs of previous cells. The output of a cell is the concatenation of intermediate nodes. Thanks to the fact that the volume of structural decisions, which pick $\tilde{O}_{i,j}$ for edge $(i, j)$, is generally tractable in a cell, we represent it with a distribution $p(Z)$. Multiplying each one-hot random variable $Z_{i,j}$ to each edge $(i, j)$ in the DAG, we obtain a child graph, whose intermediate nodes are

$x_j = \sum_{i<j} \tilde{O}_{i,j}(x_i) = \sum_{i<j} Z_{i,j}^T O_{i,j}(x_i)$. (2)

In terms of how to parameterize and factorize $p(Z)$, SNAS is built upon the observation that NAS is a task with fully delayed rewards in a deterministic environment. That is, the feedback signal is only ready after the whole episode is done, and all state transition distributions are delta functions. Therefore, a Markov Decision Process assumption as in ENAS may not be necessary. In SNAS, we simply assume that $p(Z)$ is fully factorizable, whose factors are parameterized with $\alpha$ and learnt along with the operation parameters $\theta$. In Appendix A we connect the probability of a trajectory in the MDP of ENAS and this joint probability $p(Z)$. Following the setting in Zoph & Le (2016), the objective of SNAS is also

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)]$. (3)

The difference is that rather than using a constant reward from validation accuracy, we use the training/testing loss directly as the reward, $R(Z) = L_\theta(Z)$, such that the operation parameters and architecture parameters can be trained under one generic loss:

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]$. (4)

The whole process of obtaining a Monte Carlo estimate of this objective is shown in Figure 1. An intuitive interpretation of this objective is to optimize the expected performance of architectures sampled with $p(Z)$.
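To make Eqns. (2)-(4) concrete, the following is a minimal PyTorch sketch (function and variable names are our assumptions, not the authors' released code): a one-hot $Z_{i,j}$ is drawn per edge and used to mask the candidate operations, so the loss of the sampled child graph is a one-sample Monte Carlo estimate of $\mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]$.

```python
import torch
import torch.nn.functional as F

def sample_one_hot(alpha):
    """Draw a one-hot Z_{i,j} from the softmax of logits alpha, shape (n_ops,)."""
    idx = torch.multinomial(F.softmax(alpha, dim=-1), num_samples=1)
    return F.one_hot(idx.squeeze(-1), num_classes=alpha.numel()).float()

def intermediate_node(preds, ops, alphas):
    """Eqn. (2): x_j = sum_{i<j} Z_{i,j}^T O_{i,j}(x_i).
    preds[i] is x_i, ops[i] the candidate operations on edge (i, j),
    alphas[i] the corresponding logits."""
    x_j = 0.0
    for x_i, O_ij, a_ij in zip(preds, ops, alphas):
        z = sample_one_hot(a_ij)                        # one-hot Z_{i,j}
        x_j = x_j + sum(z[k] * op(x_i) for k, op in enumerate(O_ij))
    return x_j
```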
This sampling-based objective differentiates SNAS from attention-based NAS like DARTS, which avoids the sampling process by taking an analytical expectation at each edge over all operations. In Appendix B we illustrate the inconsistency between DARTS's loss and this objective, explaining its need for parameter fine-tuning or even retraining after architecture derivation. Resembling ENAS, SNAS does not have this constraint. We introduce in the next subsection how SNAS calculates gradients w.r.t. $\theta$ and $\alpha$.

2.2 PARAMETER LEARNING FOR OPERATIONS AND ARCHITECTURES Though the objective (4) could be optimized with a black-box gradient descent method as in Ranganath et al. (2014), it would suffer from the high variance of the likelihood ratio trick (Williams, 1992) and could not make use of the differentiable nature of $L_\theta(Z)$. Instead, we use the concrete distribution (Maddison et al., 2016) to relax the discrete architecture distribution to be continuous and differentiable with the reparameterization trick:

$Z_{i,j}^k = f_{\alpha_{i,j}}(G_{i,j}^k) = \frac{\exp((\log\alpha_{i,j}^k + G_{i,j}^k)/\lambda)}{\sum_{l=0}^{n}\exp((\log\alpha_{i,j}^l + G_{i,j}^l)/\lambda)}$, (5)

where $Z_{i,j}$ is the softened one-hot random variable for operation selection at edge $(i, j)$, $G_{i,j}^k = -\log(-\log(U_{i,j}^k))$ is the $k$-th Gumbel random variable, and $U_{i,j}^k$ is a uniform random variable. $\alpha_{i,j}$ is the architecture parameter, which could depend on predecessors $Z_{h,i}$ if $p(Z_{i,j})$ is a conditional probability. $\lambda$ is the temperature of the softmax, which is steadily annealed to be close to zero in SNAS. Maddison et al. (2016) prove that $p(\lim_{\lambda\to 0} Z_{i,j}^k = 1) = \alpha_{i,j}^k / \sum_{l=0}^{n}\alpha_{i,j}^l$, making this relaxation unbiased once converged. The full derivation of $\nabla\mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]$ is given in Appendix C. Here, with the surrogate loss $L$ for each sample, we provide its gradients w.r.t. $x_j$, $\theta_{i,j}^k$ and $\alpha_{i,j}^k$:

$\frac{\partial L}{\partial x_j} = \sum_{m>j}\frac{\partial L}{\partial x_m} Z_{j,m}^T \frac{\partial O_{j,m}(x_j)}{\partial x_j}$, $\quad\frac{\partial L}{\partial \theta_{i,j}^k} = \frac{\partial L}{\partial x_j} Z_{i,j}^k \frac{\partial O_{i,j}(x_i)}{\partial \theta_{i,j}^k}$, $\quad\frac{\partial L}{\partial \alpha_{i,j}^k} = \frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)(\delta(k'-k) - Z_{i,j}) Z_{i,j}^k \frac{1}{\lambda\alpha_{i,j}^k}$. (6)

We name $\frac{\partial L}{\partial\alpha}$ the search gradient, similar to the one in Wierstra et al. (2008), even though no policy gradient is involved. This renders SNAS a differentiable version of evolutionary-strategy-based NAS.

2.3 CREDIT ASSIGNMENT With the equivalence of $p(Z)$ in SNAS and $p(\tau)$ in ENAS from Section 2.1 and the search gradient of SNAS from Section 2.2, we discuss in this subsection what credits the SNAS search gradients assign to each structural decision. Assigning credits to actions both temporally and laterally is an important topic in reinforcement learning (Precup, 2000; Schulman et al., 2015; Tucker et al., 2018; Xu et al., 2018). In ENAS, proximal policy optimization (PPO) (Schulman et al., 2017) is used to optimize the architecture policy, which distributes credits with TD learning and the generalized advantage estimator (GAE) (Schulman et al., 2015). However, as the reward of the NAS task is only obtainable after the architecture is finalized and the network is tested for accuracy, it is a task with delayed rewards. As proved by Arjona-Medina et al. (2018), TD learning has bias under reward delay and corrects it exponentially slowly. Different from ENAS, there is no MDP assumption in SNAS, but the reward function is made differentiable in terms of structural decisions. From Section 2.2 we can derive the expected search gradient for architecture parameters at edge $(i, j)$:

$\mathbb{E}_{Z\sim p(Z)}\left[\frac{\partial L}{\partial \alpha_{i,j}^k}\right] = \mathbb{E}_{Z\sim p(Z)}\left[\nabla_{\alpha_{i,j}^k}\log p(Z_{i,j})\left[\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)\right]_c\right]$, (7)

where $[\cdot]_c$ emphasizes that $\cdot$ is constant for the gradient calculation w.r.t. $\alpha$. A full derivation is provided in Appendix D.
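For illustration before continuing, a minimal sketch of the concrete relaxation in Eqn. (5) (variable names are assumptions): the softened $Z_{i,j}$ is a deterministic function of Gumbel noise and $\log\alpha$, so the search gradient $\partial L/\partial\alpha$ of Eqn. (6) falls out of an ordinary backward pass.

```python
import torch

def concrete_sample(log_alpha, temperature):
    # Gumbel(0, 1) noise; clamp u away from {0, 1} for numerical safety
    u = torch.rand_like(log_alpha).clamp(1e-9, 1 - 1e-9)
    g = -torch.log(-torch.log(u))
    return torch.softmax((log_alpha + g) / temperature, dim=-1)

log_alpha = torch.zeros(8, requires_grad=True)        # logits for 8 candidate ops
z = concrete_sample(log_alpha, temperature=1.0)       # softened one-hot Z_{i,j}
op_outputs = torch.randn(8)                           # stand-in for O_{i,j}(x_i)
loss = (z @ op_outputs) ** 2                          # any differentiable loss L
loss.backward()                                       # search gradient dL/dlog_alpha
```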
From Eqn. (7), the search gradient is equivalent to a policy gradient for the distribution at this edge, whose credit is assigned as

$R_{i,j} = -\left[\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)\right]_c$. (8)

From a decision-wise perspective, this reward could be interpreted as contribution analysis of $L$ with Taylor Decomposition (Montavon et al., 2017a), which distributes importance scores among nodes in the same effective layer. Given the presence of skip connections, nodes may be involved in multiple effective layers, credits from which are integrated. This integrated credit of a node $j$ is then distributed to the edges $(i, j)$ pointing to it, weighted by $\tilde{O}_{i,j}(x_i)$. Details are given in Appendix E. Thus, for each structural decision, no delayed reward exists; the credits assigned to it are valid from the beginning. This proves why SNAS is more efficient than ENAS. Laterally, at each edge, credits are distributed among possible operations, adjusted with the random variables $Z_{i,j}$. At the beginning of training, $Z_{i,j}$ is continuous and operations share the credit, so the training is mainly on neural operation parameters. As the temperature goes down and $Z_{i,j}$ becomes closer to one-hot, credits are given to the chosen operations, adjusting their probabilities of being sampled.

2.4 RESOURCE CONSTRAINT Apart from training efficiency and validation accuracy, the forwarding time of the child network is another concern in NAS for feasible deployment. In SNAS, this can be taken into account as a regularizer in the objective:

$\mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z) + \eta C(Z)] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)] + \eta\,\mathbb{E}_{Z\sim p_\alpha(Z)}[C(Z)]$, (9)

where $C(Z)$ is the time cost of the child network associated with the random variables $Z$. Rather than directly estimating the forwarding time, there are three candidates from the literature (Gordon et al., 2018; Ma et al., 2018) that can approximately represent it: 1) the parameter size; 2) the number of float-point operations (FLOPs); and 3) the memory access cost (MAC). Details about $C(Z)$ in SNAS can be found in Appendix F. However, unlike $L_\theta(Z)$, $C(Z)$ is not differentiable w.r.t. either $\theta$ or $\alpha$. A natural question to ask is whether efficient credit assignment from $C(Z)$ could be done with a decomposition similar to the one introduced above, such that the proof of SNAS's efficiency still applies. The answer is positive, thanks to the fact that $C(Z)$ is linear in all the one-hot random variables $Z_{i,j}$:

$C(Z) = \sum_{i,j} C(Z_{i,j}) = \sum_{i,j} Z_{i,j}^T C(O_{i,j})$, (10)

mainly because the size of the feature maps at each node does not depend on the structural decision. That is, the distribution at each edge $(i, j)$ is optimized with a local penalty, which is the conservative decomposition of the global cost, consistent with the credit assignment principle in SNAS. In SNAS, $p_\alpha(Z)$ is fully factorizable, making it possible in principle to calculate $\mathbb{E}_{Z\sim p_\alpha}[C(Z)]$ analytically with the sum-product algorithm (Kschischang et al., 2001). As this expectation is still non-trivial to calculate, we optimize the Monte Carlo estimate of the final form from the sum-product algorithm

$\mathbb{E}_{Z\sim p_\alpha}[C(Z)] = \sum_{i,j}\mathbb{E}_{Z_{\setminus (i,j)}\sim p_\alpha}\left[\mathbb{E}_{Z_{i,j}\sim p_\alpha}[Z_{i,j}^T C(O_{i,j})]\right]$ (11)

with policy gradients.

3 EXPERIMENTS Following the pipeline in DARTS, our experiments consist of three stages. First, SNAS is applied to search for convolutional cells in a small parent network on CIFAR-10, and we choose the best cells based on their search validation accuracy. Then, a larger network is constructed by stacking the learned cells (child graphs) and is retrained on CIFAR-10 to compare the performance of SNAS with other state-of-the-art methods.
Finally, we show that the cells learned on CIFAR-10 are transferable to large datasets by evaluating their performance on ImageNet.

3.1 ARCHITECTURE SEARCH ON CIFAR-10 Motivation We apply SNAS to find convolutional cells on CIFAR-10 for image classification. Unlike DARTS, which evaluates the performance of child networks during the searching stage by training their snapshots from scratch, we directly take the search validation accuracy as the performance evaluation criterion. This evaluation method is valid in SNAS since the searching is unbiased from its objective, as introduced in Section 2.1.

Dataset The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) is a basic dataset for image classification, consisting of 50,000 training images and 10,000 testing images. Data transformation is achieved by the standard data pre-processing and augmentation techniques (see Appendix G.1).

Search Space Our setup follows DARTS, where convolutional cells (parent graphs) of 7 nodes are stacked multiple times to form a network. The input nodes, i.e. the first and second nodes, of cell $k$ are set equal to the outputs of cell $k-2$ and cell $k-1$, respectively, with 1×1 convolutions inserted as necessary, and the output node is the depthwise concatenation of all the intermediate nodes. Reduction cells are located at 1/3 and 2/3 of the total depth of the network to reduce the spatial resolution of feature maps. Therefore the architecture distribution parameters are $(\alpha_{normal}, \alpha_{reduce})$, where $\alpha_{normal}$ is shared by all the normal cells and $\alpha_{reduce}$ is shared by all the reduction cells. Details about all included operations are shown in Appendix G.1.

Training Settings In the searching stage, we train a small network stacked by 8 cells (parent graphs) using SNAS with three levels of resource constraint for 150 epochs. This network size is determined to fit into a single GPU. Single-level optimization is employed to optimize $\theta$ and $\alpha$ over the same dataset, as opposed to the bilevel optimization employed by DARTS. The rest of the setup follows DARTS (Appendix G.1). The search takes 32 hours on a single GPU.

Searching Process The normal and reduction cells learned on CIFAR-10 using SNAS with mild resource constraint are shown in Figure 2. In Figure 3, we give the validation accuracy during the search of SNAS, DARTS and ENAS with 10 randomly generated seeds. Compared with ENAS, SNAS takes fewer epochs to converge to a higher validation accuracy. Though DARTS converges faster than SNAS, its accuracy is inconsistent with the child network. Table 1 presents their comparison of the validation accuracy at the end of search and after architecture derivation without fine-tuning. While SNAS can maintain its performance, there is a huge gap between those two in DARTS.

[Figure 3: Search progress in validation accuracy from SNAS, DARTS and ENAS. Figure 4 (bar chart): comparison of DARTS and SNAS for the normal cell, the reduction cell and overall.]

This gap is caused by the extra architecture derivation step in DARTS, consisting of the following two steps. (1) Removing operations with relatively weak attention. As shown in Figure 4, the entropy of the architecture distribution (softmax) at each edge, i.e. $H_{p_\alpha}$, is relatively high in DARTS, indicating its uncertainty in structural decisions.
Hence removing other operations from the continuous relaxation strongly affects the output of the network. (2) Removing relatively ambiguous edges. DARTS manually selects two inputs for each intermediate node, so the topology is inconsistent with that in the training stage. In contrast, SNAS employs architecture sampling and the resource regularizer to automatically induce sparsity. The phenomena shown in Figure 4 and Table 1 verify our claim that the searching process in SNAS is less biased from the objective, i.e. Equation (4), and could possibly save computation resources for parameter retraining when extended to NAS on large datasets.

Searching Results Three levels of resource constraint, mild, moderate and aggressive, are examined in SNAS. The mild resource constraint lies at the margin of the appearance of the zero operation to drop edges in child graphs, as shown in Figure 2. Interestingly, every node takes only two input edges, just as in the designed scheme in ENAS and DARTS. When the constraint level is increased to moderate, the reduction cell begins to discover structures similar to normal cells, as shown in Appendix H. When a more aggressive resource constraint is added, the structure of reduction cells is further sparsified. As shown in Figure 5, more edges are dropped, leaving only two, which leads to the drop of some nodes, including the input node $c_{k-1}$ and two intermediate nodes $x_2$ and $x_3$. Note that this child graph is a structure that ENAS and DARTS are not able to discover [4].

[3] Repetition for convolutional cells is not necessary since the optimization outcomes are not initialization-sensitive (Liu et al., 2019). [4] In the code from Liu et al. (2019), zero is omitted in child graph derivation as empirically it tends to learn the largest weight.

3.2 ARCHITECTURE EVALUATION ON CIFAR-10 Motivation In the searching stage, we follow the economical setup of DARTS to use only a single GPU, which constrains the parameter size of the child network. A conventional assumption in DARTS and ENAS [5] is that the final search validation accuracy has exploited the parameter size, the ceiling of which can only be raised by allowing more parameters. For a fair comparison, we follow this assumption in the evaluation stage, stacking more cells (child graphs) to build a deeper network. This network is trained from scratch, as in DARTS and ENAS, to report the performance of the cells learned by SNAS on CIFAR-10.

Evaluation Settings A large network of 20 cells is trained from scratch for 600 epochs with batch size 96. Other hyperparameters remain the same as those for architecture search. Additional enhancements are listed in Appendix G.2. The training takes 1.5 days on a single GPU with our implementation in PyTorch.

Results The CIFAR-10 evaluation results are presented in Table 2. The test error of SNAS is on par with the state-of-the-art RL-based and evolution-based NAS while using three orders of magnitude less computation resources. Furthermore, with slightly longer wall-clock time, SNAS outperforms 1st-order DARTS and ENAS by discovering convolutional cells with both a smaller error rate and fewer parameters. It also achieves an error rate comparable to 2nd-order DARTS, but with fewer parameters. With a more aggressive resource constraint, SNAS can sparsify the architecture even further, distinguishing itself from ENAS and DARTS with only a slight drop in performance, which is still on par with 1st-order DARTS. It is interesting to note that with the same single-level optimization, SNAS significantly outperforms DARTS.
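To make the bias behind this gap concrete, here is a toy numeric sketch (ours, not from the paper) of the difference between the SNAS objective $\mathbb{E}[L(Z)]$, which the derived child graph actually sees, and DARTS's relaxed objective $L(\mathbb{E}[Z])$ for a non-linear loss.

```python
import torch

p = torch.tensor([0.5, 0.5])              # architecture distribution over 2 ops
outputs = torch.tensor([-2.0, 2.0])       # per-operation edge outputs
loss = torch.relu                         # any non-linear loss

snas_obj = (p * loss(outputs)).sum()      # E[L(Z)] = 0.5*0 + 0.5*2 = 1.0
darts_obj = loss((p * outputs).sum())     # L(E[Z]) = relu(0)       = 0.0
print(snas_obj.item(), darts_obj.item())  # the nonzero gap is the bias
```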
Bilevel optimization could be regarded as a data-driven meta-learning method to resolve the bias proved above, but its own deviation from the exact meta-learning objective is still unjustified, as it ignores the separate child-network derivation scheme.

3.3 ARCHITECTURE TRANSFERABILITY EVALUATION ON IMAGENET Motivation Since real-world applications often involve much larger datasets than CIFAR-10, transferability is a crucial criterion to evaluate the potential of the learned cells (child graphs) (Zoph et al., 2017). To show whether the cells learned by SNAS on CIFAR-10 can be generalized to larger datasets, we apply the same cells evaluated in Section 3.2 to the classification task on ImageNet.

Dataset The mobile setting is adopted, where the size of the input images is 224×224 and the number of multiply-add operations in the model is restricted to be less than 600M.

[5] As shown in the code publicly released by Pham et al. (2018).

Evaluation Settings We stack a network of 14 cells using the same cells designed by SNAS (mild constraint) and evaluated on CIFAR-10 (Section 3.2), and train it for 250 epochs with other hyperparameters following DARTS (see Appendix G.3). The training takes 12 days on a single GPU.

Results Table 3 presents the results of the evaluation on ImageNet and shows that the cells found by SNAS on CIFAR-10 can be successfully transferred to ImageNet. Notably, SNAS is able to achieve competitive test error with the state-of-the-art RL-based NAS using three orders of magnitude less computation resources. And with resource constraints added, SNAS can find smaller cell architectures that achieve competitive performance with DARTS.

4 RELATED WORKS Improving the efficiency of NAS is a prerequisite to extending it to more complicated vision tasks like detection, as well as to larger datasets. In the complete pipeline of NAS, parameter learning is a time-consuming stage that has attracted attention in the literature. Ideas to design auxiliary mechanisms like performance prediction (Baker et al., 2017; Deng et al., 2017), iterative search (Liu et al., 2017a), and hypernetwork-generated weights (Brock et al., 2017) successfully accelerate NAS to certain degrees. Getting rid of these auxiliary mechanisms, ENAS (Pham et al., 2018) is the state-of-the-art NAS framework, proposing parameter sharing among all possible child graphs, which is followed by SNAS. In Section 2 we introduced SNAS's relation to ENAS in detail. Apart from ENAS, we are also inspired by Louizos et al. (2017) to use a continuous distribution for the structural decision at each edge and optimize it along with an $l_0$ complexity regularizer. The most important motivation of SNAS is to leverage the gradient information in the generic differentiable loss to update the architecture distribution, which is shared by DARTS (Liu et al., 2019). In Section 2 and Appendix B we have introduced SNAS's advantage over DARTS, a reward for maintaining the completeness of the NAS pipeline. Actually, the idea of making use of this gradient information to improve the learning efficiency of a stochastic model has been discussed in the literature on generative models (Gu et al., 2015; Maddison et al., 2016) and reinforcement learning (Schmidhuber, 1990; Arjona-Medina et al., 2018). But as far as we know, we are the first to combine the insights from these two fields to discuss possible efficiency improvement of NAS.

5 CONCLUSION In this work, we presented SNAS, a novel and economical end-to-end neural architecture search framework.
The key contribution of SNAS is that by making use of gradient information from a generic differentiable loss without sacrificing the completeness of the NAS pipeline, stochastic architecture search can be more efficient. This improvement is proved by comparing the credit assigned by the search gradient with that of reinforcement-learning-based NAS. Augmented by a complexity regularizer, this search gradient trades off testing error and forwarding time. Experiments showed that SNAS searches well on CIFAR-10, and its result can be transferred to ImageNet as well. As a more efficient and less biased framework, SNAS will serve as a possible candidate for full-fledged NAS on large datasets in the future.

B DIFFERENCE BETWEEN SNAS AND DARTS We take a search space with three intermediate nodes as an example to exhibit the difference between SNAS and DARTS (Liu et al., 2019), as shown in Figure 6. This search space can be viewed as a unit search space whose properties generalize to larger spaces, since it contains nodes in series and in parallel. The objective of a NAS task is

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)]$, (16)

where $p_\alpha(Z)$ is the distribution of architectures, which was previously solved with reinforcement learning. In both SNAS and DARTS, the reward function is made differentiable using the training/testing loss, $R(Z) = L_\theta(Z)$, such that the architecture learning can leverage information in the gradients of this loss and be conducted together with operation parameter training:

$\mathbb{E}_{Z\sim p_\alpha(Z)}[R(Z)] = \mathbb{E}_{Z\sim p_\alpha(Z)}[L_\theta(Z)]$. (17)

As introduced in Appendix A, SNAS solves (16) with a novel type of factorization, without relying on the MDP assumption. Though the independence assumption between edges restricts the probability distribution, no bias is introduced. However, to avoid the sampling process and gradient back-propagation through discrete random variables, DARTS takes an analytical expectation at the input of each node over operations at incoming edges and optimizes a relaxed loss with deterministic gradients. Taking the cell in Figure 6 as a base case, the objective before this relaxation is

$\mathbb{E}_{Z\sim p_\alpha(Z)}\left[L_\theta\left(Z_{j,l}^T O_{j,l}(Z_{i,j}^T O_{i,j}(x_i)) + Z_{j,m}^T O_{j,m}(Z_{i,j}^T O_{i,j}(x_i))\right)\right] = \mathbb{E}_{Z\sim p_\alpha(Z)}\left[L_\theta\left(\sum_{m>j} Z_{j,m}^T O_{j,m}(Z_{i,j}^T O_{i,j}(x_i))\right)\right]$. (18)

DARTS relaxes this objective to

$L_\theta\left(\sum_{m>j}\mathbb{E}_{p_{\alpha_{j,m}}}\left[Z_{j,m}^T O_{j,m}\left(\mathbb{E}_{p_{\alpha_{i,j}}}[Z_{i,j}^T O_{i,j}(x_i)]\right)\right]\right)$. (19)

Considering that the $O(x)$ are ReLU-Conv-BN stacks as in ENAS (Pham et al., 2018), which are non-linear, this transformation introduces unbounded bias. Though this is not perceivable in training, where the complete graph, consistent with this loss, is used for accuracy validation, the derived graph is never validated during training. Hence the training is inconsistent with the true objective of maximizing the expected performance of derived architectures. After the architecture derivation introduced in DARTS, the performance falls enormously and the parameters need to be retrained.

C GRADIENTS IN SNAS Figure 6(b) gives an illustration of a base three-intermediate-node unit in SNAS, where each edge has three operations (indexed by $k$) to choose from. In the search space of SNAS, intermediate nodes take input from all previous nodes. We have

$x_j = \sum_{h<j} Z_{h,j}^T O_{h,j}(x_h) = Z_{i,j}^T O_{i,j}(x_i) + \sum_{h<i} Z_{h,j}^T O_{h,j}(x_h)$. (20)

Let $\theta_{i,j}^k$ be the parameters in $O_{i,j}^k$; we have

$\frac{\partial x_j}{\partial\theta_{i,j}^k} = Z_{i,j}^T \frac{\partial O_{i,j}(x_i)}{\partial\theta_{i,j}^k}$. (21)
As we use the concrete distribution here to make the sampling differentiable with the reparameterization trick,

$Z_{i,j}^k = f_{\alpha_{i,j}}(G_{i,j}^k) = \frac{\exp((\log\alpha_{i,j}^k + G_{i,j}^k)/\lambda)}{\sum_{l=0}^{n}\exp((\log\alpha_{i,j}^l + G_{i,j}^l)/\lambda)}$, (22)

where $G_{i,j}^k = -\log(-\log(U_{i,j}^k))$ is the $k$-th Gumbel random variable and $U_{i,j}^k$ is a uniform random variable, the gradient w.r.t. $\alpha_{i,j}$ is:

$\frac{\partial x_j}{\partial\alpha_{i,j}^k} = O_{i,j}^T(x_i)\frac{\partial f_{\alpha_{i,j}}(G_{i,j})}{\partial\alpha_{i,j}^k}$. (23)

The partial derivative $\frac{\partial f_{\alpha_{i,j}}}{\partial\alpha_{i,j}^k}$ is

$\frac{\partial f_{\alpha_{i,j}}(G_{i,j})}{\partial\alpha_{i,j}^k} = \frac{\partial(\log\alpha_{i,j}^k + G_{i,j}^k)/\lambda}{\partial\alpha_{i,j}^k}\, f_{\alpha_{i,j}}(G_{i,j}^k)\left(\delta(k'-k) - f_{\alpha_{i,j}}(G_{i,j})\right) = \left(\delta(k'-k) - f_{\alpha_{i,j}}(G_{i,j})\right) f_{\alpha_{i,j}}(G_{i,j}^k)\frac{1}{\lambda\alpha_{i,j}^k} = (\delta(k'-k) - Z_{i,j}) Z_{i,j}^k \frac{1}{\lambda\alpha_{i,j}^k}$. (24)

Substituting it back into (23), we obtain

$\frac{\partial x_j}{\partial\alpha_{i,j}^k} = O_{i,j}^T(x_i)(\delta(k'-k) - Z_{i,j}) Z_{i,j}^k \frac{1}{\lambda\alpha_{i,j}^k}$. (25)

We can also derive $\frac{\partial x_m}{\partial x_j}$ for the chain-rule connection:

$\frac{\partial x_m}{\partial x_j} = Z_{j,m}^T \frac{\partial O_{j,m}(x_j)}{\partial x_j}$. (26)

Thus the gradients from the surrogate loss $L$ to $x_j$, $\theta_{i,j}^k$ and $\alpha_{i,j}^k$ respectively are

$\frac{\partial L}{\partial x_j} = \sum_{m>j}\frac{\partial L}{\partial x_m} Z_{j,m}^T \frac{\partial O_{j,m}(x_j)}{\partial x_j}$, $\quad\frac{\partial L}{\partial\theta_{i,j}^k} = \frac{\partial L}{\partial x_j} Z_{i,j}^k \frac{\partial O_{i,j}(x_i)}{\partial\theta_{i,j}^k}$, $\quad\frac{\partial L}{\partial\alpha_{i,j}^k} = \frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)(\delta(k'-k) - Z_{i,j}) Z_{i,j}^k \frac{1}{\lambda\alpha_{i,j}^k}$. (27)

D CREDIT ASSIGNMENT FOR EQUIVALENT POLICY GRADIENT From Appendix C we can see that the expected search gradient for architecture parameters at each edge is:

$\mathbb{E}_{Z\sim p(Z)}\left[\frac{\partial L}{\partial\alpha_{i,j}^k}\right] = \mathbb{E}_{U\sim\mathrm{Uniform}}\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\frac{\partial f_{\alpha_{i,j}}(-\log(-\log(U_{i,j})))}{\partial\alpha_{i,j}^k}\right] = \int_0^1 p(U_{i,j})\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\frac{\partial f_{\alpha_{i,j}}(-\log(-\log(U_{i,j})))}{\partial\alpha_{i,j}^k}\, dU_{i,j} = \frac{\partial}{\partial\alpha_{i,j}^k}\int_0^1 p(U_{i,j})\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\right]_c f_{\alpha_{i,j}}(-\log(-\log(U_{i,j})))\, dU_{i,j} = \frac{\partial}{\partial\alpha_{i,j}^k}\int p(Z_{i,j})\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i)\right]_c Z_{i,j}\, dZ_{i,j} = \int p(Z_{i,j})\frac{\partial\log p(Z_{i,j})}{\partial\alpha_{i,j}^k}\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i) Z_{i,j}\right]_c dZ_{i,j} = \mathbb{E}_{Z\sim p(Z)}\left[\nabla_{\alpha_{i,j}^k}\log p(Z_{i,j})\left[\frac{\partial L}{\partial x_j} O_{i,j}^T(x_i) Z_{i,j}\right]_c\right] = \mathbb{E}_{Z\sim p(Z)}\left[\nabla_{\alpha_{i,j}^k}\log p(Z_{i,j})\left[\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)\right]_c\right]$, (28)

where $[\cdot]_c$ denotes that $\cdot$ is a constant for the gradient calculation w.r.t. $\alpha$. Note that in this derivation we stop the gradient from successor nodes, with an independence assumption enforced in back-propagation.

E TAYLOR DECOMPOSITION FOR CONTRIBUTION ANALYSIS With $d$ neurons (pixels) $x_i$ in the same layer of a deep neural network whose output is $f(x)$, Montavon et al. (2017a) decompose $f(x)$ as a sum of individual credits for the $x_i$. This decomposition is obtained by the first-order Taylor expansion of the function at some root point $\tilde{x}$ for which $f(\tilde{x}) = 0$:

$f(x) = \sum_{i=1}^{d} R_i(x) + O(xx^T)$, (29)

where the individual credits

$R_i(x) = \frac{\partial f}{\partial x_i}\Big|_{x=\tilde{x}}(x_i - \tilde{x}_i)$ (30)

are first-order terms and $O(xx^T)$ holds higher-order information. When ReLU is chosen as the activation function, $O(xx^T)$ can be omitted (Montavon et al., 2017b). Thus one can always find a root point $\tilde{x} = \lim_{\varepsilon\to 0}\varepsilon x$ that incidentally lies in the same linear region as the point $x$, in which case the function can be written as

$f(x) = \sum_{i=1}^{d} R_i(x) = \sum_{i=1}^{d}\frac{\partial f}{\partial x_i} x_i$. (31)

Noticing the similarity between (8) and (31), we try using Taylor Decomposition to interpret the credit assignment in SNAS. Given a sample $x_0$, one can iterate over all effective layers of the DAG and distribute credits from the network output $f$ among the nodes $x_j$ in each layer. In Figure 1, for example, DAG($Z^{(1)}$) has 2 effective layers, while DAG($Z^{(2)}$) has 3 effective layers. Given the presence of the skip connection, nodes may be involved in multiple layers and thus obtain integrated credits

$\frac{\partial f}{\partial x_j} = \sum_{m>j}\frac{\partial f}{\partial x_m}\frac{\partial\tilde{O}_m(x_j)}{\partial x_j}$, (32)

e.g. $x_1$ in DAG($Z^{(2)}$) integrates credits from $x_2$ and $x_3$. According to (1), multiple edges $(i, j)$ point to $j$, which decompose (32) as:

$\hat{R}_{i,j} = \frac{\partial f}{\partial x_j}\tilde{O}_{i,j}(x_i)$. (33)
Adjusting the weight of this sample with $\partial L/\partial f$ and taking the optimization direction into account, we have

$R_{i,j} = -\frac{\partial L}{\partial x_j}\tilde{O}_{i,j}(x_i)$. (34)

F CANDIDATES FOR LOCAL RESOURCE CONSTRAINTS In the case of a convolutional layer, $H$, $W$ and $f$, $k$ correspond to the output spatial dimensions and the filter dimensions respectively, and we use $I$, $O$ to denote the number of input and output channels. Since group convolution is also adopted in this paper to reduce the computational complexity, $g$ is the number of groups. Thus, the parameter size and the number of float-point operations (FLOPs) of a single convolutional layer are

parameter size $= \frac{fkIO}{g}$, (35)

FLOPs $= \frac{HWfkIO}{g}$. (36)

By assuming the computing device has enough cache to store the feature maps and the parameters, we can simplify the memory access cost (MAC) to be the sum of the memory access for the input/output feature maps and the kernel weights (Ma et al., 2018):

MAC $= HW(I + O) + \frac{fkIO}{g}$. (37)

In SNAS, because all the operations on a single edge share the same output spatial dimensions and input/output channels, the FLOPs of a convolutional operation are directly proportional to its parameter size. And although the memory access cost for the input/output feature maps, $HW(I+O)$, does not depend on the parameter size, both are positively correlated with the number of layers used in the operation, so we may say there is a positive correlation between MAC and the parameter size. Thus, when only considering the convolution operations, solely using the parameter size as the resource constraint is sufficient. However, in SNAS, we also have the pooling operation and the skip connection, which are parameter-free. The equations to calculate the resource criteria of a pooling operation or a skip connection are as follows. FLOPs of pooling:

FLOPs $= HWfkIO$. (38)

FLOPs of skip connection:

FLOPs $= 0$. (39)

MAC of pooling and skip connection:

MAC $= HW(I + O)$. (40)

We can see that MAC is the same for pooling and skip connection since they need to access the same input/output feature maps; therefore, to distinguish between pooling and skip connection, FLOPs need to be included in the resource constraint. Similarly, to distinguish between skip connection and none (free, no operation), MAC also needs to be included. In conclusion, to construct a resource constraint which fully distinguishes the four types of operations, all three locally decomposable criteria, the parameter size, FLOPs and MAC, need to be combined.

G DETAILED SETTINGS OF EXPERIMENTS G.1 ARCHITECTURE SEARCH ON CIFAR-10 Data Pre-processing and Augmentation Techniques We employ the following techniques in our experiments: centrally padding the training images to 40×40 and then randomly cropping them back to 32×32; randomly flipping the training images horizontally; normalizing the training and validation images by subtracting the channel mean and dividing by the channel standard deviation.

Implementation Details of Operations The operations include: 3×3 and 5×5 separable convolutions, 3×3 and 5×5 dilated separable convolutions, 3×3 max pooling, 3×3 average pooling, skip connection and zero operation. All operations are of stride one (excluding the ones adjacent to the input nodes in the reduction cell, which are of stride two), and the convolved feature maps are padded to preserve their spatial resolution. Convolutions are applied in the order of ReLU-Conv-BN, and the depthwise separable convolution is always applied twice (Zoph et al., 2017; Real et al., 2018; Liu et al., 2017a; 2019).
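Referring back to Appendix F and the linear decomposition in Eqns. (10)-(11), below is a minimal sketch (argument names are assumptions) of the per-operation criteria and of the edge-wise expected cost. Note the paper optimizes a Monte Carlo estimate with policy gradients; for illustration, this sketch computes the analytic expectation that the fully factorized $p_\alpha(Z)$ permits.

```python
import torch

def conv_costs(H, W, f, k, I, O, g=1):
    params = f * k * I * O / g                 # Eqn. (35): parameter size
    flops = H * W * f * k * I * O / g          # Eqn. (36): FLOPs
    mac = H * W * (I + O) + f * k * I * O / g  # Eqn. (37): memory access cost
    return params, flops, mac

def expected_cost(alphas, edge_costs):
    """alphas: dict edge -> logits over candidate ops;
    edge_costs: dict edge -> tensor of per-op costs C(O_{i,j}).
    Linearity of C(Z) gives E[C(Z)] = sum over edges of softmax(alpha) . c."""
    return sum(torch.softmax(a, dim=-1) @ edge_costs[e]
               for e, a in alphas.items())

# Toy usage: one edge whose candidates are a convolution, a pooling and a skip.
p, fl, m = conv_costs(H=32, W=32, f=3, k=3, I=16, O=16)
costs = {(0, 1): torch.tensor([p + fl + m,            # conv: all three criteria
                               32 * 32 * (16 + 16),   # pooling MAC, Eqn. (40)
                               0.0])}                 # skip, Eqn. (39)
alphas = {(0, 1): torch.zeros(3, requires_grad=True)}
penalty = expected_cost(alphas, costs)                # differentiable in alpha
```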
Detailed Training Settings We follow the training settings as in Liu et al. (2019). The neural operation parameters $\theta$ are optimized using momentum SGD, with initial learning rate $\eta_\theta = 0.025$ (annealed down to zero following a cosine schedule), momentum 0.9, and weight decay $3\times10^{-4}$. The architecture distribution parameters $\alpha$ are optimized by Adam, with initial learning rate $\eta_\alpha = 3\times10^{-4}$, momentum $\beta = (0.5, 0.999)$ and weight decay $10^{-3}$. The batch size employed is 64 and the initial number of channels is 16.

G.2 ARCHITECTURE EVALUATION ON CIFAR-10 Additional Enhancement Techniques Following existing works (Zoph et al., 2017; Liu et al., 2017a; Pham et al., 2018; Real et al., 2018; Liu et al., 2019), we employ the following additional enhancements: cutout (DeVries & Taylor, 2017), path dropout of probability 0.2 (same as DARTS in the code publicly released by its authors) and auxiliary towers with weight 0.4.

G.3 ARCHITECTURE TRANSFERABILITY EVALUATION ON IMAGENET Detailed Training Settings The network is trained with batch size 128, weight decay $3\times10^{-5}$ and initial SGD learning rate 0.1, which is decayed by a factor of 0.97 after each epoch. Auxiliary towers with weight 0.4 are adopted as additional enhancements.

H CELLS LEARNED BY SNAS WITH A MODERATE RESOURCE CONSTRAINT
1. What is the main contribution of the paper on neural architecture search?
2. How does the proposed method, SNAS, improve over existing works such as ENAS and DARTS?
3. Can you explain how SNAS allows independent sampling at edges in the shared DAG and how it optimizes the direct NAS objective?
4. What are the strengths and weaknesses of the paper according to the reviewer?
5. Do you have any questions regarding the experimental results or the presentation of the paper?
Review
Review Summary: This paper proposes Stochastic Neural Architecture Search (SNAS), a method to automatically and efficiently search for neural architectures. It is built upon 2 existing works on these topics, namely ENAS (Pham et al 2018) and DARTS (Liu et al 2018). SNAS provides nice theory and explanation of gradient computations, and unites the strengths while avoiding the weaknesses of ENAS and DARTS. There are many details in the paper, including the Appendix. The idea is as follows:

+--------+----------------+-------------------+
| Method | Differentiable | Directly Optimize |
|        |                | NAS reward        |
+--------+----------------+-------------------+
| ENAS   | No             | Yes               |
| DARTS  | Yes            | No                |
| SNAS   | Yes            | Yes               |
+--------+----------------+-------------------+

SNAS inherits the idea of ENAS and DARTS by superpositioning all possible architectures into a Directed Acyclic Graph (DAG), effectively sharing the weights among all architectures. However, SNAS improves over ENAS and DARTS as follows (Section 2.2):
1. SNAS improves over ENAS in that it allows independent sampling at edges in the shared DAG, leading to a more tractable gradient at the edges of the DAG, which in turn allows more tractable Monte Carlo estimation of the gradients with respect to the architectural parameters.
2. While DARTS also has property (1), DARTS implements this by computing the expected value at each node in the DAG with respect to the joint distribution of the input edges and the operations. This makes DARTS not optimize the direct NAS objective. SNAS, due to its smart manipulation of architectural gradients using Gumbel variables, still optimizes the same objective as NAS and ENAS, but has smoother gradients.
Experimental results in the paper show that SNAS finds architectures on CIFAR-10 that are comparable to those found by ENAS and DARTS, using a reasonable amount of computing resources. These architectures can also be transferred to learn competent models on ImageNet, like those of DARTS. Furthermore, the experimental observations (Figure 3) are consistent with the theory above, that is:
1. The search process of SNAS is more stable than that of ENAS (as SNAS samples with a smaller variance).
2. Architectures found by SNAS perform better than those of DARTS, as SNAS searches directly for the NAS reward of the sampled models.
Strengths:
1. SNAS unites the strengths and avoids the weaknesses of ENAS and DARTS.
2. SNAS provides a nice theory, which is verified through their experimental results.
Weaknesses: I don't really have any complaints about this paper. Some aspects of the presentation could have been improved; e.g., the discussion on the ZERO operation in other comments should have been included.
ICLR
Title One Transformer Can Understand Both 2D & 3D Molecular Data Abstract Unlike vision and language data, which usually have a unique format, molecules can naturally be characterized using different chemical formulations. One can view a molecule as a 2D graph or define it as a collection of atoms located in a 3D space. For molecular representation learning, most previous works designed neural networks only for a particular data format, making the learned models likely to fail for other data formats. We believe a general-purpose neural network model for chemistry should be able to handle molecular tasks across data modalities. To achieve this goal, in this work, we develop a novel Transformer-based Molecular model called Transformer-M, which can take molecular data of 2D or 3D formats as input and generate meaningful semantic representations. Using the standard Transformer as the backbone architecture, Transformer-M develops two separate channels to encode 2D and 3D structural information and incorporate them with the atom features in the network modules. When the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. By training on 2D and 3D molecular data with properly designed supervised signals, Transformer-M automatically learns to leverage knowledge from different data modalities and correctly capture the representations. We conducted extensive experiments for Transformer-M. All empirical results show that Transformer-M can simultaneously achieve strong performance on 2D and 3D tasks, suggesting its broad applicability. The code and models will be made publicly available at https://github.com/lsj2408/Transformer-M. 1 INTRODUCTION Deep learning approaches have revolutionized many domains, including computer vision (He et al., 2016), natural language processing (Devlin et al., 2019; Brown et al., 2020), and games (Mnih et al., 2013; Silver et al., 2016). Recently, researchers have started investigating whether the power of neural networks could help solve important scientific problems in chemistry, e.g., predicting the properties of molecules and simulating molecular dynamics from large-scale training data (Hu et al., 2020a; 2021; Zhang et al., 2018; Chanussot et al., 2020). One key difference between chemistry and conventional domains such as vision and language is the multimodality of data. In vision and language, a data instance is usually characterized in a particular form. For example, an image is defined as RGB values in a pixel grid, while a sentence is defined as tokens in a sequence. In contrast, molecules naturally have different chemical formulations. A molecule can be represented as a sequence (Weininger, 1988), a 2D graph (Wiswesser, 1985), or a collection of atoms located in a 3D space. 2D and 3D structures are the most popularly used formulations, as many valuable properties and statistics can be obtained from them (Chmiela et al., 2017; Stokes et al., 2020). However, as far as we know, most previous works focus on designing neural network models for either 2D or 3D structures, making a model learned on one form fail to apply to tasks of the other form. We argue that a general-purpose neural network model in chemistry should at least be able to handle molecular tasks across data modalities. In this paper, we take the first step toward this goal by [Footnotes: ∗ These two authors contributed equally to this project. † Correspondence to: Di He <dihe@pku.edu.cn> and Liwei Wang <wanglw@pku.edu.cn>.]
developing Transformer-M, a versatile Transformer-based Molecular model that performs well for both 2D and 3D molecular representation learning. Note that for a molecule, its 2D and 3D forms describe the same collection of atoms but use different characterizations of the structure. Therefore, the key challenge is to design a model that is expressive and compatible in capturing structural knowledge in different formulations and to train its parameters to learn from both sources of information. The Transformer is more favorable than other architectures, as it can explicitly plug structural signals into the model as bias terms (e.g., positional encodings (Vaswani et al., 2017; Raffel et al., 2020)). We can conveniently set 2D and 3D structural information as different bias terms through separate channels and incorporate them with the atom features in the attention layers.

Architecture. The backbone network of our Transformer-M is composed of standard Transformer blocks. We develop two separate channels to encode 2D and 3D structural information. The 2D channel uses degree encoding, shortest path distance encoding, and edge encoding extracted from the 2D graph structure, following Ying et al. (2021a). The shortest path distance encoding and edge encoding reflect the spatial relations and bond features of a pair of atoms and are used as bias terms in the softmax attention. The degree encoding is added to the atom features in the input layer. For the 3D channel, we follow Shi et al. (2022) and use the 3D distance encoding to encode the spatial distance between atoms in the 3D geometric structure. Each atom pair's Euclidean distance is encoded via the Gaussian Basis Kernel function (Scholkopf et al., 1997) and is used as a bias term in the softmax attention. For each atom, we sum up the 3D distance encodings between it and all other atoms and add the result to the atom features in the input layer. See Figure 1 for an illustration.

Training. Except for the parameters in the two structural channels, all other parameters in Transformer-M (e.g., self-attention and feed-forward networks) are shared across data modalities. We design a joint-training approach for Transformer-M to learn its parameters. During training, when the instances in a batch are only associated with 2D graph structures, the 2D channel will be activated and the 3D channel will be disabled. Similarly, when the instances in a batch use 3D geometric structures, the 3D channel will be activated and the 2D channel will be disabled. When both 2D and 3D information are given, both channels will be activated. In this way, we can collect 2D and 3D data from separate databases and train Transformer-M with different training objectives, making the training process more flexible. We expect a single model to learn to identify and incorporate information from different modalities and efficiently utilize the parameters, leading to better generalization performance.

Experimental Results. We use the PCQM4Mv2 dataset in the OGB Large-Scale Challenge (OGB-LSC) (Hu et al., 2021) to train our Transformer-M, which consists of 3.4 million molecules in both 2D and 3D forms. The model is trained to predict the pre-computed HOMO-LUMO gap of each data instance in different formats, with a pretext 3D denoising task specifically for 3D data. With the pre-trained model, we directly use or fine-tune the parameters for various molecular tasks of different data formats.
First, we show that on the validation set of the PCQM4Mv2 task, which only contains 2D molecular graphs, our Transformer-M surpasses all previous works by a large margin. The improvement is credited to the joint training, which effectively mitigates the overfitting problem. Second, on PDBBind (Wang et al., 2004; 2005b) (2D & 3D), the fine-tuned Transformer-M achieves state-of-the-art performance compared to strong baselines. Lastly, on the QM9 (Ramakrishnan et al., 2014) (3D) benchmark, the fine-tuned Transformer-M models achieve competitive performance compared to recent methods. All results show that our Transformer-M has the potential to be used as a general-purpose model in a broad range of applications in chemistry.

2 RELATED WORKS Neural networks for learning 2D molecular representations. The Graph Neural Network (GNN) is widely used in molecular graph representation learning (Kipf & Welling, 2016; Hamilton et al., 2017; Gilmer et al., 2017; Xu et al., 2019; Veličković et al., 2018). A GNN learns node and graph representations by recursively aggregating (i.e., message passing) and updating the node representations from neighbor representations. Different architectures are developed by using different aggregation and update strategies. We refer the readers to Wu et al. (2020) for a comprehensive survey. Recently, many works have extended the Transformer model to graph tasks (Dwivedi & Bresson, 2020; Kreuzer et al., 2021; Ying et al., 2021a; Luo et al., 2022; Kim et al., 2022; Rampášek et al., 2022; Park et al., 2022; Hussain et al., 2022; Zhang et al., 2023). Seminal works include Graphormer (Ying et al., 2021a), which developed graph structural encodings and used them in a standard Transformer model.

Neural networks for learning 3D molecular representations. Learning molecular representations with 3D geometric information is essential in many applications, such as molecular dynamics simulation. Recently, researchers have designed architectures to preserve invariant and equivariant properties for several necessary transformations like rotation and translation. Schütt et al. (2017) used continuous-filter convolutional layers to model quantum interactions in molecules. Thomas et al. (2018) used filters built from spherical harmonics to construct a rotation- and translation-equivariant neural network. Klicpera et al. (2020) proposed directional message passing, which ensures their embeddings are rotationally equivariant. Liu et al. (2022); Wang et al. (2022) use spherical coordinates to capture geometric information and achieve equivariance. Hutchinson et al. (2021); Thölke & De Fabritiis (2021) built Transformer models preserving equivariant properties. Shi et al. (2022) extended Ying et al. (2021a) to a 3D Transformer model which attains better results on large-scale molecular modeling challenges (Chanussot et al., 2020).

Multi-view learning for molecules. The 2D graph structure and 3D geometric structure can be considered different views of the same molecule. Inspired by the contrastive pre-training approach in vision (Chen et al., 2020; He et al., 2020; Radford et al., 2021), many works have studied pre-training methods for molecules that jointly use the 2D and 3D information. Stärk et al. (2022) used two encoders to encode the 2D and 3D molecular information separately while maximizing the mutual information between the representations. Liu et al. (2021a) derived the GraphMVP framework, which uses contrastive learning and reconstruction to pre-train a 2D encoder and a 3D encoder. Zhu et al.
(2022) unified the 2D and 3D pre-training methods above and proposed a 2D GNN model that can be enhanced by 3D geometric features. Different from these works, we aim to develop a single model that is compatible with both 2D and 3D molecular tasks. Furthermore, all the above works train models using paired 2D and 3D data, while such paired data is not a strong requirement for training our model.

General-purpose models. Building a single agent that works for multiple tasks, even across modalities, is a recent trend in deep learning. In the early years, researchers found that a single multilingual translation model can translate dozens of languages using the same weights and perform better than a bilingual translation model for rare languages (Lample & Conneau, 2019; Conneau et al., 2019; Xue et al., 2020; Liu et al., 2020). Large-scale language models (Devlin et al., 2019; Brown et al., 2020) are another example that can be applied to different downstream tasks using in-context learning or fine-tuning. Reed et al. (2022) further pushed the boundary by building a single generalist agent, Gato. This agent uses the same network with the same weights but can play Atari, caption images, and make conversations like a human. Our work also lies in this direction. We focus on developing a general-purpose model in chemistry, which can take molecules in different formats as input and perform well on various molecular tasks with a small amount of additional training data.

3 TRANSFORMER-M In this section, we introduce Transformer-M, a versatile Transformer serving as a general architecture for 2D and 3D molecular representation learning. First, we introduce notations and recap the preliminaries of the backbone Transformer architecture (Section 3.1). After that, we present the proposed Transformer-M model with two structural channels for different data modalities (Section 3.2).

3.1 NOTATIONS AND THE BACKBONE TRANSFORMER A molecule $M$ is made up of a collection of atoms held together by attractive forces. We denote $X\in\mathbb{R}^{n\times d}$ as the atoms with features, where $n$ is the number of atoms and $d$ is the feature dimension. The structure of $M$ can be represented in different formulations, such as a 2D graph structure and a 3D geometric structure. For the 2D graph structure, atoms are explicitly connected by chemical bonds, and we define $M_{2D} = (X, E)$, where $e_{(i,j)}\in E$ denotes the edge feature (i.e., the type of the bond) between atoms $i$ and $j$ if the edge exists. For the 3D geometric structure, for each atom $i$, its position $r_i$ in the Cartesian coordinate system is provided. We define $M_{3D} = (X, R)$, where $R = \{r_1, ..., r_n\}$ and $r_i\in\mathbb{R}^3$. Our goal is to design a parametric model that can take either $M_{2D}$ or $M_{3D}$ (or both) as input, obtain contextual representations, and make predictions on downstream tasks.

Transformer layer. The backbone architecture we use in this work is the Transformer model (Vaswani et al., 2017). A Transformer is composed of stacked Transformer blocks. A Transformer block consists of two layers: a self-attention layer followed by a feed-forward layer, with both layers having normalization (e.g., LayerNorm (Ba et al., 2016)) and skip connections (He et al., 2016). Denote $X^{(l)}$ as the input to the $(l+1)$-th block and define $X^{(0)} = X$.
For an input $X^{(l)}$, the $(l+1)$-th block works as follows:

$A^h(X^{(l)}) = \mathrm{softmax}\left(\frac{X^{(l)}W_Q^{l,h}(X^{(l)}W_K^{l,h})^\top}{\sqrt{d}}\right)$; (1)

$\hat{X}^{(l)} = X^{(l)} + \sum_{h=1}^{H} A^h(X^{(l)})\,X^{(l)}W_V^{l,h}W_O^{l,h}$; (2)

$X^{(l+1)} = \hat{X}^{(l)} + \mathrm{GELU}(\hat{X}^{(l)}W_1^l)W_2^l$, (3)

where $W_O^{l,h}\in\mathbb{R}^{d_H\times d}$, $W_Q^{l,h}, W_K^{l,h}, W_V^{l,h}\in\mathbb{R}^{d\times d_H}$, $W_1^l\in\mathbb{R}^{d\times r}$, and $W_2^l\in\mathbb{R}^{r\times d}$. $H$ is the number of attention heads, $d_H$ is the dimension of each head, and $r$ is the dimension of the hidden layer. $A^h(X)$ is usually referred to as the attention matrix.

Positional encoding. Another essential component of the Transformer is positional encoding. Note that the self-attention layer and the feed-forward layer do not make use of the order of input elements (e.g., word tokens), making it impossible for the model to capture the structural information. The original paper (Vaswani et al., 2017) developed effective positional encodings to encode the sentence structural information and explicitly integrated them as bias terms into the model. Soon, many works realized that positional encoding plays a crucial role in extending the standard Transformer to more complicated data structures beyond language. By carefully designing structural encodings using domain knowledge, the Transformer has successfully been applied to the image and graph domains and achieved impressive performance (Dosovitskiy et al., 2020; Liu et al., 2021b; Ying et al., 2021a).

3.2 TRANSFORMER-M AND TRAINING STRATEGY As we can see, the two molecular formulations defined in Section 3.1 use the same atom feature space but different characterizations of the structure (graph structure $E$ v.s. geometric structure $R$). Therefore, the key challenge is to design a compatible architecture that can utilize either structural information in $E$ or $R$ (or both) and incorporate them with the atom features in a principled way. The Transformer is a suitable backbone to achieve this goal, as we can encode structural information as bias terms and properly plug them into different modules. Furthermore, with the Transformer, we can treat $E$ and $R$ in a unified way by decomposing the structural information into pair-wise and atom-wise encodings. Without loss of generality, we choose to use the encoding strategies in the graph and geometric Transformers proposed by Ying et al. (2021a); Shi et al. (2022). For the sake of completeness, we briefly introduce those structural encodings and show how to leverage them in Transformer-M. Note that our design methodology also works with other encoding strategies (Hussain et al., 2022; Park et al., 2022; Thölke & De Fabritiis, 2021). See Appendix B.5 for the detailed results.

Encoding pair-wise relations in $E$. We use two terms to encode the structural relations between any atom pair in the graph. First, we encode the shortest path distance (SPD) between two atoms to reflect their spatial relation. Let $\Phi^{SPD}_{ij}$ denote the SPD encoding between atoms $i$ and $j$, which is a learnable scalar determined by the distance of the shortest path between $i$ and $j$. Second, we encode the edge features (e.g., the chemical bond types) along the shortest path between $i$ and $j$ to reflect the bond information. For most molecules, there exists only one distinct shortest path between any two atoms. Denote the edges in the shortest path from $i$ to $j$ as $SP_{ij} = (e_1, e_2, ..., e_N)$; the edge encoding between $i$ and $j$ is defined as $\Phi^{Edge}_{ij} = \frac{1}{N}\sum_{n=1}^{N} e_n(w_n)^T$, where the $w_n$ are learnable vectors of the same dimension as the edge feature. Denote $\Phi^{SPD}$ and $\Phi^{Edge}$ as the matrix forms of the SPD encoding and edge encoding, both of which are of shape $n\times n$.
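Before turning to the 3D channel, here is a minimal single-head PyTorch sketch of the block in Eqns. (1)-(3) with the additive slot where these pair-wise encodings enter the softmax attention (cf. Eqn. (4) below). Shapes are our assumptions, LayerNorm is omitted for brevity, and this is not the released code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasedTransformerBlock(nn.Module):
    def __init__(self, d, r):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)   # W_Q
        self.wk = nn.Linear(d, d, bias=False)   # W_K
        self.wv = nn.Linear(d, d, bias=False)   # W_V
        self.wo = nn.Linear(d, d, bias=False)   # W_O
        self.w1 = nn.Linear(d, r)               # W_1, feed-forward up
        self.w2 = nn.Linear(r, d)               # W_2, feed-forward down

    def forward(self, x, attn_bias=None):       # x: (n, d); attn_bias: (n, n)
        scores = self.wq(x) @ self.wk(x).T / math.sqrt(x.size(-1))
        if attn_bias is not None:               # Phi^SPD + Phi^Edge and/or
            scores = scores + attn_bias         # Phi^{3D Distance} enter here
        x = x + self.wo(F.softmax(scores, dim=-1) @ self.wv(x))
        return x + self.w2(F.gelu(self.w1(x)))  # Eqn. (3)
```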
Encoding pair-wise relations in $R$. We encode the Euclidean distance to reflect the spatial relation between any pair of atoms in 3D space. For each atom pair $(i,j)$, we first process their Euclidean distance with the Gaussian Basis Kernel function (Scholkopf et al., 1997):

$$\psi^{k}_{(i,j)} = -\frac{1}{\sqrt{2\pi}\,\lvert\sigma^{k}\rvert}\exp\!\left(-\frac{1}{2}\left(\frac{\gamma_{(i,j)}\lVert r_i - r_j\rVert + \beta_{(i,j)} - \mu^{k}}{\lvert\sigma^{k}\rvert}\right)^{2}\right), \qquad k = 1,\dots,K,$$

where $K$ is the number of Gaussian Basis kernels. The 3D distance encoding $\Phi^{\mathrm{3D\,Distance}}_{ij}$ is then obtained as

$$\Phi^{\mathrm{3D\,Distance}}_{ij} = \mathrm{GELU}\big(\psi_{(i,j)} W_D^{1}\big) W_D^{2},$$

where $\psi_{(i,j)} = [\psi^{1}_{(i,j)}; \dots; \psi^{K}_{(i,j)}]^{\top}$, and $W_D^{1} \in \mathbb{R}^{K \times K}$, $W_D^{2} \in \mathbb{R}^{K \times 1}$ are learnable parameters. $\gamma_{(i,j)}, \beta_{(i,j)}$ are learnable scalars indexed by the pair of atom types, and $\mu^{k}, \sigma^{k}$ are the learnable center and learnable scaling factor of the $k$-th Gaussian Basis Kernel. Denote $\Phi^{\mathrm{3D\,Distance}}$ as the matrix form of the 3D distance encoding, whose shape is $n \times n$.

Integrating $\Phi^{\mathrm{SPD}}$, $\Phi^{\mathrm{Edge}}$ and $\Phi^{\mathrm{3D\,Distance}}$ in Transformer-M. All pair-wise encodings defined above capture interatomic information, in a similar spirit to the relative positional encoding for sequential tasks (Raffel et al., 2020). Therefore, we similarly place those pair-wise signals in the self-attention module to provide complementary information to the dot-product term $XW_Q(XW_K)^{\top}$. For simplicity, we omit the indices of attention head $h$ and layer $l$; the modified attention matrix is defined as:

$$A(X) = \mathrm{softmax}\left(\frac{XW_Q(XW_K)^{\top}}{\sqrt{d}} + \underbrace{\Phi^{\mathrm{SPD}} + \Phi^{\mathrm{Edge}}}_{\text{2D pair-wise channel}} + \underbrace{\Phi^{\mathrm{3D\,Distance}}}_{\text{3D pair-wise channel}}\right) \qquad (4)$$

Encoding atom-wise structural information in $E$. For atom $i$, Eqn. (4) computes the normalized weights according to the semantic relation (first term) and spatial relation (last three terms) between $i$ and the other atoms. However, this information is still not sufficient. For example, the importance (i.e., centrality) of each atom is missing from the attention. For each atom $i$, we use its degree as the centrality information. Formally, let $\Psi^{\mathrm{Degree}}_i$ denote the degree encoding of atom $i$, which is a $d$-dimensional learnable vector determined by the degree of the atom. Denote $\Psi^{\mathrm{Degree}} = [\Psi^{\mathrm{Degree}}_1, \Psi^{\mathrm{Degree}}_2, \dots, \Psi^{\mathrm{Degree}}_n]$ as the centrality encoding of all the atoms, which is of shape $n \times d$.

Encoding atom-wise structural information in $R$. Similar to the 2D atom-wise centrality encoding, for geometric data we encode the centrality of each atom in 3D space. For each atom $i$, we sum up the 3D distance encodings between it and all other atoms. Let $\Psi^{\mathrm{Sum\ of\ 3D\ Distance}}_i$ denote the centrality encoding of atom $i$; we have $\Psi^{\mathrm{Sum\ of\ 3D\ Distance}}_i = \sum_{j \in [n]} \psi_{(i,j)} W_D^{3}$, where $W_D^{3} \in \mathbb{R}^{K \times d}$ is a learnable weight matrix. Similarly, we define $\Psi^{\mathrm{Sum\ of\ 3D\ Distance}}$ as the encoding of all atoms, whose shape is $n \times d$.

Integrating $\Psi^{\mathrm{Degree}}$ and $\Psi^{\mathrm{Sum\ of\ 3D\ Distance}}$ in Transformer-M. We add the atom-wise encodings of the 2D and 3D structures to the atom features in the input layer. Formally, the input $X^{(0)}$ is modified as:

$$X^{(0)} = X + \underbrace{\Psi^{\mathrm{Degree}}}_{\text{2D atom-wise channel}} + \underbrace{\Psi^{\mathrm{Sum\ of\ 3D\ Distance}}}_{\text{3D atom-wise channel}}. \qquad (5)$$

Through this simple mechanism, the structural information of molecules in both 2D and 3D formats is integrated into one Transformer model. It is easy to check that Transformer-M preserves equivariant properties for both data formats.
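As a concrete illustration of the 3D channel, the following sketch computes $\Phi^{\mathrm{3D\,Distance}}$ and the summed kernel features behind $\Psi^{\mathrm{Sum\ of\ 3D\ Distance}}$ for a single molecule. It is a simplified reading of the equations above: the per-atom-pair-type scalars $\gamma_{(i,j)}, \beta_{(i,j)}$ are collapsed to shared scalars, and all names are our own.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianBasis3DChannel(nn.Module):
    """K Gaussian basis kernels over interatomic distances (simplified)."""

    def __init__(self, K: int = 128):
        super().__init__()
        self.mu = nn.Parameter(torch.linspace(0.0, 10.0, K))  # kernel centers mu^k
        self.sigma = nn.Parameter(torch.ones(K))              # kernel widths sigma^k
        self.gamma = nn.Parameter(torch.ones(1))              # shared, not per pair type
        self.beta = nn.Parameter(torch.zeros(1))              # shared, not per pair type
        self.W1 = nn.Linear(K, K)                             # plays the role of W_D^1
        self.W2 = nn.Linear(K, 1)                             # plays the role of W_D^2

    def forward(self, pos):
        # pos: (n, 3) Cartesian coordinates; dist: (n, n) pairwise distances.
        dist = torch.cdist(pos, pos)
        x = self.gamma * dist.unsqueeze(-1) + self.beta                 # (n, n, 1)
        s = self.sigma.abs().clamp_min(1e-6)
        psi = -torch.exp(-0.5 * ((x - self.mu) / s) ** 2) / (math.sqrt(2 * math.pi) * s)
        pair_bias = self.W2(F.gelu(self.W1(psi))).squeeze(-1)  # Phi^{3D Distance}, (n, n)
        atom_feat = psi.sum(dim=1)   # summed kernels per atom; a further K x d
                                     # projection (W_D^3) yields Psi^{Sum of 3D Distance}
        return pair_bias, atom_feat
```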
Training. The next step is to learn the parameters of Transformer-M to capture meaningful representations from each data format. To achieve this goal, we develop a simple and flexible joint training method for Transformer-M. We first collect datasets in different formats (2D/3D), define supervised/self-supervised tasks (e.g., energy regression) on each format, and train the model on all the data toward each objective, respectively. Concretely, during training, if a data instance comes from a dataset in the 2D format, the 2D channel is activated and the 3D channel is disabled, and the model parameters are optimized to minimize the corresponding (i.e., 2D) objective. When a data instance comes from a dataset in the 3D format, only the 3D channel is activated, and the model learns to minimize the 3D objective. Both channels are activated if the model takes molecules in both 2D and 3D formats as input. Compared with multi-view learning approaches, we can train Transformer-M using unpaired 2D and 3D data, making the training process more flexible.

Transformer-M may also generalize better due to the joint training. Several previous works (Liu et al., 2021a) observed that the 2D graph structure and the 3D geometric structure contain complementary chemical knowledge. For example, the 2D graph structure only contains bonds with bond types, while the 3D geometric structure contains fine-grained information such as lengths and angles. As another example, 3D geometric structures are usually obtained from computational simulations like Density Functional Theory (DFT) (Burke, 2012), which could have approximation errors, whereas 2D graphs are constructed by domain experts and, to some extent, provide references to the 3D structure. By jointly training on 2D and 3D data with parameter sharing, our model can learn more chemical knowledge instead of overfitting to data noise, and perform better on both 2D and 3D tasks.

Future Directions. As an initial attempt, our Transformer-M opens up a way to develop general-purpose molecular models that handle diverse chemical tasks in different data formats. We believe it is a starting point, with more possibilities to explore in the future. For example, in this work we linearly combine the structural information of the 2D and 3D structures in a simple way, and we believe there should be other efficient ways to fuse such encodings. Our model can also be combined with previous multi-view contrastive learning approaches; it is worth investigating how to pre-train our model using those methods.

4 EXPERIMENTS

In this section, we empirically study the performance of Transformer-M. First, we pre-train our model on the PCQM4Mv2 training set from the OGB Large-Scale Challenge (Hu et al., 2021) (Section 4.1). With the pre-trained model, we conduct experiments on molecular tasks in different data formats and evaluate the versatility and effectiveness of our Transformer-M. Due to space limitations, we study three representative tasks: PCQM4Mv2 (2D, Section 4.2), PDBBind (2D & 3D, Section 4.3), and QM9 (3D, Section 4.4). Ablation studies are presented in Section 4.5. All code is implemented based on the official codebase of Graphormer (Ying et al., 2021a) in PyTorch (Paszke et al., 2019).

4.1 LARGE-SCALE PRE-TRAINING

Our Transformer-M is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). The total number of training samples is 3.37 million. Each molecule is associated with its 2D graph structure and 3D geometric structure. The HOMO-LUMO energy gap of each molecule is provided as its label, obtained by DFT-based methods (Burke, 2012). We follow Ying et al. (2021a) and employ a 12-layer Transformer-M model.
The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. To train Transformer-M, we provide three modes for each data instance: (1) activate the 2D channels and disable the 3D channels (2D mode); (2) activate the 3D channels and disable the 2D channels (3D mode); (3) activate both channels (2D+3D mode). The mode of each data instance during training is randomly drawn on the fly according to a pre-defined distribution, implemented similarly to Dropout (Srivastava et al., 2014). In this work, we use two training objectives. The first is a supervised learning objective that aims to predict the HOMO-LUMO energy gap of each molecule. Besides, we also use a self-supervised learning objective called 3D Position Denoising (Godwin et al., 2022; Zaidi et al., 2022), which is particularly effective: during training, if a data instance is in the 3D mode, we add Gaussian noise to the position of each atom and require the model to predict the noise from the noisy input. The model is optimized to minimize a linear combination of the two objectives above. Details of settings are in Appendix B.1.
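A minimal sketch of one joint-training step is given below, assuming a hypothetical model API in which a `mode` argument gates the 2D/3D channels and a dedicated head predicts the coordinate noise. The mode probabilities follow Appendix B.1; the loss weighting and field names are illustrative only, and we denoise whenever the 3D channel is active, which is one plausible reading of the description above.

```python
import torch
import torch.nn.functional as F

MODES = ("2d", "3d", "2d+3d")

def sample_mode(p=(0.2, 0.5, 0.3)):
    """Per-instance mode drawn on the fly; (p_2D, p_3D, p_2D+3D) as in B.1."""
    return MODES[torch.multinomial(torch.tensor(p), 1).item()]

def joint_training_step(model, batch, sigma=0.2, denoise_weight=1.0):
    """One step of the joint objective (model/batch APIs are hypothetical)."""
    mode = sample_mode()
    loss = 0.0
    if "3d" in mode:                                  # 3D channel active
        noise = sigma * torch.randn_like(batch.pos)   # 3D position denoising
        batch.pos = batch.pos + noise
        pred_gap, pred_noise = model(batch, mode=mode)
        loss = loss + denoise_weight * F.mse_loss(pred_noise, noise)
    else:
        pred_gap, _ = model(batch, mode=mode)
    # Supervised objective: HOMO-LUMO gap regression in every mode.
    loss = loss + F.l1_loss(pred_gap, batch.homo_lumo_gap)
    return loss
```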
4.2 PCQM4MV2 PERFORMANCE (2D)

After the model is pre-trained, we evaluate our Transformer-M on the validation set of PCQM4Mv2. Note that the validation set of PCQM4Mv2 consists of molecules in the 2D format only, so we can use it to evaluate how well Transformer-M performs on 2D molecular data. The goal of the task is to predict the HOMO-LUMO energy gap, and the evaluation metric is the Mean Absolute Error (MAE). As our training objectives include the HOMO-LUMO gap prediction task, we did not fine-tune the model parameters on any data. During inference, only the 2D channels are activated. We choose several strong baselines covering message passing neural network (MPNN) variants and Graph Transformers. Detailed descriptions of the baselines are presented in Appendix B.2.

The results are shown in Table 1. It can easily be seen that our Transformer-M surpasses all baselines by a large margin, e.g., an 8.2% relative MAE reduction compared to the previous best model (Rampášek et al., 2022), establishing a new state-of-the-art on the PCQM4Mv2 dataset. Note that our general architecture is the same as the Graphormer model (Ying et al., 2021a). The only difference between Transformer-M and the Graphormer baseline is that Graphormer is trained on 2D data only, while Transformer-M is trained using both 2D and 3D structural information. Therefore, we can conclude that Transformer-M performs well on 2D molecular data, and the 2D-3D joint training with shared parameters indeed helps the model learn more chemical knowledge.

4.3 PDBBIND PERFORMANCE (2D & 3D)

To verify the compatibility of our Transformer-M, we further fine-tune our model on the PDBBind dataset (version 2016, Wang et al. (2004; 2005b)), one of the most widely used datasets for structure-based virtual screening (Jiménez et al., 2018; Stepniewska-Dziubinska et al., 2018; Zheng et al., 2019). The PDBBind dataset consists of protein-ligand complexes as data instances, obtained in bioassay experiments and associated with $pK_a$ (or $-\log K_d$, $-\log K_i$) affinity values. For each data instance, the 3D geometric structures are provided and the 2D graph structures are constructed via pre-defined rules. The task requires models to predict the binding affinity of protein-ligand complexes, which is extremely vital for drug discovery.

After pre-training on the PCQM4Mv2 training set, our Transformer-M model is fine-tuned and evaluated on the core set of the PDBBind dataset. We compare our model with competitive baselines covering classical methods, CNN-based methods, and GNNs. All experiments are repeated five times with different seeds, and average performance is reported. Due to space limitations, we present the details of the baselines and experiment settings in Appendix B.3. The results are presented in Table 2. Our Transformer-M consistently outperforms all the baselines on all evaluation metrics by a large margin, e.g., a 3.3% absolute improvement in Pearson's correlation coefficient (R). It is worth noting that the data instances of the PDBBind dataset are protein-ligand complexes, while our model is pre-trained on simple molecules, demonstrating the transferability of Transformer-M.

4.4 QM9 PERFORMANCE (3D)

We use the QM9 dataset (Ramakrishnan et al., 2014) to evaluate our Transformer-M on molecular tasks in the 3D data format. QM9 is a quantum chemistry benchmark consisting of 134k stable small organic molecules; these molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties, and the 3D geometric structure of the molecule is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively; the remaining molecules are used to fine-tune our Transformer-M model. We observed that several previous works used different data splitting ratios or did not describe their evaluation details. For a fair comparison, we choose baselines that use similar splitting ratios in the original papers. The details of the baselines and experiment settings are presented in Appendix B.4.

The results are presented in Table 3. Our Transformer-M achieves competitive performance compared to those baselines, suggesting that the model is compatible with 3D molecular data. In particular, Transformer-M performs best on the HOMO, LUMO, and HOMO-LUMO gap predictions, which indicates that the knowledge learned in the pre-training task transfers better to similar tasks. Note that the model does not perform as well on some other tasks. We believe Transformer-M can be improved in several aspects, including employing a carefully designed output layer (Thölke & De Fabritiis, 2021) or pre-training with more self-supervised training signals.

4.5 ABLATION STUDY

In this subsection, we conduct a series of experiments to investigate the key designs of our Transformer-M. In this paper, we use two training objectives to train the model, and we ablate the effect of each. Besides, we use three modes to activate different channels according to a pre-defined distribution, and we study the impact of this distribution on the final performance. Due to space limitations, we present more analysis of our Transformer-M model in Appendix B.5.

Impact of the pre-training tasks. As stated in Section 4.1, our Transformer-M model is pre-trained on the PCQM4Mv2 training set via two tasks: (1) predicting the HOMO-LUMO gap of molecules in both 2D and 3D formats; (2) 3D position denoising. We conduct ablation studies on both the PCQM4Mv2 and QM9 datasets to check whether both objectives benefit downstream tasks. In detail, we conduct two additional experiments.
The first experiment trains Transformer-M models from scratch on PCQM4Mv2 using its 2D graph data and on QM9 using its 3D geometric data, to check the benefit of the overall pre-training method. The second experiment pre-trains Transformer-M without the 3D denoising task, to study the effectiveness of the proposed 2D-3D joint pre-training approach. The results are shown in Table 4. It can be seen that the joint pre-training significantly boosts the performance on both the PCQM4Mv2 and QM9 datasets. Besides, the 3D Position Denoising task is also beneficial, especially on the QM9 dataset in the 3D format.

Impact of mode distribution. Denote $(p_{2D}, p_{3D}, p_{2D\&3D})$ as the probabilities of the modes mentioned in Section 4.1. We conduct experiments to investigate the influence of different distributions on the model performance. We select three distributions with $(p_{2D}, p_{3D}, p_{2D\&3D})$ being 1:1:1, 1:2:2, and 1:2:1. The results are presented in Table 4. We obtain consistent conclusions on both the PCQM4Mv2 and QM9 datasets: 1) for all three configurations, our Transformer-M model achieves strong performance, which shows that our joint training is robust to hyperparameter selection; 2) using a slightly larger probability on the 3D mode achieves the best results.

5 CONCLUSION

In this work, we take the first step toward general-purpose molecular models. The proposed Transformer-M offers a promising way to handle molecular tasks in 2D and 3D formats. We use two separate channels to encode 2D and 3D structural information and integrate them into the backbone Transformer. When the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. Through simple training tasks on 2D and 3D molecular data, our model automatically learns to leverage chemical knowledge from different data formats and correctly capture the representations. Extensive experiments are conducted, and all empirical results show that our Transformer-M can achieve strong performance on 2D and 3D tasks simultaneously. The potential of our Transformer-M can be further explored in a broad range of applications in chemistry.

ACKNOWLEDGEMENTS

We thank Shanda Li for the helpful discussions. We also thank all the anonymous reviewers for the very careful and detailed reviews as well as the valuable suggestions. Their help has further enhanced our work. This work is supported by the National Key R&D Program of China (2022ZD0114900) and the National Science Foundation of China (NSFC62276005). This work is partially supported by the Shanghai Committee of Science and Technology (Grant No. 21DZ1100100).

B EXPERIMENTAL DETAILS

B.1 LARGE-SCALE PRE-TRAINING

Dataset. Our Transformer-M model is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). PCQM4Mv2 is a quantum chemistry dataset originally curated under the PubChemQC project (Maho, 2015; Nakata & Shimazaki, 2017). The total number of training samples is 3.37 million. Each molecule in the training set is associated with both 2D graph structures and 3D geometric structures. The HOMO-LUMO energy gap of each molecule is provided, obtained by DFT-based geometry optimization (Burke, 2012). According to OGB-LSC (Hu et al., 2021), the HOMO-LUMO energy gap is one of the most practically relevant quantum chemical properties of molecules since it is related to reactivity, photoexcitation, and charge transport. Being the largest publicly available dataset for molecular property prediction, PCQM4Mv2 is considered a challenging benchmark for molecular models.
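For readers reproducing the data pipeline, the snippet below shows one plausible way to load the 2D/label portion of PCQM4Mv2 with the `ogb` package; the exact loader arguments should be checked against the installed `ogb` version, and the 3D SDF structures for the training split are distributed separately by OGB-LSC.

```python
from ogb.lsc import PCQM4Mv2Dataset

# SMILES strings + HOMO-LUMO gap labels; 2D graphs can be built from the SMILES.
dataset = PCQM4Mv2Dataset(root="dataset/", only_smiles=True)
split = dataset.get_idx_split()            # 'train', 'valid', 'test-dev', ...

smiles, gap = dataset[split["train"][0]]   # one training molecule
print(smiles, gap)                         # SMILES string and gap (eV)
```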
Settings. Our Transformer-M model consists of 12 layers. The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The batch size is set to 1024. The model is trained for 1.5 million steps with a 90k-step warm-up stage; after the warm-up stage, the learning rate decays linearly to zero. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. We also employ stochastic depth (Huang et al., 2016) and set the probability to 0.2. The probability $(p_{2D}, p_{3D}, p_{2D\&3D})$ of each data instance entering the three modes mentioned in Section 4.1 is set to (0.2, 0.5, 0.3). The scaling factor σ of the added noise in the 3D Position Denoising task is set to 0.2. The ratio of the supervised loss to the denoising loss is set to 1:1. All models are trained on 4 NVIDIA Tesla A100 GPUs.
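The optimizer and schedule above are straightforward to reproduce; the following is a minimal sketch of the warm-up-then-linear-decay schedule using standard PyTorch utilities (the helper name is ours).

```python
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def build_optimizer(model, peak_lr=2e-4, total_steps=1_500_000, warmup_steps=90_000):
    """AdamW with linear warm-up to peak_lr, then linear decay to zero."""
    optimizer = AdamW(model.parameters(), lr=peak_lr,
                      betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0)

    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)                      # linear warm-up
        return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

    return optimizer, LambdaLR(optimizer, lr_lambda)
```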
B.2 PCQM4MV2

Baselines. We compare our Transformer-M with several competitive baselines. These models fall into two categories: message passing neural network (MPNN) variants and Graph Transformers. For MPNN variants, we include two widely used models, GCN (Kipf & Welling, 2016) and GIN (Xu et al., 2019), and their variants with a virtual node (VN) (Gilmer et al., 2017; Hu et al., 2020a). Additionally, we compare GINE-VN (Brossard et al., 2020) and DeeperGCN-VN (Li et al., 2020). GINE is the multi-hop version of GIN, and DeeperGCN is a 12-layer GNN model with carefully designed aggregators. The result of MLP-Fingerprint (Hu et al., 2021) is also reported. We also compare several Graph Transformer models. Graphormer (Ying et al., 2021a) developed graph structural encodings and integrated them into a standard Transformer model; it achieved impressive performance across several world competitions (Ying et al., 2021b; Shi et al., 2022). CoAtGIN (Cui, 2022) is a hybrid architecture combining both convolution and attention. TokenGT (Kim et al., 2022) adopted the standard Transformer architecture without graph-specific modifications. GraphGPS (Rampášek et al., 2022) proposed a framework to integrate positional and structural encodings, a local message-passing mechanism, and a global attention mechanism into the Transformer model. GRPE (Park et al., 2022) proposed a graph-specific relative positional encoding and considered both node-spatial and node-edge relations. EGT (Hussain et al., 2022) exclusively used global self-attention as an aggregation mechanism rather than static localized convolutional aggregation, and utilized edge channels to capture structural information.

B.3 PDBBIND

Dataset. PDBBind is a well-known dataset that provides a comprehensive collection of experimentally measured binding affinity data for biomolecular complexes deposited in the Protein Data Bank (PDB) (Wang et al., 2005a). The task requires models to predict the binding affinity value $pK_a$ (or $-\log K_d$, $-\log K_i$) of protein-ligand complexes, which is extremely vital for drug discovery. In our experiments, we use the PDBBind v2016 dataset, which is widely used in recent works (Li et al., 2021). The PDBBind dataset includes three overlapping subsets, called the general, refined, and core sets. The general set contains all 13,283 protein-ligand complexes, while the 4,057 complexes in the refined set are selected out of the general set with better quality. Moreover, the core set serves as the highest-quality benchmark for testing. To avoid data leakage, we remove the data instances in the core set from the refined set. After training, we evaluate our model on the core set. The evaluation metrics include Pearson's correlation coefficient (R), Mean Absolute Error (MAE), Root-Mean Squared Error (RMSE), and Standard Deviation (SD).

Baselines. We compare our Transformer-M with several competitive baselines. These models mainly fall into three categories: classic machine learning methods, Convolutional Neural Network (CNN) based methods, and Graph Neural Network (GNN) based methods. First, we report the results of LR, SVR, and RF-Score (Ballester et al., 2010), which employed traditional machine learning approaches to predict the binding affinities. Second, inspired by the success of CNNs in computer vision, Stepniewska-Dziubinska et al. (2018) proposed the Pafnucy model, which represents the complexes via a 3D grid and utilizes 3D convolutions to produce feature maps. Zheng et al. (2019) introduced OnionNet, which also used CNNs to extract features based on rotation-free element-pair-specific contacts between atoms of proteins and ligands. There are also several works that leverage GNNs to improve the performance on the PDBBind dataset. GraphDTA (Nguyen et al., 2020) represented protein-ligand complexes as 2D graphs and used GNN models to predict the affinity score. GNN-DTI (Lim et al., 2019) incorporated the 3D structures of protein-ligand complexes into GNNs. DMPNN (Yang et al., 2019) operated over a hybrid representation that combines convolutions and descriptors. SGCN (Danel et al., 2020) is a GCN-inspired architecture that leverages node positions. MAT (Maziarka et al., 2020) augmented the attention mechanism in the standard Transformer model with inter-atomic distances and molecular graph structures. DimeNet (Klicpera et al., 2020) developed atom-pair embeddings and utilized directional information between atoms. CMPNN (Song et al., 2020) introduced a communicative kernel and a message booster module to strengthen the message passing between atoms. SIGN (Li et al., 2021) proposed polar-inspired graph attention layers and pairwise interactive pooling layers to utilize the biomolecular structural information.

Settings. We fine-tune the pre-trained Transformer-M on the PDBBind dataset. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The total number of epochs is set to 30. The ratio of warm-up steps to total steps is set to 0.06. The batch size is set to 16. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. Following Ying et al. (2021a), we use FLAG (Kong et al., 2020) with minor modifications for graph data augmentation. In particular, in addition to the step size α and the number of adversarial attack steps m, we also employ a projection step as in Zhu et al. (2020) with maximum perturbation γ. These hyperparameters are set to the following configuration: α = 0.01, m = 4, γ = 0.01. All models are trained on 2 NVIDIA Tesla V100 GPUs.
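For illustration, here is a minimal sketch of a FLAG-style inner loop with a projection step under the hyperparameters above. The model API (a `feat_perturb` argument) is our own assumption, and we use simple L-infinity clipping as the projection, whereas Zhu et al. (2020) project onto a norm ball.

```python
import torch

def flag_perturb(model, batch, loss_fn, alpha=0.01, m=4, gamma=0.01):
    """FLAG-style augmentation: m ascent steps on an input-feature perturbation.
    Parameter gradients accumulate across the m scaled backward passes, as in
    FLAG; the optimizer step is taken by the caller afterwards."""
    delta = torch.zeros_like(batch.x).uniform_(-alpha, alpha)
    delta.requires_grad_(True)
    for _ in range(m):
        loss = loss_fn(model(batch, feat_perturb=delta), batch.target) / m
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the input
            delta.clamp_(-gamma, gamma)          # projection step (L-inf ball)
        delta.grad = None
    return delta.detach()
```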
B.4 QM9

Dataset. QM9 (Ramakrishnan et al., 2014) is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties, and the 3D geometric structure of the molecule is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively; the remaining molecules are used to fine-tune our Transformer-M model.

Baselines. We comprehensively compare our Transformer-M with both pre-training methods and 3D molecular models. First, we follow Jiao et al. (2022) and compare several pre-training methods. Hu et al. (2019) proposed a strategy to pre-train GNNs via both node-level and graph-level tasks. Sun et al. (2019) maximized the mutual information between graph-level representations and substructure representations as the pre-training task. You et al. (2020) instead used contrastive learning to pre-train GNNs. There are also several works that utilize 3D geometric structures during pre-training. Jing et al. (2021) maximized the mutual information between 2D and 3D representations. Fang et al. (2021) proposed a strategy to learn spatial information by utilizing both local and global 3D structures. Stärk et al. (2022) used two encoders to capture 2D and 3D structural information separately while maximizing the mutual information between the 2D and 3D representations. Jiao et al. (2022) adopted an equivariant energy-based model and developed a node-level pre-training loss for force prediction. We report the results of these methods from Jiao et al. (2022) for comparison. Second, we follow Thölke & De Fabritiis (2021) and compare 3D molecular models. Schütt et al. (2017) used continuous-filter convolution layers to model quantum interactions in molecules. Anderson et al. (2019) developed a GNN model equipped with activation functions that are covariant to rotations. Klicpera et al. (2020) proposed directional message passing, which uses atom-pair embeddings and utilizes directional information between atoms. Schütt et al. (2021) proposed the polarizable atom interaction neural network (PaiNN), which uses an equivariant message passing mechanism. Hutchinson et al. (2021) built upon the Transformer model, with attention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. Thölke & De Fabritiis (2021) also developed a Transformer variant with layers designed using prior physical and chemical knowledge. Satorras et al. (2021) proposed the EGNN model, which does not require computationally expensive higher-order representations in intermediate layers to keep equivariance and can easily be scaled to higher-dimensional spaces. Godwin et al. (2022) proposed the 3D position denoising task and verified it on the Graph Network-based Simulator (GNS) model (Sanchez-Gonzalez et al., 2020).

Settings. We fine-tune the pre-trained Transformer-M on the QM9 dataset. Following Thölke & De Fabritiis (2021), we adopt the Mean Squared Error (MSE) loss during training and use the Mean Absolute Error (MAE) loss during evaluation. We also adopt label standardization for stable training.
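Label standardization here simply means training on z-scored targets and mapping predictions back to the original units before computing the MAE metric; a minimal sketch (class name ours) follows.

```python
import torch

class LabelStandardizer:
    """Z-score targets for training; invert before computing evaluation MAE."""

    def __init__(self, train_targets: torch.Tensor):
        self.mean = train_targets.mean()
        self.std = train_targets.std().clamp_min(1e-12)

    def transform(self, y):        # applied to targets before the MSE loss
        return (y - self.mean) / self.std

    def inverse(self, y_norm):     # applied to predictions before the MAE metric
        return y_norm * self.std + self.mean
```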
We use AdamW as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 7e-5. The batch size is set to 128. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. The model is fine-tuned for 600k steps with a 60k-step warm-up stage; after the warm-up stage, the learning rate decays linearly to zero. All models are trained on 1 NVIDIA A100 GPU.

B.5 MORE ANALYSIS

Investigation of the generality of the design methodology of Transformer-M. In this work, we develop our Transformer-M model based on the Transformer backbone and integrate separate 2D and 3D channels (implemented by encoding methods) to encode the structural information of 2D and 3D molecular data. As stated in Section 3.2, this is a general design methodology for handling molecular data in different forms, and it works well with different structural encoding instantiations. To demonstrate its generality and effectiveness, we further conduct experiments with other structural encodings from GRPE (Park et al., 2022) and EGT (Hussain et al., 2022), which are competitive baselines on the PCQM4Mv2 benchmark, as shown in Table 1. All hyperparameters are kept the same as the settings in Appendix B.1 for a fair comparison. The results are presented in Table 5. It can easily be seen that our Transformer-M model equipped with different encoding methods consistently obtains significantly better performance than the corresponding vanilla 2D models, which verifies the generality and effectiveness of the design methodology of Transformer-M.

Investigation of the impact of 3D conformers calculated by different methods. Besides its versatility in handling molecules in different formats, our Transformer-M further achieves strong performance on various challenging molecular tasks, as shown in Section 4. On the PCQM4Mv2 validation set (2D only), our Transformer-M establishes a new state-of-the-art, which is mainly credited to the newly introduced 2D-3D joint training strategy in Section 3.2: the chemical knowledge in the 3D geometric structure can be leveraged during joint training and boosts the performance on 2D tasks. Since the benefits of the 3D geometric structure are observed, it is natural to ask how the quality of the calculated 3D conformers influences the performance of Transformer-M. To investigate this question, we additionally use RDKit (Landrum, 2016) to generate one 3D conformer for each molecule in the training set of PCQM4Mv2. Compared to the officially provided DFT-optimized geometric structures, these structures are less costly to obtain using RDKit while also being less accurate. Thus, each molecule has its 2D molecular graph, a 3D conformer calculated by DFT, and a 3D conformer calculated by RDKit.
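The cheap conformers can be generated with standard RDKit calls; the following sketch shows a plausible single-conformer pipeline (ETKDG embedding plus force-field relaxation), though the paper does not specify its exact RDKit settings.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def rdkit_conformer(smiles: str, seed: int = 0):
    """Embed one 3D conformer for a molecule given as SMILES."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    params = AllChem.ETKDGv3()
    params.randomSeed = seed
    AllChem.EmbedMolecule(mol, params)        # initial 3D coordinates
    AllChem.MMFFOptimizeMolecule(mol)         # force-field relaxation
    return mol.GetConformer().GetPositions()  # (n_atoms, 3) array
```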
Based on such a dataset, we conduct three additional experiments. First, we train our Transformer-M model using only the 2D molecular graphs; in this experiment, only the 2D channels are activated. Second, we train our Transformer-M model using both the 2D molecular graphs (encoded by the 2D channels) and the 3D conformers generated by RDKit (encoded by the 3D channels). Third, we train our Transformer-M model using the 2D molecular graphs, the 3D conformers generated by RDKit, and the 3D conformers calculated by DFT; in this experiment, we use two sets of 3D channels to separately encode the structural information of the RDKit and DFT conformers, and during training, when a data instance enters the 3D or 2D+3D modes, both sets of 3D channels are activated and integrated. For all three experiments, the hyperparameters of Transformer-M are kept the same as the settings in Appendix B.1. The results are presented in Table 6. We can see that the quality of the 3D conformers matters for the final performance: leveraging the 3D conformers generated by RDKit (second line) brings only minor gains compared to using 2D molecular graphs alone (first line), whereas leveraging the 3D conformers calculated by DFT yields significant improvements (the last two lines). From a practical view, it would be interesting to investigate the influence of 3D conformers calculated by methods that are more accurate than RDKit while more efficient than DFT, e.g., semiempirical methods (Dral et al., 2016), which we leave as future work.

Investigation of the effectiveness of Transformer-M pre-training. We provide additional results on the effectiveness of our model on both the PDBBind (2D+3D) and QM9 (3D) downstream datasets. First, to verify the effectiveness of Transformer-M pre-training on the PDBBind dataset, we further pre-train the Graphormer model (Ying et al., 2021a) on the same PCQM4Mv2 dataset as a competitive pre-trained baseline. Since the Graphormer model can only handle graph data, we only use the 2D molecular graph of each data instance. All hyperparameters are kept the same as the settings in Appendix B.1. The results are presented in Table 7. We can draw the following conclusions: (1) pre-training is helpful (e.g., the R of the SIGN model, the best baseline, is 0.797, versus 0.804 for the pre-trained Graphormer model); (2) our pre-training method brings a more significant gain (0.804 -> 0.830), which demonstrates the effectiveness of our framework.

Second, we demonstrate that our pre-training strategy helps learn a better Transformer model on the downstream QM9 dataset. We conduct two additional experiments on QM9. In the first experiment, we train the 3D geometric Transformer model (Transformer-M with the 3D channel only) from scratch. In the second experiment, we use the 3D Position Denoising task as the objective to pre-train the 3D geometric Transformer on PCQM4Mv2 and fine-tune the pre-trained checkpoint on QM9. Due to time limits and constrained resources, we selected six QM9 targets for comparison. All the hyperparameters of pre-training and fine-tuning are kept the same. The results are presented in Table 8. It can easily be seen that our pre-training method consistently and significantly improves the downstream performance on all six tasks, which demonstrates the effectiveness of our general framework for 3D molecular data. We are aware that we achieve competitive rather than SOTA performance compared with the baselines (best performance on 5 out of 12 targets, see Table 3). For U0, U, H, and G, there still exists a performance gap between our Transformer-M and some of the latest baselines, which use considerably more complicated neural architectures. We believe that exploring more model alternatives and leveraging the wisdom of those networks in our Transformer-M will further improve the performance, which we will keep working on.
Summary Of The Paper

The authors propose a simple modification over Graphormer 3D [1], which previously encoded node- and graph-level information through additional biases added to the self-attention modules, namely node encoding via in/out-degree embeddings and path encoding via the shortest path with edge-embedding aggregation along this path. To encode the spatial and centrality information for 3D molecules, they embed these using a GBF, assuming a fully connected graph as opposed to using cutoffs. The authors then introduce a switching mechanism in this model, where the corresponding 2D/3D encodings are switched on and off. This is done because 2D information may be available while 3D information is not for a particular input sample, so the model can distill 3D-level information to enrich the 2D representation; for example, this is the case for the PCQM4Mv2 dataset, which includes a highly optimized conformer for the training set but not for the rest.

[1] Shi, Yu, et al. "Benchmarking graphormer on large-scale molecular modeling datasets." arXiv preprint arXiv:2203.04810 (2022).

Strengths And Weaknesses

The modifications are only a few lines of code over the original Graphormer 3D proposed in [1]. Still, this simple modification (namely, introducing a mask in the input based on whether the network should operate in 2D, 3D, or both) results in a new SOTA on PCQM4Mv2. The experiments on the task probabilities, which constitute the primary modification, are limited; changing this ratio can drastically change the model's performance. Most of the gains are attributed to the pre-training task and the 3D encoding in the bias term, which were previously leveraged in [1]. I would have liked to see more experiments on further leveraging 3D information for 2D inputs that lack position information, for example by adding conformers as well. Would the network operating in 3D/2D mode perform better than simply 2D? What happens if you generate multiple conformers and switch these modes on and off? Would there be a gain in performance? The authors could have done more experiments on this front.

Clarity, Quality, Novelty And Reproducibility

The paper is well written and easy to follow.
ICLR
Title One Transformer Can Understand Both 2D & 3D Molecular Data Abstract Unlike vision and language data which usually has a unique format, molecules can naturally be characterized using different chemical formulations. One can view a molecule as a 2D graph or define it as a collection of atoms located in a 3D space. For molecular representation learning, most previous works designed neural networks only for a particular data format, making the learned models likely to fail for other data formats. We believe a general-purpose neural network model for chemistry should be able to handle molecular tasks across data modalities. To achieve this goal, in this work, we develop a novel Transformer-based Molecular model called Transformer-M, which can take molecular data of 2D or 3D formats as input and generate meaningful semantic representations. Using the standard Transformer as the backbone architecture, Transformer-M develops two separated channels to encode 2D and 3D structural information and incorporate them with the atom features in the network modules. When the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. By training on 2D and 3D molecular data with properly designed supervised signals, Transformer-M automatically learns to leverage knowledge from different data modalities and correctly capture the representations. We conducted extensive experiments for Transformer-M. All empirical results show that Transformer-M can simultaneously achieve strong performance on 2D and 3D tasks, suggesting its broad applicability. The code and models will be made publicly available at https://github.com/lsj2408/Transformer-M. 1 INTRODUCTION Deep learning approaches have revolutionized many domains, including computer vision (He et al., 2016), natural language processing (Devlin et al., 2019; Brown et al., 2020), and games (Mnih et al., 2013; Silver et al., 2016). Recently, researchers have started investigating whether the power of neural networks could help solve important scientific problems in chemistry, e.g., predicting the property of molecules and simulating the molecular dynamics from large-scale training data (Hu et al., 2020a; 2021; Zhang et al., 2018; Chanussot et al., 2020). One key difference between chemistry and conventional domains such as vision and language is the multimodality of data. In vision and language, a data instance is usually characterized in a particular form. For example, an image is defined as RGB values in a pixel grid, while a sentence is defined as tokens in a sequence. In contrast, molecules naturally have different chemical formulations. A molecule can be represented as a sequence (Weininger, 1988), a 2D graph (Wiswesser, 1985), or a collection of atoms located in a 3D space. 2D and 3D structures are the most popularly used formulations as many valuable properties and statistics can be obtained from them (Chmiela et al., 2017; Stokes et al., 2020). However, as far as we know, most previous works focus on designing neural network models for either 2D or 3D structures, making the model learned in one form fail to be applied in tasks of the other form. We argue that a general-purpose neural network model in chemistry should at least be able to handle molecular tasks across data modalities. In this paper, we take the first step toward this goal by ∗These two authors contributed equally to this project †Correspondence to: Di He <dihe@pku.edu.cn> and Liwei Wang <wanglw@pku.edu.cn>. 
developing Transformer-M, a versatile Transformer-based Molecular model that performs well for both 2D and 3D molecular representation learning. Note that for a molecule, its 2D and 3D forms describe the same collection of atoms but use different characterizations of the structure. Therefore, the key challenge is to design a model expressive and compatible in capturing structural knowledge in different formulations and train the parameters to learn from both information. Transformer is more favorable than other architectures as it can explicitly plug structural signals in the model as bias terms (e.g., positional encodings (Vaswani et al., 2017; Raffel et al., 2020)). We can conveniently set 2D and 3D structural information as different bias terms through separated channels and incorporate them with the atom features in the attention layers. Architecture. The backbone network of our Transformer-M is composed of standard Transformer blocks. We develop two separate channels to encode 2D and 3D structural information. The 2D channel uses degree encoding, shortest path distance encoding, and edge encoding extracted from the 2D graph structure, following Ying et al. (2021a). The shortest path distance encoding and edge encoding reflect the spatial relations and bond features of a pair of atoms and are used as bias terms in the softmax attention. The degree encoding is added to the atom features in the input layer. For the 3D channel, we follow Shi et al. (2022) to use the 3D distance encoding to encode the spatial distance between atoms in the 3D geometric structure. Each atom pair’s Euclidean distance is encoded via the Gaussian Basis Kernel function (Scholkopf et al., 1997) and will be used as a bias term in the softmax attention. For each atom, we sum up the 3D distance encodings between it and all other atoms, and add it to atom features in the input layer. See Figure 1 for an illustration. Training. Except for the parameters in the two structural channels, all other parameters in Transformer-M (e.g., self-attention and feed-forward networks) are shared for different data modalities. We design a joint-training approach for Transformer-M to learn its parameters. During training, when the instances in a batch are only associated with 2D graph structures, the 2D channel will be activated, and the 3D channel will be disabled. Similarly, when the instances in a batch use 3D geometric structures, the 3D channel will be activated, and the 2D channel will be disabled. When both 2D and 3D information are given, both channels will be activated. In such a way, we can collect 2D and 3D data from separate databases and train Transformer-M with different training objectives, making the training process more flexible. We expect a single model to learn to identify and incorporate information from different modalities and efficiently utilize the parameters, leading to better generalization performance. Experimental Results. We use the PCQM4Mv2 dataset in the OGB Large-Scale Challenge (OGBLSC) (Hu et al., 2021) to train our Transformer-M, which consists of 3.4 million molecules of both 2D and 3D forms. The model is trained to predict the pre-computed HOMO-LUMO gap of each data instance in different formats with a pre-text 3D denoising task specifically for 3D data. With the pre-trained model, we directly use or fine-tune the parameters for various molecular tasks of different data formats. 
First, we show that on the validation set of the PCQM4Mv2 task, which only contains 2D molecular graphs, our Transformer-M surpasses all previous works by a large margin. The improvement is credited to the joint training, which effectively mitigates the overfitting problem. Second, On PDBBind (Wang et al., 2004; 2005b) (2D&3D), the fine-tuned Transformer-M achieves state-of-the-art performance compared to strong baselines. Lastly, on QM9 (Ramakrishnan et al., 2014) (3D) benchmark, the fine-tuned Transformer-M models achieve competitive performance compared to recent methods. All results show that our Transformer-M has the potential to be used as a general-purpose model in a broad range of applications in chemistry. 2 RELATED WORKS Neural networks for learning 2D molecular representations. Graph Neural Network (GNN) is popularly used in molecular graph representation learning (Kipf & Welling, 2016; Hamilton et al., 2017; Gilmer et al., 2017; Xu et al., 2019; Veličković et al., 2018). A GNN learns node and graph representations by recursively aggregating (i.e., message passing) and updating the node representations from neighbor representations. Different architectures are developed by using different aggregation and update strategies. We refer the readers to Wu et al. (2020) for a comprehensive survey. Recently, many works extended the Transformer model to graph tasks (Dwivedi & Bresson, 2020; Kreuzer et al., 2021; Ying et al., 2021a; Luo et al., 2022; Kim et al., 2022; Rampášek et al., 2022; Park et al., 2022; Hussain et al., 2022; Zhang et al., 2023). Seminal works include Graphormer (Ying et al., 2021a), which developed graph structural encodings and used them in a standard Transformer model. Neural networks for learning 3D molecular representations. Learning molecular representations with 3D geometric information is essential in many applications, such as molecular dynamics simulation. Recently, researchers have designed architectures to preserve invariant and equivariant properties for several necessary transformations like rotation and translation. Schütt et al. (2017) used continuous-filter convolutional layers to model quantum interactions in molecules. Thomas et al. (2018) used filters built from spherical harmonics to construct a rotation- and translationequivariant neural network. Klicpera et al. (2020) proposed directional message passing, which ensured their embeddings to be rotationally equivariant. Liu et al. (2022); Wang et al. (2022) use spherical coordinates to capture geometric information and achieve equivariance. Hutchinson et al. (2021); Thölke & De Fabritiis (2021) built Transformer models preserving equivariant properties. Shi et al. (2022) extended Ying et al. (2021a) to a 3D Transformer model which attains better results on large-scale molecular modeling challenges (Chanussot et al., 2020). Multi-view learning for molecules. The 2D graph structure and 3D geometric structure can be considered as different views of the same molecule. Inspired by the contrastive pre-training approach in vision (Chen et al., 2020; He et al., 2020; Radford et al., 2021), many works studied pre-training methods for molecules by jointly using the 2D and 3D information. Stärk et al. (2022) used two encoders to encode the 2D and 3D molecular information separately while maximizing the mutual information between the representations. Liu et al. (2021a) derived the GraphMVP framework, which uses contrastive learning and reconstruction to pre-train a 2D encoder and a 3D encoder. Zhu et al. 
(2022) unified the 2D and 3D pre-training methods above and proposed a 2D GNN model that can be enhanced by 3D geometric features. Different from these works, we aim to develop a single model which is compatible with both 2D and 3D molecular tasks. Furthermore, all the above works train models using paired 2D and 3D data, while such paired data is not a strong requirement to train our model. General-purpose models. Building a single agent that works for multiple tasks, even across modalities, is a recent discovery in deep learning. In the early years, researchers found that a single multilingual translation model can translate tens of languages using the same weights and perform better than a bilingual translation model for rare languages (Lample & Conneau, 2019; Conneau et al., 2019; Xue et al., 2020; Liu et al., 2020). Large-scale language model (Devlin et al., 2019; Brown et al., 2020) is another example that can be applied to different downstream tasks using in-context learning or fine-tuning. Reed et al. (2022) further pushed the boundary by building a single generalist agent, Gato. This agent uses the same network with the same weights but can play Atari, caption images, and make conversations like a human. Our work also lies in this direction. We focus on developing a general-purpose model in chemistry, which can take molecules in different formats as input and perform well on various molecular tasks with a small number of additional training data. 3 TRANSFORMER-M In this section, we introduce Transformer-M, a versatile Transformer serving as a general architecture for 2D and 3D molecular representation learning. First, we introduce notations and recap the preliminaries in the backbone Transformer architecture (Section 3.1). After that, we present the proposed Transformer-M model with two structural channels for different data modalities (Section 3.2). 3.1 NOTATIONS AND THE BACKBONE TRANSFORMER A molecule M is made up of a collection of atoms held together by attractive forces. We denote X ∈ Rn×d as the atoms with features, where n is the number of atoms, and d is the feature dimension. The structure of M can be represented in different formulations, such as 2D graph structure and 3D geometric structure. For the 2D graph structure, atoms are explicitly connected by chemical bonds, and we define M2D = (X, E), where e(i,j) ∈ E denotes the edge feature (i.e., the type of the bond) between atom i and j if the edge exists. For the 3D geometric structure, for each atom i, its position ri in the Cartesian coordinate system is provided. We define M3D = (X, R), where R = {r1, ..., rn} and ri ∈ R3. Our goal is to design a parametric model which can take either M2D or M3D (or both of them) as input, obtain contextual representations, and make predictions on downstream tasks. Transformer layer. The backbone architecture we use in this work is the Transformer model (Vaswani et al., 2017). A Transformer is composed of stacked Transformer blocks. A Transformer block consists of two layers: a self-attention layer followed by a feed-forward layer, with both layers having normalization (e.g., LayerNorm (Ba et al., 2016)) and skip connections (He et al., 2016). Denote X(l) as the input to the (l + 1)-th block and define X(0) = X . 
For an input X(l), the (l + 1)-th block works as follows: Ah(X(l)) = softmax ( X(l)W l,hQ (X (l)W l,hK ) ⊤ √ d ) ; (1) X̂(l) = X(l) + H∑ h=1 Ah(X(l))X(l)W l,hV W l,h O ; (2) X(l+1) = X̂(l) +GELU(X̂(l)W l1)W l 2, (3) where W l,hO ∈ RdH×d, W l,h Q ,W l,h K ,W l,h V ∈ Rd×dH , W l1 ∈ Rd×r,W l2 ∈ Rr×d. H is the number of attention heads, dH is the dimension of each head, and r is the dimension of the hidden layer. Ah(X) is usually referred to as the attention matrix. Positional encoding. Another essential component in the Transformer is positional encoding. Note that the self-attention layer and the feed-forward layer do not make use of the order of input elements (e.g., word tokens), making the model impossible to capture the structural information. The original paper (Vaswani et al., 2017) developed effective positional encodings to encode the sentence structural information and explicitly integrate them as bias terms into the model. Shortly, many works realized that positional encoding plays a crucial role in extending standard Transformer to more complicated data structures beyond language. By carefully designing structural encoding using domain knowledge, Transformer has successfully been applied to the image and graph domain and achieved impressive performance (Dosovitskiy et al., 2020; Liu et al., 2021b; Ying et al., 2021a). 3.2 TRANSFORMER-M AND TRAINING STRATEGY As we can see, the two molecular formulations defined in Section 3.1 use the same atom feature space but different characterizations of the structure (graph structure E v.s. geometric structure R). Therefore, the key challenge is to design a compatible architecture that can utilize either structural information in E or R (or both) and incorporate them with the atom features in a principled way. The Transformer is a suitable backbone to achieve the goal as we can encode structural information as bias terms and properly plug them into different modules. Furthermore, with Transformer, we can treat E and R in a unified way by decomposing the structural information into pair-wise and atom-wise encodings. Without loss of generality, we choose to use the encoding strategies in the graph and geometric Transformers proposed by Ying et al. (2021a); Shi et al. (2022). For the sake of completeness, we briefly introduce those structural encodings and show how to leverage them in Transformer-M. Note that our design methodology also works with other encoding strategies (Hussain et al., 2022; Park et al., 2022; Thölke & De Fabritiis, 2021). See Appendix B.5 for the detailed results. Encoding pair-wise relations in E. We use two terms to encode the structural relations between any atom pairs in the graph. First, we encode the shortest path distance (SPD) between two atoms to reflect their spatial relation. Let ΦSPDij denote the SPD encoding between atom i and j, which is a learnable scalar determined by the distance of the shortest path between i and j. Second, we encode the edge features (e.g., the chemical bond types) along the shortest path between i and j to reflect the bond information. For most molecules, there exists only one distinct shortest path between any two atoms. Denote the edges in the shortest path from i to j as SPij = (e1, e2, ..., eN ), and the edge encoding between i and j is defined as ΦEdgeij = 1 N ∑N n=1 en(wn) T , where wn are learnable vectors of the same dimension as the edge feature. Denote ΦSPD and ΦEdge as the matrix form of the SPD encoding and edge encoding, both of which are of shape n× n. 
Encoding pair-wise relations in R. We encode the Euclidean distance to reflect the spatial relation between any pair of atoms in the 3D space. For each atom pair (i, j), we first process their Euclidean distance with the Gaussian Basis Kernel function (Scholkopf et al., 1997), ψk(i,j) = − 1√ 2π|σk| exp ( − 12 ( γ(i,j)∥ri−rj∥+β(i,j)−µk |σk| )2) , k = 1, ...,K, where K is the number of Gaussian Basis kernels. Then the 3D Distance encoding Φ3D Distanceij is obtained according to Φ 3D Distance ij = GELU ( ψ(i,j)W 1 D ) W 2D, where ψ(i,j) = [ψ 1 (i,j); ...;ψ K (i,j)] ⊤, W 1D ∈ RK×K ,W 2D ∈ RK×1 are learnable parameters. γ(i,j), β(i,j) are learnable scalars indexed by the pair of atom types, and µk, σk are learnable kernel center and learnable scaling factor of the k-th Gaussian Basis Kernel. Denote Φ3D Distance as the matrix form of the 3D distance encoding, whose shape is n× n. Integrating ΦSPD, ΦEdge and Φ3D Distance in Transformer-M. All pair-wise encodings defined above capture the interatomic information, which is in a similar spirit to the relative positional encoding for sequential tasks (Raffel et al., 2020). Therefore, we similarly locate those pair-wise signals in the self-attention module to provide complementary information to the dot-product term XWQ(XWK) ⊤. For simplicity, we omit the index of attention head h and layer l, and the modified attention matrix is defined as: A(X) = softmax XWQ(XWK)⊤√ d + ΦSPD +ΦEdge︸ ︷︷ ︸ 2D pair-wise channel + Φ3D Distance︸ ︷︷ ︸ 3D pair-wise channel (4) Encoding atom-wise structural information in E. For atom i, Eqn. (4) computes the normalized weights according to the semantic (first term) and spatial relation (last three terms) between i and other atoms. However, the information is still not sufficient. For example, the importance (i.e., centrality) of each atom is missing in the attention. For each atom i, we use its degree as the centrality information. Formally, let ΨDegreei denote the degree encoding of the atom i, which is a d-dimensional learnable vector determined by the degree of the atom. Denote ΨDegree = [ΨDegree1 ,Ψ Degree 2 , ...,Ψ Degree n ] as the centrality encoding of all the atoms, which is of shape n× d. Encoding atom-wise structural information inR. Similar to the 2D atom-wise centrality encoding, for geometric data, we encode the centrality of each atom in the 3D space. For each atom i, we sum up the 3D Distance encodings between it and all other atoms. Let ΨSum of 3D Distancei denote the centrality encoding of atom i, we have ΨSum of 3D Distancei = ∑ j∈[n]ψ(i,j)W 3 D, where W 3 D ∈ RK×d is a learnable weight matrix. Similarly, we define ΨSum of 3D Distance as the encoding of all atoms, whose shape is n× d. Integrating ΨDegree and ΨSum of 3D Distance in Transformer-M. We add the atom-wise encodings of 2D and 3D structures to the atom features in the input layer. Formally, the inputX(0) is modified as: X(0) =X + ΨDegree︸ ︷︷ ︸ 2D atom-wise channel +ΨSum of 3D Distance︸ ︷︷ ︸ 3D atom-wise channel , (5) Through this simple way, the structural information of molecules in both 2D and 3D formats is integrated into one Transformer model. It is easy to check that Transformer-M preserves equivariant properties for both data formats. Training. The next step is to learn the parameters in Transformer-M to capture meaningful representations from each data format. To achieve this goal, we develop a simple and flexible joint training method to learn Transformer-M. 
Training. The next step is to learn the parameters of Transformer-M so that it captures meaningful representations from each data format. To achieve this goal, we develop a simple and flexible joint training method. We first collect datasets in different formats (2D/3D) and define supervised/self-supervised tasks (e.g., energy regression) on each format, and we train the model on all the data toward each objective, respectively. Concretely, during training, if a data instance comes from a dataset in the 2D format, the 2D channel is activated and the 3D channel is disabled, and the model parameters are optimized to minimize the corresponding (i.e., 2D) objective. When a data instance comes from a dataset in the 3D format, only the 3D channel is activated, and the model learns to minimize the 3D objective. Both channels are activated if the model takes molecules in both 2D and 3D formats as input. Compared with multi-view learning approaches, we can train Transformer-M using unpaired 2D and 3D data, which makes the training process more flexible.

Transformer-M may also generalize better due to the joint training. Several previous works (Liu et al., 2021a) observed that the 2D graph structure and the 3D geometric structure contain complementary chemical knowledge. For example, the 2D graph structure only contains bonds with their bond types, while the 3D geometric structure contains fine-grained information such as lengths and angles. As another example, the 3D geometric structures are usually obtained from computational simulations like Density Functional Theory (DFT) (Burke, 2012), which can have approximation errors, whereas the 2D graphs are constructed by domain experts and, to some extent, provide references to the 3D structure. By jointly training on 2D and 3D data with parameter sharing, our model can learn more chemical knowledge instead of overfitting to data noise, and it performs better on both 2D and 3D tasks.
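To illustrate the mode dispatch, below is a hedged sketch of one joint-training step. The `model(batch, use_2d=..., use_3d=...)` interface and the batch attributes are hypothetical placeholders; the loss types, the mode probabilities, and the noise scale anticipate the objectives and settings described in Section 4.1 and Appendix B.1.

```python
import torch
import torch.nn.functional as F

# (p_2D, p_3D, p_2D&3D); the values mirror Appendix B.1 and are otherwise illustrative.
MODE_PROBS = torch.tensor([0.2, 0.5, 0.3])

def training_step(model, batch, optimizer):
    """One joint-training step with on-the-fly mode sampling."""
    if batch.has_2d and batch.has_3d:                    # paired data: draw a mode
        mode = torch.multinomial(MODE_PROBS, 1).item()
        use_2d, use_3d = [(True, False), (False, True), (True, True)][mode]
    else:                                                # unpaired data: activate
        use_2d, use_3d = batch.has_2d, batch.has_3d      # whichever channel exists

    noise = None
    if use_3d:                                           # 3D position denoising
        noise = 0.2 * torch.randn_like(batch.pos)        # sigma = 0.2 (App. B.1)
        batch.pos = batch.pos + noise

    pred_gap, pred_noise = model(batch, use_2d=use_2d, use_3d=use_3d)
    loss = F.l1_loss(pred_gap, batch.homo_lumo_gap)      # supervised objective
    if noise is not None:                                # 1:1 loss ratio (App. B.1)
        loss = loss + F.mse_loss(pred_noise, noise)

    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
    optimizer.step()
    return loss.item()
```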
Future Directions. As an initial attempt, our Transformer-M opens up a way to develop general-purpose molecular models that handle diverse chemical tasks in different data formats. We believe it is a starting point, with more possibilities to explore in the future. For example, in this work, we linearly combine the structural encodings of the 2D and 3D structures, and we believe there should be other efficient ways to fuse such encodings. Our model can also be combined with previous multi-view contrastive learning approaches, and it is worth investigating how to pre-train our model using those methods.

4 EXPERIMENTS

In this section, we empirically study the performance of Transformer-M. First, we pre-train our model on the PCQM4Mv2 training set from the OGB Large-Scale Challenge (Hu et al., 2021) (Section 4.1). With the pre-trained model, we conduct experiments on molecular tasks in different data formats and evaluate the versatility and effectiveness of our Transformer-M. Due to space limitations, we study three representative tasks: PCQM4Mv2 (2D, Section 4.2), PDBBind (2D & 3D, Section 4.3), and QM9 (3D, Section 4.4). Ablation studies are presented in Section 4.5. All codes are implemented based on the official codebase of Graphormer (Ying et al., 2021a) in PyTorch (Paszke et al., 2019).

4.1 LARGE-SCALE PRE-TRAINING

Our Transformer-M is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). The total number of training samples is 3.37 million. Each molecule is associated with its 2D graph structure and 3D geometric structure. The HOMO-LUMO energy gap of each molecule is provided as its label, which is obtained by DFT-based methods (Burke, 2012). We follow Ying et al. (2021a) and employ a 12-layer Transformer-M model. The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. To train Transformer-M, we provide three modes for each data instance: (1) activate the 2D channels and disable the 3D channels (2D mode); (2) activate the 3D channels and disable the 2D channels (3D mode); (3) activate both channels (2D+3D mode). The mode of each data instance during training is randomly drawn on the fly according to a pre-defined distribution, implemented similarly to Dropout (Srivastava et al., 2014). In this work, we use two training objectives. The first is a supervised learning objective, which aims to predict the HOMO-LUMO energy gap of each molecule. Besides, we also use a self-supervised learning objective called 3D Position Denoising (Godwin et al., 2022; Zaidi et al., 2022), which is particularly effective: during training, if a data instance is in the 3D mode, we add Gaussian noise to the position of each atom and require the model to predict the noise from the noisy input. The model is optimized to minimize a linear combination of the two objectives above. Details of the settings are in Appendix B.1.

4.2 PCQM4MV2 PERFORMANCE (2D)

After the model is pre-trained, we evaluate our Transformer-M on the validation set of PCQM4Mv2. Note that the validation set of PCQM4Mv2 consists of molecules in the 2D format only; we can therefore use it to evaluate how well Transformer-M performs on 2D molecular data. The goal of the task is to predict the HOMO-LUMO energy gap, and the evaluation metric is the Mean Absolute Error (MAE). As our training objectives include the HOMO-LUMO gap prediction task, we did not fine-tune the model parameters on any data. During inference, only the 2D channels are activated. We choose several strong baselines covering message passing neural network (MPNN) variants and Graph Transformers. Detailed descriptions of the baselines are presented in Appendix B.2.

The results are shown in Table 1. It can easily be seen that our Transformer-M surpasses all baselines by a large margin, e.g., an 8.2% relative MAE reduction compared to the previous best model (Rampášek et al., 2022), establishing a new state of the art on the PCQM4Mv2 dataset. Note that our general architecture is the same as the Graphormer model (Ying et al., 2021a). The only difference between Transformer-M and the Graphormer baseline is that Graphormer is trained on 2D data only, while Transformer-M is trained using both 2D and 3D structural information. Therefore, we can conclude that Transformer-M performs well on 2D molecular data, and the 2D-3D joint training with shared parameters indeed helps the model learn more chemical knowledge.

4.3 PDBBIND PERFORMANCE (2D & 3D)

To verify the compatibility of our Transformer-M, we further fine-tune our model on the PDBBind dataset (version 2016, Wang et al. (2004; 2005b)), one of the most widely used datasets for structure-based virtual screening (Jiménez et al., 2018; Stepniewska-Dziubinska et al., 2018; Zheng et al., 2019). The PDBBind dataset consists of protein-ligand complexes as data instances, which are obtained from bioassay experiments and associated with $pK_a$ (or $-\log K_d$, $-\log K_i$) affinity values. For each data instance, the 3D geometric structure is provided, and the 2D graph structure is constructed via pre-defined rules. The task requires models to predict the binding affinity of protein-ligand complexes, which is extremely important for drug discovery.
After pre-training on the PCQM4Mv2 training set, our Transformer-M model is fine-tuned and evaluated on the core set of the PDBBind dataset. We compare our model with competitive baselines covering classical methods, CNN-based methods, and GNNs. All experiments are repeated five times with different seeds, and the average performance is reported. Due to space limitations, we present the details of the baselines and experiment settings in Appendix B.3.

The results are presented in Table 2. Our Transformer-M consistently outperforms all the baselines on all evaluation metrics by a large margin, e.g., a 3.3% absolute improvement in Pearson's correlation coefficient (R). It is worth noting that the data instances of the PDBBind dataset are protein-ligand complexes, while our model is pre-trained on simple molecules, which demonstrates the transferability of Transformer-M.

4.4 QM9 PERFORMANCE (3D)

We use the QM9 dataset (Ramakrishnan et al., 2014) to evaluate our Transformer-M on molecular tasks in the 3D data format. QM9 is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties. The 3D geometric structure of each molecule is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. The remaining molecules are used to fine-tune our Transformer-M model. We observed that several previous works used different data splitting ratios or did not describe their evaluation details. For a fair comparison, we choose baselines that use similar splitting ratios in the original papers. The details of the baselines and experiment settings are presented in Appendix B.4.

The results are presented in Table 3. It can be seen that our Transformer-M achieves competitive performance compared to those baselines, suggesting that the model is compatible with 3D molecular data. In particular, Transformer-M performs best on the HOMO, LUMO, and HOMO-LUMO gap predictions, which indicates that the knowledge learned in the pre-training task transfers better to similar tasks. Note that the model does not perform as well on some of the other tasks. We believe Transformer-M can be improved in several aspects, including employing a carefully designed output layer (Thölke & De Fabritiis, 2021) or pre-training with more self-supervised training signals.

4.5 ABLATION STUDY

In this subsection, we conduct a series of experiments to investigate the key designs of our Transformer-M. In this paper, we use two training objectives to train the model, and we ablate the effect of each. Besides, we use three modes to activate different channels with a pre-defined distribution, and we study the impact of this distribution on the final performance. Due to space limitations, we present more analysis of our Transformer-M model in Appendix B.5.

Impact of the pre-training tasks. As stated in Section 4.1, our Transformer-M model is pre-trained on the PCQM4Mv2 training set via two tasks: (1) predicting the HOMO-LUMO gap of molecules in both 2D and 3D formats; (2) 3D position denoising. We conduct ablation studies on both the PCQM4Mv2 and QM9 datasets to check whether both objectives benefit downstream tasks. In detail, we conduct two additional experiments.
The first experiment trains Transformer-M models from scratch on PCQM4Mv2 using its 2D graph data and on QM9 using its 3D geometric data, to check the benefit of the overall pre-training method. The second experiment pre-trains Transformer-M without the 3D denoising task, to study the effectiveness of the proposed 2D-3D joint pre-training approach. The results are shown in Table 4. It can be seen that the joint pre-training significantly boosts the performance on both the PCQM4Mv2 and QM9 datasets. Besides, the 3D Position Denoising task is also beneficial, especially on the QM9 dataset in the 3D format.

Impact of mode distribution. Denote $(p_{\mathrm{2D}}, p_{\mathrm{3D}}, p_{\mathrm{2D\&3D}})$ as the probabilities of the modes mentioned in Section 4.1. We conduct experiments to investigate the influence of different distributions on the model performance. We select three distributions with $(p_{\mathrm{2D}}, p_{\mathrm{3D}}, p_{\mathrm{2D\&3D}})$ being 1:1:1, 1:2:2, and 1:2:1. The results are presented in Table 4. We obtain consistent conclusions on both the PCQM4Mv2 and QM9 datasets: 1) for all three configurations, our Transformer-M model achieves strong performance, which shows that our joint training is robust to hyperparameter selection; 2) using a slightly larger probability for the 3D mode achieves the best results.

5 CONCLUSION

In this work, we take the first step toward general-purpose molecular models. The proposed Transformer-M offers a promising way to handle molecular tasks in 2D and 3D formats. We use two separate channels to encode 2D and 3D structural information and integrate them into the backbone Transformer. When the input data is in a particular format, the corresponding channel is activated, and the other is disabled. Through simple training tasks on 2D and 3D molecular data, our model automatically learns to leverage chemical knowledge from different data formats and correctly capture the representations. Extensive experiments are conducted, and all empirical results show that our Transformer-M can achieve strong performance on 2D and 3D tasks simultaneously. The potential of our Transformer-M can be further explored in a broad range of applications in chemistry.

ACKNOWLEDGEMENTS

We thank Shanda Li for the helpful discussions. We also thank all the anonymous reviewers for their very careful and detailed reviews as well as their valuable suggestions; their help has further enhanced our work. This work is supported by the National Key R&D Program of China (2022ZD0114900) and the National Science Foundation of China (NSFC62276005). This work is partially supported by the Shanghai Committee of Science and Technology (Grant No. 21DZ1100100).

B EXPERIMENTAL DETAILS

B.1 LARGE-SCALE PRE-TRAINING

Dataset. Our Transformer-M model is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). PCQM4Mv2 is a quantum chemistry dataset originally curated under the PubChemQC project (Maho, 2015; Nakata & Shimazaki, 2017). The total number of training samples is 3.37 million. Each molecule in the training set is associated with both 2D graph structures and 3D geometric structures. The HOMO-LUMO energy gap of each molecule is provided, which is obtained by DFT-based geometry optimization (Burke, 2012). According to OGB-LSC (Hu et al., 2021), the HOMO-LUMO energy gap is one of the most practically relevant quantum chemical properties of molecules, since it is related to reactivity, photoexcitation, and charge transport. Being the largest publicly available dataset for molecular property prediction, PCQM4Mv2 is considered to be a challenging benchmark for molecular models.
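For reference, the 2D molecular graphs, labels, and the official evaluator are accessible through the `ogb` package. The snippet below is a minimal loading sketch; note that the DFT-optimized 3D structures of the training molecules are distributed separately and must be matched to the graphs, which is omitted here.

```python
# Minimal loading sketch using the OGB-LSC API (assumes `pip install ogb`
# and a PyTorch Geometric installation for the PyG dataset class).
from ogb.lsc import PygPCQM4Mv2Dataset, PCQM4Mv2Evaluator

dataset = PygPCQM4Mv2Dataset(root="data/")    # 2D molecular graphs + gap labels
split = dataset.get_idx_split()               # 'train' / 'valid' / 'test-dev' / ...
train_set = dataset[split["train"]]           # 3.37M training molecules

evaluator = PCQM4Mv2Evaluator()               # official MAE evaluator
# result = evaluator.eval({"y_pred": y_pred, "y_true": y_true})
```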
Settings. Our Transformer-M model consists of 12 layers. The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter $\epsilon$ to 1e-8 and $(\beta_1, \beta_2)$ to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The batch size is set to 1024. The model is trained for 1.5 million steps with a 90k-step warm-up stage; after the warm-up stage, the learning rate decays linearly to zero. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. We also employ stochastic depth (Huang et al., 2016) and set the probability to 0.2. The probability $(p_{\mathrm{2D}}, p_{\mathrm{3D}}, p_{\mathrm{2D\&3D}})$ of each data instance entering the three modes mentioned in Section 4.1 is set to (0.2, 0.5, 0.3). The scaling factor $\sigma$ of the added noise in the 3D Position Denoising task is set to 0.2. The ratio of the supervised loss to the denoising loss is set to 1:1. All models are trained on 4 NVIDIA Tesla A100 GPUs.
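The optimization schedule above translates directly into PyTorch; the sketch below mirrors the listed hyperparameters (peak learning rate 2e-4, 1.5M total steps, 90k warm-up followed by linear decay to zero) and is an assumption about the implementation rather than an excerpt from it.

```python
import torch

PEAK_LR, TOTAL_STEPS, WARMUP_STEPS = 2e-4, 1_500_000, 90_000

def make_optimizer_and_scheduler(model):
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=PEAK_LR,
        betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0)

    def lr_lambda(step):
        # Linear warm-up to the peak LR, then linear decay to zero.
        if step < WARMUP_STEPS:
            return step / max(1, WARMUP_STEPS)
        return max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler  # call scheduler.step() once per optimizer step
```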
B.2 PCQM4MV2

Baselines. We compare our Transformer-M with several competitive baselines. These models fall into two categories: message passing neural network (MPNN) variants and Graph Transformers. For MPNN variants, we include two widely used models, GCN (Kipf & Welling, 2016) and GIN (Xu et al., 2019), and their variants with a virtual node (VN) (Gilmer et al., 2017; Hu et al., 2020a). Additionally, we compare against GINE-VN (Brossard et al., 2020) and DeeperGCN-VN (Li et al., 2020); GINE is the multi-hop version of GIN, and DeeperGCN is a 12-layer GNN model with carefully designed aggregators. The result of MLP-Fingerprint (Hu et al., 2021) is also reported. We also compare several Graph Transformer models. Graphormer (Ying et al., 2021a) developed graph structural encodings and integrated them into a standard Transformer model; it achieved impressive performance across several world competitions (Ying et al., 2021b; Shi et al., 2022). CoAtGIN (Cui, 2022) is a hybrid architecture combining both convolution and attention. TokenGT (Kim et al., 2022) adopted the standard Transformer architecture without graph-specific modifications. GraphGPS (Rampášek et al., 2022) proposed a framework that integrates positional and structural encodings, a local message-passing mechanism, and a global attention mechanism into the Transformer model. GRPE (Park et al., 2022) proposed a graph-specific relative positional encoding and considered both node-spatial and node-edge relations. EGT (Hussain et al., 2022) exclusively used global self-attention as the aggregation mechanism rather than static localized convolutional aggregation, and utilized edge channels to capture structural information.

B.3 PDBBIND

Dataset. PDBBind is a well-known dataset that provides a comprehensive collection of experimentally measured binding affinity data for biomolecular complexes deposited in the Protein Data Bank (PDB) (Wang et al., 2005a). The task requires models to predict the binding affinity value $pK_a$ (or $-\log K_d$, $-\log K_i$) of protein-ligand complexes, which is extremely important for drug discovery. In our experiment, we use the PDBBind v2016 dataset, which is widely used in recent works (Li et al., 2021). The PDBBind dataset includes three overlapping subsets: the general, refined, and core sets. The general set contains all 13,283 protein-ligand complexes, while the 4,057 complexes in the refined set are selected out of the general set for their better quality. Moreover, the core set serves as the highest-quality benchmark for testing. To avoid data leakage, we remove the data instances in the core set from the refined set. After training, we evaluate our model on the core set. The evaluation metrics include Pearson's correlation coefficient (R), Mean Absolute Error (MAE), Root-Mean-Squared Error (RMSE), and Standard Deviation (SD).

Baselines. We compare our Transformer-M with several competitive baselines. These models mainly fall into three categories: classic machine learning methods, Convolutional Neural Network (CNN)-based methods, and Graph Neural Network (GNN)-based methods. First, we report the results of LR, SVR, and RF-Score (Ballester et al., 2010), which employed traditional machine learning approaches to predict the binding affinities. Second, inspired by the success of CNNs in computer vision, Stepniewska-Dziubinska et al. (2018) proposed the Pafnucy model, which represents the complexes via a 3D grid and utilizes 3D convolution to produce feature maps. Zheng et al. (2019) introduced OnionNet, which also used CNNs to extract features based on rotation-free element-pair-specific contacts between the atoms of proteins and ligands. There are also several works that leverage GNNs to improve performance on the PDBBind dataset. GraphDTA (Nguyen et al., 2020) represented protein-ligand complexes as 2D graphs and used GNN models to predict the affinity score. GNN-DTI (Lim et al., 2019) incorporated the 3D structures of protein-ligand complexes into GNNs. DMPNN (Yang et al., 2019) operated over a hybrid representation that combines convolutions and descriptors. SGCN (Danel et al., 2020) is a GCN-inspired architecture that leverages node positions. MAT (Maziarka et al., 2020) augmented the attention mechanism in the standard Transformer model with inter-atomic distances and molecular graph structures. DimeNet (Klicpera et al., 2020) developed atom-pair embeddings and utilized directional information between atoms. CMPNN (Song et al., 2020) introduced a communicative kernel and a message booster module to strengthen the message passing between atoms. SIGN (Li et al., 2021) proposed polar-inspired graph attention layers and pairwise interactive pooling layers to utilize the biomolecular structural information.

Settings. We fine-tune the pre-trained Transformer-M on the PDBBind dataset. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter $\epsilon$ to 1e-8 and $(\beta_1, \beta_2)$ to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The total number of epochs is set to 30. The ratio of warm-up steps to total steps is set to 0.06. The batch size is set to 16. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. Following Ying et al. (2021a), we use FLAG (Kong et al., 2020) with minor modifications for graph data augmentation. In particular, in addition to the step size $\alpha$ and the number of adversarial attack steps $m$, we also employ a projection step from Zhu et al. (2020) with maximum perturbation $\gamma$. These hyperparameters are set to the following configuration: $\alpha = 0.01$, $m = 4$, $\gamma = 0.01$. All models are trained on 2 NVIDIA Tesla V100 GPUs.
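For completeness, here is a hedged sketch of the FLAG-style augmentation with the projection step. The `model.forward_with_perturb` hook, which adds the perturbation to the input atom embeddings, is a hypothetical interface, and the per-atom L2 projection shown is one simple instantiation of the projection step of Zhu et al. (2020).

```python
import torch

def flag_step(model, batch, targets, loss_fn, optimizer,
              alpha=0.01, m=4, gamma=0.01, emb_shape=(64, 768)):
    """Sketch of FLAG (Kong et al., 2020): m sign-gradient ascent steps on an
    input-embedding perturbation, with parameter gradients accumulated across
    steps ('free' adversarial training), plus a gamma-ball projection."""
    perturb = torch.empty(emb_shape).uniform_(-alpha, alpha)
    perturb.requires_grad_()
    optimizer.zero_grad()
    for step in range(m):
        out = model.forward_with_perturb(batch, perturb)  # hypothetical hook
        loss = loss_fn(out, targets) / m                  # average over inner steps
        loss.backward()                                   # accumulates param grads
        if step < m - 1:
            with torch.no_grad():
                perturb.add_(alpha * perturb.grad.sign())  # ascent on the perturbation
                norm = perturb.norm(dim=-1, keepdim=True).clamp(min=1e-12)
                perturb.mul_((gamma / norm).clamp(max=1.0))  # project into gamma-ball
            perturb.grad = None
    optimizer.step()
```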
B.4 QM9

Dataset. QM9 (Ramakrishnan et al., 2014) is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties. The 3D geometric structure of each molecule is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. The remaining molecules are used to fine-tune our Transformer-M model.

Baselines. We comprehensively compare our Transformer-M with both pre-training methods and 3D molecular models. First, we follow Jiao et al. (2022) and compare against several pre-training methods. Hu et al. (2019) proposed a strategy to pre-train GNNs via both node-level and graph-level tasks. Sun et al. (2019) maximized the mutual information between graph-level representations and substructure representations as the pre-training task. You et al. (2020) instead used contrastive learning to pre-train GNNs. There are also several works that utilize 3D geometric structures during pre-training. Jing et al. (2021) maximized the mutual information between 2D and 3D representations. Fang et al. (2021) proposed a strategy to learn spatial information by utilizing both local and global 3D structures. Stärk et al. (2022) used two encoders to capture 2D and 3D structural information separately while maximizing the mutual information between the 2D and 3D representations. Jiao et al. (2022) adopted an equivariant energy-based model and developed a node-level pre-training loss for force prediction. We report the results of these methods from Jiao et al. (2022) for comparison. Second, we follow Thölke & De Fabritiis (2021) and compare against 3D molecular models. Schütt et al. (2017) used continuous-filter convolution layers to model quantum interactions in molecules. Anderson et al. (2019) developed a GNN model equipped with activation functions that are covariant to rotations. Klicpera et al. (2020) proposed directional message passing, which uses atom-pair embeddings and utilizes directional information between atoms. Schütt et al. (2021) proposed the polarizable atom interaction neural network (PaiNN), which uses an equivariant message passing mechanism. Hutchinson et al. (2021) built upon the Transformer model with attention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. Thölke & De Fabritiis (2021) also developed a Transformer variant with layers designed using prior physical and chemical knowledge. Satorras et al. (2021) proposed the EGNN model, which does not require computationally expensive higher-order representations in intermediate layers to keep equivariance and can easily be scaled to higher-dimensional spaces. Godwin et al. (2022) proposed the 3D position denoising task and verified it on the Graph Network-based Simulator (GNS) model (Sanchez-Gonzalez et al., 2020).

Settings. We fine-tune the pre-trained Transformer-M on the QM9 dataset. Following Thölke & De Fabritiis (2021), we adopt the Mean Squared Error (MSE) loss during training and use the Mean Absolute Error (MAE) loss during evaluation. We also adopt label standardization for stable training. We use AdamW as the optimizer and set its hyperparameter $\epsilon$ to 1e-8 and $(\beta_1, \beta_2)$ to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 7e-5. The batch size is set to 128. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. The model is fine-tuned for 600k steps with a 60k-step warm-up stage; after the warm-up stage, the learning rate decays linearly to zero. All models are trained on 1 NVIDIA A100 GPU.
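Label standardization is simple but easy to get wrong at evaluation time; below is a minimal sketch (the `train_targets` tensor is a placeholder for the training labels of one target):

```python
import torch
import torch.nn.functional as F

train_targets = torch.randn(1000)            # placeholder for one target's labels
mean, std = train_targets.mean(), train_targets.std()

def train_loss(pred_normed, target):
    """MSE in the standardized label space (the training objective)."""
    return F.mse_loss(pred_normed, (target - mean) / std)

def eval_mae(pred_normed, target):
    """Undo the standardization before computing the reported MAE."""
    return F.l1_loss(pred_normed * std + mean, target)
```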
B.5 MORE ANALYSIS

Investigation of the generality of the design methodology of Transformer-M. In this work, we develop our Transformer-M model based on the Transformer backbone and integrate separate 2D and 3D channels (implemented by encoding methods) to encode the structural information of 2D and 3D molecular data. As stated in Section 3.2, this is a general design methodology for handling molecular data in different forms, and it works well with different structural encoding instantiations. To demonstrate its generality and effectiveness, we further conduct experiments with the structural encodings from GRPE (Park et al., 2022) and EGT (Hussain et al., 2022), which are competitive baselines on the PCQM4Mv2 benchmark, as shown in Table 1. All the hyperparameters are kept the same as the settings in Appendix B.1 for a fair comparison. The results are presented in Table 5. It can easily be seen that our Transformer-M model equipped with different encoding methods consistently obtains significantly better performance than the corresponding vanilla 2D models, which verifies the generality and effectiveness of the design methodology of Transformer-M.

Investigation of the impact of 3D conformers calculated by different methods. Besides its versatility in handling molecules in different formats, our Transformer-M achieves strong performance on various challenging molecular tasks, as shown in Section 4. On the PCQM4Mv2 validation set (2D only), our Transformer-M establishes a new state of the art, which is mainly credited to the newly introduced 2D-3D joint training strategy in Section 3.2: the chemical knowledge in the 3D geometric structures can be leveraged during joint training and boosts the performance on 2D tasks. Given the observed benefits of the 3D geometric structure, it is natural to ask how the quality of the calculated 3D conformers influences the performance of Transformer-M. To investigate this question, we additionally use RDKit (Landrum, 2016) to generate one 3D conformer for each molecule in the training set of PCQM4Mv2. Compared to the officially provided DFT-optimized geometric structures, RDKit conformers are less costly to obtain but also less accurate. Thus, each molecule has its 2D molecular graph, a 3D conformer calculated by DFT, and a 3D conformer calculated by RDKit. Based on such a dataset, we conduct three additional experiments. First, we train our Transformer-M model using only the 2D molecular graphs; in this experiment, only the 2D channels are activated. Second, we train our Transformer-M model using both the 2D molecular graphs (encoded by the 2D channels) and the 3D conformers generated by RDKit (encoded by the 3D channels). Third, we train our Transformer-M model using the 2D molecular graphs, the 3D conformers generated by RDKit, and the 3D conformers calculated by DFT; in this experiment, we use two sets of 3D channels to separately encode the structural information of the 3D RDKit conformers and the 3D DFT conformers.
During training, when a data instance enters the 3D or 2D+3D mode, both sets of 3D channels are activated and integrated. For all three experiments, the hyperparameters of Transformer-M are kept the same as the settings in Appendix B.1. The results are presented in Table 6. We can see that the quality of the 3D conformers matters for the final performance: leveraging 3D conformers generated by RDKit (second line) brings only minor gains compared to using 2D molecular graphs alone (first line). In contrast, when leveraging 3D conformers calculated by DFT, the improvement is significant (the last two lines). From a practical point of view, it would be interesting to investigate the influence of 3D conformers calculated by methods that are more accurate than RDKit while more efficient than DFT, e.g., semi-empirical methods (Dral et al., 2016), which we leave as future work.

Investigation of the effectiveness of Transformer-M pre-training. We provide additional results on the effectiveness of our model on both the PDBBind (2D+3D) and QM9 (3D) downstream datasets. First, to verify the effectiveness of Transformer-M pre-training on the PDBBind dataset, we further pre-train the Graphormer model (Ying et al., 2021a) on the same PCQM4Mv2 dataset as a competitive pre-trained baseline. Since the Graphormer model can only handle graph data, we use only the 2D molecular graph of each data instance. All hyperparameters are kept the same as the settings in Appendix B.1. The results are presented in Table 7. We can draw the following conclusions: (1) pre-training is helpful (e.g., R improves from 0.797 for SIGN, the best baseline, to 0.804 for the pre-trained Graphormer model); (2) our pre-training method brings a larger gain (0.804 → 0.830), which demonstrates the effectiveness of our framework.

Second, we demonstrate that our pre-training strategy helps learn a better Transformer model on the downstream QM9 dataset. We conduct two additional experiments on QM9. In the first experiment, we train the 3D geometric Transformer model (Transformer-M with the 3D channel only) from scratch. In the second experiment, we use the 3D Position Denoising task as the objective to pre-train the 3D geometric Transformer on PCQM4Mv2 and fine-tune the pre-trained checkpoint on QM9. Due to time limits and constrained resources, we selected six QM9 targets for comparison. All the hyperparameters of pre-training and fine-tuning are kept the same. The results are presented in Table 8. It can easily be seen that our pre-training method consistently and significantly improves the downstream performance on all six tasks, which demonstrates the effectiveness of our general framework for 3D molecular data. We are aware that we achieve competitive rather than state-of-the-art performance compared with the baselines (best performance on 5 out of 12 targets; see Table 3). For U0, U, H, and G, there still exists a performance gap between our Transformer-M and some of the latest baselines, which use rather complicated neural architectures. We believe that exploring more model alternatives and incorporating the wisdom of those networks into our Transformer-M will further improve the performance, which we will keep working on.
1. What is the main contribution of the paper regarding molecular representation learning?
2. What are the strengths and weaknesses of the proposed Transformer-M model?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the encoding of the 1D molecular data mode, potential negative transfer, and data leakage?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
Molecules can be represented in a variety of chemical data modes, such as a 2D graph or a collection of atoms in 3D space. Most previous work in molecular representation learning has designed networks for a specific data mode, with the risk of failing to learn from other modes. The authors argue that a chemically generalized neural network should be able to handle molecule-related tasks across data modes. To accomplish this, the authors created Transformer-M, which is based on the Transformer and can be fed with 2D or 3D molecular data. The results of the experiments indicate that Transformer-M performs well on 2D, 3D, and 2D&3D tasks.

Strengths And Weaknesses
Strengths
• The paper is generally well-written and easy to follow.
• Transformer-M can encode 2D or 3D structural information as bias terms added to the attention matrix. It also encodes atomic centrality and adds it to the atom features. Based on this, the model can obtain molecular representations from different data modes.
• During pre-training, the authors label the data with different data modes for joint training. From the results, this training strategy may improve the performance of Transformer-M on downstream tasks.

Weaknesses
• The 2D and 3D information encodings are from previous work. This work simply combines them, and the model architecture is the same as Graphormer [1]. The novelty is not sufficient.
• Supervised pre-training based on the prediction of the HOMO-LUMO gap may lead to negative transfer. For example, on QM9 in the downstream experiments, Transformer-M performs poorly on most tasks other than HOMO, LUMO, and the gap. This may contradict the description "general-purpose neural network model" claimed in this paper.
• Lack of description of the PDBBind data processing and splitting in the downstream tasks.
• Absence of some ablation experiments: ① (p2D, p3D, p2D&3D) = (1:0:0) / (0:1:0) / (0:0:1); ② only using the 3D position denoising task while pre-training.

Other questions:
• Do the authors consider encoding the 1D molecular data mode, e.g., SMILES, simultaneously?
• What do the authors think about the possibility of negative transfer on downstream tasks due to the supervised signal introduced during pre-training?
• Is there data leakage during fine-tuning on PDBBind, given that the general, refined, and core sets have overlapping parts?

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is reader-friendly.
Quality and Novelty: Not very novel; it looks like a simple combination of previous works.
Reproducibility: The code will be released, but I do not have much confidence in the reproduction of the results on PDBBind.
For an input X(l), the (l + 1)-th block works as follows: Ah(X(l)) = softmax ( X(l)W l,hQ (X (l)W l,hK ) ⊤ √ d ) ; (1) X̂(l) = X(l) + H∑ h=1 Ah(X(l))X(l)W l,hV W l,h O ; (2) X(l+1) = X̂(l) +GELU(X̂(l)W l1)W l 2, (3) where W l,hO ∈ RdH×d, W l,h Q ,W l,h K ,W l,h V ∈ Rd×dH , W l1 ∈ Rd×r,W l2 ∈ Rr×d. H is the number of attention heads, dH is the dimension of each head, and r is the dimension of the hidden layer. Ah(X) is usually referred to as the attention matrix. Positional encoding. Another essential component in the Transformer is positional encoding. Note that the self-attention layer and the feed-forward layer do not make use of the order of input elements (e.g., word tokens), making the model impossible to capture the structural information. The original paper (Vaswani et al., 2017) developed effective positional encodings to encode the sentence structural information and explicitly integrate them as bias terms into the model. Shortly, many works realized that positional encoding plays a crucial role in extending standard Transformer to more complicated data structures beyond language. By carefully designing structural encoding using domain knowledge, Transformer has successfully been applied to the image and graph domain and achieved impressive performance (Dosovitskiy et al., 2020; Liu et al., 2021b; Ying et al., 2021a). 3.2 TRANSFORMER-M AND TRAINING STRATEGY As we can see, the two molecular formulations defined in Section 3.1 use the same atom feature space but different characterizations of the structure (graph structure E v.s. geometric structure R). Therefore, the key challenge is to design a compatible architecture that can utilize either structural information in E or R (or both) and incorporate them with the atom features in a principled way. The Transformer is a suitable backbone to achieve the goal as we can encode structural information as bias terms and properly plug them into different modules. Furthermore, with Transformer, we can treat E and R in a unified way by decomposing the structural information into pair-wise and atom-wise encodings. Without loss of generality, we choose to use the encoding strategies in the graph and geometric Transformers proposed by Ying et al. (2021a); Shi et al. (2022). For the sake of completeness, we briefly introduce those structural encodings and show how to leverage them in Transformer-M. Note that our design methodology also works with other encoding strategies (Hussain et al., 2022; Park et al., 2022; Thölke & De Fabritiis, 2021). See Appendix B.5 for the detailed results. Encoding pair-wise relations in E. We use two terms to encode the structural relations between any atom pairs in the graph. First, we encode the shortest path distance (SPD) between two atoms to reflect their spatial relation. Let ΦSPDij denote the SPD encoding between atom i and j, which is a learnable scalar determined by the distance of the shortest path between i and j. Second, we encode the edge features (e.g., the chemical bond types) along the shortest path between i and j to reflect the bond information. For most molecules, there exists only one distinct shortest path between any two atoms. Denote the edges in the shortest path from i to j as SPij = (e1, e2, ..., eN ), and the edge encoding between i and j is defined as ΦEdgeij = 1 N ∑N n=1 en(wn) T , where wn are learnable vectors of the same dimension as the edge feature. Denote ΦSPD and ΦEdge as the matrix form of the SPD encoding and edge encoding, both of which are of shape n× n. 
Encoding pair-wise relations in R. We encode the Euclidean distance to reflect the spatial relation between any pair of atoms in the 3D space. For each atom pair (i, j), we first process their Euclidean distance with the Gaussian Basis Kernel function (Scholkopf et al., 1997), ψk(i,j) = − 1√ 2π|σk| exp ( − 12 ( γ(i,j)∥ri−rj∥+β(i,j)−µk |σk| )2) , k = 1, ...,K, where K is the number of Gaussian Basis kernels. Then the 3D Distance encoding Φ3D Distanceij is obtained according to Φ 3D Distance ij = GELU ( ψ(i,j)W 1 D ) W 2D, where ψ(i,j) = [ψ 1 (i,j); ...;ψ K (i,j)] ⊤, W 1D ∈ RK×K ,W 2D ∈ RK×1 are learnable parameters. γ(i,j), β(i,j) are learnable scalars indexed by the pair of atom types, and µk, σk are learnable kernel center and learnable scaling factor of the k-th Gaussian Basis Kernel. Denote Φ3D Distance as the matrix form of the 3D distance encoding, whose shape is n× n. Integrating ΦSPD, ΦEdge and Φ3D Distance in Transformer-M. All pair-wise encodings defined above capture the interatomic information, which is in a similar spirit to the relative positional encoding for sequential tasks (Raffel et al., 2020). Therefore, we similarly locate those pair-wise signals in the self-attention module to provide complementary information to the dot-product term XWQ(XWK) ⊤. For simplicity, we omit the index of attention head h and layer l, and the modified attention matrix is defined as: A(X) = softmax XWQ(XWK)⊤√ d + ΦSPD +ΦEdge︸ ︷︷ ︸ 2D pair-wise channel + Φ3D Distance︸ ︷︷ ︸ 3D pair-wise channel (4) Encoding atom-wise structural information in E. For atom i, Eqn. (4) computes the normalized weights according to the semantic (first term) and spatial relation (last three terms) between i and other atoms. However, the information is still not sufficient. For example, the importance (i.e., centrality) of each atom is missing in the attention. For each atom i, we use its degree as the centrality information. Formally, let ΨDegreei denote the degree encoding of the atom i, which is a d-dimensional learnable vector determined by the degree of the atom. Denote ΨDegree = [ΨDegree1 ,Ψ Degree 2 , ...,Ψ Degree n ] as the centrality encoding of all the atoms, which is of shape n× d. Encoding atom-wise structural information inR. Similar to the 2D atom-wise centrality encoding, for geometric data, we encode the centrality of each atom in the 3D space. For each atom i, we sum up the 3D Distance encodings between it and all other atoms. Let ΨSum of 3D Distancei denote the centrality encoding of atom i, we have ΨSum of 3D Distancei = ∑ j∈[n]ψ(i,j)W 3 D, where W 3 D ∈ RK×d is a learnable weight matrix. Similarly, we define ΨSum of 3D Distance as the encoding of all atoms, whose shape is n× d. Integrating ΨDegree and ΨSum of 3D Distance in Transformer-M. We add the atom-wise encodings of 2D and 3D structures to the atom features in the input layer. Formally, the inputX(0) is modified as: X(0) =X + ΨDegree︸ ︷︷ ︸ 2D atom-wise channel +ΨSum of 3D Distance︸ ︷︷ ︸ 3D atom-wise channel , (5) Through this simple way, the structural information of molecules in both 2D and 3D formats is integrated into one Transformer model. It is easy to check that Transformer-M preserves equivariant properties for both data formats. Training. The next step is to learn the parameters in Transformer-M to capture meaningful representations from each data format. To achieve this goal, we develop a simple and flexible joint training method to learn Transformer-M. 
We first collect datasets in different formats (2D/3D) and define supervised/self-supervised tasks (e.g., energy regression) on each format, and train the model on all the data toward each objective, respectively. To be concrete, during training, if a data instance comes from a dataset in the 2D format, the 2D channel is activated, and the 3D channel is disabled. The model parameter will be optimized to minimize the corresponding (i.e., 2D) objective. When a data instance comes from a dataset in the 3D format, only the 3D channel is activated, and the model will learn to minimize the 3D objective. Both channels are activated if the model takes molecules in both 2D and 3D formats as input. Compared with the multi-view learning approaches, we can train Transformer-M using unpaired 2D and 3D data, making the training process more flexible. The Transformer-M may generalize better due to the joint training. Several previous works (Liu et al., 2021a) observed that 2D graph structure and 3D geometric structure contain complementary chemical knowledge. For example, the 2D graph structure only contains bonds with bond type, while the 3D geometric structure contains fine-grained information such as lengths and angles. As another example, the 3D geometric structures are usually obtained from computational simulations like Density Functional Theory (DFT) (Burke, 2012), which could have approximation errors. The 2D graphs are constructed by domain experts, which to some extent, provide references to the 3D structure. By jointly training using 2D and 3D data with parameter sharing, our model can learn more chemical knowledge instead of overfitting to data noise and perform better on both 2D and 3D tasks. Future Directions. As an initial attempt, our Transformer-M opens up a way to develop generalpurpose molecular models to handle diverse chemical tasks in different data formats. We believe it is a starting point with more possibilities to explore in the future. For example, in this work, we use a simple way and linearly combine the structural information of 2D and 3D structures, and we believe there should be other efficient ways to fuse such encodings. Our model can also be combined with previous multi-view contrastive learning approaches. It is worth investigating how to pre-train our model using those methods. 4 EXPERIMENTS In this section, we empirically study the performance of Transformer-M. First, we pre-train our model on the PCQM4Mv2 training set from OGB Large-Scale Challenge (Hu et al., 2021) (Section 4.1). With the pre-trained model, we conduct experiments on molecular tasks in different data formats and evaluate the versatility and effectiveness of our Transformer-M. Due to space limitation, we study three representative tasks, PCQM4Mv2 (2D, Section 4.2), PDBBind (2D & 3D, Section 4.3) and QM9 (3D, Section 4.4). Ablation studies are presented in Section 4.5. All codes are implemented based on the official codebase of Graphormer (Ying et al., 2021a) in PyTorch (Paszke et al., 2019). 4.1 LARGE-SCALE PRE-TRAINING Our Transformer-M is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). The total number of training samples is 3.37 million. Each molecule is associated with its 2D graph structure and 3D geometric structure. The HOMO-LUMO energy gap of each molecule is provided as its label, which is obtained by DFT-based methods (Burke, 2012). We follow Ying et al. (2021a) and employ a 12-layer Transformer-M model. 
The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. To train Transformer-M, we provide three modes for each data instance: (1) activate the 2D channels and disable the 3D channels (2D mode); (2) activate the 3D channels and disable the 2D channels (3D mode); (3) activate both channels (2D+3D mode). The mode of each data instance during training is randomly drawn on the fly according to a pre-defined distribution, implemented similarly to Dropout (Srivastava et al., 2014). In this work, we use two training objectives. The first is a supervised learning objective, which aims to predict the HOMO-LUMO energy gap of each molecule. Besides, we also use a self-supervised learning objective called 3D Position Denoising (Godwin et al., 2022; Zaidi et al., 2022), which is particularly effective. During training, if a data instance is in the 3D mode, we add Gaussian noise to the position of each atom and require the model to predict the noise from the noisy input. The model is optimized to minimize a linear combination of the two objectives above. Details of the settings are in Appendix B.1.

4.2 PCQM4MV2 PERFORMANCE (2D)

After the model is pre-trained, we evaluate our Transformer-M on the validation set of PCQM4Mv2. Note that the validation set of PCQM4Mv2 consists of molecules in the 2D format only. Therefore, we can use it to evaluate how well Transformer-M performs on 2D molecular data. The goal of the task is to predict the HOMO-LUMO energy gap, and the evaluation metric is the Mean Absolute Error (MAE). As our training objectives include the HOMO-LUMO gap prediction task, we did not fine-tune the model parameters on any data. During inference, only the 2D channels are activated. We choose several strong baselines covering message passing neural network (MPNN) variants and Graph Transformers. Detailed descriptions of the baselines are presented in Appendix B.2. The results are shown in Table 1. It can easily be seen that our Transformer-M surpasses all baselines by a large margin, e.g., an 8.2% relative MAE reduction compared to the previous best model (Rampášek et al., 2022), establishing a new state-of-the-art on the PCQM4Mv2 dataset. Note that our general architecture is the same as the Graphormer model (Ying et al., 2021a). The only difference between Transformer-M and the Graphormer baseline is that Graphormer is trained on 2D data only, while Transformer-M is trained using both 2D and 3D structural information. Therefore, we can conclude that Transformer-M performs well on 2D molecular data, and that the 2D-3D joint training with shared parameters indeed helps the model learn more chemical knowledge.

4.3 PDBBIND PERFORMANCE (2D & 3D)

To verify the compatibility of our Transformer-M, we further fine-tune our model on the PDBBind dataset (version 2016, Wang et al. (2004; 2005b)), one of the most widely used datasets for structure-based virtual screening (Jiménez et al., 2018; Stepniewska-Dziubinska et al., 2018; Zheng et al., 2019). The PDBBind dataset consists of protein-ligand complexes as data instances, which are obtained from bioassay experiments and associated with pKa (i.e., $-\log K_d$ or $-\log K_i$) affinity values. For each data instance, the 3D geometric structures are provided and the 2D graph structures are constructed via pre-defined rules. The task requires models to predict the binding affinity of protein-ligand complexes, which is vital for drug discovery.
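As a side note on the regression target: since pKa here is the negative decadic logarithm of the measured dissociation (or inhibition) constant, a raw Kd measurement can be turned into the training label as in the trivial sketch below (the function name is ours).

```python
import math

def affinity_target(kd_molar: float) -> float:
    """pKa-style label from a dissociation constant given in mol/L."""
    return -math.log10(kd_molar)

# A ligand with Kd = 10 nM (1e-8 mol/L) gets the target value 8.0.
assert abs(affinity_target(1e-8) - 8.0) < 1e-12
```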
After being pre-trained on the PCQM4Mv2 training set, our Transformer-M model is fine-tuned and evaluated on the core set of the PDBBind dataset. We compare our model with competitive baselines covering classical methods, CNN-based methods, and GNNs. All experiments are repeated five times with different seeds, and average performance is reported. Due to space limitations, we present the details of the baselines and experiment settings in Appendix B.3. The results are presented in Table 2. Our Transformer-M consistently outperforms all the baselines on all evaluation metrics by a large margin, e.g., a 3.3% absolute improvement on Pearson's correlation coefficient (R). It is worth noting that the data instances of the PDBBind dataset are protein-ligand complexes, while our model is pre-trained on simple molecules, demonstrating the transferability of Transformer-M.

4.4 QM9 PERFORMANCE (3D)

We use the QM9 dataset (Ramakrishnan et al., 2014) to evaluate our Transformer-M on molecular tasks in the 3D data format. QM9 is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties, and its 3D geometric structure is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. The remaining molecules are used to fine-tune our Transformer-M model. We observed that several previous works used different data splitting ratios or did not describe their evaluation details. For a fair comparison, we choose baselines that use similar splitting ratios in their original papers. The details of the baselines and experiment settings are presented in Appendix B.4. The results are presented in Table 3. It can be seen that our Transformer-M achieves competitive performance compared to those baselines, suggesting that the model is compatible with 3D molecular data. In particular, Transformer-M performs best on the HOMO, LUMO, and HOMO-LUMO predictions. This indicates that the knowledge learned in the pre-training task transfers better to similar tasks. Note that the model does not perform as well on some other tasks. We believe Transformer-M can be improved in several aspects, including employing a carefully designed output layer (Thölke & De Fabritiis, 2021) or pre-training with more self-supervised training signals.

4.5 ABLATION STUDY

In this subsection, we conduct a series of experiments to investigate the key designs of our Transformer-M. In this paper, we use two training objectives to train the model, and we ablate the effect of each. Besides, we use three modes to activate different channels with a pre-defined distribution, and we study the impact of this distribution on the final performance. Due to space limitations, we present more analysis of our Transformer-M model in Appendix B.5.

Impact of the pre-training tasks. As stated in Section 4.1, our Transformer-M model is pre-trained on the PCQM4Mv2 training set via two tasks: (1) predicting the HOMO-LUMO gap of molecules in both 2D and 3D formats; (2) 3D position denoising. We conduct ablation studies on both the PCQM4Mv2 and QM9 datasets to check whether both objectives benefit downstream tasks. In detail, we conduct two additional experiments.
The first experiment trains Transformer-M models from scratch on PCQM4Mv2 using its 2D graph data and on QM9 using its 3D geometric data, to check the benefit of the overall pre-training method. The second experiment pre-trains Transformer-M without the 3D denoising task, to study the effectiveness of the proposed 2D-3D joint pre-training approach. The results are shown in Table 4. It can be seen that the joint pre-training significantly boosts the performance on both the PCQM4Mv2 and QM9 datasets. Besides, the 3D Position Denoising task is also beneficial, especially on the QM9 dataset in the 3D format.

Impact of mode distribution. Denote $(p_{\text{2D}}, p_{\text{3D}}, p_{\text{2D\&3D}})$ as the probabilities of the modes mentioned in Section 4.1. We conduct experiments to investigate the influence of different distributions on the model performance. We select three distributions with $(p_{\text{2D}}, p_{\text{3D}}, p_{\text{2D\&3D}})$ being 1:1:1, 1:2:2, and 1:2:1. The results are presented in Table 4. We obtain consistent conclusions on both the PCQM4Mv2 and QM9 datasets: 1) for all three configurations, our Transformer-M model achieves strong performance, which shows that our joint training is robust to hyperparameter selection; 2) using a slightly larger probability on the 3D mode achieves the best results.

5 CONCLUSION

In this work, we take the first step toward general-purpose molecular models. The proposed Transformer-M offers a promising way to handle molecular tasks in 2D and 3D formats. We use two separate channels to encode 2D and 3D structural information and integrate them into the backbone Transformer. When the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. Through simple training tasks on 2D and 3D molecular data, our model automatically learns to leverage chemical knowledge from different data formats and correctly capture the representations. Extensive experiments are conducted, and all empirical results show that our Transformer-M can achieve strong performance on 2D and 3D tasks simultaneously. The potential of our Transformer-M can be further explored in a broad range of applications in chemistry.

ACKNOWLEDGEMENTS

We thank Shanda Li for the helpful discussions. We also thank all the anonymous reviewers for their very careful and detailed reviews as well as their valuable suggestions; their help has further enhanced our work. This work is supported by the National Key R&D Program of China (2022ZD0114900) and the National Science Foundation of China (NSFC62276005). This work is partially supported by the Shanghai Committee of Science and Technology (Grant No. 21DZ1100100).

B EXPERIMENTAL DETAILS

B.1 LARGE-SCALE PRE-TRAINING

Dataset. Our Transformer-M model is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). PCQM4Mv2 is a quantum chemistry dataset originally curated under the PubChemQC project (Maho, 2015; Nakata & Shimazaki, 2017). The total number of training samples is 3.37 million. Each molecule in the training set is associated with both its 2D graph structure and its 3D geometric structure. The HOMO-LUMO energy gap of each molecule is provided, which is obtained by DFT-based geometry optimization (Burke, 2012). According to OGB-LSC (Hu et al., 2021), the HOMO-LUMO energy gap is one of the most practically relevant quantum chemical properties of molecules since it is related to reactivity, photoexcitation, and charge transport.
Being the largest publicly available dataset for molecular property prediction, PCQM4Mv2 is considered a challenging benchmark for molecular models.

Settings. Our Transformer-M model consists of 12 layers. The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter $\epsilon$ to 1e-8 and $(\beta_1, \beta_2)$ to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The batch size is set to 1024. The model is trained for 1.5 million steps with a 90k-step warm-up stage. After the warm-up stage, the learning rate decays linearly to zero. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. We also employ stochastic depth (Huang et al., 2016) and set the probability to 0.2. The probability $(p_{\text{2D}}, p_{\text{3D}}, p_{\text{2D\&3D}})$ of each data instance entering the three modes mentioned in Section 4.1 is set to (0.2, 0.5, 0.3). The scaling factor $\sigma$ of the added noise in the 3D Position Denoising task is set to 0.2. The ratio of the supervised loss to the denoising loss is set to 1:1. All models are trained on 4 NVIDIA Tesla A100 GPUs.

B.2 PCQM4MV2

Baselines. We compare our Transformer-M with several competitive baselines. These models fall into two categories: message passing neural network (MPNN) variants and Graph Transformers. For MPNN variants, we include two widely used models, GCN (Kipf & Welling, 2016) and GIN (Xu et al., 2019), and their variants with a virtual node (VN) (Gilmer et al., 2017; Hu et al., 2020a). Additionally, we compare GINE-VN (Brossard et al., 2020) and DeeperGCN-VN (Li et al., 2020). GINE is the multi-hop version of GIN, and DeeperGCN is a 12-layer GNN model with carefully designed aggregators. The result of MLP-Fingerprint (Hu et al., 2021) is also reported. We also compare several Graph Transformer models. Graphormer (Ying et al., 2021a) developed graph structural encodings and integrated them into a standard Transformer model; it achieved impressive performance across several world competitions (Ying et al., 2021b; Shi et al., 2022). CoAtGIN (Cui, 2022) is a hybrid architecture combining both convolution and attention. TokenGT (Kim et al., 2022) adopted the standard Transformer architecture without graph-specific modifications. GraphGPS (Rampášek et al., 2022) proposed a framework to integrate positional and structural encodings, a local message-passing mechanism, and a global attention mechanism into the Transformer model. GRPE (Park et al., 2022) proposed a graph-specific relative positional encoding and considered both node-spatial and node-edge relations. EGT (Hussain et al., 2022) exclusively used global self-attention as an aggregation mechanism rather than static localized convolutional aggregation, and utilized edge channels to capture structural information.
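To make the B.1 training procedure above concrete, the following sketch shows one way the per-instance mode sampling and the 3D position-denoising objective could be implemented with the stated values; `model` and its keyword arguments are illustrative placeholders rather than the actual codebase API.

```python
import torch

MODE_PROBS = torch.tensor([0.2, 0.5, 0.3])  # (p_2D, p_3D, p_2D&3D) from B.1
NOISE_SCALE = 0.2                           # sigma of the added Gaussian noise

def training_loss(model, batch, supervised_criterion):
    mode = torch.multinomial(MODE_PROBS, 1).item()   # drawn on the fly
    use_2d, use_3d = mode in (0, 2), mode in (1, 2)

    noise = None
    if use_3d:  # corrupt coordinates and ask the model to predict the noise
        noise = NOISE_SCALE * torch.randn_like(batch["pos"])
        batch["pos"] = batch["pos"] + noise

    pred_gap, pred_noise = model(batch, use_2d=use_2d, use_3d=use_3d)
    loss = supervised_criterion(pred_gap, batch["homo_lumo_gap"])
    if use_3d:
        loss = loss + ((pred_noise - noise) ** 2).mean()  # 1:1 loss ratio
    return loss
```

Here the denoising term is applied whenever the 3D channel is active; the description in Section 4.1 ("if a data instance is in the 3D mode") could also be read as excluding the 2D+3D mode.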
B.3 PDBBIND

Dataset. PDBBind is a well-known dataset that provides a comprehensive collection of experimentally measured binding affinity data for biomolecular complexes deposited in the Protein Data Bank (PDB) (Wang et al., 2005a). The task requires models to predict the binding affinity value pKa (i.e., $-\log K_d$ or $-\log K_i$) of protein-ligand complexes, which is vital for drug discovery. In our experiment, we use the PDBBind v2016 dataset, which is widely used in recent works (Li et al., 2021). The PDBBind dataset includes three overlapping subsets called the general, refined, and core sets. The general set contains all 13,283 protein-ligand complexes, while the 4,057 complexes in the refined set are selected from the general set for their better quality. Moreover, the core set serves as the highest-quality benchmark for testing. To avoid data leakage, we remove the data instances of the core set from the refined set. After training, we evaluate our model on the core set. The evaluation metrics include Pearson's correlation coefficient (R), Mean Absolute Error (MAE), Root-Mean Squared Error (RMSE), and Standard Deviation (SD).

Baselines. We compare our Transformer-M with several competitive baselines. These models mainly fall into three categories: classic machine learning methods, convolutional neural network (CNN)-based methods, and graph neural network (GNN)-based methods. First, we report the results of LR, SVR, and RF-Score (Ballester et al., 2010), which employed traditional machine learning approaches to predict the binding affinities. Second, inspired by the success of CNNs in computer vision, Stepniewska-Dziubinska et al. (2018) proposed the Pafnucy model, which represents the complexes via a 3D grid and utilizes 3D convolutions to produce feature maps. Zheng et al. (2019) introduced OnionNet, which also used CNNs to extract features based on rotation-free element-pair-specific contacts between atoms of proteins and ligands. There are also several works that leverage GNNs to improve the performance on the PDBBind dataset. GraphDTA (Nguyen et al., 2020) represented protein-ligand complexes as 2D graphs and used GNN models to predict the affinity score. GNN-DTI (Lim et al., 2019) incorporated the 3D structures of protein-ligand complexes into GNNs. DMPNN (Yang et al., 2019) operated over a hybrid representation that combines convolutions and descriptors. SGCN (Danel et al., 2020) is a GCN-inspired architecture that leverages node positions. MAT (Maziarka et al., 2020) augmented the attention mechanism in the standard Transformer model with inter-atomic distances and molecular graph structures. DimeNet (Klicpera et al., 2020) developed atom-pair embeddings and utilized directional information between atoms. CMPNN (Song et al., 2020) introduced a communicative kernel and a message booster module to strengthen the message passing between atoms. SIGN (Li et al., 2021) proposed polar-inspired graph attention layers and pairwise interactive pooling layers to utilize the biomolecular structural information.

Settings. We fine-tune the pre-trained Transformer-M on the PDBBind dataset. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter $\epsilon$ to 1e-8 and $(\beta_1, \beta_2)$ to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The total number of epochs is set to 30. The ratio of warm-up steps to total steps is set to 0.06. The batch size is set to 16. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. Following Ying et al. (2021a), we use FLAG (Kong et al., 2020) with minor modifications for graph data augmentation. In particular, in addition to the step size $\alpha$ and the number of adversarial attack steps $m$, we also employ a projection step, as in Zhu et al. (2020), with maximum perturbation $\gamma$. These hyperparameters are set to the following configuration: $\alpha = 0.01$, $m = 4$, $\gamma = 0.01$ (see the sketch below).
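A minimal sketch of the FLAG-style augmentation with the extra projection step is given below; it follows the description above rather than the reference implementation, and `model.embed` / `model.forward_from_embeddings` are hypothetical hooks for perturbing the node features.

```python
import torch

def flag_training_step(model, batch, y, loss_fn, optimizer,
                       alpha=0.01, m=4, gamma=0.01):
    """m gradient-ascent steps of size alpha on a perturbation of the node
    embeddings, projected onto an L_inf ball of radius gamma; parameter
    gradients are accumulated across the m steps and applied once."""
    optimizer.zero_grad()
    delta = None
    for _ in range(m):
        emb = model.embed(batch)  # node embeddings (recomputed each step)
        if delta is None:
            delta = torch.zeros_like(emb).uniform_(-alpha, alpha).requires_grad_(True)
        loss = loss_fn(model.forward_from_embeddings(emb + delta, batch), y) / m
        loss.backward()  # accumulates model gradients
        with torch.no_grad():
            delta = (delta + alpha * delta.grad.sign()).clamp_(-gamma, gamma)
        delta.requires_grad_(True)
    optimizer.step()
```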
All models are trained on 2 NVIDIA Tesla V100 GPUs.

B.4 QM9

Dataset. QM9 (Ramakrishnan et al., 2014) is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties, and its 3D geometric structure is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. The remaining molecules are used to fine-tune our Transformer-M model.

Baselines. We comprehensively compare our Transformer-M with both pre-training methods and 3D molecular models. First, we follow Jiao et al. (2022) and compare several pre-training methods. Hu et al. (2019) proposed a strategy to pre-train GNNs via both node-level and graph-level tasks. Sun et al. (2019) maximized the mutual information between graph-level representations and substructure representations as the pre-training task. You et al. (2020) instead used contrastive learning to pre-train GNNs. There are also several works that utilize 3D geometric structures during pre-training. Jing et al. (2021) maximized the mutual information between 2D and 3D representations. Fang et al. (2021) proposed a strategy to learn spatial information by utilizing both local and global 3D structures. Stärk et al. (2022) used two encoders to capture 2D and 3D structural information separately while maximizing the mutual information between the 2D and 3D representations. Jiao et al. (2022) adopted an equivariant energy-based model and developed a node-level pre-training loss for force prediction. We report the results of these methods from Jiao et al. (2022) for comparison. Second, we follow Thölke & De Fabritiis (2021) and compare 3D molecular models. Schütt et al. (2017) used continuous-filter convolution layers to model quantum interactions in molecules. Anderson et al. (2019) developed a GNN model equipped with activation functions that are covariant to rotations. Klicpera et al. (2020) proposed directional message passing, which uses atom-pair embeddings and utilizes directional information between atoms. Schütt et al. (2021) proposed the polarizable atom interaction neural network (PaiNN), which uses an equivariant message passing mechanism. Hutchinson et al. (2021) built upon the Transformer model with attention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. Thölke & De Fabritiis (2021) also developed a Transformer variant with layers designed using prior physical and chemical knowledge. Satorras et al. (2021) proposed the EGNN model, which does not require computationally expensive higher-order representations in intermediate layers to keep equivariance and can easily be scaled to higher-dimensional spaces. Godwin et al. (2022) proposed the 3D position denoising task and verified it on the Graph Network-based Simulator (GNS) model (Sanchez-Gonzalez et al., 2020).

Settings. We fine-tune the pre-trained Transformer-M on the QM9 dataset. Following Thölke & De Fabritiis (2021), we adopt the Mean Squared Error (MSE) loss during training and the Mean Absolute Error (MAE) during evaluation. We also adopt label standardization for stable training. We use AdamW as the optimizer and set its hyperparameter $\epsilon$ to 1e-8 and $(\beta_1, \beta_2)$ to (0.9, 0.999).
The gradient clip norm is set to 5.0. The peak learning rate is set to 7e-5. The batch size is set to 128. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. The model is fine-tuned for 600k steps with a 60k-step warm-up stage. After the warm-up stage, the learning rate decays linearly to zero. All models are trained on 1 NVIDIA A100 GPU.

B.5 MORE ANALYSIS

Investigation of the generality of the design methodology of Transformer-M. In this work, we develop our Transformer-M model based on the Transformer backbone and integrate separate 2D and 3D channels (implemented by encoding methods) to encode the structural information of 2D and 3D molecular data. As stated in Section 3.2, this is a general design methodology for handling molecular data in different forms, and it works well with different structural encoding instantiations. To demonstrate its generality and effectiveness, we further conduct experiments with other structural encodings from GRPE (Park et al., 2022) and EGT (Hussain et al., 2022), which are competitive baselines on the PCQM4Mv2 benchmark, as shown in Table 1. All hyperparameters are kept the same as the settings in Appendix B.1 for a fair comparison. The results are presented in Table 5. It can easily be seen that our Transformer-M model equipped with different encoding methods consistently obtains significantly better performance than the corresponding vanilla 2D models, which indeed verifies the generality and effectiveness of the design methodology of Transformer-M.

Investigation of the impact of 3D conformers calculated by different methods. Besides the versatility to handle molecules in different formats, our Transformer-M achieves strong performance on various challenging molecular tasks, as shown in Section 4. On the PCQM4Mv2 validation set (2D only), our Transformer-M establishes a new state-of-the-art, which is mainly credited to the newly introduced 2D-3D joint training strategy in Section 3.2: the chemical knowledge in the 3D geometric structure can be leveraged during joint training and boosts the performance on 2D tasks. Since the benefits of the 3D geometric structure are observed, it is natural to ask how the quality of the calculated 3D conformers influences the performance of Transformer-M. To investigate this question, we additionally use RDKit (Landrum, 2016) to generate one 3D conformer for each molecule in the training set of PCQM4Mv2. Compared to the officially provided DFT-optimized geometric structures, RDKit-generated structures are less costly to obtain but also less accurate. Thus, each molecule has its 2D molecular graph, a 3D conformer calculated by DFT, and a 3D conformer calculated by RDKit. Based on this dataset, we conduct three additional experiments. First, we train our Transformer-M model using only 2D molecular graphs; in this experiment, only the 2D channels are activated. Second, we train our Transformer-M model using both 2D molecular graphs (encoded by the 2D channels) and 3D conformers generated by RDKit (encoded by the 3D channels). Third, we train our Transformer-M model using 2D molecular graphs, 3D conformers generated by RDKit, and 3D conformers calculated by DFT; in this experiment, we use two sets of 3D channels to separately encode the structural information of the 3D RDKit conformers and the 3D DFT conformers.
During training, when a data instance enters the 3D or 2D+3D modes, both sets of 3D channels are activated and integrated. For all three experiments, the hyperparameters of Transformer-M are kept the same as the settings in Appendix B.1. The results are presented in Table 6. We can see that the quality of the 3D conformers matters for the final performance: leveraging 3D conformers generated by RDKit (second line) brings minor gains compared to using 2D molecular graphs only (first line). In contrast, when leveraging 3D conformers calculated by DFT, the improvement is significant (the last two lines). From a practical viewpoint, it will be interesting to investigate the influence of 3D conformers calculated by methods that are more accurate than RDKit while more efficient than DFT, e.g., semiempirical methods (Dral et al., 2016), which we leave as future work.

Investigation of the effectiveness of Transformer-M pre-training. We provide additional results on the effectiveness of our model on both the PDBBind (2D+3D) and QM9 (3D) downstream datasets. First, to verify the effectiveness of Transformer-M pre-training on the PDBBind dataset, we further pre-train the Graphormer model (Ying et al., 2021a) on the same PCQM4Mv2 dataset as a competitive pre-trained baseline. Since the Graphormer model can only handle graph data, we only use the 2D molecular graph of each data instance. All hyperparameters are kept the same as the settings in Appendix B.1. The results are presented in Table 7. We can draw the following conclusions: (1) pre-training is helpful (e.g., 0.797 (R of the SIGN model, the best baseline) -> 0.804 (R of the pre-trained Graphormer model)); (2) our pre-training method is more significant (e.g., 0.804 -> 0.830), which indeed demonstrates the effectiveness of our framework. Second, we demonstrate that our pre-training strategy helps learn a better Transformer model on the downstream QM9 dataset. We conduct two additional experiments on the QM9 dataset. In the first experiment, we train the 3D geometric Transformer model (Transformer-M with the 3D channel only) from scratch. In the second experiment, we use the 3D Position Denoising task as the objective to pre-train the 3D geometric Transformer on PCQM4Mv2 and fine-tune the pre-trained checkpoint on QM9. Due to time limits and constrained resources, we selected six QM9 targets for comparison. All hyperparameters of pre-training and fine-tuning are kept the same. The results are presented in Table 8. It can easily be seen that our pre-training method consistently and significantly improves the downstream performance on all six tasks, which indeed demonstrates the effectiveness of our general framework for 3D molecular data. We are aware that we achieve competitive rather than state-of-the-art performance compared with the baselines (best performance on 5 out of 12 targets; see Table 3). For U0, U, H, and G, there still exists a performance gap between our Transformer-M and some of the latest baselines, which use rather complicated neural architectures. We believe that exploring more model alternatives and incorporating the insights behind those networks into our Transformer-M will further improve the performance, which we will keep working on.
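For reference, the RDKit conformers in the conformer-quality study above can be produced along the following lines (a standard RDKit recipe; the exact embedding parameters we used may differ).

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def rdkit_conformer(smiles: str):
    """Generate one cheap 3D conformer for a molecule given as a SMILES string."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, AllChem.ETKDG())  # distance-geometry embedding
    AllChem.MMFFOptimizeMolecule(mol)            # quick force-field refinement
    return mol.GetConformer().GetPositions()     # (num_atoms, 3) coordinates
```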
1. What is the focus and contribution of the paper regarding small molecule representation?
2. What are the strengths of the proposed approach, particularly in its simplicity and performance?
3. What are the weaknesses or areas for improvement in the paper, such as additional ablation studies or feature analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper documents a simple method for combining the input representations of a small molecule in 2D and 3D in a single neural network model. The work demonstrates how one can train models using subsets of 2D, 3D, or mixed representations, by modifying the attention head of a Transformer architecture to explicitly add a sum over channels of pairwise features generated from (2D) small molecule graphs and channels of pairwise features generated from the small molecule coordinates in 3D. The authors combine a small number of concepts from previous state-of-the-art models for small molecules in selecting these features and the unsupervised pre-training tasks, and they manage to show that their new architecture achieves peak performance in the OpenGraph Large-Scale Challenge for small molecules, and decent performance throughout other tasks.

Strengths And Weaknesses
A core strength of this paper is that the high-level idea is simple and clean, yet the results show that it works well: Transformer-M ranks top on the small-molecule OpenGraph Large-Scale Challenge, a worldwide public competition. This work is rather strong. The paper starts to explain the rationale for why the method works as well as it does by performing an ablation study. I'd have liked to see a couple of additional ablations, or other ways to explain why the method works so well: can one keep 3D position denoising for the 3D entries, but drop the joint pre-training? How strongly do the various specific features in the pair-wise channels influence the final result? Do we expect the 2D and 3D features to be "aligned" for a given molecule after pre-training the model? Finally, and only as a minor curiosity, what is the impact, if any, of the mode distribution on a downstream task like QM9 (and if you had that info, could you possibly combine Tables 4 and 5 into a single one)?

Clarity, Quality, Novelty And Reproducibility
The work is rather original in combining 2D or 3D information of small molecules at will in a transformer. The quality of the work is high: it builds on top of other existing state-of-the-art ideas, and yet it leaves room for future improvement. The paper is clear, and the main point of this work is rather simple and powerful. The paper comes with code (though I didn't try to test it myself).
ICLR
Title
One Transformer Can Understand Both 2D & 3D Molecular Data

Abstract
Unlike vision and language data, which usually have a unique format, molecules can naturally be characterized using different chemical formulations. One can view a molecule as a 2D graph or define it as a collection of atoms located in a 3D space. For molecular representation learning, most previous works designed neural networks only for a particular data format, making the learned models likely to fail for other data formats. We believe a general-purpose neural network model for chemistry should be able to handle molecular tasks across data modalities. To achieve this goal, in this work, we develop a novel Transformer-based Molecular model called Transformer-M, which can take molecular data of 2D or 3D formats as input and generate meaningful semantic representations. Using the standard Transformer as the backbone architecture, Transformer-M develops two separate channels to encode 2D and 3D structural information and incorporate them with the atom features in the network modules. When the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. By training on 2D and 3D molecular data with properly designed supervised signals, Transformer-M automatically learns to leverage knowledge from different data modalities and correctly capture the representations. We conducted extensive experiments for Transformer-M. All empirical results show that Transformer-M can simultaneously achieve strong performance on 2D and 3D tasks, suggesting its broad applicability. The code and models will be made publicly available at https://github.com/lsj2408/Transformer-M.

∗These two authors contributed equally to this project.
†Correspondence to: Di He <dihe@pku.edu.cn> and Liwei Wang <wanglw@pku.edu.cn>.

1 INTRODUCTION

Deep learning approaches have revolutionized many domains, including computer vision (He et al., 2016), natural language processing (Devlin et al., 2019; Brown et al., 2020), and games (Mnih et al., 2013; Silver et al., 2016). Recently, researchers have started investigating whether the power of neural networks could help solve important scientific problems in chemistry, e.g., predicting the properties of molecules and simulating molecular dynamics from large-scale training data (Hu et al., 2020a; 2021; Zhang et al., 2018; Chanussot et al., 2020). One key difference between chemistry and conventional domains such as vision and language is the multimodality of data. In vision and language, a data instance is usually characterized in a particular form. For example, an image is defined as RGB values in a pixel grid, while a sentence is defined as tokens in a sequence. In contrast, molecules naturally have different chemical formulations. A molecule can be represented as a sequence (Weininger, 1988), a 2D graph (Wiswesser, 1985), or a collection of atoms located in a 3D space. 2D and 3D structures are the most popularly used formulations, as many valuable properties and statistics can be obtained from them (Chmiela et al., 2017; Stokes et al., 2020). However, as far as we know, most previous works focus on designing neural network models for either 2D or 3D structures, making the model learned in one form fail to be applied to tasks of the other form. We argue that a general-purpose neural network model in chemistry should at least be able to handle molecular tasks across data modalities. In this paper, we take the first step toward this goal by
developing Transformer-M, a versatile Transformer-based Molecular model that performs well for both 2D and 3D molecular representation learning. Note that for a molecule, its 2D and 3D forms describe the same collection of atoms but use different characterizations of the structure. Therefore, the key challenge is to design a model that is expressive and compatible in capturing structural knowledge in different formulations, and to train the parameters to learn from both sources of information. The Transformer is more favorable than other architectures, as it can explicitly plug structural signals into the model as bias terms (e.g., positional encodings (Vaswani et al., 2017; Raffel et al., 2020)). We can conveniently set 2D and 3D structural information as different bias terms through separate channels and incorporate them with the atom features in the attention layers.

Architecture. The backbone network of our Transformer-M is composed of standard Transformer blocks. We develop two separate channels to encode 2D and 3D structural information. The 2D channel uses degree encoding, shortest path distance encoding, and edge encoding extracted from the 2D graph structure, following Ying et al. (2021a). The shortest path distance encoding and edge encoding reflect the spatial relations and bond features of a pair of atoms and are used as bias terms in the softmax attention. The degree encoding is added to the atom features in the input layer. For the 3D channel, we follow Shi et al. (2022) and use the 3D distance encoding to encode the spatial distance between atoms in the 3D geometric structure. Each atom pair's Euclidean distance is encoded via the Gaussian Basis Kernel function (Scholkopf et al., 1997) and is used as a bias term in the softmax attention. For each atom, we sum up the 3D distance encodings between it and all other atoms and add the result to the atom features in the input layer. See Figure 1 for an illustration.

Training. Except for the parameters in the two structural channels, all other parameters in Transformer-M (e.g., self-attention and feed-forward networks) are shared across data modalities. We design a joint-training approach for Transformer-M to learn its parameters. During training, when the instances in a batch are only associated with 2D graph structures, the 2D channel will be activated, and the 3D channel will be disabled. Similarly, when the instances in a batch use 3D geometric structures, the 3D channel will be activated, and the 2D channel will be disabled. When both 2D and 3D information are given, both channels will be activated. In this way, we can collect 2D and 3D data from separate databases and train Transformer-M with different training objectives, making the training process more flexible. We expect a single model to learn to identify and incorporate information from different modalities and efficiently utilize the parameters, leading to better generalization performance.

Experimental Results. We use the PCQM4Mv2 dataset in the OGB Large-Scale Challenge (OGB-LSC) (Hu et al., 2021) to train our Transformer-M, which consists of 3.4 million molecules in both 2D and 3D forms. The model is trained to predict the pre-computed HOMO-LUMO gap of each data instance in different formats, with a pretext 3D denoising task specifically for 3D data. With the pre-trained model, we directly use or fine-tune the parameters for various molecular tasks in different data formats.
First, we show that on the validation set of the PCQM4Mv2 task, which only contains 2D molecular graphs, our Transformer-M surpasses all previous works by a large margin. The improvement is credited to the joint training, which effectively mitigates the overfitting problem. Second, on PDBBind (Wang et al., 2004; 2005b) (2D & 3D), the fine-tuned Transformer-M achieves state-of-the-art performance compared to strong baselines. Lastly, on the QM9 (Ramakrishnan et al., 2014) (3D) benchmark, the fine-tuned Transformer-M models achieve competitive performance compared to recent methods. All results show that our Transformer-M has the potential to be used as a general-purpose model in a broad range of applications in chemistry.

2 RELATED WORKS

Neural networks for learning 2D molecular representations. Graph Neural Networks (GNNs) are popularly used in molecular graph representation learning (Kipf & Welling, 2016; Hamilton et al., 2017; Gilmer et al., 2017; Xu et al., 2019; Veličković et al., 2018). A GNN learns node and graph representations by recursively aggregating (i.e., message passing) and updating the node representations from neighbor representations. Different architectures are developed by using different aggregation and update strategies. We refer the readers to Wu et al. (2020) for a comprehensive survey. Recently, many works extended the Transformer model to graph tasks (Dwivedi & Bresson, 2020; Kreuzer et al., 2021; Ying et al., 2021a; Luo et al., 2022; Kim et al., 2022; Rampášek et al., 2022; Park et al., 2022; Hussain et al., 2022; Zhang et al., 2023). Seminal works include Graphormer (Ying et al., 2021a), which developed graph structural encodings and used them in a standard Transformer model.

Neural networks for learning 3D molecular representations. Learning molecular representations with 3D geometric information is essential in many applications, such as molecular dynamics simulation. Recently, researchers have designed architectures to preserve invariant and equivariant properties under several necessary transformations, such as rotation and translation. Schütt et al. (2017) used continuous-filter convolutional layers to model quantum interactions in molecules. Thomas et al. (2018) used filters built from spherical harmonics to construct a rotation- and translation-equivariant neural network. Klicpera et al. (2020) proposed directional message passing, which ensures the embeddings are rotationally equivariant. Liu et al. (2022); Wang et al. (2022) use spherical coordinates to capture geometric information and achieve equivariance. Hutchinson et al. (2021); Thölke & De Fabritiis (2021) built Transformer models preserving equivariant properties. Shi et al. (2022) extended Ying et al. (2021a) to a 3D Transformer model, which attains better results on large-scale molecular modeling challenges (Chanussot et al., 2020).

Multi-view learning for molecules. The 2D graph structure and 3D geometric structure can be considered different views of the same molecule. Inspired by the contrastive pre-training approach in vision (Chen et al., 2020; He et al., 2020; Radford et al., 2021), many works studied pre-training methods for molecules by jointly using the 2D and 3D information. Stärk et al. (2022) used two encoders to encode the 2D and 3D molecular information separately while maximizing the mutual information between the representations. Liu et al. (2021a) derived the GraphMVP framework, which uses contrastive learning and reconstruction to pre-train a 2D encoder and a 3D encoder. Zhu et al.
(2022) unified the 2D and 3D pre-training methods above and proposed a 2D GNN model that can be enhanced by 3D geometric features. Different from these works, we aim to develop a single model that is compatible with both 2D and 3D molecular tasks. Furthermore, all the above works train models using paired 2D and 3D data, while such paired data is not a strong requirement for training our model.

General-purpose models. Building a single agent that works for multiple tasks, even across modalities, is a recent trend in deep learning. In the early years, researchers found that a single multilingual translation model can translate tens of languages using the same weights and perform better than bilingual translation models on rare languages (Lample & Conneau, 2019; Conneau et al., 2019; Xue et al., 2020; Liu et al., 2020). Large-scale language models (Devlin et al., 2019; Brown et al., 2020) are another example that can be applied to different downstream tasks using in-context learning or fine-tuning. Reed et al. (2022) further pushed the boundary by building a single generalist agent, Gato. This agent uses the same network with the same weights but can play Atari, caption images, and hold conversations like a human. Our work also lies in this direction. We focus on developing a general-purpose model in chemistry, which can take molecules in different formats as input and perform well on various molecular tasks with a small amount of additional training data.

3 TRANSFORMER-M

In this section, we introduce Transformer-M, a versatile Transformer serving as a general architecture for 2D and 3D molecular representation learning. First, we introduce notations and recap the preliminaries of the backbone Transformer architecture (Section 3.1). After that, we present the proposed Transformer-M model with two structural channels for different data modalities (Section 3.2).

3.1 NOTATIONS AND THE BACKBONE TRANSFORMER

A molecule $M$ is made up of a collection of atoms held together by attractive forces. We denote $X \in \mathbb{R}^{n\times d}$ as the atoms with features, where $n$ is the number of atoms and $d$ is the feature dimension. The structure of $M$ can be represented in different formulations, such as a 2D graph structure and a 3D geometric structure. For the 2D graph structure, atoms are explicitly connected by chemical bonds, and we define $M_{2D} = (X, E)$, where $e_{(i,j)} \in E$ denotes the edge feature (i.e., the type of the bond) between atoms $i$ and $j$ if the edge exists. For the 3D geometric structure, each atom $i$ has a position $r_i$ in the Cartesian coordinate system. We define $M_{3D} = (X, R)$, where $R = \{r_1, \dots, r_n\}$ and $r_i \in \mathbb{R}^3$. Our goal is to design a parametric model that can take either $M_{2D}$ or $M_{3D}$ (or both) as input, obtain contextual representations, and make predictions on downstream tasks.

Transformer layer. The backbone architecture we use in this work is the Transformer model (Vaswani et al., 2017). A Transformer is composed of stacked Transformer blocks. A Transformer block consists of two layers: a self-attention layer followed by a feed-forward layer, with both layers having normalization (e.g., LayerNorm (Ba et al., 2016)) and skip connections (He et al., 2016). Denote $X^{(l)}$ as the input to the $(l+1)$-th block and define $X^{(0)} = X$.
For an input $X^{(l)}$, the $(l+1)$-th block works as follows:

$$A^h(X^{(l)}) = \mathrm{softmax}\left(\frac{X^{(l)} W_Q^{l,h} \big(X^{(l)} W_K^{l,h}\big)^{\top}}{\sqrt{d}}\right); \tag{1}$$

$$\hat{X}^{(l)} = X^{(l)} + \sum_{h=1}^{H} A^h(X^{(l)})\, X^{(l)} W_V^{l,h} W_O^{l,h}; \tag{2}$$

$$X^{(l+1)} = \hat{X}^{(l)} + \mathrm{GELU}\big(\hat{X}^{(l)} W_1^{l}\big) W_2^{l}, \tag{3}$$

where $W_O^{l,h} \in \mathbb{R}^{d_H \times d}$, $W_Q^{l,h}, W_K^{l,h}, W_V^{l,h} \in \mathbb{R}^{d \times d_H}$, and $W_1^{l} \in \mathbb{R}^{d \times r}$, $W_2^{l} \in \mathbb{R}^{r \times d}$. $H$ is the number of attention heads, $d_H$ is the dimension of each head, and $r$ is the dimension of the hidden layer. $A^h(X)$ is usually referred to as the attention matrix.

Positional encoding. Another essential component of the Transformer is positional encoding. Note that the self-attention layer and the feed-forward layer do not make use of the order of the input elements (e.g., word tokens), making it impossible for the model to capture the structural information. The original paper (Vaswani et al., 2017) developed effective positional encodings to encode the sentence structural information and explicitly integrated them as bias terms into the model. Soon, many works realized that positional encoding plays a crucial role in extending the standard Transformer to more complicated data structures beyond language. By carefully designing structural encodings using domain knowledge, the Transformer has successfully been applied to the image and graph domains and achieved impressive performance (Dosovitskiy et al., 2020; Liu et al., 2021b; Ying et al., 2021a).

3.2 TRANSFORMER-M AND TRAINING STRATEGY

As we can see, the two molecular formulations defined in Section 3.1 use the same atom feature space but different characterizations of the structure (graph structure $E$ vs. geometric structure $R$). Therefore, the key challenge is to design a compatible architecture that can utilize either structural information in $E$ or $R$ (or both) and incorporate it with the atom features in a principled way. The Transformer is a suitable backbone to achieve this goal, as we can encode structural information as bias terms and properly plug them into different modules. Furthermore, with the Transformer, we can treat $E$ and $R$ in a unified way by decomposing the structural information into pair-wise and atom-wise encodings. Without loss of generality, we choose to use the encoding strategies in the graph and geometric Transformers proposed by Ying et al. (2021a); Shi et al. (2022). For the sake of completeness, we briefly introduce those structural encodings and show how to leverage them in Transformer-M. Note that our design methodology also works with other encoding strategies (Hussain et al., 2022; Park et al., 2022; Thölke & De Fabritiis, 2021). See Appendix B.5 for the detailed results.

Encoding pair-wise relations in E. We use two terms to encode the structural relations between any pair of atoms in the graph. First, we encode the shortest path distance (SPD) between two atoms to reflect their spatial relation. Let $\Phi_{ij}^{\text{SPD}}$ denote the SPD encoding between atoms $i$ and $j$, which is a learnable scalar determined by the distance of the shortest path between $i$ and $j$. Second, we encode the edge features (e.g., the chemical bond types) along the shortest path between $i$ and $j$ to reflect the bond information. For most molecules, there exists only one distinct shortest path between any two atoms. Denote the edges in the shortest path from $i$ to $j$ as $SP_{ij} = (e_1, e_2, \dots, e_N)$; the edge encoding between $i$ and $j$ is then defined as $\Phi_{ij}^{\text{Edge}} = \frac{1}{N}\sum_{n=1}^{N} e_n (w_n)^{\top}$, where $w_n$ are learnable vectors of the same dimension as the edge feature. Denote $\Phi^{\text{SPD}}$ and $\Phi^{\text{Edge}}$ as the matrix forms of the SPD encoding and edge encoding, both of which are of shape $n \times n$.
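As an illustrative sketch (names and data layout are ours), the two 2D pair-wise terms can be computed with a breadth-first search per atom; the learnable pieces — the SPD lookup table and the vectors $w_n$ — are shown as plain arrays.

```python
from collections import deque
import numpy as np

def all_pairs_spd(adj, n):
    """BFS from every atom: shortest-path distances plus one predecessor per
    node, from which the (typically unique) shortest path is reconstructed."""
    spd = np.full((n, n), -1, dtype=int)
    pred = [[None] * n for _ in range(n)]
    for s in range(n):
        spd[s, s], q = 0, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if spd[s, v] == -1:
                    spd[s, v], pred[s][v] = spd[s, u] + 1, u
                    q.append(v)
    return spd, pred

def edge_encoding(i, j, pred, edge_feat, w):
    """Phi^Edge_ij: mean of e_n . w_n over the edges on the shortest path
    from i to j (i != j); edge_feat maps an atom pair to its bond feature."""
    path, v = [], j
    while pred[i][v] is not None:
        path.append((pred[i][v], v))
        v = pred[i][v]
    path.reverse()
    return np.mean([edge_feat[(u, v)] @ w[n] for n, (u, v) in enumerate(path)])
```

The SPD term is then simply a learnable lookup, e.g. `spd_table[spd[i, j]]`.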
Encoding pair-wise relations in R. We encode the Euclidean distance to reflect the spatial relation between any pair of atoms in the 3D space. For each atom pair (i, j), we first process their Euclidean distance with the Gaussian Basis Kernel function (Scholkopf et al., 1997):

$$\psi_{(i,j)}^{k} = -\frac{1}{\sqrt{2\pi}\,|\sigma^{k}|}\exp\left(-\frac{1}{2}\left(\frac{\gamma_{(i,j)}\|r_i - r_j\| + \beta_{(i,j)} - \mu^{k}}{|\sigma^{k}|}\right)^{2}\right),\quad k = 1,\dots,K,$$

where $K$ is the number of Gaussian Basis kernels. Then the 3D Distance encoding $\Phi_{ij}^{\text{3D Distance}}$ is obtained according to $\Phi_{ij}^{\text{3D Distance}} = \mathrm{GELU}\big(\psi_{(i,j)} W_D^{1}\big) W_D^{2}$, where $\psi_{(i,j)} = [\psi_{(i,j)}^{1}; \dots; \psi_{(i,j)}^{K}]^{\top}$, and $W_D^{1} \in \mathbb{R}^{K\times K}$, $W_D^{2} \in \mathbb{R}^{K\times 1}$ are learnable parameters. $\gamma_{(i,j)}, \beta_{(i,j)}$ are learnable scalars indexed by the pair of atom types, and $\mu^{k}, \sigma^{k}$ are the learnable kernel center and learnable scaling factor of the $k$-th Gaussian Basis Kernel. Denote $\Phi^{\text{3D Distance}}$ as the matrix form of the 3D distance encoding, whose shape is $n \times n$.

Integrating $\Phi^{\text{SPD}}$, $\Phi^{\text{Edge}}$ and $\Phi^{\text{3D Distance}}$ in Transformer-M. All pair-wise encodings defined above capture the interatomic information, which is in a similar spirit to the relative positional encoding for sequential tasks (Raffel et al., 2020). Therefore, we similarly locate those pair-wise signals in the self-attention module to provide complementary information to the dot-product term $XW_Q(XW_K)^{\top}$. For simplicity, we omit the index of attention head $h$ and layer $l$, and the modified attention matrix is defined as:

$$A(X) = \mathrm{softmax}\left(\frac{XW_Q(XW_K)^{\top}}{\sqrt{d}} + \underbrace{\Phi^{\text{SPD}} + \Phi^{\text{Edge}}}_{\text{2D pair-wise channel}} + \underbrace{\Phi^{\text{3D Distance}}}_{\text{3D pair-wise channel}}\right) \tag{4}$$

Encoding atom-wise structural information in E. For atom $i$, Eqn. (4) computes the normalized weights according to the semantic (first term) and spatial relations (last three terms) between $i$ and other atoms. However, this information is still not sufficient. For example, the importance (i.e., centrality) of each atom is missing in the attention. For each atom $i$, we use its degree as the centrality information. Formally, let $\Psi_i^{\text{Degree}}$ denote the degree encoding of atom $i$, which is a $d$-dimensional learnable vector determined by the degree of the atom. Denote $\Psi^{\text{Degree}} = [\Psi_1^{\text{Degree}}, \Psi_2^{\text{Degree}}, \dots, \Psi_n^{\text{Degree}}]$ as the centrality encoding of all the atoms, which is of shape $n \times d$.

Encoding atom-wise structural information in R. Similar to the 2D atom-wise centrality encoding, for geometric data, we encode the centrality of each atom in the 3D space. For each atom $i$, we sum up the 3D Distance encodings between it and all other atoms. Let $\Psi_i^{\text{Sum of 3D Distance}}$ denote the centrality encoding of atom $i$; we have $\Psi_i^{\text{Sum of 3D Distance}} = \sum_{j\in[n]} \psi_{(i,j)} W_D^{3}$, where $W_D^{3} \in \mathbb{R}^{K\times d}$ is a learnable weight matrix. Similarly, we define $\Psi^{\text{Sum of 3D Distance}}$ as the encoding of all atoms, whose shape is $n \times d$.

Integrating $\Psi^{\text{Degree}}$ and $\Psi^{\text{Sum of 3D Distance}}$ in Transformer-M. We add the atom-wise encodings of the 2D and 3D structures to the atom features in the input layer. Formally, the input $X^{(0)}$ is modified as:

$$X^{(0)} = X + \underbrace{\Psi^{\text{Degree}}}_{\text{2D atom-wise channel}} + \underbrace{\Psi^{\text{Sum of 3D Distance}}}_{\text{3D atom-wise channel}}, \tag{5}$$

In this simple way, the structural information of molecules in both 2D and 3D formats is integrated into one Transformer model. It is easy to check that Transformer-M preserves equivariant properties for both data formats.

Training. The next step is to learn the parameters in Transformer-M to capture meaningful representations from each data format. To achieve this goal, we develop a simple and flexible joint training method to learn Transformer-M.
We first collect datasets in different formats (2D/3D) and define supervised/self-supervised tasks (e.g., energy regression) on each format, and then train the model on all the data, optimizing each format's corresponding objective. To be concrete, during training, if a data instance comes from a dataset in the 2D format, the 2D channel is activated and the 3D channel is disabled, and the model parameters are optimized to minimize the corresponding (i.e., 2D) objective. When a data instance comes from a dataset in the 3D format, only the 3D channel is activated, and the model learns to minimize the 3D objective. Both channels are activated if the model takes molecules in both 2D and 3D formats as input. Compared with multi-view learning approaches, we can train Transformer-M using unpaired 2D and 3D data, making the training process more flexible.

Transformer-M may also generalize better due to the joint training. Several previous works (Liu et al., 2021a) observed that the 2D graph structure and the 3D geometric structure contain complementary chemical knowledge. For example, the 2D graph structure only contains bonds with bond types, while the 3D geometric structure contains fine-grained information such as lengths and angles. As another example, the 3D geometric structures are usually obtained from computational simulations like Density Functional Theory (DFT) (Burke, 2012), which could have approximation errors, whereas the 2D graphs are constructed by domain experts and, to some extent, provide references for the 3D structure. By jointly training on 2D and 3D data with parameter sharing, our model can learn more chemical knowledge instead of overfitting to data noise, and perform better on both 2D and 3D tasks.

Future Directions. As an initial attempt, our Transformer-M opens up a way to develop general-purpose molecular models that handle diverse chemical tasks in different data formats. We believe it is a starting point with more possibilities to explore in the future. For example, in this work we use a simple approach and linearly combine the structural information of the 2D and 3D structures; we believe there should be other efficient ways to fuse such encodings. Our model can also be combined with previous multi-view contrastive learning approaches, and it is worth investigating how to pre-train our model using those methods.

4 EXPERIMENTS

In this section, we empirically study the performance of Transformer-M. First, we pre-train our model on the PCQM4Mv2 training set from the OGB Large-Scale Challenge (Hu et al., 2021) (Section 4.1). With the pre-trained model, we conduct experiments on molecular tasks in different data formats and evaluate the versatility and effectiveness of our Transformer-M. Due to space limitations, we study three representative tasks: PCQM4Mv2 (2D, Section 4.2), PDBBind (2D & 3D, Section 4.3), and QM9 (3D, Section 4.4). Ablation studies are presented in Section 4.5. All code is implemented based on the official codebase of Graphormer (Ying et al., 2021a) in PyTorch (Paszke et al., 2019).

4.1 LARGE-SCALE PRE-TRAINING

Our Transformer-M is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). The total number of training samples is 3.37 million. Each molecule is associated with its 2D graph structure and 3D geometric structure. The HOMO-LUMO energy gap of each molecule is provided as its label, which is obtained by DFT-based methods (Burke, 2012). We follow Ying et al. (2021a) and employ a 12-layer Transformer-M model.
The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. To train Transformer-M, we provide three modes for each data instance: (1) activate the 2D channels and disable the 3D channels (2D mode); (2) activate the 3D channels and disable the 2D channels (3D mode); (3) activate both channels (2D+3D mode). The mode of each data instance during training is randomly drawn on the fly according to a pre-defined distribution, implemented similarly to Dropout (Srivastava et al., 2014). In this work, we use two training objectives. The first is a supervised learning objective, which aims to predict the HOMO-LUMO energy gap of each molecule. Besides, we also use a self-supervised learning objective called 3D Position Denoising (Godwin et al., 2022; Zaidi et al., 2022), which is particularly effective. During training, if a data instance is in the 3D mode, we add Gaussian noise to the position of each atom and require the model to predict the noise from the noisy input. The model is optimized to minimize a linear combination of the two objectives above. Details of the settings are in Appendix B.1.

4.2 PCQM4MV2 PERFORMANCE (2D)

After the model is pre-trained, we evaluate our Transformer-M on the validation set of PCQM4Mv2. Note that the validation set of PCQM4Mv2 consists of molecules in the 2D format only. Therefore, we can use it to evaluate how well Transformer-M performs on 2D molecular data. The goal of the task is to predict the HOMO-LUMO energy gap, and the evaluation metric is the Mean Absolute Error (MAE). As our training objectives include the HOMO-LUMO gap prediction task, we did not fine-tune the model parameters on any data. During inference, only the 2D channels are activated. We choose several strong baselines covering message passing neural network (MPNN) variants and Graph Transformers. Detailed descriptions of the baselines are presented in Appendix B.2. The results are shown in Table 1. It can easily be seen that our Transformer-M surpasses all baselines by a large margin, e.g., an 8.2% relative MAE reduction compared to the previous best model (Rampášek et al., 2022), establishing a new state-of-the-art on the PCQM4Mv2 dataset. Note that our general architecture is the same as the Graphormer model (Ying et al., 2021a). The only difference between Transformer-M and the Graphormer baseline is that Graphormer is trained on 2D data only, while Transformer-M is trained using both 2D and 3D structural information. Therefore, we can conclude that Transformer-M performs well on 2D molecular data, and that the 2D-3D joint training with shared parameters indeed helps the model learn more chemical knowledge.

4.3 PDBBIND PERFORMANCE (2D & 3D)

To verify the compatibility of our Transformer-M, we further fine-tune our model on the PDBBind dataset (version 2016, Wang et al. (2004; 2005b)), one of the most widely used datasets for structure-based virtual screening (Jiménez et al., 2018; Stepniewska-Dziubinska et al., 2018; Zheng et al., 2019). The PDBBind dataset consists of protein-ligand complexes as data instances, which are obtained from bioassay experiments and associated with pKa (i.e., $-\log K_d$ or $-\log K_i$) affinity values. For each data instance, the 3D geometric structures are provided and the 2D graph structures are constructed via pre-defined rules. The task requires models to predict the binding affinity of protein-ligand complexes, which is vital for drug discovery.
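For reference, the four metrics reported on the core set can be computed as in the NumPy sketch below. The SD formula follows the convention we understand from the PDBBind literature — the residual standard deviation after a linear fit of labels on predictions — which we state here as an assumption.

```python
import numpy as np

def pdbbind_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    r = np.corrcoef(y_true, y_pred)[0, 1]              # Pearson's R
    mae = np.abs(y_true - y_pred).mean()               # Mean Absolute Error
    rmse = np.sqrt(((y_true - y_pred) ** 2).mean())    # Root-Mean Squared Error
    b, a = np.polyfit(y_pred, y_true, 1)               # fit y_true ~ a + b * y_pred
    sd = np.sqrt(((y_true - (a + b * y_pred)) ** 2).sum() / (len(y_true) - 1))
    return {"R": r, "MAE": mae, "RMSE": rmse, "SD": sd}
```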
After pre-training on the PCQM4Mv2 training set, our Transformer-M model is fine-tuned and evaluated on the core set of the PDBBind dataset. We compare our model with competitive baselines covering classical methods, CNN-based methods, and GNNs. All experiments are repeated five times with different seeds, and average performance is reported. Due to space limitations, we present the details of the baselines and experiment settings in Appendix B.3. The results are presented in Table 2. Our Transformer-M consistently outperforms all the baselines on all evaluation metrics by a large margin, e.g., a 3.3% absolute improvement on Pearson's correlation coefficient (R). It is worth noting that the data instances of the PDBBind dataset are protein-ligand complexes, while our model is pre-trained on simple molecules, demonstrating the transferability of Transformer-M.

4.4 QM9 PERFORMANCE (3D)

We use the QM9 dataset (Ramakrishnan et al., 2014) to evaluate our Transformer-M on molecular tasks in the 3D data format. QM9 is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties. The 3D geometric structure of the molecule is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. The remaining molecules are used to fine-tune our Transformer-M model. We observed that several previous works used different data splitting ratios or did not describe their evaluation details. For a fair comparison, we choose baselines that use similar splitting ratios in the original papers. The details of the baselines and experiment settings are presented in Appendix B.4. The results are presented in Table 3. It can be seen that our Transformer-M achieves competitive performance compared to those baselines, suggesting that the model is compatible with 3D molecular data. In particular, Transformer-M performs best on the HOMO, LUMO, and HOMO-LUMO gap predictions. This indicates that the knowledge learned in the pre-training task transfers better to similar tasks. Note that the model does not perform as well on some other tasks. We believe Transformer-M can be improved in several aspects, including employing a carefully designed output layer (Thölke & De Fabritiis, 2021) or pre-training with more self-supervised training signals.

4.5 ABLATION STUDY

In this subsection, we conduct a series of experiments to investigate the key designs of our Transformer-M. In this paper, we use two training objectives to train the model, and we ablate the effect of each. Besides, we use three modes to activate different channels with a pre-defined distribution, and we study the impact of this distribution on the final performance. Due to space limitations, we present more analysis of our Transformer-M model in Appendix B.5. Impact of the pre-training tasks. As stated in Section 4.1, our Transformer-M model is pre-trained on the PCQM4Mv2 training set via two tasks: (1) predicting the HOMO-LUMO gap of molecules in both 2D and 3D formats; (2) 3D position denoising. We conduct ablation studies on both the PCQM4Mv2 and QM9 datasets to check whether both objectives benefit downstream tasks. In detail, we conduct two additional experiments.
The first experiment trains Transformer-M models from scratch on PCQM4Mv2 using its 2D graph data and on QM9 using its 3D geometric data, to check the benefit of the overall pre-training method. The second experiment pre-trains Transformer-M without the 3D denoising task, to study the effectiveness of the proposed 2D-3D joint pre-training approach. The results are shown in Table 4. It can be seen that the joint pre-training significantly boosts the performance on both the PCQM4Mv2 and QM9 datasets. Besides, the 3D Position Denoising task is also beneficial, especially on the QM9 dataset in the 3D format. Impact of mode distribution. Denote (p2D, p3D, p2D&3D) as the probabilities of the modes mentioned in Section 4.1. We conduct experiments to investigate the influence of different distributions on model performance. We select three distributions with (p2D, p3D, p2D&3D) being 1:1:1, 1:2:2, and 1:2:1. The results are presented in Table 4. We obtain consistent conclusions on both the PCQM4Mv2 and QM9 datasets: 1) for all three configurations, our Transformer-M model achieves strong performance, which shows that our joint training is robust to hyperparameter selection; 2) using a slightly larger probability on the 3D mode achieves the best results.

5 CONCLUSION

In this work, we take the first step toward general-purpose molecular models. The proposed Transformer-M offers a promising way to handle molecular tasks in 2D and 3D formats. We use two separate channels to encode 2D and 3D structural information and integrate them into the backbone Transformer. When the input data is in a particular format, the corresponding channel is activated, and the other is disabled. Through simple training tasks on 2D and 3D molecular data, our model automatically learns to leverage chemical knowledge from different data formats and correctly capture the representations. Extensive experiments are conducted, and all empirical results show that our Transformer-M can achieve strong performance on 2D and 3D tasks simultaneously. The potential of our Transformer-M can be further explored in a broad range of applications in chemistry.

ACKNOWLEDGEMENTS

We thank Shanda Li for the helpful discussions. We also thank all the anonymous reviewers for the very careful and detailed reviews as well as the valuable suggestions. Their help has further enhanced our work. This work is supported by the National Key R&D Program of China (2022ZD0114900) and the National Science Foundation of China (NSFC62276005). This work is partially supported by the Shanghai Committee of Science and Technology (Grant No. 21DZ1100100).

B EXPERIMENTAL DETAILS

B.1 LARGE-SCALE PRE-TRAINING

Dataset. Our Transformer-M model is pre-trained on the training set of PCQM4Mv2 from the OGB Large-Scale Challenge (Hu et al., 2021). PCQM4Mv2 is a quantum chemistry dataset originally curated under the PubChemQC project (Maho, 2015; Nakata & Shimazaki, 2017). The total number of training samples is 3.37 million. Each molecule in the training set is associated with both 2D graph structures and 3D geometric structures. The HOMO-LUMO energy gap of each molecule is provided, which is obtained by DFT-based geometry optimization (Burke, 2012). According to OGB-LSC (Hu et al., 2021), the HOMO-LUMO energy gap is one of the most practically relevant quantum chemical properties of molecules, since it is related to reactivity, photoexcitation, and charge transport.
Being the largest publicly available dataset for molecular property prediction, PCQM4Mv2 is considered a challenging benchmark for molecular models.

Settings. Our Transformer-M model consists of 12 layers. The dimension of the hidden layers and feed-forward layers is set to 768. The number of attention heads is set to 32. The number of Gaussian Basis kernels is set to 128. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The batch size is set to 1024. The model is trained for 1.5 million steps with a 90k-step warm-up stage. After the warm-up stage, the learning rate decays linearly to zero. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. We also employ stochastic depth (Huang et al., 2016) and set the probability to 0.2. The probability (p2D, p3D, p2D&3D) of each data instance entering the three modes mentioned in Section 4.1 is set to (0.2, 0.5, 0.3). The scaling factor σ of the added noise in the 3D Position Denoising task is set to 0.2. The ratio of the supervised loss to the denoising loss is set to 1:1. All models are trained on 4 NVIDIA Tesla A100 GPUs.

B.2 PCQM4MV2

Baselines. We compare our Transformer-M with several competitive baselines. These models fall into two categories: message passing neural network (MPNN) variants and Graph Transformers. For MPNN variants, we include two widely used models, GCN (Kipf & Welling, 2016) and GIN (Xu et al., 2019), and their variants with virtual node (VN) (Gilmer et al., 2017; Hu et al., 2020a). Additionally, we compare GINE-VN (Brossard et al., 2020) and DeeperGCN-VN (Li et al., 2020). GINE is the multi-hop version of GIN. DeeperGCN is a 12-layer GNN model with carefully designed aggregators. The result of MLP-Fingerprint (Hu et al., 2021) is also reported. We also compare several Graph Transformer models. Graphormer (Ying et al., 2021a) developed graph structural encodings and integrated them into a standard Transformer model; it achieved impressive performance across several world competitions (Ying et al., 2021b; Shi et al., 2022). CoAtGIN (Cui, 2022) is a hybrid architecture combining both convolution and attention. TokenGT (Kim et al., 2022) adopted the standard Transformer architecture without graph-specific modifications. GraphGPS (Rampášek et al., 2022) proposed a framework to integrate positional and structural encodings, a local message-passing mechanism, and a global attention mechanism into the Transformer model. GRPE (Park et al., 2022) proposed a graph-specific relative positional encoding and considered both node-spatial and node-edge relations. EGT (Hussain et al., 2022) exclusively used global self-attention as an aggregation mechanism rather than static localized convolutional aggregation, and utilized edge channels to capture structural information.

B.3 PDBBIND

Dataset. PDBBind is a well-known dataset that provides a comprehensive collection of experimentally measured binding affinity data for biomolecular complexes deposited in the Protein Data Bank (PDB) (Wang et al., 2005a). The task requires models to predict the binding affinity value pKa (or − logKd, − logKi) of protein-ligand complexes, which is extremely important for drug discovery. In our experiment, we use the PDBBind v2016 dataset, which is widely used in recent works (Li et al., 2021).
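Since the regression target is the negative logarithm of the measured dissociation or inhibition constant, the label construction is a one-line transformation; the snippet below is our own worked example (the 1 nM input is an illustrative value, not taken from the dataset).

```python
import math

def affinity_label(kd_molar: float) -> float:
    """pKa = -log10(Kd), with Kd expressed in mol/L."""
    return -math.log10(kd_molar)

print(affinity_label(1e-9))  # a 1 nM binder -> regression label 9.0
```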
The PDBBind dataset includes three overlapping subsets called the general, refined, and core sets. The general set contains all 13,283 protein-ligand complexes, while the 4,057 complexes in the refined set are selected from the general set for their higher quality. Moreover, the core set serves as the highest-quality benchmark for testing. To avoid data leakage, we remove the data instances of the core set from the refined set. After training, we evaluate our model on the core set. The evaluation metrics include Pearson's correlation coefficient (R), Mean Absolute Error (MAE), Root-Mean Squared Error (RMSE), and Standard Deviation (SD).

Baselines. We compare our Transformer-M with several competitive baselines. These models mainly fall into three categories: classic machine learning methods, Convolutional Neural Network (CNN) based methods, and Graph Neural Network (GNN) based methods. First, we report the results of LR, SVR, and RF-Score (Ballester et al., 2010), which employed traditional machine learning approaches to predict the binding affinities. Second, inspired by the success of CNNs in computer vision, Stepniewska-Dziubinska et al. (2018) proposed the Pafnucy model, which represents the complexes via a 3D grid and utilizes 3D convolution to produce feature maps. Zheng et al. (2019) introduced OnionNet, which also used CNNs to extract features based on rotation-free element-pair-specific contacts between atoms of proteins and ligands. There are also several works that leverage GNNs to improve performance on the PDBBind dataset. GraphDTA (Nguyen et al., 2020) represented protein-ligand complexes as 2D graphs and used GNN models to predict the affinity score. GNN-DTI (Lim et al., 2019) incorporated the 3D structures of protein-ligand complexes into GNNs. DMPNN (Yang et al., 2019) operated over a hybrid representation that combines convolutions and descriptors. SGCN (Danel et al., 2020) is a GCN-inspired architecture that leverages node positions. MAT (Maziarka et al., 2020) augmented the attention mechanism in the standard Transformer model with inter-atomic distances and molecular graph structures. DimeNet (Klicpera et al., 2020) developed atom-pair embeddings and utilized directional information between atoms. CMPNN (Song et al., 2020) introduced a communicative kernel and a message booster module to strengthen the message passing between atoms. SIGN (Li et al., 2021) proposed polar-inspired graph attention layers and pairwise interactive pooling layers to utilize the biomolecular structural information.

Settings. We fine-tune the pre-trained Transformer-M on the PDBBind dataset. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999). The gradient clip norm is set to 5.0. The peak learning rate is set to 2e-4. The total number of epochs is set to 30. The ratio of the warm-up steps to the total steps is set to 0.06. The batch size is set to 16. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. Following Ying et al. (2021a), we use FLAG (Kong et al., 2020) with minor modifications for graph data augmentation. In particular, in addition to the step size α and the number of adversarial attack steps m, we also employ a projection step from Zhu et al. (2020) with maximum perturbation γ. These hyperparameters are set to the following configuration: α = 0.01, m = 4, γ = 0.01.
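The following is a hedged sketch of this FLAG-style augmentation with the α = 0.01, m = 4, γ = 0.01 values above. The norm-ball projection here uses the l2 norm in the spirit of Zhu et al. (2020), but the exact norm is our assumption, and `model`, `loss_fn`, and the batch keys are hypothetical names rather than the authors' code.

```python
import torch

def flag_loss(model, batch, loss_fn, alpha=0.01, m=4, gamma=0.01):
    """One FLAG-style pass; the caller backprops the returned loss and steps."""
    x = batch["node_feat"]
    perturb = torch.empty_like(x).uniform_(-alpha, alpha).requires_grad_()

    loss = loss_fn(model(batch, feat=x + perturb), batch["y"]) / m
    for _ in range(m - 1):
        loss.backward()  # accumulates model grads; also yields grad w.r.t. perturb
        with torch.no_grad():
            perturb += alpha * perturb.grad.sign()  # inner ascent step
            norm = perturb.norm()
            if norm > gamma:                        # projection step
                perturb.mul_(gamma / norm)
        perturb.grad = None
        loss = loss_fn(model(batch, feat=x + perturb), batch["y"]) / m
    return loss  # caller: loss.backward(); optimizer.step()
```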
All models are trained on 2 NVIDIA Tesla V100 GPUs.

B.4 QM9

Dataset. QM9 (Ramakrishnan et al., 2014) is a quantum chemistry benchmark consisting of 134k stable small organic molecules. These molecules correspond to the subset of all 133,885 species out of the GDB-17 chemical universe of 166 billion organic molecules. Each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties. The 3D geometric structure of the molecule is used as input. Following Thölke & De Fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. The remaining molecules are used to fine-tune our Transformer-M model.

Baselines. We comprehensively compare our Transformer-M with both pre-training methods and 3D molecular models. First, we follow Jiao et al. (2022) to compare several pre-training methods. Hu et al. (2019) proposed a strategy to pre-train GNNs via both node-level and graph-level tasks. Sun et al. (2019) maximized the mutual information between graph-level representations and substructure representations as the pre-training task. You et al. (2020) instead used contrastive learning to pre-train GNNs. There are also several works that utilize 3D geometric structures during pre-training. Jing et al. (2021) maximized the mutual information between 2D and 3D representations. Fang et al. (2021) proposed a strategy to learn spatial information by utilizing both local and global 3D structures. Stärk et al. (2022) used two encoders to capture 2D and 3D structural information separately while maximizing the mutual information between 2D and 3D representations. Jiao et al. (2022) adopted an equivariant energy-based model and developed a node-level pre-training loss for force prediction. We report the results of these methods from Jiao et al. (2022) for comparison. Second, we follow Thölke & De Fabritiis (2021) to compare 3D molecular models. Schütt et al. (2017) used continuous-filter convolution layers to model quantum interactions in molecules. Anderson et al. (2019) developed a GNN model equipped with activation functions that are covariant to rotations. Klicpera et al. (2020) proposed directional message passing, which uses atom-pair embeddings and utilizes directional information between atoms. Schütt et al. (2021) proposed the polarizable atom interaction neural network (PaiNN), which uses an equivariant message passing mechanism. Hutchinson et al. (2021) built upon the Transformer model with attention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. Thölke & De Fabritiis (2021) also developed a Transformer variant with layers designed using prior physical and chemical knowledge. Satorras et al. (2021) proposed the EGNN model, which does not require computationally expensive higher-order representations in intermediate layers to keep equivariance and can easily be scaled to higher-dimensional spaces. Godwin et al. (2022) proposed the 3D position denoising task and verified it on the Graph Network-based Simulator (GNS) model (Sanchez-Gonzalez et al., 2020).

Settings. We fine-tune the pre-trained Transformer-M on the QM9 dataset. Following Thölke & De Fabritiis (2021), we adopt the Mean Squared Error (MSE) loss during training and use the Mean Absolute Error (MAE) loss function during evaluation. We also adopt label standardization for stable training. We use AdamW as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999).
The gradient clip norm is set to 5.0. The peak learning rate is set to 7e-5. The batch size is set to 128. The dropout ratios for the input embeddings, attention matrices, and hidden representations are set to 0.0, 0.1, and 0.0, respectively. The weight decay is set to 0.0. The model is fine-tuned for 600k steps with a 60k-step warm-up stage. After the warm-up stage, the learning rate decays linearly to zero. All models are trained on 1 NVIDIA A100 GPU.

B.5 MORE ANALYSIS

Investigation of the generality of the design methodology of Transformer-M. In this work, we develop our Transformer-M model based on the Transformer backbone and integrate separate 2D and 3D channels (implemented by encoding methods) to encode the structural information of 2D and 3D molecular data. As stated in Section 3.2, this is a general design methodology for handling molecular data in different forms, and it works well with different structural encoding instantiations. To demonstrate its generality and effectiveness, we further conduct experiments with other structural encodings from GRPE (Park et al., 2022) and EGT (Hussain et al., 2022), which are competitive baselines on the PCQM4Mv2 benchmark as shown in Table 1. All hyperparameters are kept the same as the settings in Appendix B.1 for a fair comparison. The results are presented in Table 5. It can easily be seen that our Transformer-M model equipped with different encoding methods consistently obtains significantly better performance than the corresponding vanilla 2D models, which verifies the generality and effectiveness of the design methodology of Transformer-M.

Investigation of the impact of 3D conformers calculated by different methods. Besides the versatility to handle molecules in different formats, our Transformer-M further achieves strong performance on various challenging molecular tasks, as shown in Section 4. On the PCQM4Mv2 validation set (2D only), our Transformer-M establishes a new state-of-the-art, which is mainly attributable to the newly introduced 2D-3D joint training strategy in Section 3.2. The chemical knowledge in the 3D geometric structure can be leveraged during joint training and boost the performance on 2D tasks. Since the benefits of the 3D geometric structure are observed, it is natural to ask how the quality of the calculated 3D conformers influences the performance of Transformer-M. To investigate this question, we additionally use RDKit (Landrum, 2016) to generate one 3D conformer for each molecule in the training set of PCQM4Mv2. Compared to the officially provided DFT-optimized geometric structures, RDKit-generated structures are less costly to obtain but also less accurate. Thus, each molecule has its 2D molecular graph, a 3D conformer calculated by DFT, and a 3D conformer calculated by RDKit. Based on such a dataset, we conduct three additional experiments. First, we train our Transformer-M model using only 2D molecular graphs; in this experiment, only the 2D channels are activated. Second, we train our Transformer-M model using both 2D molecular graphs (encoded by the 2D channels) and 3D conformers generated by RDKit (encoded by the 3D channels). Third, we train our Transformer-M model using 2D molecular graphs, 3D conformers generated by RDKit, and 3D conformers calculated by DFT. In this experiment, we use two sets of 3D channels to separately encode the structural information of the 3D RDKit conformers and the 3D DFT conformers.
During training, when a data instance enters the 3D or 2D+3D modes, both sets of 3D channels are activated and integrated. For all three experiments, the hyperparameters of Transformer-M are kept the same as the settings in Appendix B.1. The results are presented in Table 6. We can see that the quality of the 3D conformers matters for the final performance: leveraging 3D conformers generated by RDKit (second line) brings minor gains compared to using 2D molecular graphs only (first line). On the contrary, when leveraging 3D conformers calculated by DFT, the improvement is significant (the last two lines). From a practical view, it will be interesting to investigate the influence of 3D conformers calculated by methods that are more accurate than RDKit while more efficient than DFT, e.g., semiempirical methods (Dral et al., 2016), which we leave as future work.

Investigation of the effectiveness of Transformer-M pre-training. We provide additional results on the effectiveness of our model on both the PDBBind (2D+3D) and QM9 (3D) downstream datasets. First, to verify the effectiveness of Transformer-M pre-training on the PDBBind dataset, we further pre-train the Graphormer model (Ying et al., 2021a) on the same PCQM4Mv2 dataset as a competitive pre-trained baseline. Since the Graphormer model can only handle graph data, we only use the 2D molecular graph of each data instance. All hyperparameters are kept the same as the settings in Appendix B.1. The results are presented in Table 7. We can draw the following conclusions: (1) pre-training is helpful (e.g., 0.797 (R of the SIGN model, the best baseline) -> 0.804 (R of the pre-trained Graphormer model)); (2) our pre-training method brings a more significant improvement (e.g., 0.804 -> 0.830), which demonstrates the effectiveness of our framework. Second, we demonstrate that our pre-training strategy helps learn a better Transformer model on the downstream QM9 dataset. We conduct two additional experiments on the QM9 dataset. In the first experiment, we train the 3D geometric Transformer model (Transformer-M with the 3D channel only) from scratch. In the second experiment, we use the 3D Position Denoising task as the objective to pre-train the 3D geometric Transformer on PCQM4Mv2 and fine-tune the pre-trained checkpoint on QM9. Due to time limits and constrained resources, we selected six QM9 targets for comparison. All the hyperparameters of pre-training and fine-tuning are kept the same. The results are presented in Table 8. It can easily be seen that our pre-training method consistently and significantly improves the downstream performance on all six tasks, which demonstrates the effectiveness of our general framework for 3D molecular data. We are aware that we achieve competitive rather than SOTA performance compared with baselines (best performance on 5 out of 12 targets, see Table 3). For U0, U, H, and G, there still exists a performance gap between our Transformer-M and some of the latest baselines, which use considerably more complicated neural architectures. We believe that exploring more model alternatives and leveraging the wisdom of those networks in our Transformer-M will further improve the performance, which we will keep working on.
1. What is the focus and contribution of the paper on molecular tasks?
2. What are the strengths of the proposed approach, particularly in its ability to unify 2D and 3D information?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes Transformer-M, a Transformer-based model that can take both 2D and 3D molecular formats as input. It adopts various positional encoding techniques to unify 2D and 3D information into the Transformer as attention bias terms. This model is claimed to be a general-purpose model for molecular tasks. Experiments on several 2D and 3D molecular tasks have been conducted to evaluate the developed method.
Strengths And Weaknesses
#####Strengths
The motivation of developing a general-purpose model for molecular tasks is great and might be inspiring to the community. The empirical result of the proposed method is competitive, especially on PCQM4Mv2, where the task is the same as the pretraining objective. This manuscript is well organized and easy to follow.
#####Weaknesses
The main concern for this paper is that the novelty is not enough to me. To be specific, both the 2D and 3D branches were proposed by previous work, and the positional encodings have been verified to be effective by existing work. This work combines these two branches into a single model, which is not novel in terms of technical contribution. Also, the empirical result on PCQM4Mv2 is good. However, the effectiveness of pretraining Transformer-M has not been demonstrated well, empirically. Specifically, the methods used for comparison in Table 2 (PDBBind) and Table 3 (QM9) do not use the same extra data for pretraining. In this sense, the comparison is not so rigorous and fair to me. Even so, the performance on QM9 is not good, except for the energy-related tasks that are consistent with the pretraining tasks. This cannot support the claim that the proposed Transformer-M is a general-purpose model.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. I believe the results can be reproduced well. My concern is the novelty, as detailed before.
ICLR
Title
ON THE USE OF CONVOLUTIONAL AUTO-ENCODER FOR INCREMENTAL CLASSIFIER LEARNING IN CONTEXT AWARE ADVERTISEMENT

Abstract
Context Aware Advertisement (CAA) is a type of advertisement appearing on websites or mobile apps. The advertisement is targeted at specific groups of users and/or the content displayed on the websites or apps. This paper focuses on classifying images displayed on websites with an incremental-learning classifier based on a Deep Convolutional Neural Network (DCNN), especially for the Context Aware Advertisement (CAA) framework. Incrementally learning new knowledge with a DCNN leads to catastrophic forgetting, as previously stored information is replaced with new information. To prevent catastrophic forgetting, part of the previously learned knowledge should be stored for the lifetime of the incremental classifier. Storing information for a lifetime involves privacy and legal concerns, especially in a context-aware advertising framework. Here, we propose an incremental classifier learning method that addresses privacy and legal concerns while taking care of the catastrophic forgetting problem. We conduct experiments on different datasets including CIFAR-100. Experimental results show that the proposed system achieves relatively high performance compared to state-of-the-art incremental learning methods.

1 INTRODUCTION

Recently, deep neural networks (DNNs) have shown remarkable performance in several classification tasks in many domains such as speech, image, and natural language processing. Among deep networks, Deep Convolutional Neural Networks (DCNNs) are the most successful models for supervised image classification and set the state-of-the-art in many benchmark competitions (Lecun et al., 1998; Krizhevsky, 2009) where the numbers of image categories have already been defined. It is common that machine learning systems with neural networks such as DCNNs are trained in batch mode, in which training samples from all the classes are available (Wu et al.). Hence, the DCNN-based learning framework fits well with the requirements of benchmark evaluations. However, in real-world applications, more and more image categories become available as time goes on, and the network will need to learn new tasks. The drawback of DCNNs is the inability to learn new information without redoing the whole learning process using all old and new information. If previously learned information is not available at the time of adding new information, a DCNN tends to forget old memories. This is referred to as the catastrophic forgetting problem. The human brain has ways of learning new information without forgetting old memories. Here, our focus is on an incremental learning process similar to the process in human biological brains. Human memory is composed of three systems: 1) encoding, 2) long-term memory, and 3) recall networks. The encoding system converts new information into a particular form for storage by paying attention (Dorta). Long-term memory has two main parts, episodic and semantic memory; it acts as a long-term warehouse and stores memories related to important personal events, feelings, silly facts, etc. Finally, the third network is the recall system, which is able to find information stored in our brain and "pull it out" efficiently. Humans learn something new every day and every moment, which makes the human brain perform an incremental learning process most of the time. To store information in the brain for a long time, humans need to devote sufficient focus, time, or attention to it at the time of learning.
Furthermore, in order not to forget previously learned knowledge, humans must have sufficient practice and usage, which involves revisiting the stored memory (Kirby, 2013). If we compare a DCNN with the human brain, the DCNN has only batch-learning capability. It has no modules for encoding information, storing encoded information, and recalling it for the purpose of practising or revisiting it later, which are important for an incremental learning process. In this paper, we integrate these missing modules into a DCNN for incremental learning. One concern of an incremental learning system is the need to store training samples of categories trained previously. Previous methods such as the state-of-the-art iCaRL (Rebuffi et al., 2017b) and Roy et al. (2018) keep the old images for revisiting purposes. However, keeping original images for a long time in our CAA framework is not practical due to privacy and legal concerns. For example, it is undesirable to store negative images such as violence or accident scenes. In this paper, we address two issues: 1) fulfilling privacy and legal requirements, and 2) dealing with the catastrophic forgetting problem by simulating the human brain system. The experimental results show that the proposed solution is effective in addressing privacy and legal concerns as well as dealing with the catastrophic forgetting problem to maintain system performance, especially for the CAA framework.

2 RELATED WORK

Recently, there has been increased interest in incremental learning or lifelong learning, and several approaches have been proposed to solve the problem. The approaches can be categorized into three main groups. The approaches in the first group store small subsets of training data from the previously learned tasks. The stored exemplars are used in training the network when adding new tasks to alleviate the catastrophic forgetting problem. An example of such a method is iCaRL (Rebuffi et al., 2017b), which keeps a few exemplars to rehearse the previous tasks. Another recent method in this group is based on the Gradient Episodic Memory (GEM) model (Lopez-Paz & Ranzato, 2017). However, the usage of stored exemplars in the GEM-based approach differs from iCaRL (Rebuffi et al., 2017b), which keeps the predictions of exemplars from past tasks invariant by means of distillation; the GEM-based approach instead defines inequality constraints on the loss to allow positive backward transfer for increasing performance on some preceding tasks. In the second group, the approaches avoid storing any original training data from previously learned tasks. Instead, these approaches retain statistics of the training samples, such as the means and standard deviations (Kemker & Kanan, 2018) of individual class distributions, for tackling the catastrophic forgetting problem. Finally, the third group of algorithms totally avoids keeping any samples of previously learned tasks. Some of the approaches in this group tackle the catastrophic forgetting problem by dynamically expanding the network for each new task (Rusu et al., 2016; Rebuffi et al., 2017a; Yoon et al., 2018; Terekhov et al., 2015). Although these approaches are simple, they have scalability issues, as the size of the network grows with the increasing number of tasks. To avoid the scalability issue, some recent studies explore regularization methods over the network parameters (Zenke et al., 2017; Kirkpatrick et al.), and the study in Wu et al. investigates generating synthesized images. An example of a regularization method is the Elastic Weight Consolidation (EWC) based approach (Zenke et al., 2017; Kirkpatrick et al.; Lee et al., 2017; Liu et al.,
2018; Chaudhry et al., 2018). EWC-based approaches use a regularization term so that network parameters remain close to the parameters of the network trained for the previous classes. Similarly, Learning Without Forgetting (LWF) (Li & Hoiem, 2016) performs regularization on predictions instead of network weights. Our proposed approach falls under the second group, and the prior work in this category is FearNet (Kemker & Kanan, 2018), a brain-inspired algorithm for incremental class learning. The FearNet system involves three brain-inspired sub-modules: 1) short-term memory storage for recently learned knowledge, inspired by the hippocampal complex (HC) of the human brain; 2) long-term memory storage, inspired by the medial prefrontal cortex (mPFC); and 3) a sub-module to choose which memory system, HC or mPFC, to use for a particular example. The catastrophic forgetting problem is addressed by using pseudorehearsal, which allows the network to revisit previous memories during incremental training. FearNet uses the HC model as short-term storage, and the model is erased after the information is transferred to the long-term storage (mPFC) network. The mPFC stores class statistics for pseudorehearsal in the form of a mean and covariance matrix for each class to generate new exemplars (pseudo-examples). In the human brain, encoding is a crucial step in creating a new memory (Mastin, 2018). The encoding process converts newly perceived knowledge into a construct for storage so that it can be recalled later. During encoding, humans pay more attention, especially to memorable events. This causes neurons to fire more frequently, making the event be encoded as a memory. For example, emotional events tend to increase attention, leading to unforgettable memories. Hence, the more intense the attention, the more effective the encoding, and this results in storing the acquired knowledge for a long time. In this paper, we propose a brain-inspired incremental learning system and investigate the effect of the quality of encoded information on incremental learning. We integrate a Convolutional Auto-Encoder (CAE) into the incremental learning process for encoding knowledge. We show that our brain-inspired model is efficient and that the CAE is useful for encoding knowledge for later revisiting. State-of-the-art systems such as iCaRL (Rebuffi et al., 2017b) require storing original training examples for all the previously learned classes, making it challenging to meet privacy and legal requirements, especially for the CAA framework, which has limitations on storing original images (or exemplars) owned by others. To address privacy issues, the study in Wu et al. uses images generated by Generative Adversarial Networks (GANs) to replace the exemplar set of original training samples. However, GANs have limitations in image quality and suffer from mode dropping. The GAN-based method works relatively well on simple images; however, it is not easy to obtain realistic scene images, such as a plane crash (for example), which we are interested in. In our approach, we take care of privacy and legal concerns by integrating a CAE into our system.

3 DATASETS

Experiments are conducted on the following datasets. CIFAR-100 contains 100 image classes. Each class has 500 training images and 100 testing images. Following the class-incremental benchmark protocol of iCaRL (Rebuffi et al., 2017b) on this dataset, the 100 classes are arranged in a fixed random order and come in as P parts, each with C = 100/P classes. A multi-class classifier is built with the first part that contains C classes.
Then this classifier is adapted to recognize all 100 classes.

IMDB-CAA. We collect an IMage Database for Context Aware Advertisement (IMDB-CAA). Most of these images are negative scenes with violence, accidents, gambling, etc. The database has about 12,000 color or gray-scale images; all images are manually labeled, and irrelevant images are removed. The categories of the images in IMDB-CAA are listed in Table 1. Each category has 500 training images and 100 testing images.

4 PROPOSED BRAIN INSPIRED INCREMENTAL LEARNING SYSTEM

In this paper, we investigate a strategy to simulate the encoding and storing of information as in the human brain learning process. By encoding the original images using a Convolutional Auto-Encoder (CAE), the negative images are not easily viewable during storage. By using the incremental learning mechanism, only encoded information is kept for future learning once new data is available. In the following section, we discuss the Convolutional Auto-Encoder (CAE) and present its use to address privacy issues as well as to simulate the human brain encoding mechanism.

4.1 CONVOLUTIONAL AUTO-ENCODER (CAE)

CAEs are very popular in deep learning research for unsupervised learning methods. CAEs can be used for extracting useful features from unlabelled data as well as for detecting and removing input redundancies to preserve only the essential aspects of the information for robust and discriminative representations (Masci et al., 2011; Turchenko & Luczak, 2015). The unsupervised training process of CAEs tends to avoid local minima and improves the network's performance and stability (Erhan et al., 2010). As auto-encoders, CAEs are based on the encoder-decoder paradigm, in which the input data or image is mapped to a lower-dimensional space (referred to as the encoder part) and then the encoded information is expanded to regenerate the input data or image (referred to as the decoder part). As discussed above, CAEs have the encoder and decoder parts that we need for simulating the human brain: the encoder part is useful for encoding and storing information, and the decoder part is for decoding the stored information for practising or revisiting it in the future. For a DCNN to achieve capabilities like those of the human brain, we propose to integrate the auto-encoder, i.e., the CAE, into the DCNN. By doing so, we expect the DCNN to achieve the capability of encoding information, storing it, and practising or revisiting the stored information later in order to overcome the catastrophic forgetting problem of DCNNs. To overcome the catastrophic forgetting problem and address privacy requirements, we propose to use CAEs to encode the original images and keep encoded pseudo-examples instead of keeping the original images. There are several advantages of using CAEs over GANs. These include: 1) the possibility of recalling or regenerating high-quality images from stored encoded information for future revisiting; 2) the freedom to control the size of the pseudo-exemplars by designing the auto-encoder as required. If accuracy is more important, we can design the network for larger pseudo-exemplars; however, if storage is very limited, we can build the network for smaller pseudo-exemplars at a certain expense of system performance. One may argue that instead of using CAEs, we could use some encryption method. The focus of encryption and cryptography is on security, encryption/decryption time, etc. (Mahajan & Sachdeva, 2013). It is common that the security of the algorithm depends on the length of the key.
A longer key length generally provides better security (Oad et al., 2014), but it also means a larger file size; hence, most encrypted files are larger than before encryption. Our focus, however, is on privacy, storage, performance, and simplicity for easy scalability, and storage and performance should be adjustable according to the requirements of the application. The CAE network involves 6 convolutional layers; the first 3 layers form the encoder part and the last 3 layers the decoder part. The numbers of filters in the convolutional layers are set to {16, 8, 8, 8, 8, 16}, respectively. The decoder part is a mirror of the encoder part. We use binary cross-entropy as the loss function and an adaptive learning rate method for optimization. We use a subset of the ImageNet database (Deng et al., 2009) consisting of 150,000 images to train the CAE network. We set the heights and widths of the images to 224 to use as input for training the network. We use the encoder part to encode images and obtain pseudo-examples. Then, we employ the decoder part to regenerate images from the stored pseudo-examples. Figure 1 compares CAE-regenerated images from different pseudo-exemplar sizes with original images and images generated by a GAN. In Figure 1, we gradually reduce the size of the pseudo-examples. The reconstructed image becomes blurry when we reduce the size. At the 16.6% reduced size, we notice that the edges of the object become sharper with increased noise. This phenomenon can be explained by findings on image enhancement using deep auto-encoders: in the image enhancement task, there is a trade-off between denoising capability and the perceived sharpness of the enhanced image (Lore et al., 2016). A model with higher denoising capability generates smoother edges, and the generated images become less sharp, eventually losing structural information. It can be seen in the 3rd row from the bottom of Figure 1, for the 16.6% pseudo-exemplar size, that object edges appear sharper at the cost of more noise compared with the reconstructed images of other pseudo-exemplar sizes. In fact, characteristics such as sharper edges and textures are desirable properties, especially for image classification with a Convolutional Neural Network (CNN).

4.2 CAE INTEGRATED PROPOSED INCREMENTAL LEARNING (IL) SYSTEM

The proposed IL system is developed based on the ResNet152 (He et al., 2015) pre-trained model. We integrate the CAE into ResNet152 to simulate the processes of encoding and storing information as well as revisiting stored information as in the human brain. The CAE-integrated proposed IL system is presented in Figure 2. We also integrate the CAE into the previous incremental learning method iCaRL (Rebuffi et al., 2017b); our focus here is for iCaRL to achieve capabilities as in the human brain and to fulfill privacy requirements.

5 EXPERIMENTS

We investigate the effect of the quality of encoded information in our brain-inspired incremental learning system by conducting a systematic series of experiments on the CIFAR-100 dataset. Firstly, we investigate the effect of integrating the CAE into the state-of-the-art iCaRL system in terms of storage size. Secondly, we observe the performance of our proposed IL system with respect to the quality of encoded information. Thirdly, we compare the performance of our proposed system with the state-of-the-art iCaRL as well as other recent systems. Finally, we present experiments on our IMDB-CAA dataset. As in Rebuffi et al. (2017b), the evaluation measure is the standard multi-class accuracy on the test set.
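Before the experiments, the following is one plausible PyTorch rendering of the CAE described in Section 4.1: six convolutional layers with filter counts {16, 8, 8, 8, 8, 16}, the first three forming the encoder and the last three the decoder. The placement of pooling/upsampling, the ReLU activations, and the final reconstruction convolution are our own assumptions, chosen so that a 224x224x3 input yields the 28x28x8 pseudo-exemplar referred to in Section 5.

```python
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 112x112x16
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 56x56x8
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28x8
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(), nn.Upsample(scale_factor=2),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(), nn.Upsample(scale_factor=2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.Upsample(scale_factor=2),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # assumed reconstruction layer
        )

    def forward(self, x):
        z = self.encoder(x)           # the 28x28x8 pseudo-exemplar to be stored
        return self.decoder(z), z
```

Training would minimize the stated binary cross-entropy between the reconstruction and the input scaled to [0, 1], e.g., `nn.BCELoss()(recon, x)`.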
The original image size of CIFAR-100 is only 32x32x3 pixels, while IMDB-CAA images are at least 300x300x3 pixels; the image sizes of the two datasets are thus largely different. Unless otherwise stated, we employ the CAE with the architecture presented in Section 4.1, with a pseudo-exemplar size of 28x28x8, in our experiments.

5.1 EXPERIMENTS ON CIFAR-100 DATASET

We conduct experiments using the public CIFAR-100 dataset, which is used in the state-of-the-art iCaRL (Rebuffi et al., 2017b) and GAN-based (Wu et al.) incremental learning systems. The following experiments examine the effect of different pseudo-exemplar set sizes on the CAE-integrated iCaRL system.

5.1.1 EFFECT OF PSEUDO-EXEMPLAR SET SIZE ON CAE INTEGRATED ICARL SYSTEM

The original iCaRL system keeps real images (referred to as exemplars) for future revisiting. Here, we encode the images before we keep them. We integrate the CAE into the iCaRL system (with P=10, C=10) to conduct experiments. The quality of the encoding process affects system performance. One way to improve the quality of the encoded information is to store as many examples as possible to cover the distribution of a particular image class. We gradually increase the number of pseudo-examples in our experiments. We use pseudo-examples only for revisiting past tasks; for training new classes and for testing, we use original samples. Table 2 shows the classification accuracies for different pseudo-exemplar set sizes. In Table 2, a pseudo-exemplar set size of 2000 means that the system keeps at most 2000 samples for all past tasks at all times, as in the iCaRL system (Rebuffi et al., 2017b). The results show that system performance improves with increasing pseudo-exemplar set size. We also use the full pseudo-exemplar set to observe the highest performance that the CAE-integrated iCaRL can achieve. As can be seen in the last row of Table 2, when we use the full exemplar set, accuracy increases to 56.7%. In the following, we observe the effect of pseudo-exemplar sizes on the CAE-integrated proposed IL system.

5.1.2 EFFECT OF DIFFERENT PSEUDO-EXEMPLAR SIZES ON CAE INTEGRATED PROPOSED IL SYSTEM

The size of a pseudo-example has a direct impact on storage; a smaller size is preferable, especially for devices with limited storage. We conduct experiments to observe the effect of different pseudo-example sizes on system performance. We modify the layers of the CAE presented in Section 4.1 to obtain different pseudo-exemplar sizes. The first 3 columns of Table 3 show the experimental results. In the first column, 'Same' means the size of a pseudo-example is the same as the original real image size, and 67% means the size of a pseudo-example is 67% of the size of the corresponding real image in the CIFAR-100 dataset. We use all 500 pseudo-exemplars to train each class. The system is tested on real images as well as on pseudo-samples. Training is done using only pseudo-examples for all old and new tasks.

Table 3 (fragment): accuracy (%) of the proposed system for different pseudo-exemplar sizes, tested on pseudo-samples and on real images, alongside iCaRL with the corresponding exemplar storage.

Size     Test on pseudo-samples   Test on real images   iCaRL storage   iCaRL accuracy
67%      47.47                    49.65                 67%             57.37
50%      44.82                    49.56                 50%             57.19
33.3%    42.6                     47.1                  33.3%           56.18
16.6%    51.5                     50.66                 16.6%           52.64

We achieve relatively high accuracies for both test conditions (pseudo-samples and real images) when the pseudo-exemplar size is the same as the real image size. However, when we reduce the pseudo-exemplar size to 67%, performance starts to degrade. The quality of the images reconstructed from pseudo-examples is not as good as that of real images, as can be seen in Figure 1. With decreasing pseudo-exemplar size, the reconstructed images become increasingly blurry and edges appear less and less sharp.
However, when we further reduce the pseudo-exemplar size to 16.6%, system performance improves, as we can see in the last row of Table 3 for both pseudo-sample and real test images. As we have discussed in Section 4.1, pseudo-examples with a size of 16.6% give reconstructed images with sharper edges at the expense of increased noise. The shapes and colors of the background start to fade (or blend), and the object stands out from the background. This makes it easy for the feature maps of the CNN classifier to detect the edges of the object, and we are able to achieve higher classification performance with a much smaller storage requirement. In the following section, we compare our proposed system with recent studies.

5.1.3 COMPARING PROPOSED IL SYSTEM WITH STATE-OF-THE-ART SYSTEMS

We compare our system with the state-of-the-art iCaRL system (Rebuffi et al., 2017b), the GAN-based system (Wu et al.), and the FearNet system (Kemker & Kanan, 2018). First, we compare our proposed system with iCaRL under the same storage requirements. We conduct experiments with iCaRL under different exemplar storage settings, and the results are reported in the fifth (last) column of Table 3. The 4th column of the table shows the storage requirement of the iCaRL system; for example, 67% means that 67% of the training set is stored as the exemplar set. As we can see, our proposed system performs better than iCaRL when the pseudo-exemplar size is the same as the original image size. For the lower storage option, our proposed system achieves performance very similar to iCaRL at the storage requirement of 16.6%. Please note that iCaRL stores real exemplars while our system stores pseudo-examples; hence, our approach fulfills the privacy and legal requirements. To compare our system with the GAN-based system (Wu et al.) and the FearNet system (Kemker & Kanan, 2018), we repeat the results of Table 3 together with the performance of the state-of-the-art systems in Table 4. As we can see in the table, our system achieves 68.44% accuracy, which is the highest among all systems. This setup is especially useful for our application in the CAA framework, in which storage is of less concern than privacy. The proposed system also has an option with a lower storage requirement and reasonably high accuracy, as can be seen in the last row of the table for the 16.6% pseudo-exemplar size. As mentioned previously, humans pay intense attention in order to encode knowledge better for unforgettable memories. Similarly, in our proposed system, the quality of the encoded statistics is directly related to the size of the pseudo-example. However, a smaller pseudo-exemplar size does not necessarily mean having incomplete information. Another advantage of the proposed system is that we do not need to train an individual CAE for each image class. We prepare only one CAE, train it only once, and then it can be used for the lifetime of the IL system. The GAN-based method needs to train an image generator for each image category, and the storage requirement of each GAN model could be much larger than that of the whole original training set, especially for very small images such as CIFAR-100.

5.2 EXPERIMENTS ON IMDB-CAA DATASET

Finally, we conduct experiments using the IMDB-CAA dataset on our proposed IL system. The original size of the images in this dataset is much larger than in the CIFAR-100 dataset. In the following, we conduct experiments using different pseudo-exemplar sizes; we reduce the pseudo-exemplar size down to 4% of the original image. Table 5 shows the results.
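As a quick sanity check on the storage footprint (our own arithmetic, not taken from the paper), if, for instance, the 28x28x8 latent from Section 4.1 is stored for a 224x224x3 input, with one stored value per element in both cases, the relative size is about 4.2%, in the neighborhood of the smallest (4%) setting in Table 5.

```python
latent = 28 * 28 * 8     # 6,272 values per pseudo-exemplar
image = 224 * 224 * 3    # 150,528 values per original image
print(f"relative size: {latent / image:.1%}")  # ~4.2%
```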
For this IMDB-CAA dataset, the system performance with offline batch training on real images is 86.6%. We use the ResNet152 pre-trained model, fine-tuned with the training set, in our offline batch training. As we can see in the 2nd row of Table 5, we can achieve classification performance very close to that of offline batch training. Even at the pseudo-exemplar size of 4%, system performance is reasonably high. As with the CIFAR-100 dataset, IMDB-CAA also shows the best results for the 16.6% pseudo-exemplar size, which gives 81.2% accuracy, a performance comparable to the offline batch training performance with original images. When we examine the images, we notice the same image characteristic, sharp edges, as in CIFAR-100.

5.3 CONCLUSIONS

In this paper, we present a strategy to simulate brain-inspired information encoding and storing for an incremental learning system. With this strategy, we also address the privacy and legal issues that incremental learning in our Context Aware Advertisement (CAA) framework requires. Experimental results show that the proposed strategy is simple, efficient, and easily scalable. A single system is able to provide various options, from performance better than the state-of-the-art to reasonably high performance with very little storage, for both the CIFAR-100 and IMDB-CAA datasets.
1. What is the focus of the paper, particularly regarding incremental learning?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like iCaRL & Fear Net?
3. How does the reviewer assess the significance of the work regarding privacy and legal requirements?
4. What concerns does the reviewer have about the experiments and their analysis, especially regarding the choice of datasets and performance comparison?
5. Are there any questions or suggestions for improving the presentation and clarity of the paper's content?
Review
Review
The paper extends an existing incremental learning method, mainly introducing the latent representations of an autoencoder instead of the original images. It includes a lot of hype in that it simulates the human brain - because it is based on the iCaRL & FearNet formulation - and that it fulfils the privacy and legal requirements - because it stores and uses the auto-encoder representations instead of the images. Specific comments:
- The title of the paper defines its topic to be context aware advertisement, whereas the main results and all comparisons are made on the CIFAR dataset. Only the last table (5) provides the performance on the IMDB-CAA dataset, without any detailed analysis of the experiments.
- The results in Table 3 are quite strange: the presented approach starts by outperforming the iCaRL method, but then deteriorates very fast wrt size and is much lower than the original method, with no justification for this. Some improvement is shown at size 16.6% without, again, any logical explanation provided.
- Section 4.2 does not provide any detail of the integration resulting in the presented system; Fig. 2 does not provide a clear description either.
- Language improvement is required in the experimental sections.
ICLR
Title ON THE USE OF CONVOLUTIONAL AUTO-ENCODER FOR INCREMENTAL CLASSIFIER LEARNING IN CONTEXT AWARE ADVERTISEMENT Abstract Context Aware Advertisement (CAA) is a type of advertisement appearing on websites or mobile apps. The advertisement is targeted on specific group of users and/or the content displayed on the websites or apps. This paper focuses on classifying images displayed on the websites by incremental learning classifier with Deep Convolutional Neural Network (DCNN) especially for Context Aware Advertisement (CAA) framework. Incrementally learning new knowledge with DCNN leads to catastrophic forgetting as previously stored information is replaced with new information. To prevent catastrophic forgetting, part of previously learned knowledge should be stored for the life time of incremental classifier. Storing information for life time involves privacy and legal concerns especially in context aware advertising framework. Here, we propose an incremental classifier learning method which addresses privacy and legal concerns while taking care of catastrophic forgetting problem. We conduct experiments on different datasets including CIFAR-100. Experimental results show that proposed system achieves relatively high performance compared to the state-of-the-art incremental learning methods. 1 INTRODUCTION Recently, deep neural networks (DNNs) have shown remarkable performance in several classification tasks in many domains such as speech, image, natural language processing, etc.,. Among deep networks, Deep Convolutional Neural Networks (DCNNs) are the most successful models for supervised image classification and set the state-of-the-art in many benchmarks competitions Lecun et al. (1998), Krizhevsky (2009) where the numbers of image categories have already been defined. It is common that machine learning systems with neural networks such as DCNN are trained in batch mode in which training samples from all the classes are available Wu et al.. Hence, DCNN based learning framework fits well with the requirements of benchmark evaluations. However, in real-world applications, more and more image categories will be available as time goes on and network will need to learn new tasks. The drawback of DCNN is inability to learn new information without redoing the whole learning process using all old and new information. If previously learned information is not available at the time of adding new information, DCNN tends to forget old memories. This is referred to as catastrophic forgetting problem. Human brain has ways to learning new information without forgetting old memories. Here, our focus is on incremental learning process which is similar to the process in human biological brains. Human memory is composed of three storage systems: 1)encoding, 2) long-term memory and 3) recall networks. Encoding system convert new information to a particular form for storage by paying attention Dorta. And, long-term memory has two main parts which are episodic and semantic memory and it acts as long-term warehouse and stores memories related to important personal events, feelings, silly facts, etc.,. Finally, the third network is recall system which is able to find information stored in our brain and ”pull it out” efficiently. Humans learn something new everyday and every moment which makes human brain to perform incremental learning process most of the time. To store information for long time in the brain, humans need to pay sufficient focus, time or attention on them at the time of learning. 
Furthermore, in order not to forget previously learned knowledge, humans must have sufficient practice and usage, which involves revisiting the stored memory Kirby (2013). If we compare a DCNN with the human brain, the DCNN only has batch learning capability. It has no modules for encoding information, storing the encoded information, and recalling it for the purpose of practising or revisiting it later, all of which are important for an incremental learning process. In this paper, we integrate these missing modules into a DCNN for incremental learning.

One concern of an incremental learning system is the need to store training samples of previously trained categories. Previous methods such as the state-of-the-art iCaRL Rebuffi et al. (2017b) and Roy et al. (2018) keep the old images for revisiting purposes. However, keeping original images for a long time in our CAA framework is not practical due to privacy and legal concerns. For example, it is not good to store negative images such as violence or accident scenes. In this paper, we address two issues: 1) fulfilling the privacy and legal requirements, and 2) dealing with the catastrophic forgetting problem by simulating the human brain system. The experimental results show that the proposed solution is effective in addressing privacy and legal concerns as well as dealing with the catastrophic forgetting problem to maintain system performance, especially for the CAA framework.

2 RELATED WORK
Recently, there has been increased interest in incremental learning or lifelong learning, and several approaches have been proposed to solve the problem. The approaches can be categorized into 3 main groups. The approaches in the first group store small subsets of training data from the previously learned tasks. The stored exemplars are used in training the network when adding new tasks, to alleviate the catastrophic forgetting problem. An example of such a method is iCaRL Rebuffi et al. (2017b), which keeps a few exemplars to rehearse the previous tasks. Another recent method in this group is based on the Gradient Episodic Memory (GEM) model Lopez-Paz & Ranzato (2017). However, the usage of stored exemplars in the GEM based approach is different from iCaRL Rebuffi et al. (2017b), which keeps the predictions of exemplars from past tasks invariant by means of distillation. The GEM based approach defines inequality constraints on the loss to allow positive backward transfer for increasing performance on some preceding tasks. In the second group, the approaches avoid storing any original training data from previously learned tasks. Instead, these approaches retain statistics of training samples, such as means and standard deviations Kemker & Kanan (2018) of individual class distributions, for tackling the catastrophic forgetting problem. Finally, the third group of algorithms totally avoids keeping any samples of previously learned tasks. Some of the approaches in this group tackle the catastrophic forgetting problem by dynamically expanding the network for each new task Rusu et al. (2016), Rebuffi et al. (2017a), Yoon et al. (2018), Terekhov et al. (2015). Although these approaches are simple, they have scalability issues, as the size of the network grows with the increasing number of tasks. To avoid the scalability issue, some recent studies explore regularization methods over the network parameters Zenke et al. (2017), Kirkpatrick et al., while the study in Wu et al. investigates generating synthesized images. An example of a regularization method is the Elastic Weight Consolidation (EWC) Zenke et al. (2017), Kirkpatrick et al., Lee et al. (2017), Liu et al. (2018), Chaudhry et al. (2018) based approach. EWC based approaches use a regularization term so that network parameters remain close to the parameters of the network trained for the previous classes. Similarly, Learning Without Forgetting (LWF) Li & Hoiem (2016) performs regularization on predictions instead of network weights.

Our proposed approach falls under the second group, and the prior work in this category is FearNet Kemker & Kanan (2018), which is a brain-inspired algorithm for incremental class learning. The FearNet system involves three brain-inspired sub-modules: 1) short-term memory storage for recently learned knowledge, inspired by the hippocampal complex (HC) of the human brain, 2) long-term memory storage, inspired by the medial prefrontal cortex (mPFC), and 3) a sub-module to choose which memory system, HC or mPFC, to use for a particular example. The catastrophic forgetting problem is addressed by using pseudorehearsal, which allows the network to revisit previous memories during incremental training. FearNet uses the HC model as short-term storage, and the model is erased after the information is transferred to the long-term storage (mPFC) network. mPFC stores class statistics for pseudorehearsal in the form of a mean and covariance matrix for each class, used to generate new exemplars (pseudo-examples).

In the human brain, encoding is a crucial step in creating a new memory Mastin (2018). The encoding process converts newly perceived knowledge into a construct for storage, to recall it later. During the encoding process, humans pay more attention, especially for memorable events. This causes neurons to fire more frequently, making the event be encoded as a memory. For example, emotional events tend to increase attention, leading to unforgettable memories. Hence, the more intense the attention, the more effective the encoding, and this results in storing the acquired knowledge for a long time. In this paper, we propose a brain inspired incremental learning system and investigate the effect of the quality of encoded information on incremental learning. We integrate a Convolutional Auto-Encoder (CAE) in the incremental learning process for encoding knowledge. We show that our brain inspired model is efficient and that the CAE is useful for encoding knowledge to be revisited later. State-of-the-art systems such as iCaRL Rebuffi et al. (2017b) require storing original training examples for all previously learned classes, making it challenging to meet privacy and legal requirements, especially for a CAA framework which has limitations on storing original images (or exemplars) owned by others. To address privacy issues, the study in Wu et al. uses images generated by Generative Adversarial Networks (GANs) to replace the exemplar set of original training samples. However, GANs have limitations regarding image quality and mode dropping. The GAN based method works relatively well on simple images; however, it is not easy to generate realistic scene images such as a plane crash (for example), which we are interested in. In our approach, we take care of privacy and legal concerns by integrating a CAE in our system.

3 DATASETS
Experiments are conducted on the following datasets.

CIFAR-100 contains 100 image classes. Each class has 500 training images and 100 testing images. Following the class-incremental benchmark protocol of iCaRL Rebuffi et al. (2017b) on this dataset, the 100 classes are arranged in a fixed random order and come in as P parts. Each part contains C = 100/P classes. A multi-class classifier is built with the first part that contains C classes. Then this classifier is incrementally adapted until it recognizes all 100 classes, as sketched below.
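A minimal sketch of this class-incremental split protocol, assuming numpy; the seed value and the variable names (`P`, `class_order`, `parts`) are hypothetical, standing in for the fixed random class order used in the benchmark:

```python
import numpy as np

# Arrange the 100 CIFAR-100 classes in a fixed random order,
# then split them into P parts of C = 100 / P classes each.
rng = np.random.RandomState(1993)   # hypothetical seed for a fixed order
class_order = rng.permutation(100)
P = 10                              # number of incremental parts
parts = np.split(class_order, P)    # each part holds C = 10 class ids

# The classifier is first trained on parts[0], then incrementally
# adapted on parts[1], parts[2], ... until all 100 classes are covered.
for step, new_classes in enumerate(parts):
    pass  # train / adapt the classifier on `new_classes` here
```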
IMDB-CAA We collect an IMage Database for Context Aware Advertisement (IMDB-CAA). Most of these images are negative scenes with violence, accidents, gambling, etc. The database has about 12,000 color or gray-scale images; all images are manually labeled and irrelevant images are removed. The categories of the images in IMDB-CAA are listed in Table 1. Each category has 500 training images and 100 testing images.

4 PROPOSED BRAIN INSPIRED INCREMENTAL LEARNING SYSTEM
In this paper, we investigate a strategy to simulate the encoding and storing of information as in the human brain learning process. By encoding the original images using a Convolutional Auto-Encoder (CAE), the negative images are not easily viewable during storage. With the incremental learning mechanism, only encoded information is kept for future learning once new data is available. In the following section, we discuss the Convolutional Auto-Encoder (CAE) and present its use to address privacy issues as well as to simulate the human brain encoding mechanism.

4.1 CONVOLUTIONAL AUTO-ENCODER (CAE)
CAEs are very popular in deep learning research for unsupervised learning. CAEs can be used for extracting useful features from unlabelled data as well as for detecting and removing input redundancies, preserving only the essential aspects of the information for robust and discriminative representations Masci et al. (2011), Turchenko & Luczak (2015). The unsupervised training process of CAEs tends to avoid local minima and improves the network's performance and stability Erhan et al. (2010). As an auto-encoder, CAEs are based on the encoder-decoder paradigm, in which the input data or image is mapped to a lower-dimensional space (referred to as the encoder part) and the encoded information is then expanded to regenerate the input data or image (referred to as the decoder part).

As discussed above, CAEs have the encoder and decoder parts which we need for simulating the human brain. The encoder part is useful for encoding information and storing the encoded information. The decoder part is for decoding the stored information for practising or revisiting it in the future. For a DCNN to achieve capabilities like those of the human brain, we propose to integrate the auto-encoder, a CAE, into the DCNN. By doing so, we expect the DCNN to achieve the capability of encoding, storing, and practising or revisiting the stored information later, in order to overcome the catastrophic forgetting problem of DCNNs. To overcome the catastrophic forgetting problem and address privacy requirements, we propose to use CAEs to encode the original images and keep the encoded pseudo-examples instead of keeping the original images. There are several advantages of using CAEs over GANs. These include: 1) the possibility of recalling or regenerating high quality images from the stored encoded information for future revisiting; 2) the freedom to control the size of the pseudo-exemplars by designing the autoencoder as required. If accuracy is more important, we can design the network for larger pseudo-exemplars. However, if storage is very limited, we can build the network for smaller pseudo-exemplars at a certain expense of system performance.
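A minimal sketch of this encode-store-decode idea, assuming hypothetical Keras-style `encoder` and `decoder` models split from a trained CAE; the file name is illustrative only. Only the encoded pseudo-examples are kept, never the original images:

```python
import numpy as np

def store_pseudo_examples(encoder, images):
    # Keep only the encoder outputs; the original images are discarded,
    # so the stored content is not directly viewable.
    return encoder.predict(images)

def rehearse(decoder, pseudo_examples):
    # Regenerate approximate images from the stored codes when old
    # classes need to be revisited during incremental training.
    return decoder.predict(pseudo_examples)

# Usage sketch:
# codes = store_pseudo_examples(encoder, old_class_images)
# np.save("class_07_codes.npy", codes)     # hypothetical storage path
# replay = rehearse(decoder, np.load("class_07_codes.npy"))
```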
One may argue that, instead of using CAEs, we could use some encryption method. The focus in encryption and cryptography is on security, encryption/decryption time, etc. Mahajan & Sachdeva (2013). It is common that the security of an algorithm depends on the length of the key. A longer key length generally supports better security Oad et al. (2014), and a longer key length also means a larger file size. Hence, most encrypted files are larger than their unencrypted originals. Our focus, however, is on privacy, storage, performance and simplicity for easy scalability. Storage and performance should be adjustable according to the requirements of the application.

The CAE network involves 6 convolution layers; the first 3 layers form the encoder part and the last 3 layers form the decoder part. The numbers of filters in the convolutional layers are set to {16, 8, 8, 8, 8, 16}, respectively. The decoder part is a mirror of the encoder part. We use binary cross entropy as the loss function and an adaptive learning rate method for optimization. We use a subset of the ImageNet database Deng et al. (2009) consisting of 150,000 images to train the CAE network. We set the heights and widths of the images to 224 for use as input when training the network. We use the encoder part for encoding images to obtain pseudo-examples. Then, we employ the decoder part to regenerate images from the stored pseudo-examples.

Figure 1 shows comparisons of CAE regenerated images at different pseudo-exemplar sizes with original images and images generated by a GAN. In Figure 1, we gradually reduce the size of the pseudo-examples. The reconstructed image becomes blurred when we reduce the size. At the 16.6% reduced size, we notice that the edges of the object become sharper, with increased noise. This phenomenon can be explained by findings in image enhancement using deep autoencoders. In the image enhancement task, there is a trade-off between denoising capability and perceived sharpness of the enhanced image Lore et al. (2016). A model with higher denoising capability generates smoother edges, and the generated images become less sharp, eventually losing structural information. It can be seen in the 3rd row from the bottom of Figure 1, for the 16.6% pseudo-exemplar size, that object edges appear sharper at the cost of more noise, if we compare these images with the reconstructed images at the other pseudo-exemplar sizes. In fact, characteristics such as sharper edges and textures are desirable properties, especially for image classification with a Convolutional Neural Network (CNN).
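For concreteness, a minimal Keras-style sketch of the CAE described above. The text specifies 6 convolution layers with {16, 8, 8, 8, 8, 16} filters, 224x224 inputs, binary cross entropy and an adaptive learning rate; the 3x3 kernels, pooling/upsampling placement, final 3-channel reconstruction layer, and choice of Adadelta are assumptions, and the resulting code size depends on them:

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(224, 224, 3))
# Encoder: 3 convolution layers with {16, 8, 8} filters.
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D(2)(x)
code = layers.Conv2D(8, 3, activation="relu", padding="same")(x)  # stored pseudo-example
# Decoder: mirror of the encoder with {8, 8, 16} filters.
x = layers.Conv2D(8, 3, activation="relu", padding="same")(code)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
out = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)  # reconstruction

cae = keras.Model(inp, out)
cae.compile(optimizer="adadelta", loss="binary_crossentropy")  # adaptive learning rate
encoder = keras.Model(inp, code)  # encoder part alone produces pseudo-examples
```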
4.2 CAE INTEGRATED PROPOSED INCREMENTAL LEARNING (IL) SYSTEM
The proposed IL system is developed based on a ResNet152 He et al. (2015) pre-trained model. We integrate the CAE into ResNet152 to simulate the processes of encoding and storing information, as well as revisiting stored information, as in the human brain. The CAE integrated proposed IL system is presented in Figure 2. We also integrate the CAE into the previous incremental learning method iCaRL Rebuffi et al. (2017b). Our focus is on extending iCaRL to achieve capabilities as in the human brain and to fulfill privacy requirements.

5 EXPERIMENTS
We investigate the effect of the quality of encoded information in our brain inspired incremental learning system by conducting a systematic series of experiments. We conduct experiments on the CIFAR-100 dataset. Firstly, we investigate the effect of integrating the CAE into the state-of-the-art iCaRL system in terms of storage size. Secondly, we observe how the performance of our proposed IL system depends on the quality of encoded information. Thirdly, we compare the performance of our proposed system with the state-of-the-art iCaRL as well as other recent systems. Finally, we present experiments on our IMDB-CAA dataset. As in Rebuffi et al. (2017b), the evaluation measure is the standard multi-class accuracy on the test set.

The original size of CIFAR-100 images is only 32x32x3 pixels, while IMDB-CAA images are at least 300x300x3 pixels; the image sizes of the two datasets are largely different. Unless otherwise stated, we employ the CAE with the architecture presented in Section 4.1, with a pseudo-exemplar size of 28x28x8, in our experiments.

5.1 EXPERIMENTS ON CIFAR-100 DATASET
We conduct experiments using the public dataset CIFAR-100, which is used in the state-of-the-art iCaRL Rebuffi et al. (2017b) and GAN based Wu et al. incremental learning systems. The following are experiments on the effect of different pseudo-exemplar set sizes on the CAE integrated iCaRL system.

5.1.1 EFFECT OF PSEUDO-EXEMPLAR SET SIZE ON CAE INTEGRATED ICARL SYSTEM
The original iCaRL system keeps real images (referred to as exemplars) for future revisiting. Here, we encode the images before we keep them. We integrate the CAE into the iCaRL system (with P=10, C=10) to conduct experiments. The quality of the encoding process affects system performance. One way to improve the quality of encoded information is to store as many examples as possible to cover the distribution of a particular image class. We gradually increase the number of pseudo-examples in our experiments. We use pseudo-examples only for revisiting past tasks; for training new classes and for testing, we use original samples. Table 2 shows the classification accuracies for different set sizes of pseudo-examples. In Table 2, a pseudo-exemplar set size of 2000 means that the system keeps a maximum of 2000 samples for all past tasks at all times, as in the iCaRL system Rebuffi et al. (2017b). The results show that system performance improves with increasing pseudo-exemplar set size. We also use the full pseudo-exemplar set to observe the highest performance that the CAE integrated iCaRL can achieve. As can be seen in the last row of Table 2, when we use the full exemplar set, accuracy increases to 56.7%. In the following, we observe the effect of pseudo-exemplar sizes on the CAE integrated proposed IL system.

5.1.2 EFFECT OF DIFFERENT PSEUDO-EXEMPLAR SIZES ON CAE INTEGRATED PROPOSED IL SYSTEM
The size of a pseudo-example has a direct impact on storage. A smaller size is preferable, especially for devices with limited storage. We conduct experiments to observe the effect of different pseudo-example sizes on system performance. We modify the layers of the CAE presented in Section 4.1 to obtain different pseudo-exemplar sizes. The first 3 columns of Table 3 show the experimental results. In the first column, 'Same' means the size of a pseudo-example is the same as the original real image size, and 67% means the size of a pseudo-example is 67% of the size of the corresponding real image in the CIFAR-100 dataset. We use all 500 pseudo-exemplars to train each class. The system is tested on real images as well as on pseudo-samples. Training is done using only pseudo-examples for all old and new tasks.

Table 3: Accuracy (%) of the proposed system and of iCaRL at matched storage.
Pseudo-exemplar size   Test on pseudo-samples   Test on real images   iCaRL storage   iCaRL accuracy
67%                    47.47                    49.65                 67%             57.37
50%                    44.82                    49.56                 50%             57.19
33.3%                  42.6                     47.1                  33.3%           56.18
16.6%                  51.5                     50.66                 16.6%           52.64

We achieve relatively high accuracies for both test conditions (pseudo-samples and real images) when the pseudo-exemplar size is the same as the real image size. However, when we reduce the pseudo-exemplar size to 67%, performance starts to degrade. The quality of images reconstructed from pseudo-examples is not as good as that of real images, as can be seen in Figure 1. With decreasing pseudo-exemplar size, the reconstructed images become more and more blurred and edges appear less and less sharp.
However, when we further reduce the pseudo-exemplar size to 16.6%, system performance improves, as we can see in the last row of Table 3 for both pseudo-sample and real test images. As discussed in Section 4.1, pseudo-examples with a size of 16.6% give reconstructed images with sharper edges at the expense of increased noise. The shapes and colors of the background start to fade (or blend), and the object stands out from the background. This makes it easy for the feature maps of the CNN classifier to detect the edges of the object, and we are able to achieve higher classification performance with a much smaller storage requirement. In the following section, we compare our proposed system with recent studies.

5.1.3 COMPARING PROPOSED IL SYSTEM WITH STATE-OF-THE-ART SYSTEMS
We compare our system with the state-of-the-art iCaRL system Rebuffi et al. (2017b), the GAN based system Wu et al., and the FearNet system Kemker & Kanan (2018). Firstly, we compare our proposed system with iCaRL under the same storage requirements. We conduct experiments with iCaRL on different exemplar storage settings, and the results are reported in the 5th (last) column of Table 3. The 4th column of the table shows the storage requirement of the iCaRL system; for example, 67% means that 67% of the training set is stored as the exemplar set. As we can see, our proposed system performs better than iCaRL when the pseudo-exemplar size is the same as the original image size. As a lower-storage option, our proposed system achieves performance very similar to iCaRL at the storage requirement of 16.6%. Please note that iCaRL stores real exemplars while our system stores pseudo-examples; hence, our approach fulfills the privacy and legal requirements.

To compare our system with the GAN based system Wu et al. and the FearNet system Kemker & Kanan (2018), we repeat the results of Table 3 together with the performances of the state-of-the-art systems in Table 4. As we can see in the table, our system achieves 68.44% accuracy, which is the highest among all systems. This setup is useful especially for our application in the CAA framework, in which storage is less of a concern than privacy. The proposed system also offers an option with a lower storage requirement and reasonably high accuracy, as can be seen in the last row of the table for the 16.6% pseudo-exemplar size. As mentioned previously, humans pay intense attention for better encoding of knowledge into unforgettable memory. Similarly, in our proposed system, the quality of the encoded information is directly related to the size of the pseudo-example. However, a smaller pseudo-exemplar size does not necessarily mean having incomplete information. Another advantage of the proposed system is that we do not need to train an individual CAE for each image class. We prepare only one CAE, train it only once, and then it can be used for the lifetime of the IL system. The GAN based method needs to train an image generator for each image category, and the storage requirements of each GAN model could be much larger than that of the whole original training set, especially for very small images such as CIFAR-100.

5.2 EXPERIMENTS ON IMDB-CAA DATASET
Finally, we conduct experiments using the IMDB-CAA dataset on our proposed IL system. The original size of the images in this dataset is much larger than in the CIFAR-100 dataset. In the following, we conduct experiments using different pseudo-exemplar sizes. We reduce the pseudo-exemplar sizes down to 4% of the original image. Table 5 shows the results.
For this IMDB-CAA dataset, the system performance with offline batch training on real images is 86.6%. We use a ResNet152 pre-trained model, fine-tuned with the training set, in our offline batch training. As we can see in the 2nd row of Table 5, we can achieve classification performance which is very close to that of offline batch training. Even at the pseudo-exemplar size of 4%, system performance is reasonably high. As with the CIFAR-100 dataset, IMDB-CAA also shows the best performance at the 16.6% pseudo-exemplar size, which gives 81.2% accuracy; this is comparable to the offline batch training performance with original images. When we examine the images, we notice the same image characteristics, namely sharp edges, as in CIFAR-100.

5.3 CONCLUSIONS
In this paper, we present a strategy to simulate brain inspired information encoding and storing for an incremental learning system. With this strategy, we also address the privacy and legal issues, which is required for incremental learning in our Context Aware Advertisement (CAA) framework. Experimental results show that the proposed strategy is simple, efficient and easily scalable. A single system is able to provide various options, ranging from performance that is better than the state of the art to performance that is reasonably high with very little storage, for both the CIFAR-100 and IMDB-CAA datasets.
1. What is the main contribution of the paper regarding image classification and deep learning?
2. What are the weaknesses of the paper's claims and proposals, particularly in terms of inspiration from the human brain and encryption methods?
3. Is the paper appropriate for publication in a venue like ICLR, considering its focus and content?
Review
Review
This paper describes a system for classifying images displayed on websites using an incremental learning classifier with a Deep Convolutional Neural Network, to be used in context aware advertisement. This is more of an application paper, which is not the focus of a venue like ICLR. Further, the paper makes several misleading claims. The authors claim that their system is inspired by the human brain while providing scant evidence for that claim (unless we take it in a very broad sense to mean that neural networks resemble the human brain). They also propose a convolutional autoencoder as a kind of encryption method to store images to alleviate privacy and legal concerns. This is not a good idea, because encoding an image with a convnet is not a substitute for encryption; any image encoded this way can easily be decoded to reveal its original contents. Overall, this paper is not appropriate for ICLR in its current form.
ICLR
1. What is the main contribution of the paper regarding incremental learning?
2. What are the weaknesses of the paper, particularly in its method description and experimental results?
3. How does the reviewer assess the significance of the problem addressed in the paper?
4. Do you have any questions or concerns about the proposed method, such as its relationship to previous works like iCARL and the meaning of "pseudo-exemplars"?
Review
Review
The paper addresses the problem of incremental learning, where data from new classes arrive as a stream and one wants to be able to learn the newly observed classes without forgetting the older ones. There is a budget issue here: one does not want to keep the whole training set of all previously observed classes, but rather to respect a maximum memory budget for storing what is necessary for optimal incremental learning (typical examples, statistics, etc.). There is also a privacy issue preventing the storage of original training samples. This is a relevant problem that has gained interest in the last few years. It is related to topics such as few shot learning and meta few shot learning (with respect to the limited number of examples kept per class) and, to some extent, to budget learning. Yet these topics and their associated references are surprisingly not evoked in the text.

The paper is rather well written, but it strongly lacks precision about the proposed method. A description of the iCaRL state-of-the-art method is missing and would have been mandatory, since the proposed work appears to build on the iCaRL method. Actually, the description of the method is very short, since the dedicated section (§4) is mainly used to describe a rather standard convolutional auto-encoder architecture. In the end, one has to guess what the proposed method consists of. As far as I understand, it is based on the iCaRL method, where selected examples of past observed classes are stored not as-is but in their encoded form (by the convolutional autoencoder). My understanding of the proposed approach is therefore that it is an incremental improvement of a state-of-the-art method, hence incremental work with limited innovation.

I am also not sure of the meaning of "pseudo-exemplar" as used in the proposed method. Are these drawn following a distribution computed on training samples? Or are they called pseudo-exemplars because reconstructed samples from encodings (by the CAE) are used? Looking at the experimental results, the proposed method seems to bring some benefit, but it does not look fully convincing. As written in the paper, the proposed system outperforms iCaRL when the examples are encoded at the same dimension as the original examples (hence no benefit on the storage side), but only reaches similar performance when using less storage capacity.
ICLR
Title #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning

Abstract
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows counting their occurrences with a hash table. These counts are then used to compute a reward bonus according to classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.

1 Introduction
Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning through trial and error to maximize rewards. It is impossible for the agent to act near-optimally until it has sufficiently explored the environment and identified all of the opportunities for high reward, in all scenarios. A core challenge in RL is how to balance exploration (actively seeking out novel states and actions that might yield high rewards and lead to long-term gains) and exploitation (maximizing short-term rewards using the agent's current knowledge). While there are exploration techniques for finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for high-dimensional state spaces; therefore, developing more general and robust exploration techniques is an active area of research.

Most of the recent state-of-the-art RL results have been obtained using simple exploration strategies such as uniform sampling (Mnih et al., 2015) and i.i.d./correlated Gaussian noise (Schulman et al., 2015; Lillicrap et al., 2015). Although these heuristics are sufficient in tasks with well-shaped rewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse rewards (Osband et al., 2016b). Recently developed exploration strategies for deep RL have led to significantly improved performance on environments with sparse rewards. Bootstrapped DQN (Osband et al., 2016a) led to faster learning in a range of Atari 2600 games by training an ensemble of Q-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance on Montezuma's Revenge, an extremely challenging Atari 2600 game (Bellemare et al., 2016). Variational Information Maximizing Exploration (VIME, Houthooft et al. (2016)) encourages the agent to explore by acquiring information about environment dynamics, and performs well on various robotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast method that can work across different domains.

Some of the classic, theoretically-justified exploration methods are based on counting state-action visitations and turning this count into a bonus reward. In the bandit setting, the well-known UCB algorithm of Lai & Robbins (1985) chooses the action $a_t$ at time $t$ that maximizes

$\hat{r}(a_t) + \sqrt{\frac{2 \log t}{n(a_t)}}$,

where $\hat{r}(a_t)$ is the estimated reward and $n(a_t)$ is the number of times action $a_t$ was previously chosen. In the MDP setting, some of the algorithms have a similar structure. For example, Model Based Interval Estimation-Exploration Bonus (MBIE-EB) of Strehl & Littman (2008) counts state-action pairs with a table $n(s, a)$ and adds a bonus reward of the form $\frac{\beta}{\sqrt{n(s, a)}}$ to encourage exploring less visited pairs. Kolter & Ng (2009) show that the inverse-square-root dependence is optimal. MBIE and related algorithms assume that the augmented MDP is solved analytically at each timestep, which is only practical for small finite state spaces.

This paper presents a simple approach for exploration, which extends classic counting-based methods to high-dimensional, continuous state spaces. We discretize the state space with a hash function and apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately balance generalization across states against distinguishing between states. We select problems from rllab (Duan et al., 2016) and Atari 2600 (Bellemare et al., 2012) featuring sparse rewards, and demonstrate near state-of-the-art performance on several games known to be hard for naïve exploration strategies. The main strength of the presented approach is that it is fast, flexible and complementary to most existing RL algorithms. In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2), demonstrates its effectiveness on challenging deep RL benchmark problems, and analyzes key components of well-designed hash functions (Section 3).

2 Methodology
2.1 Notation
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \rho_0, \gamma, T)$, in which $\mathcal{S}$ is the state space, $\mathcal{A}$ the action space, $\mathcal{P}$ a transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_{\geq 0}$ a reward function, $\rho_0$ an initial state distribution, $\gamma \in (0, 1]$ a discount factor, and $T$ the horizon. The goal of RL is to maximize the total expected discounted reward $\mathbb{E}_{\pi, \mathcal{P}}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right]$ over a policy $\pi$, which outputs a distribution over actions given a state.

2.2 Count-Based Exploration via Static Hashing
Our approach discretizes the state space with a hash function $\phi : \mathcal{S} \to \mathbb{Z}$. An exploration bonus is added to the reward function, defined as

$r^{+}(s, a) = \frac{\beta}{\sqrt{n(\phi(s))}}$,  (1)

where $\beta \in \mathbb{R}_{\geq 0}$ is the bonus coefficient. Initially the counts $n(\cdot)$ are set to zero for the whole range of $\phi$. For every state $s_t$ encountered at time step $t$, $n(\phi(s_t))$ is increased by one. The agent is trained with rewards $(r + r^{+})$, while performance is evaluated as the sum of rewards without bonuses. Note that our approach is a departure from count-based exploration methods such as MBIE-EB, since we use a state-space count $n(s)$ rather than a state-action count $n(s, a)$. State-action counts $n(s, a)$ are investigated in Appendix A.6, but no significant performance gains over state counting could be witnessed.
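A minimal sketch of the count-and-bonus bookkeeping in Eq. (1), assuming states have already been mapped to hashable codes; the default `beta` value is hypothetical:

```python
from collections import defaultdict
from math import sqrt

counts = defaultdict(int)   # n(phi(s)), zero-initialized over the range of phi

def exploration_bonus(code, beta=0.01):
    # Update the visitation count for this hash code, then return
    # the bonus r+ = beta / sqrt(n(phi(s))) from Eq. (1).
    counts[code] += 1
    return beta / sqrt(counts[code])

# Usage: train with r + exploration_bonus(phi(s));
# evaluation uses the raw reward r only.
```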
Algorithm 1: Count-based exploration through static hashing
1. Define a state preprocessor $g : \mathcal{S} \to \mathbb{R}^K$
2. (In case of SimHash) Initialize $A \in \mathbb{R}^{k \times K}$ with entries drawn i.i.d. from the standard Gaussian distribution $\mathcal{N}(0, 1)$
3. Initialize a hash table with values $n(\cdot) \equiv 0$
4. For each iteration $j$ do
5. Collect a set of state-action samples $\{(s_m, a_m)\}_{m=0}^{M}$ with policy $\pi$
6. Compute hash codes through any LSH method, e.g., for SimHash, $\phi(s_m) = \mathrm{sgn}(A\,g(s_m))$
7. Update the hash table counts $\forall m : 0 \leq m \leq M$ as $n(\phi(s_m)) \leftarrow n(\phi(s_m)) + 1$
8. Update the policy $\pi$ using rewards $\left\{ r(s_m, a_m) + \frac{\beta}{\sqrt{n(\phi(s_m))}} \right\}_{m=0}^{M}$ with any RL algorithm
Clearly the performance of this method will strongly depend on the choice of hash function $\phi$. One important choice concerns the granularity of the discretization: we would like “distant” states to be counted separately while “similar” states are merged. If desired, prior knowledge can be incorporated into the choice of $\phi$, for instance when a set of salient state features is known to be relevant. Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash functions for querying nearest neighbors based on certain similarity metrics (Andoni & Indyk, 2006). A computationally efficient type of LSH is SimHash (Charikar, 2002), which measures similarity by angular distance. SimHash retrieves a binary code of state $s \in \mathcal{S}$ as $$\phi(s) = \mathrm{sgn}(A\,g(s)) \in \{-1, 1\}^k, \qquad (2)$$ where $g : \mathcal{S} \to \mathbb{R}^K$ is an optional preprocessing function and $A$ is a $k \times K$ matrix with i.i.d. entries drawn from a standard Gaussian distribution $\mathcal{N}(0, 1)$. The value of $k$ controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states. 2.3 Count-Based Exploration via Learned Hashing When the MDP states have a complex structure, as is the case with image observations, measuring their similarity directly in pixel space fails to provide the semantic similarity measure one would desire. Previous work in computer vision (Lowe, 1999; Dalal & Triggs, 2005; Tola et al., 2010) introduced manually designed feature representations of images that are suitable for semantic tasks including detection and classification. More recent methods learn complex features directly from data by training convolutional neural networks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015). Considering these results, it may be difficult for SimHash to cluster states appropriately using only raw pixels. Therefore, we propose to use an autoencoder (AE) consisting of convolutional, dense, and transposed convolutional layers to learn meaningful hash codes in one of its hidden layers. This AE takes states $s$ as input and contains one special dense layer comprised of $K$ saturating activation functions, more specifically sigmoid functions.
Algorithm 2: Count-based exploration using learned hash codes
1. Define a state preprocessor $g : \mathcal{S} \to \mathbb{B}^K$ as the binary code resulting from the autoencoder (AE)
2. Initialize $A \in \mathbb{R}^{k \times K}$ with entries drawn i.i.d. from the standard Gaussian distribution $\mathcal{N}(0, 1)$
3. Initialize a hash table with values $n(\cdot) \equiv 0$
4. For each iteration $j$ do
5. Collect a set of state-action samples $\{(s_m, a_m)\}_{m=0}^{M}$ with policy $\pi$
6. Add the state samples $\{s_m\}_{m=0}^{M}$ to a FIFO replay pool $R$
7. If $j \bmod j_{\text{update}} = 0$ then
8. Update the AE by minimizing the loss function in Eq. (3) over samples drawn from the replay pool $\{s_n\}_{n=1}^{N} \sim R$, for example using stochastic gradient descent
9. Compute $g(s_m) = \lfloor b(s_m) \rceil$, the $K$-dimensional rounded hash code for $s_m$ learned by the AE
10. Project $g(s_m)$ to a lower dimension $k$ via SimHash as $\phi(s_m) = \mathrm{sgn}(A\,g(s_m))$
11. Update the hash table counts $\forall m : 0 \leq m \leq M$ as $n(\phi(s_m)) \leftarrow n(\phi(s_m)) + 1$
12. Update the policy $\pi$ using rewards $\left\{ r(s_m, a_m) + \frac{\beta}{\sqrt{n(\phi(s_m))}} \right\}_{m=0}^{M}$ with any RL algorithm
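To make the SimHash discretization of Eq. (2) concrete, here is a minimal sketch that can serve as the hash function phi in the counting scheme above; returning the sign vector as a tuple so it can key a hash table is an implementation choice, not something prescribed by the paper:

```python
import numpy as np

class SimHash:
    """SimHash (Eq. 2): phi(s) = sgn(A g(s)) in {-1, 1}^k."""

    def __init__(self, dim, k=32, preprocess=None, seed=0):
        rng = np.random.RandomState(seed)
        # A is a k x K matrix with i.i.d. standard Gaussian entries.
        self.A = rng.randn(k, dim)
        self.g = preprocess if preprocess is not None else (lambda s: s)

    def __call__(self, state):
        x = self.g(np.asarray(state, dtype=np.float64))
        # Exact zeros (sign 0) have measure zero for continuous states.
        return tuple(np.sign(self.A @ x))
```

For instance, phi = SimHash(dim=obs_dim, k=32) paired with the CountBonus sketch above roughly corresponds to lines 2, 6, and 7 of Algorithm 1.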
By rounding the sigmoid output $b(s)$ of this special dense layer to the closest binary number, any state $s$ can be binarized. Since gradients cannot be back-propagated through a rounding function, an alternative method must be used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise $U(-a, a)$ is added to the sigmoid output. By choosing uniform noise with sufficiently high variance, the AE is only capable of reconstructing distinct inputs $s$ if its hidden dense layer outputs values $b(s)$ that are sufficiently far apart from each other (Gregor et al., 2016). Feeding a state $s$ to the AE input, extracting $b(s)$ and rounding it to $\lfloor b(s) \rceil$ yields a learned binary code. As such, the loss function $L(\cdot)$ over a set of collected states $\{s_n\}_{n=1}^{N}$ is defined as $$L\left(\{s_n\}_{n=1}^{N}\right) = -\frac{1}{N} \sum_{n=1}^{N} \left[ \log p(s_n) - \frac{\lambda}{K} \sum_{i=1}^{K} \min\left\{ (1 - b_i(s_n))^2,\; b_i(s_n)^2 \right\} \right]. \qquad (3)$$ This objective function consists of a cross-entropy term and a term that pressures the binary code layer to take on binary values, scaled by $\lambda \in \mathbb{R}_{\geq 0}$. The reasoning behind this is that uniform noise $U(-a, a)$ alone is insufficient in case the AE does not use a particular sigmoid unit. This term ensures that an unused binary code output is assigned an arbitrary binary value. When omitting this term, the code is more prone to oscillations, causing unwanted bit flips and destabilizing the counting process. In order to make the AE train sufficiently fast—which is required since it is updated during the agent’s training—we make use of a pixel-wise softmax output layer (van den Oord et al., 2016) that shares weights between all pixels. The different softmax outputs merge together pixel intensities into discrete bins. The architectural details are described in Appendix A.1 and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code $\lfloor b(s) \rceil$, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2). On the one hand, it is important that the mapping from state to code remain relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2 line 8). An obvious solution would be to significantly downsample the binary code to a very low dimension, or to slow down the training process. But on the other hand, the code has to remain relatively unique for states that are both distinct and close together on the image manifold. This is tackled both by the second term in Eq. (3) and by the saturating behavior of the sigmoid units. As such, states that are already well represented in the AE hidden layers tend to saturate the sigmoid units, causing the resulting loss gradients to be close to zero and making the code less prone to change.
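To make the binarization scheme concrete, here is a sketch of the noisy sigmoid code layer and the binary-pressure term of Eq. (3) in isolation; the convolutional encoder/decoder and the pixel-wise softmax reconstruction term are omitted, and all names are illustrative (a = 0.3 and lambda = 10 follow Appendix A.1):

```python
import numpy as np

rng = np.random.default_rng(0)

def code_layer(logits, a=0.3, training=True):
    """Sigmoid code b(s); uniform noise U(-a, a) is injected during training,
    forcing the AE to push codes of distinct inputs far apart."""
    b = 1.0 / (1.0 + np.exp(-logits))
    if training:
        b = b + rng.uniform(-a, a, size=b.shape)
    return b

def binary_code(b):
    # Learned binary code: round b(s) to the closest binary number.
    return np.round(np.clip(b, 0.0, 1.0))

def binary_pressure(b, lam=10.0):
    """Second term of Eq. (3): for each of the K code units, penalize the
    squared distance to the nearest of {0, 1}, scaled by lambda / K."""
    K = b.shape[-1]
    return (lam / K) * np.minimum((1.0 - b) ** 2, b ** 2).sum(axis=-1).mean()
```

In a real implementation these operations would live inside a differentiable framework so the pressure term can be trained jointly with the reconstruction loss; the numpy version here is only meant to pin down the arithmetic.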
3 Experiments Experiments were designed to investigate and answer the following research questions: 1. Can count-based exploration through hashing improve performance significantly across different domains? How does the proposed method compare to the current state of the art in exploration for deep RL? 2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used? 3. What factors contribute to good performance, e.g., what is the appropriate level of granularity of the hash function? To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in Sections 3.3 and 3.4. Trust Region Policy Optimization (TRPO, Schulman et al. (2015)) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, it can conveniently ensure stable improvement in the policy performance, and it is relatively insensitive to hyperparameter changes. The hyperparameter settings are reported in Appendix A.1. 3.1 Continuous Control The rllab benchmark (Duan et al., 2016) consists of various control tasks to test deep RL algorithms. We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and adopt the experimental setup as defined in Houthooft et al. (2016)—a description can be found in Appendix A.2. These tasks are all highly difficult to solve with naïve exploration strategies, such as adding Gaussian noise to the actions. Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME (Houthooft et al., 2016) on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task SwimmerGather. Count-based exploration with hashing is capable of reaching the goal in all environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather. 3.2 Arcade Learning Environment The Arcade Learning Environment (ALE, Bellemare et al. (2012)), which consists of Atari 2600 video games, is an important benchmark for deep RL due to its high-dimensional state space and wide variety of games. In order to demonstrate the effectiveness of the proposed exploration strategy, six games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma’s Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1 M steps (the TRPO batch size, corresponding to 0.4 M frames). Policies and value functions are neural networks with architectures identical to those of Mnih et al. (2016). Although the policy and baseline take into account the previous four frames, the counting algorithm only looks at the latest frame. BASS To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see Bellemare et al. (2012)) as a static preprocessing function $g$. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are large and monochrome, and 3) winning depends mostly on knowing object locations and motions.
We designed an adapted version of BASS (the original BASS exploits the fact that at most 128 colors can appear on the screen; our adapted version does not make this assumption), which divides the RGB screen into square cells, computes the average intensity of each color channel inside a cell, and assigns the resulting values to bins that uniformly partition the intensity range $[0, 255]$. Mathematically, let $C$ be the cell size (width and height), $B$ the number of bins, $(i, j)$ the cell location, $(x, y)$ the pixel location, and $z$ the channel; then $$\mathrm{feature}(i,j,z) = \left\lfloor \frac{B}{255\,C^2} \sum_{(x,y) \in \mathrm{cell}(i,j)} I(x,y,z) \right\rfloor. \qquad (4)$$ Afterwards, the resulting integer-valued feature tensor is converted to an integer hash code ($\phi(s_t)$ in line 6 of Algorithm 1). A BASS feature can be regarded as a miniature that efficiently encodes object locations, but remains invariant to negligible object motions. It is easy to implement and introduces little computation overhead. However, it is designed for generic Atari game images and may not capture the structure of each specific game very well.
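As a sketch, the adapted BASS transform of Eq. (4) can be written as follows, assuming an H x W x 3 RGB screen array; reducing the feature tensor to a single integer code (line 6 of Algorithm 1) is done here by hashing its bytes, an implementation choice not prescribed by the paper (C = 20 and B = 20 follow Appendix A.1):

```python
import numpy as np

def bass_features(screen, cell_size=20, n_bins=20):
    """Adapted BASS (Eq. 4): per-cell, per-channel mean intensities,
    quantized into n_bins bins that uniformly partition [0, 255]."""
    H, W, _ = screen.shape
    h, w = H // cell_size, W // cell_size
    # Average intensity of each color channel inside each C x C cell.
    cells = screen[:h * cell_size, :w * cell_size].astype(np.float64)
    cells = cells.reshape(h, cell_size, w, cell_size, 3).mean(axis=(1, 3))
    # floor(B / (255 C^2) * sum) equals floor(B / 255 * mean).
    return np.floor(cells * n_bins / 255.0).astype(np.int64)

def bass_code(screen):
    # Convert the integer-valued feature tensor to one hashable code.
    return hash(bass_features(screen).tobytes())
```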
We compare our results to double DQN (van Hasselt et al., 2016b), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), double DQN with pseudo-counts (Bellemare et al., 2016), Gorila (Nair et al., 2015), and DQN Pop-Art (van Hasselt et al., 2016a) on the “null op” metric (under this metric, the agent takes no action for a random number of frames, at most 30, at the beginning of each episode). We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on Montezuma’s Revenge and Venture, where it captures object locations better than other methods (we provide videos of example game play and visualizations of the difference between Pixel-SimHash and BASS-SimHash at https://www.youtube.com/playlist?list=PLAd-UMX6FkBQdLNWtY8nH1-pzYJA_1T55). TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris (note that some design choices in other algorithms also impact exploration, such as ε-greedy and entropy regularization; nevertheless, it is still valuable to position our results within the current literature). As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma’s Revenge and Venture. Therefore, a static or adaptive preprocessing step can be important for a good hash function. In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results. 3.3 Granularity While our proposed method is able to achieve remarkable results without requiring much tuning, the granularity of the hash function should be chosen wisely. Granularity plays a critical role in count-based exploration, where the hash function should cluster states without under-generalizing or over-generalizing. Table 2 summarizes the granularity parameters of our hash functions. In Table 3 we summarize the performance of TRPO-pixel-SimHash under different granularities. We choose Frostbite and Venture, on which TRPO-pixel-SimHash outperforms the baseline, and set the reward bonus coefficient to $\beta = 0.01 \times \sqrt{256/k}$ to keep average bonus rewards at approximately the same scale. $k = 16$ only corresponds to 65536 distinct hash codes, which is insufficient to distinguish between semantically distinct states and hence leads to worse performance. We observed that $k = 512$ tends to capture trivial image details in Frostbite, leading the agent to believe that every state is new and equally worth exploring. Similar results are observed while tuning the granularity parameters for TRPO-BASS-SimHash and TRPO-AE-SimHash. The best granularity depends on both the hash function and the MDP. While adjusting the granularity parameter, we observed that it is important to lower the bonus coefficient as granularity is increased. This is because a higher granularity is likely to cause lower state counts, leading to higher bonus rewards that may overwhelm the true rewards. 3.4 A Case Study of Montezuma’s Revenge Montezuma’s Revenge is widely known for its extremely sparse rewards and difficult exploration (Bellemare et al., 2016). While our method does not outperform Bellemare et al. (2016) on this game, we investigate the reasons behind this through various experiments. The experiment process below again demonstrates the importance of a hash function having the correct granularity and encoding information relevant to solving the MDP. Our first attempt is to use game RAM states instead of image observations as inputs to the policy (details in Appendix A.1), which leads to a game score of 2500 with TRPO-BASS-SimHash. Our second attempt is to manually design a hash function that incorporates domain knowledge, called SmartHash, which uses an integer-valued vector consisting of the agent’s $(x, y)$ location, room number and other useful RAM information as the hash code (details in Appendix A.3). The best SmartHash agent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight change in the agent’s coordinates does not always result in a semantically distinct state, and thus the hash code may remain unchanged. Therefore we choose a grid size $s$ and replace the $x$ coordinate by $\lfloor (x - x_{\min})/s \rfloor$ (similarly for $y$). The bonus coefficient is chosen as $\beta = 0.01\sqrt{s}$ to maintain the scale relative to the true reward (the bonus scaling is chosen by assuming all states are visited uniformly, so that the average bonus reward should remain the same for any grid size; see Table 4). Finally, the best agent is able to obtain 6600 total rewards after training for 1000 iterations (1000 M time steps), with a grid size $s = 10$. During our pursuit, we made another interesting discovery: the ideal hash function should not simply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We experimented with including enemy locations in the first two rooms in SmartHash ($s = 10$), and observed that the average score dropped to 1672 (at iteration 1000). Though it is important for the agent to dodge enemies, the agent also erroneously “enjoys” watching enemy motions at a distance (since new states are constantly observed) and “forgets” that its main objective is to enter other rooms. An alternative hash function keeps the same entry “enemy locations”, but instead only puts randomly sampled values in it, which surprisingly achieves better performance (3112). However, by ignoring enemy locations altogether, the agent achieves a much higher score (5661) (see Figure 5).
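A small sketch of the coordinate discretization used in SmartHash, assuming the relevant RAM entries (agent location and room number) have already been extracted; the argument names and the minimal feature set are illustrative, with s = 10 being the best grid size reported above:

```python
def smart_hash(agent_x, agent_y, room, s=10, x_min=0, y_min=0):
    """Discretize the agent's (x, y) location with grid size s, so that
    slight coordinate changes map to the same hash code."""
    gx = (agent_x - x_min) // s    # floor((x - x_min) / s)
    gy = (agent_y - y_min) // s
    return (gx, gy, room)          # integer-valued hash code
```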
In retrospect, we examine the hash codes generated by BASS-SimHash and find that the codes clearly distinguish between visually different states (including various enemy locations), but fail to emphasize that the agent needs to explore different rooms. Again this example showcases the importance of encoding relevant information when designing hash functions. 4 Related Work Classic count-based methods such as MBIE (Strehl & Littman, 2005), MBIE-EB (Strehl & Littman, 2008), and the method of Kolter & Ng (2009) solve an approximate Bellman equation as an inner loop before the agent takes an action. As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with environments, with value-based (Mnih et al., 2015) or policy gradient-based (Schulman et al., 2015; Mnih et al., 2016) methods, at limited speed. In addition, since our proposed method is intended to work with contemporary deep RL algorithms, it differs from classical count-based methods in that it relies on visiting unseen states first, before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gap between our method and classic theories is an important direction of future research. A related line of classical exploration methods is based on the idea of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002) but is not restricted to using counting to implement “optimism”, e.g., R-Max (Brafman & Tennenholtz, 2002), UCRL (Jaksch et al., 2010), and E3 (Kearns & Singh, 2002). These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings. Bayesian RL methods (Kolter & Ng, 2009; Guez et al., 2014; Sun et al., 2011; Ghavamzadeh et al., 2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state space have been proposed by Pazis & Parr (2013) and Osband et al. (2016b). Another type of exploration is curiosity-based exploration. These methods try to capture the agent’s surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to Schmidhuber (2010) and Oudeyer & Kaplan (2007) for an extensive review on curiosity and intrinsic rewards. Several exploration strategies for deep RL have been proposed recently to handle high-dimensional state spaces. Houthooft et al. (2016) propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics, and is used as an exploration bonus. Stadie et al. (2015) propose to use the prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through bootstrapping is proposed by Osband et al. (2016a), using bootstrapped Q-functions. The most related exploration strategy is proposed by Bellemare et al. (2016), in which an exploration bonus is added inversely proportional to the square root of a pseudo-count quantity. A state pseudo-count is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to the pseudo-count approach in the sense that both methods perform approximate counting to obtain the necessary generalization over unseen states.
The difference is that for pseudo-counts a density model has to be designed and learned to achieve good generalization, whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection is that our method also implies a density model $\rho(s) = \frac{n(\phi(s))}{N}$ over all visited states, where $N$ is the total number of states visited. Another method similar to hashing is proposed by Abel et al. (2016), which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems. 5 Conclusions This paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It provides a simple yet powerful baseline for solving MDPs that require informed exploration. Acknowledgments We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). A Appendices A.1 Hyperparameter Settings For the rllab experiments, we used batch size 5000 for all tasks except SwimmerGather, for which we used batch size 50000. CartPoleSwingup makes use of a neural network policy with one layer of 32 tanh units. The other tasks make use of a two-layer neural network policy of 32 tanh units each for MountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs are modeled by a fully factorized Gaussian distribution $\mathcal{N}(\mu, \sigma^2 I)$, in which $\mu$ is modeled as the network output, while $\sigma$ is a parameter. CartPoleSwingup makes use of a neural network baseline with one layer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, we used TRPO step size 0.01 and discount factor $\gamma = 0.99$. We choose SimHash parameter $k = 32$ and bonus coefficient $\beta = 0.01$, found through a coarse grid search. For Atari experiments, a batch size of 100000 is used, while the KL divergence step size is set to 0.01. The policy and baseline both have the following architecture: 2 convolutional layers with respectively 16 and 32 filters, sizes 8 × 8 and 4 × 4, strides 4 and 2, using no padding, feeding into a single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The input frames are downsampled to 52 × 52. The input to policy and baseline consists of the 4 previous frames, corresponding to the frame skip of 4. The discount factor was set to $\gamma = 0.995$. All inputs are rescaled to [−1, 1] element-wise. All experiments used 5 different training seeds, except the experiments with the learned hash code, which use 3 different training seeds. Batch normalization (Ioffe & Szegedy, 2015) is used at each policy and baseline layer.
TRPO-pixel-SimHash uses binary codes of size $k = 256$; BASS (TRPO-BASS-SimHash) extracts features using cell size $C = 20$ and $B = 20$ bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidden layer of 512 bits, which is projected down to 64 bits. RAM states in Atari 2600 games are integer-valued vectors of length 128 with entries in the range [0, 255]. Experiments on Montezuma’s Revenge with RAM observations use a policy consisting of 2 hidden layers, each of size 32. RAM states are rescaled to the range [−1, 1]. Unlike with images, only the current RAM state is shown to the agent. Experiment results are averaged over 10 random seeds. In addition, we apply counting Bloom filters (Fan et al., 2000) to maintain a small hash table. Details can be found in Appendix A.5. The autoencoder used for the learned hash code has a 512-bit binary code layer, using sigmoid units, to which uniform noise $U(-a, a)$ with $a = 0.3$ is added. The AE is updated by minimizing the loss function in Eq. (3), using $\lambda = 10$, every $j_{\text{update}} = 3$ iterations. The architecture looks as follows: an input layer of size 52 × 52, representing the image luminance, is followed by 3 consecutive 6 × 6 convolutional layers with stride 2 and 96 filters, which feed into a fully connected layer of size 1024, which connects to the binary code layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to a fully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 × 6 transposed convolutional layers, of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the pixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the log-probability of each of the bins is increased by 0.003, before normalizing. The softmax weights are shared among each pixel. All output nonlinearities are ReLUs; Adam (Kingma & Ba, 2015) is used as an optimization scheme; batch normalization (Ioffe & Szegedy, 2015) is applied to each layer. The architecture was shown in Figure 1 of Section 2.3. A.2 Description of the Adapted rllab Tasks This section describes the continuous control environments used in the experiments. The tasks are implemented as described in Duan et al. (2016), following the sparse reward adaptation of Houthooft et al. (2016). The tasks have the following state and action dimensions: CartPoleSwingup, $\mathcal{S} \subseteq \mathbb{R}^4$, $\mathcal{A} \subseteq \mathbb{R}^1$; MountainCar, $\mathcal{S} \subseteq \mathbb{R}^3$, $\mathcal{A} \subseteq \mathbb{R}^1$; HalfCheetah, $\mathcal{S} \subseteq \mathbb{R}^{20}$, $\mathcal{A} \subseteq \mathbb{R}^6$; SwimmerGather, $\mathcal{S} \subseteq \mathbb{R}^{33}$, $\mathcal{A} \subseteq \mathbb{R}^2$. For the sparse reward experiments, the tasks have been modified as follows. In CartPoleSwingup, the agent receives a reward of +1 when $\cos(\beta) > 0.8$, with $\beta$ the pole angle; therefore, the agent has to figure out how to swing up the pole in the absence of any initial external rewards. In MountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping the valley from the right side. In HalfCheetah, the agent receives a reward of +1 when $x_{\text{body}} > 5$. As such, it has to figure out how to move forward without any initial external reward. The time horizon is set to $T = 500$ for all tasks. A.3 Examples of Atari 2600 RAM Entries Table 5 lists the semantic interpretation of certain RAM entries in Montezuma’s Revenge. SmartHash, as described in Section 3.4, makes use of RAM indices 3, 42, 43, 27, and 67. “Beam walls” are deadly barriers that occur periodically in some rooms.
A.4 Analysis of Learned Binary Representation Figure 6 shows the downsampled codes learned by the autoencoder for several Atari 2600 games (Frostbite, Freeway, and Montezuma’s Revenge). Each row depicts 50 consecutive frames (from 0 to 49, going from left to right, top to bottom). The pictures in the right column depict the binary codes that correspond with each of these frames (one frame per row). Figure 7 shows the reconstructions of several subsequent images according to the autoencoder. A.5 Counting Bloom Filter/Count-Min Sketch We experimented with directly building a hashing dictionary with keys $\phi(s)$ and values the state counts, but observed an unnecessary increase in computation time. Our implementation converts the integer hash codes into binary numbers and then into the “bytes” type in Python. The hash table is a dictionary using those bytes as keys. However, an alternative technique called Count-Min Sketch (Cormode & Muthukrishnan, 2005), with a data structure identical to counting Bloom filters (Fan et al., 2000), can count with a fixed integer array and thus reduce computation time. Specifically, let $p_1, \ldots, p_l$ be distinct large prime numbers and define $\phi_j(s) = \phi(s) \bmod p_j$. The count of state $s$ is returned as $\min_{1 \leq j \leq l} n_j(\phi_j(s))$. To increase the count of $s$, we increment $n_j(\phi_j(s))$ by 1 for all $j$. Intuitively, the method replaces $\phi$ by weaker hash functions, while it reduces the probability of over-counting by reporting counts agreed upon by all such weaker hash functions. The final hash code is represented as $(\phi_1(s), \ldots, \phi_l(s))$. Throughout all experiments above, the prime numbers for the counting Bloom filter are 999931, 999953, 999959, 999961, 999979, and 999983, which we abbreviate as “6 M”. In addition, we experimented with 6 other prime numbers, each approximately 15 M, which we abbreviate as “90 M”. As we can see in Figure 8, counting states with a dictionary or with Bloom filters leads to similar performance, but the computation time of the latter is lower. Moreover, there is little difference between direct counting and using a very large table for Bloom filters, as the average bonus rewards are almost the same, indicating the same degree of exploration-exploitation trade-off. On the other hand, Bloom filters require a fixed table size, which may not be known beforehand. Theory of Bloom Filters Bloom filters (Bloom, 1970) are popular for determining whether a data sample $s'$ belongs to a dataset $D$. Suppose we have $l$ functions $\phi_j$ that independently assign each data sample to an integer between 1 and $p$ uniformly at random. Initially $1, 2, \ldots, p$ are marked as 0. Then every $s \in D$ is “inserted” through marking $\phi_j(s)$ as 1 for all $j$. A new sample $s'$ is reported as a member of $D$ only if $\phi_j(s')$ is marked as 1 for all $j$. A Bloom filter has a zero false negative rate (any $s \in D$ is reported as a member), while the false positive rate (the probability of reporting a nonmember as a member) decays exponentially in $l$. Though Bloom filters support data insertion, they do not allow data deletion. Counting Bloom filters (Fan et al., 2000) maintain a counter $n(\cdot)$ for each number between 1 and $p$. Inserting/deleting $s$ corresponds to incrementing/decrementing $n(\phi_j(s))$ by 1 for all $j$. Similarly, $s$ is considered a member if $\forall j : n(\phi_j(s)) > 0$. Count-Min Sketch is designed to support memory-efficient counting without introducing too many over-counts.
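As a sketch, the Count-Min counter just introduced (and analyzed below) can be implemented with fixed-size integer arrays as follows; the primes are the "6 M" set used in the experiments, while the class interface is illustrative:

```python
import numpy as np

class CountMinSketch:
    """Approximate counting with one fixed integer array per prime.
    The reported count is the minimum over all per-prime counters,
    which limits over-counting (see Eq. (5))."""

    PRIMES = (999931, 999953, 999959, 999961, 999979, 999983)  # "6 M"

    def __init__(self, primes=PRIMES):
        self.primes = primes
        self.tables = [np.zeros(p, dtype=np.int64) for p in primes]

    def inc(self, code):
        # code: an integer hash code phi(s)
        for p, t in zip(self.primes, self.tables):
            t[code % p] += 1

    def count(self, code):
        return int(min(t[code % p] for p, t in zip(self.primes, self.tables)))
```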
Concretely, the Count-Min Sketch maintains a separate count $n_j$ for each hash function $\phi_j$, defined as $\phi_j(s) = \phi(s) \bmod p_j$, where $p_j$ is a large prime number. For simplicity, we may assume that $p_j \approx p\ \forall j$ and that $\phi_j$ assigns $s$ to any of $1, \ldots, p$ with uniform probability. We now derive the probability of over-counting. Let $s$ be a fixed data sample (not necessarily inserted yet) and suppose a dataset $D$ of $N$ samples is inserted. We assume that $p^l \gg N$. Let $n := \min_{1 \leq j \leq l} n_j(\phi_j(s))$ be the count returned by the Bloom filter. We are interested in computing $\mathrm{Prob}(n > 0 \mid s \notin D)$. Due to the assumptions about $\phi_j$, we know $n_j(\phi_j(s)) \sim \mathrm{Binomial}\left(N, \frac{1}{p}\right)$. Therefore, $$\mathrm{Prob}(n > 0 \mid s \notin D) = \frac{\mathrm{Prob}(n > 0,\, s \notin D)}{\mathrm{Prob}(s \notin D)} = \frac{\mathrm{Prob}(n > 0) - \mathrm{Prob}(s \in D)}{\mathrm{Prob}(s \notin D)} \approx \frac{\mathrm{Prob}(n > 0)}{\mathrm{Prob}(s \notin D)} = \frac{\prod_{j=1}^{l} \mathrm{Prob}(n_j(\phi_j(s)) > 0)}{(1 - 1/p^l)^N} = \frac{\left(1 - (1 - 1/p)^N\right)^l}{(1 - 1/p^l)^N} \approx \frac{\left(1 - e^{-N/p}\right)^l}{e^{-N/p^l}} \approx \left(1 - e^{-N/p}\right)^l. \qquad (5)$$ In particular, the probability of over-counting decays exponentially in $l$. We refer the readers to Cormode & Muthukrishnan (2005) for other properties of the Count-Min Sketch. A.6 Robustness Analysis Apart from the experimental results shown in Table 1 and Table 3, additional experiments have been performed to study several properties of our algorithm. Hyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, we focus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a clear advantage over the baseline. Because the final scores can vary between different random seeds, we evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM states are used instead of image observations. [Table 6, partially recovered: Frostbite scores of TRPO-RAM-SimHash for increasing values of β; row k = 128: –, 1475, 4248, 2801, 3239, 3621, 1543, 395; row k = 256: –, 2583, 4497, 4437, 7849, 3516, 2260, 374.] The results are summarized in Table 6. Herein, $k$ refers to the length of the binary code for hashing while $\beta$ is the multiplicative coefficient for the reward bonus, as defined in Section 2.2. This table demonstrates that most hyperparameter settings outperform the baseline ($\beta = 0$) significantly. Moreover, the final scores show a clear pattern in response to changing hyperparameters. Small $\beta$-values lead to insufficient exploration, while large $\beta$-values cause the bonus rewards to overwhelm the true rewards. With a fixed $k$, the scores are roughly concave in $\beta$, peaking at around 0.2. Higher granularity $k$ leads to better performance. Therefore, it can be concluded that the proposed exploration method is robust to hyperparameter changes in comparison to the baseline, and that the best parameter settings can be obtained from a relatively coarse-grained grid search. State and state-action counting Continuing the results in Table 6, the performance of state-action counting is studied using the same experimental setup, summarized in Table 7. In particular, a bonus reward $r^+ = \frac{\beta}{\sqrt{n(s,a)}}$ instead of $r^+ = \frac{\beta}{\sqrt{n(s)}}$ is assigned. These results show that the relative performance of state counting compared to state-action counting depends highly on the selected hyperparameter settings. However, we notice that the best performance is achieved using state counting with $k = 256$ and $\beta = 0.2$. [Table 7, partially recovered: Frostbite scores for k = 128, state counting / state-action counting: 1475 / 808, 4248 / 4302, 2801 / 4802, 3239 / 7291, 3621 / 4243, 1543 / 1941, 395 / 362.]
1. What is the main contribution of the paper regarding reinforcement learning exploration?
2. What are the strengths and weaknesses of the proposed approach compared to other methods?
3. How does the reviewer assess the robustness of the conclusions drawn from the results?
4. Why did the authors choose state-based counts instead of state-action based counts, and what could be the reason for their similar performance in Atari games?
5. What is the significance of using locality-sensitive hashing of states in building a table of visit counts?
6. How does the technique compare to other exploration algorithms in different domains and games?
7. What is the appeal of the technique, and how does it differ from current alternatives like VIME, density estimation, and pseudo-counts?
8. What is the impact of omitting DQN from the comparison, given its relevance to the field and its use in many variants?
9. Is there any concern about the engineering involved in getting the method to work, and how might this affect the long-term impact of the paper?
10. Are there any suggestions or ideas for future research related to this work?
Review
Review The paper proposes a new exploration scheme for reinforcement learning using locality-sensitive hashing of states to build a table of visit counts, which are then used to encourage exploration in the style of MBIE-EB of Strehl and Littman. Several points are appealing about this approach: first, it is quite simple compared to the current alternatives (e.g. VIME, density estimation and pseudo-counts). Second, the paper presents results across several domains, including classic benchmarks, continuous control domains, and Atari 2600 games. In addition, there are results for comparison from several other algorithms (DQN variants), many of which are quite recent. The results indicate that the approach clearly improves over the baseline. The results against other exploration algorithms are not as clear (more dependent on the individual domain/game), but I think this is fine as the appeal of the technique is its simplicity. Third, the paper presents results on the sensitivity to the granularity of the abstraction. I have only one main complaint, which is that it seems there was some engineering involved to get this to work, and I do not have much confidence in the robustness of the conclusions. I am left uncertain as to how the story changes given slight perturbations of hyper-parameter values or enabling/disabling of certain choices. For example, how critical was using PixelCNN (or tying the weights?) or noisifying the output in the autoencoder, and what happens if you remove the custom additions to BASS? The granularity results show that the choice of resolution is sensitive, and even across games the story is not consistent. The authors decide to use state-based counts instead of state-action based counts, deviating from the theory, which is odd because the reason to use LSH in the first place is to get closer to what MBIE-EB would advise via tabular counts. There are several explanations as to why state-based versus state-action based counts perform similarly in Atari; the authors do not offer any. Why? It seems like the technique could be easily used in DQN as well, and many of the variants the authors compare to are DQN-based, so omitting DQN here again seems strange. The authors justify their choice of TRPO by saying it ensures safe policy improvement, though it is not clear that this is still true when adding these exploration bonuses. The case study on Montezuma's Revenge, while interesting, involves using domain knowledge and so does not really fit well with the rest of the paper. So, in the end, a simple and elegant idea to help with exploration, tested in many domains, though I am not certain which of the many pieces are critical for the story to hold versus just slightly helpful, which could hurt the long-term impact of the paper. --- After response: Thank you for the thorough response, and again my apologies for the late reply. I appreciate the follow-up version on the robustness of SimHash and state counting vs. state-action counting. The paper addresses an important problem (exploration), suggesting a "simple" (compared to density estimation) counting method via hashing. It is a nice alternative approach to the one offered by Bellemare et al. If discussion among reviewers were possible, I would now try to assemble an argument to accept the paper. Specifically, I am not as concerned as Reviewer3 about beating the state of the art in Montezuma's, as the merit of the current paper lies in the simplicity of the hashing and in the wide comparison of domains vs. the baseline TRPO.
This paper shows that we should not give up on simple hashing. There still seem to be a bunch of fiddly bits to get this to work, and I am still not confident that these results are easily reproducible. Nonetheless, it is an interesting new contrasting approach to exploration which deserves attention. Not important for the decision: The argument in the rebuttal concerning DQN & A3C is a bit of a straw man. I did not mention anything at all about A3C; I strictly referred to DQN, which is less sensitive to parameter-tuning than A3C. Also, the main result of Bellemare et al. (2016) on Montezuma used DQN. Hence the omission of these techniques applied to DQN still seems a bit strange (for the Atari experiments). The figure S9 from Mnih et al. points to instances of asynchronous one-step Sarsa with varied thread counts; of course this will be sensitive to parameters: it is both an asynchronous online algorithm *and* the parameter varied is the thread count! This is hardly indicative of DQN's sensitivity to parameters, since DQN is (a) single-threaded and (b) uses experience replay, leading to slower policy changes. As another source of stability, DQN uses a target network that changes infrequently. Perhaps the authors made a mistake in the reference to the figure? (I see no Figure 9 in https://arxiv.org/pdf/1602.01783v2.pdf; I assume the authors meant Figure S9)
ICLR
Title #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning Abstract Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration. 1 Introduction Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning through trial and error to maximize rewards. It is impossible for the agent to act near-optimally until it has sufficiently explored the environment and identified all of the opportunities for high reward, in all scenarios. A core challenge in RL is how to balance exploration—actively seeking out novel states and actions that might yield high rewards and lead to long-term gains; and exploitation—maximizing short-term rewards using the agent’s current knowledge. While there are exploration techniques for finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for highdimensional state spaces; therefore, developing more general and robust exploration techniques is an active area of research. Most of the recent state-of-the-art RL results have been obtained using simple exploration strategies such as uniform sampling (Mnih et al., 2015) and i.i.d./correlated Gaussian noise (Schulman et al., 2015; Lillicrap et al., 2015). Although these heuristics are sufficient in tasks with well-shaped rewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse rewards (Osband et al., 2016b). Recently developed exploration strategies for deep RL have led to significantly improved performance on environments with sparse rewards. Bootstrapped DQN ∗These authors contributed equally. (Osband et al., 2016a) led to faster learning in a range of Atari 2600 games by training an ensemble of Q-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance on Montezuma’s Revenge, an extremely challenging Atari 2600 game (Bellemare et al., 2016). Variational Information Maximizing Exploration (VIME, Houthooft et al. 
(2016)) encourages the agent to explore by acquiring information about environment dynamics, and performs well on various robotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast method that can work across different domains. Some of the classic, theoretically-justified exploration methods are based on counting state-action visitations, and turning this count into a bonus reward. In the bandit setting, the well-known UCB algorithm of Lai & Robbins (1985) chooses the action at at time t that maximizes r̂ (at ) + √ 2 log t n(at ) where r̂ (at ) is the estimated reward, and n(at ) is the number of times action at was previously chosen. In the MDP setting, some of the algorithms have similar structure, for example, Model Based Interval Estimation–Exploration Bonus (MBIE-EB) of Strehl & Littman (2008) counts state-action pairs with a table n(s, a) and adding a bonus reward of the form β√ n(s,a) to encourage exploring less visited pairs. Kolter & Ng (2009) show that the inverse-square-root dependence is optimal. MBIE and related algorithms assume that the augmented MDP is solved analytically at each timestep, which is only practical for small finite state spaces. This paper presents a simple approach for exploration, which extends classic counting-based methods to high-dimensional, continuous state spaces. We discretize the state space with a hash function and apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately balance generalization across states, and distinguishing between states. We select problems from rllab (Duan et al., 2016) and Atari 2600 (Bellemare et al., 2012) featuring sparse rewards, and demonstrate near state-of-the-art performance on several games known to be hard for naïve exploration strategies. The main strength of the presented approach is that it is fast, flexible and complementary to most existing RL algorithms. In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2); demonstrates its effectiveness on challenging deep RL benchmark problems and analyzes key components of well-designed hash functions (Section 3). 2 Methodology 2.1 Notation This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S,A,P, r, ρ0, γ,T ), in which S is the state space, A the action space, P a transition probability distribution, r : S × A → R≥0 a reward function, ρ0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. The goal of RL is to maximize the total expected discounted reward Eπ,P [∑T t=0 γ tr (st, at ) ] over a policy π, which outputs a distribution over actions given a state. 2.2 Count-Based Exploration via Static Hashing Our approach discretizes the state space with a hash function φ : S → Z. An exploration bonus is added to the reward function, defined as r+(s, a) = β√ n(φ(s)) , (1) where β ∈ R≥0 is the bonus coefficient. Initially the counts n(·) are set to zero for the whole range of φ. For every state st encountered at time step t, n(φ(st )) is increased by one. The agent is trained with rewards (r + r+), while performance is evaluated as the sum of rewards without bonuses. Note that our approach is a departure from count-based exploration methods such as MBIE-EB since we use a state-space count n(s) rather than a state-action count n(s, a). 
State-action counts n(s, a) are investigated in Appendix A.6, but no significant performance gains over state counting could be witnessed. Algorithm 1: Count-based exploration through static hashing 1 Define state preprocessor g : S → RK 2 (In case of SimHash) Initialize A ∈ Rk×K with entries drawn i.i.d. from the standard Gaussian distribution N (0, 1) 3 Initialize a hash table with values n(·) ≡ 0 4 for each iteration j do 5 Collect a set of state-action samples {(sm, am)}Mm=0 with policy π 6 Compute hash codes through any LSH method, e.g., for SimHash, φ(sm) = sgn(Ag(sm)) 7 Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(sm)) ← n(φ(sm)) + 1 8 Update the policy π using rewards { r (sm, am) + β√ n(φ(sm )) }M m=0 with any RL algorithm Clearly the performance of this method will strongly depend on the choice of hash function φ. One important choice we can make regards the granularity of the discretization: we would like for “distant” states to be be counted separately while “similar” states are merged. If desired, we can incorporate prior knowledge into the choice of φ, if there would be a set of salient state features which are known to be relevant. Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash functions for querying nearest neighbors based on certain similarity metrics (Andoni & Indyk, 2006). A computationally efficient type of LSH is SimHash (Charikar, 2002), which measures similarity by angular distance. SimHash retrieves a binary code of state s ∈ S as φ(s) = sgn(Ag(s)) ∈ {−1, 1}k, (2) where g : S → Rd is an optional preprocessing function and A is a k × d matrix with i.i.d. entries drawn from a standard Gaussian distributionN (0, 1). The value for k controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states. 2.3 Count-Based Exploration via Learned Hashing When the MDP states have a complex structure, as is the case with image observations, measuring their similarity directly in pixel space fails to provide the semantic similarity measure one would desire. Previous work in computer vision (Lowe, 1999; Dalal & Triggs, 2005; Tola et al., 2010) introduce manually designed feature representations of images that are suitable for semantic tasks including detection and classification. More recent methods learn complex features directly from data by training convolutional neural networks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015). Considering these results, it may be difficult for SimHash to cluster states appropriately using only raw pixels. Therefore, we propose to use an autoencoder (AE) consisting of convolutional, dense, and transposed convolutional layers to learn meaningful hash codes in one of its hidden layers. This AE takes as input states s and contains one special dense layer comprised of K saturating activation functions, Algorithm 2: Count-based exploration using learned hash codes 1 Define state preprocessor g : S → BK as the binary code resulting from the autoencoder (AE) 2 Initialize A ∈ Rk×K with entries drawn i.i.d. from the standard Gaussian distribution N (0, 1) 3 Initialize a hash table with values n(·) ≡ 0 4 for each iteration j do 5 Collect a set of state-action samples {(sm, am)}Mm=0 with policy π 6 Add the state samples {sm}Mm=0 to a FIFO replay pool R 7 if j mod jupdate = 0 then 8 Update the AE loss function in Eq. 
(3) using samples drawn from the replay pool {sn}Nn=1 ∼ R, for example using stochastic gradient descent 9 Compute g(sm) = bb(sm)e, the K-dim rounded hash code for sm learned by the AE 10 Project g(sm) to a lower dimension k via SimHash as φ(sm) = sgn(Ag(sm)) 11 Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(sm)) ← n(φ(sm)) + 1 12 Update the policy π using rewards { r (sm, am) + β√ n(φ(sm )) }M m=0 with any RL algorithm more specifically sigmoid functions. By rounding the sigmoid output b(s) of this layer to the closest binary number, any state s can be binarized. Since gradients cannot be back-propagated through a rounding function, an alternative method must be used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise U (−a, a) is added to the sigmoid output. By choosing uniform noise with a sufficiently high variance, the AE is only capable of reconstructing distinct inputs s if its hidden dense layer outputs values b(s) that are sufficiently far apart from each other (Gregor et al., 2016). Feeding a state s to the AE input, extracting b(s) and rounding it to bb(s)e yields a learned binary code. As such, the loss function L(·) over a set of collected states {si }Ni=1 is defined as L ( {sn}Nn=1 ) = − 1 N N∑ n=1 log p(sn) − λ K K∑ i=1 min { (1 − bi (sn))2 , bi (sn)2 } . (3) This objective function consists of a cross-entropy term and a term that pressures the binary code layer to take on binary values, scaled by λ ∈ R≥0. The reasoning behind this is that uniform noise U (−a, a) alone is insufficient, in case the AE does not use a particular sigmoid unit. This term ensures that an unused binary code output is assigned an arbitrary binary value. When omitting this term, the code is more prone to oscillations, causing unwanted bit flips, and destabilizing the counting process. In order to make the AE train sufficiently fast—which is required since it is updated during the agent’s training—we make use of a pixel-wise softmax output layer (van den Oord et al., 2016) that shares weights between all pixels. The different softmax outputs merge together pixel intensities into discrete bins. The architectural details are described in Appendix A.1 and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code bb(s)e, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2). One the one hand, it is important that the mapping from state to code needs to remain relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2 line 8). An obvious solution would be to significantly downsample the binary code to a very low dimension, or by slowing down the training process. But on the other hand, the code has to remain relatively unique for states that are both distinct and close together on the image manifold. This is tackled both by the second term in Eq. (3) and by the saturating behavior of the sigmoid units. As such, states that are already well represented in the AE hidden layers tend to saturate the sigmoid units, causing the resulting loss gradients to be close to zero and making the code less prone to change. 3 Experiments Experiments were designed to investigate and answer the following research questions: 1. Can count-based exploration through hashing improve performance significantly across different domains? 
How does the proposed method compare to the current state of the art in exploration for deep RL? 2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used? 3. What factors contribute to good performance, e.g., what is the appropriate level of granularity of the hash function? To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in Section 3.3 and 3.4. Trust Region Policy Optimization (TRPO, Schulman et al. (2015)) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, it can conveniently ensure stable improvement in the policy performance, and is relatively insensitive to hyperparameter changes. The hyperparameters settings are reported in Appendix A.1. 3.1 Continuous Control The rllab benchmark (Duan et al., 2016) consists of various control tasks to test deep RL algorithms. We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and adopt the experimental setup as defined in (Houthooft et al., 2016)—a description can be found in Appendix A.2. These tasks are all highly difficult to solve with naïve exploration strategies, such as adding Gaussian noise to the actions. Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME (Houthooft et al., 2016) on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task SwimmerGather. Using count-based exploration with hashing is capable of reaching the goal in all environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather. 3.2 Arcade Learning Environment The Arcade Learning Environment (ALE, Bellemare et al. (2012)), which consists of Atari 2600 video games, is an important benchmark for deep RL due to its high-dimensional state space and wide variety of games. In order to demonstrate the effectiveness of the proposed exploration strategy, six games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma’s Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1 M steps (the TRPO batch size, corresponds to 0.4 M frames). Policies and value functions are neural networks with identical architectures to (Mnih et al., 2016). Although the policy and baseline take into account the previous four frames, the counting algorithm only looks at the latest frame. BASS To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see Bellemare et al. (2012)) as a static preprocessing function g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are large and monochrome, and 3) winning depends mostly on knowing object locations and motions. 
We compare our results to double DQN (van Hasselt et al., 2016b), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), double DQN with pseudo-counts (Bellemare et al., 2016), Gorila (Nair et al., 2015), and DQN Pop-Art (van Hasselt et al., 2016a) on the “null op” metric². We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on Montezuma’s Revenge and Venture, where it captures object locations better than other methods.³ TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris.⁴
² The agent takes no action for a random number (within 30) of frames at the beginning of each episode.
As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma’s Revenge and Venture. Therefore, a static or adaptive preprocessing step can be important for a good hash function. In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results.
3.3 Granularity
While our proposed method is able to achieve remarkable results without requiring much tuning, the granularity of the hash function should be chosen wisely. Granularity plays a critical role in count-based exploration, where the hash function should cluster states without under-generalizing or over-generalizing. Table 2 summarizes the granularity parameters for our hash functions. In Table 3 we summarize the performance of TRPO-pixel-SimHash under different granularities. We choose Frostbite and Venture, on which TRPO-pixel-SimHash outperforms the baseline, and choose as reward bonus coefficient β = 0.01 × √(256/k) to keep average bonus rewards at approximately the same scale. k = 16 only corresponds to 65536 distinct hash codes, which is insufficient to distinguish between semantically distinct states and hence leads to worse performance. We observed that k = 512 tends to capture trivial image details in Frostbite, leading the agent to believe that every state is new and equally worth exploring.
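As a toy illustration of this trade-off (ours, not an experiment from the paper), the snippet below hashes a state and a slightly perturbed copy at two granularities: with few bits the two usually collide into the same code, while with many bits they usually receive distinct codes.

```python
import numpy as np

rng = np.random.default_rng(0)
state = rng.random(52 * 52)                              # flattened "image" state
near = state + 0.01 * rng.standard_normal(state.size)   # tiny perturbation

for k in (16, 512):
    A = rng.standard_normal((k, state.size))             # SimHash projection matrix
    code_a, code_b = np.sign(A @ state), np.sign(A @ near)
    print(k, "bits differing:", int(np.sum(code_a != code_b)))
```

Each bit flips with the same small probability regardless of k, so as k grows the chance that at least one bit differs, and hence that the two states are counted separately, increases.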
Similar results are observed while tuning the granularity parameters for TRPO-BASS-SimHash and TRPO-AE-SimHash. The best granularity depends on both the hash function and the MDP. While adjusting the granularity parameter, we observed that it is important to lower the bonus coefficient as granularity is increased. This is because a higher granularity is likely to cause lower state counts, leading to higher bonus rewards that may overwhelm the true rewards.
³ We provide videos of example game play and visualizations of the difference between Pixel-SimHash and BASS-SimHash at https://www.youtube.com/playlist?list=PLAd-UMX6FkBQdLNWtY8nH1-pzYJA_1T55
⁴ Note that some design choices in other algorithms also impact exploration, such as ε-greedy and entropy regularization. Nevertheless, it is still valuable to position our results within the current literature.
3.4 A Case Study of Montezuma’s Revenge
Montezuma’s Revenge is widely known for its extremely sparse rewards and difficult exploration (Bellemare et al., 2016). While our method does not outperform Bellemare et al. (2016) on this game, we investigate the reasons behind this through various experiments. The experiment process below again demonstrates the importance of a hash function having the correct granularity and encoding relevant information for solving the MDP. Our first attempt is to use game RAM states instead of image observations as inputs to the policy (details in Appendix A.1), which leads to a game score of 2500 with TRPO-RAM-SimHash. Our second attempt is to manually design a hash function that incorporates domain knowledge, called SmartHash, which uses an integer-valued vector consisting of the agent’s (x, y) location, room number and other useful RAM information as the hash code (details in Appendix A.3). The best SmartHash agent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight change in the agent’s coordinates does not always result in a semantically distinct state, and thus the hash code may remain unchanged. Therefore we choose a grid size s and replace the x coordinate by ⌊(x − x_min)/s⌋ (similarly for y). The bonus coefficient is chosen as β = 0.01√s to maintain the scale relative to the true reward⁵ (see Table 4). Finally, the best agent is able to obtain 6600 total rewards after training for 1000 iterations (1000 M time steps), with a grid size s = 10.
⁵ The bonus scaling is chosen by assuming all states are visited uniformly and the average bonus reward should remain the same for any grid size.
During our pursuit, we made another interesting discovery: the ideal hash function should not simply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We experimented with including enemy locations in the first two rooms into SmartHash (s = 10), and observed that the average score dropped to 1672 (at iteration 1000). Though it is important for the agent to dodge enemies, the agent also erroneously “enjoys” watching enemy motions at a distance (since new states are constantly observed) and “forgets” that its main objective is to enter other rooms. An alternative hash function keeps the same entry “enemy locations”, but instead only puts randomly sampled values in it, which surprisingly achieves better performance (3112). However, by ignoring enemy locations altogether, the agent achieves a much higher score (5661) (see Figure 5).
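The grid discretization described above is simple enough to state in a few lines. The sketch below is ours; the argument names and defaults are placeholders rather than the paper's implementation.

```python
import math

def smart_hash(agent_x, agent_y, room_number, s=10, x_min=0, y_min=0):
    # Coarsen coordinates: all positions within an s x s cell share a code,
    # so small movements no longer register as novel states.
    return ((agent_x - x_min) // s, (agent_y - y_min) // s, room_number)

def bonus_coefficient(s, base=0.01):
    # beta = 0.01 * sqrt(s), keeping the average bonus at the same scale
    # for any grid size (cf. footnote 5).
    return base * math.sqrt(s)
```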
In retrospect, we examine the hash codes generated by BASS-SimHash and find that the codes clearly distinguish between visually different states (including various enemy locations), but fail to emphasize that the agent needs to explore different rooms. Again this example showcases the importance of encoding relevant information in designing hash functions.
4 Related Work
Classic count-based methods such as MBIE (Strehl & Littman, 2005), MBIE-EB (Strehl & Littman, 2008), and BEB (Kolter & Ng, 2009) solve an approximate Bellman equation as an inner loop before the agent takes an action. As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with environments, with value-based (Mnih et al., 2015) or policy gradient-based (Schulman et al., 2015; Mnih et al., 2016) methods, at limited speed. In addition, although our proposed method is intended to work with contemporary deep RL algorithms, it differs from classical count-based methods in that it relies on visiting unseen states first before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gap between our method and classic theories is an important direction of future research.
A related line of classical exploration methods is based on the idea of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002) but is not restricted to using counting to implement “optimism”, e.g., R-Max (Brafman & Tennenholtz, 2002), UCRL (Jaksch et al., 2010), and E3 (Kearns & Singh, 2002). These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings. Bayesian RL methods (Kolter & Ng, 2009; Guez et al., 2014; Sun et al., 2011; Ghavamzadeh et al., 2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state space have been proposed by Pazis & Parr (2013) and Osband et al. (2016b).
Another type of exploration is curiosity-based exploration. These methods try to capture the agent’s surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to Schmidhuber (2010) and Oudeyer & Kaplan (2007) for an extensive review on curiosity and intrinsic rewards.
Several exploration strategies for deep RL have been proposed recently to handle high-dimensional state spaces. Houthooft et al. (2016) propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics and is used as an exploration bonus. Stadie et al. (2015) propose to use the prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through bootstrapping is proposed by Osband et al. (2016a), using bootstrapped Q-functions.
The most related exploration strategy is proposed by Bellemare et al. (2016), in which an exploration bonus is added that is inversely proportional to the square root of a pseudo-count quantity. A state pseudo-count is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to the pseudo-count approach in the sense that both methods perform approximate counting to obtain the necessary generalization over unseen states.
The difference is that a density model has to be designed and learned to achieve good generalization for the pseudo-count approach, whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection is that our method also implies a density model ρ(s) = n(φ(s))/N over all visited states, where N is the total number of states visited. Another method similar to hashing is proposed by Abel et al. (2016), which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems.
5 Conclusions
This paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It provides a simple yet powerful baseline for solving MDPs that require informed exploration.
Acknowledgments
We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).
A Appendices
A.1 Hyperparameter Settings
For the rllab experiments, we used batch size 5000 for all tasks except SwimmerGather, for which we used batch size 50000. CartPoleSwingup makes use of a neural network policy with one layer of 32 tanh units. The other tasks make use of a two-layer neural network policy of 32 tanh units each for MountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs are modeled by a fully factorized Gaussian distribution N(µ, σ²I), in which µ is modeled as the network output, while σ is a parameter. CartPoleSwingup makes use of a neural network baseline with one layer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, we used TRPO step size 0.01 and discount factor γ = 0.99. We choose SimHash parameter k = 32 and bonus coefficient β = 0.01, found through a coarse grid search.
For the Atari experiments, a batch size of 100000 is used, while the KL divergence step size is set to 0.01. The policy and baseline both have the following architecture: 2 convolutional layers with respectively 16 and 32 filters, sizes 8 × 8 and 4 × 4, strides 4 and 2, using no padding, feeding into a single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The input frames are downsampled to 52 × 52. The input to policy and baseline consists of the 4 previous frames, corresponding to the frame skip of 4. The discount factor was set to γ = 0.995. All inputs are rescaled to [−1, 1] element-wise. All experiments used 5 different training seeds, except the experiments with the learned hash code, which use 3 different training seeds. Batch normalization (Ioffe & Szegedy, 2015) is used at each policy and baseline layer.
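For quick reference, the settings above can be collected into a single configuration. The dictionaries below are only an illustrative restatement of Appendix A.1; the key names are ours, not from any released code.

```python
# Illustrative restatement of the hyperparameters in A.1.
RLLAB_CONFIG = {
    "batch_size": 5000,       # 50000 for SwimmerGather
    "trpo_step_size": 0.01,
    "discount_gamma": 0.99,
    "simhash_k": 32,          # binary code length
    "bonus_beta": 0.01,       # exploration bonus coefficient
}

ATARI_CONFIG = {
    "batch_size": 100_000,
    "kl_step_size": 0.01,
    "discount_gamma": 0.995,
    "frame_size": (52, 52),
    "frame_stack": 4,
}
```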
TRPO-pixel-SimHash uses binary codes of size k = 256; BASS (TRPO-BASS-SimHash) extracts features using cell size C = 20 and B = 20 bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidden layer of 512 bits, which is projected down to 64 bits.
RAM states in Atari 2600 games are integer-valued vectors of length 128 with entries in the range [0, 255]. Experiments on Montezuma’s Revenge with RAM observations use a policy consisting of 2 hidden layers, each of size 32. RAM states are rescaled to the range [−1, 1]. Unlike images, only the current RAM is shown to the agent. Experiment results are averaged over 10 random seeds. In addition, we apply counting Bloom filters (Fan et al., 2000) to maintain a small hash table. Details can be found in Appendix A.5.
The autoencoder used for the learned hash code has a 512-bit binary code layer, using sigmoid units, to which uniform noise U(−a, a) with a = 0.3 is added. The AE is trained on the loss function in Eq. (3), using λ = 10, every j_update = 3 iterations. The architecture looks as follows: an input layer of size 52 × 52, representing the image luminance, is followed by 3 consecutive 6 × 6 convolutional layers with stride 2 and 96 filters, which feed into a fully-connected layer of size 1024 that connects to the binary code layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to a fully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 × 6 transposed convolutional layers, of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the pixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the log-probability of each of the bins is increased by 0.003, before normalizing. The softmax weights are shared across all pixels. All output nonlinearities are ReLUs; Adam (Kingma & Ba, 2015) is used as an optimization scheme; batch normalization (Ioffe & Szegedy, 2015) is applied to each layer. The architecture was shown in Figure 1 of Section 2.3.
A.2 Description of the Adapted rllab Tasks
This section describes the continuous control environments used in the experiments. The tasks are implemented as described in Duan et al. (2016), following the sparse reward adaptation of Houthooft et al. (2016). The tasks have the following state and action dimensions: CartPoleSwingup, S ⊆ R⁴, A ⊆ R¹; MountainCar, S ⊆ R³, A ⊆ R¹; HalfCheetah, S ⊆ R²⁰, A ⊆ R⁶; SwimmerGather, S ⊆ R³³, A ⊆ R². For the sparse reward experiments, the tasks have been modified as follows. In CartPoleSwingup, the agent receives a reward of +1 when cos(β) > 0.8, with β the pole angle; therefore, the agent has to figure out how to swing up the pole in the absence of any initial external rewards. In MountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping the valley from the right side. In HalfCheetah, the agent receives a reward of +1 when x_body > 5. As such, it has to figure out how to move forward without any initial external reward. The time horizon is set to T = 500 for all tasks.
A.3 Examples of Atari 2600 RAM Entries
Table 5 lists the semantic interpretation of certain RAM entries in Montezuma’s Revenge. SmartHash, as described in Section 3.4, makes use of RAM indices 3, 42, 43, 27, and 67. “Beam walls” are deadly barriers that occur periodically in some rooms.
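As an illustration of how such a hash code could be assembled from RAM entries, here is a minimal sketch. The indices follow those listed for SmartHash above, but the field names attached to indices 27 and 67 are assumptions of ours, not documented semantics.

```python
# Hypothetical sketch: build a SmartHash code from a 128-byte Atari RAM vector.
# Indices follow A.3 (3, 42, 43, 27, 67); the field names are our guesses.
SMARTHASH_INDICES = {
    "room_number": 3,
    "agent_x": 42,
    "agent_y": 43,
    "carried_objects": 27,   # assumed meaning
    "beam_wall_state": 67,   # assumed meaning
}

def smart_hash_from_ram(ram, grid_size=10):
    x = ram[SMARTHASH_INDICES["agent_x"]] // grid_size
    y = ram[SMARTHASH_INDICES["agent_y"]] // grid_size
    return (
        ram[SMARTHASH_INDICES["room_number"]],
        x, y,
        ram[SMARTHASH_INDICES["carried_objects"]],
        ram[SMARTHASH_INDICES["beam_wall_state"]],
    )
```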
A.4 Analysis of Learned Binary Representation
Figure 6 shows the downsampled codes learned by the autoencoder for several Atari 2600 games (Frostbite, Freeway, and Montezuma’s Revenge). Each row depicts 50 consecutive frames (from 0 to 49, going from left to right, top to bottom). The pictures in the right column depict the binary codes that correspond with each of these frames (one frame per row). Figure 7 shows the reconstructions of several subsequent images according to the autoencoder.
A.5 Counting Bloom Filter/Count-Min Sketch
We experimented with directly building a hashing dictionary, with keys φ(s) and the state counts as values, but observed an unnecessary increase in computation time. Our implementation converts the integer hash codes into binary numbers and then into the “bytes” type in Python. The hash table is a dictionary using those bytes as keys. However, an alternative technique called Count-Min Sketch (Cormode & Muthukrishnan, 2005), with a data structure identical to counting Bloom filters (Fan et al., 2000), can count with a fixed integer array and thus reduce computation time. Specifically, let p₁, . . . , p_l be distinct large prime numbers and define φ_j(s) = φ(s) mod p_j. The count of state s is returned as min_{1≤j≤l} n_j(φ_j(s)). To increase the count of s, we increment n_j(φ_j(s)) by 1 for all j. Intuitively, the method replaces φ by weaker hash functions, while it reduces the probability of over-counting by reporting counts agreed on by all such weaker hash functions. The final hash code is represented as (φ₁(s), . . . , φ_l(s)).
Throughout all the experiments above, the prime numbers for the counting Bloom filter are 999931, 999953, 999959, 999961, 999979, and 999983, which we abbreviate as “6 M”. In addition, we experimented with 6 other prime numbers, each approximately 15 M, which we abbreviate as “90 M”. As we can see in Figure 8, counting states with a dictionary or with Bloom filters leads to similar performance, but the computation time of the latter is lower. Moreover, there is little difference between direct counting and using a very large table for Bloom filters, as the average bonus rewards are almost the same, indicating the same degree of exploration-exploitation trade-off. On the other hand, Bloom filters require a fixed table size, which may not be known beforehand.
Theory of Bloom Filters Bloom filters (Bloom, 1970) are popular for determining whether a data sample s′ belongs to a dataset D. Suppose we have l functions φ_j that independently assign each data sample to an integer between 1 and p uniformly at random. Initially 1, 2, . . . , p are marked as 0. Then every s ∈ D is “inserted” through marking φ_j(s) as 1 for all j. A new sample s′ is reported as a member of D only if φ_j(s′) is marked as 1 for all j. A Bloom filter has a zero false negative rate (any s ∈ D is reported a member), while the false positive rate (the probability of reporting a nonmember as a member) decays exponentially in l. Though Bloom filters support data insertion, they do not allow data deletion. Counting Bloom filters (Fan et al., 2000) maintain a counter n(·) for each number between 1 and p. Inserting/deleting s corresponds to incrementing/decrementing n(φ_j(s)) by 1 for all j. Similarly, s is considered a member if ∀j : n(φ_j(s)) > 0.
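A minimal sketch of this counting scheme, using the primes stated above, might look as follows; this is our own illustration rather than the paper's implementation.

```python
import numpy as np

class CountMinSketch:
    """Approximate counter over integer hash codes phi(s), as in A.5:
    one row of counts per prime modulus; a query returns the row minimum."""

    def __init__(self, primes=(999931, 999953, 999959, 999961, 999979, 999983)):
        self.primes = primes
        self.tables = [np.zeros(p, dtype=np.int64) for p in primes]

    def increment(self, code):
        # Increment n_j(phi_j(s)) for all j.
        for table, p in zip(self.tables, self.primes):
            table[code % p] += 1

    def count(self, code):
        # Return min_j n_j(phi_j(s)); taking the minimum limits over-counting.
        return min(int(table[code % p]) for table, p in zip(self.tables, self.primes))

# Usage: for a state with integer hash code `code` and bonus coefficient beta,
#   sketch = CountMinSketch(); sketch.increment(code)
#   bonus = beta / np.sqrt(sketch.count(code))
```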
Count-Min sketch is designed to support memory-efficient counting without introducing too many over-counts. It maintains a separate count n_j for each hash function φ_j defined as φ_j(s) = φ(s) mod p_j, where p_j is a large prime number. For simplicity, we may assume that p_j ≈ p for all j and that φ_j assigns s to any of 1, . . . , p with uniform probability. We now derive the probability of over-counting. Let s be a fixed data sample (not necessarily inserted yet) and suppose a dataset D of N samples is inserted. We assume that p^l ≫ N. Let n := min_{1≤j≤l} n_j(φ_j(s)) be the count returned by the Bloom filter. We are interested in computing Prob(n > 0 | s ∉ D). Due to the assumptions about φ_j, we know n_j(φ_j(s)) ∼ Binomial(N, 1/p). Therefore,

Prob(n > 0 | s ∉ D) = Prob(n > 0, s ∉ D) / Prob(s ∉ D)
                    = (Prob(n > 0) − Prob(s ∈ D)) / Prob(s ∉ D)
                    ≈ Prob(n > 0) / Prob(s ∉ D)
                    = ∏_{j=1}^{l} Prob(n_j(φ_j(s)) > 0) / (1 − 1/p^l)^N
                    = (1 − (1 − 1/p)^N)^l / (1 − 1/p^l)^N
                    ≈ (1 − e^{−N/p})^l / e^{−N/p^l}
                    ≈ (1 − e^{−N/p})^l.  (5)

In particular, the probability of over-counting decays exponentially in l. We refer the readers to (Cormode & Muthukrishnan, 2005) for other properties of the Count-Min sketch.
A.6 Robustness Analysis
Apart from the experimental results shown in Table 1 and Table 3, additional experiments have been performed to study several properties of our algorithm.
Hyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, we focus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a clear advantage over the baseline. Because the final scores can vary between different random seeds, we evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM states are used instead of image observations. The results are summarized in Table 6; an excerpt is reproduced below. Herein, k refers to the length of the binary code for hashing while β is the multiplicative coefficient for the reward bonus, as defined in Section 2.2.
Table 6 (excerpt): Frostbite scores of TRPO-RAM-SimHash; rows give k, and columns correspond to increasing values of β, with “–” marking the β = 0 baseline column, which does not depend on k:
k = 128: –, 1475, 4248, 2801, 3239, 3621, 1543, 395
k = 256: –, 2583, 4497, 4437, 7849, 3516, 2260, 374
This table demonstrates that most hyperparameter settings outperform the baseline (β = 0) significantly. Moreover, the final scores show a clear pattern in response to changing hyperparameters. Small β-values lead to insufficient exploration, while large β-values cause the bonus rewards to overwhelm the true rewards. With a fixed k, the scores are roughly concave in β, peaking at around 0.2. Higher granularity k leads to better performance. Therefore, it can be concluded that the proposed exploration method is robust to hyperparameter changes in comparison to the baseline, and that the best parameter settings can be obtained from a relatively coarse-grained grid search.
State and state-action counting Continuing the results in Table 6, the performance of state-action counting is studied using the same experimental setup, summarized in Table 7. In particular, a bonus reward r⁺ = β/√n(s, a) instead of r⁺ = β/√n(s) is assigned. These results show that the relative performance of state counting compared to state-action counting depends highly on the selected hyperparameter settings. However, we notice that the best performance is achieved using state counting with k = 256 and β = 0.2.
Table 7 (excerpt): Frostbite scores for k = 128 across increasing β, reported as state counting / state-action counting: 1475/808, 4248/4302, 2801/4802, 3239/7291, 3621/4243, 1543/1941, 395/362.
1. What is the main contribution of the paper regarding count-based exploration? 2. What are the strengths and weaknesses of the proposed approach using hash functions? 3. How does the reviewer assess the effectiveness and efficiency of the proposed method compared to prior works? 4. What are the limitations of the paper's approach regarding its applicability in different domains? 5. Are there any concerns or suggestions regarding the use of alternative methods, such as learning-based density estimators?
Review
This paper introduces a new way of extending the count-based exploration approach to domains where counts are not readily available. The authors do this through hash functions. Experiments are conducted on several domains, including control and Atari. It is nice that the authors confirmed the results of Bellemare et al. in that, given the right "density" estimator, count-based exploration can be effective. It is also great to observe that, given the right features, we can crack games like Montezuma's Revenge to some extent. I, however, have several complaints: First, by using hashing, the authors did not seem to be able to achieve significant improvements over past approaches. Without "feature engineering", the authors achieved only a fraction of the performance of Bellemare et al. on Montezuma's Revenge. In the control domains, the proposed approach also does not outperform VIME. So experimentally, it is very hard to justify the approach. Second, although hashing could be effective in the domains that the authors tested on, it may not be the best way of estimating densities going forward. As environments get more complicated, some learning methods are required for understanding the environments, instead of blind hashing. The authors claim that the advantage of the proposed method over Bellemare et al. is that one does not have to design density estimators. But I would argue that density estimators have become so readily available (PixelCNN, VAEs, Real NVP, GANs) that they can be applied as easily as hashing. Training density estimators is not a difficult problem anymore.
(3) using samples drawn from the replay pool {sn}Nn=1 ∼ R, for example using stochastic gradient descent 9 Compute g(sm) = bb(sm)e, the K-dim rounded hash code for sm learned by the AE 10 Project g(sm) to a lower dimension k via SimHash as φ(sm) = sgn(Ag(sm)) 11 Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(sm)) ← n(φ(sm)) + 1 12 Update the policy π using rewards { r (sm, am) + β√ n(φ(sm )) }M m=0 with any RL algorithm more specifically sigmoid functions. By rounding the sigmoid output b(s) of this layer to the closest binary number, any state s can be binarized. Since gradients cannot be back-propagated through a rounding function, an alternative method must be used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise U (−a, a) is added to the sigmoid output. By choosing uniform noise with a sufficiently high variance, the AE is only capable of reconstructing distinct inputs s if its hidden dense layer outputs values b(s) that are sufficiently far apart from each other (Gregor et al., 2016). Feeding a state s to the AE input, extracting b(s) and rounding it to bb(s)e yields a learned binary code. As such, the loss function L(·) over a set of collected states {si }Ni=1 is defined as L ( {sn}Nn=1 ) = − 1 N N∑ n=1 log p(sn) − λ K K∑ i=1 min { (1 − bi (sn))2 , bi (sn)2 } . (3) This objective function consists of a cross-entropy term and a term that pressures the binary code layer to take on binary values, scaled by λ ∈ R≥0. The reasoning behind this is that uniform noise U (−a, a) alone is insufficient, in case the AE does not use a particular sigmoid unit. This term ensures that an unused binary code output is assigned an arbitrary binary value. When omitting this term, the code is more prone to oscillations, causing unwanted bit flips, and destabilizing the counting process. In order to make the AE train sufficiently fast—which is required since it is updated during the agent’s training—we make use of a pixel-wise softmax output layer (van den Oord et al., 2016) that shares weights between all pixels. The different softmax outputs merge together pixel intensities into discrete bins. The architectural details are described in Appendix A.1 and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code bb(s)e, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2). One the one hand, it is important that the mapping from state to code needs to remain relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2 line 8). An obvious solution would be to significantly downsample the binary code to a very low dimension, or by slowing down the training process. But on the other hand, the code has to remain relatively unique for states that are both distinct and close together on the image manifold. This is tackled both by the second term in Eq. (3) and by the saturating behavior of the sigmoid units. As such, states that are already well represented in the AE hidden layers tend to saturate the sigmoid units, causing the resulting loss gradients to be close to zero and making the code less prone to change. 3 Experiments Experiments were designed to investigate and answer the following research questions: 1. Can count-based exploration through hashing improve performance significantly across different domains? 
How does the proposed method compare to the current state of the art in exploration for deep RL? 2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used? 3. What factors contribute to good performance, e.g., what is the appropriate level of granularity of the hash function? To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in Section 3.3 and 3.4. Trust Region Policy Optimization (TRPO, Schulman et al. (2015)) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, it can conveniently ensure stable improvement in the policy performance, and is relatively insensitive to hyperparameter changes. The hyperparameters settings are reported in Appendix A.1. 3.1 Continuous Control The rllab benchmark (Duan et al., 2016) consists of various control tasks to test deep RL algorithms. We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and adopt the experimental setup as defined in (Houthooft et al., 2016)—a description can be found in Appendix A.2. These tasks are all highly difficult to solve with naïve exploration strategies, such as adding Gaussian noise to the actions. Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME (Houthooft et al., 2016) on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task SwimmerGather. Using count-based exploration with hashing is capable of reaching the goal in all environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather. 3.2 Arcade Learning Environment The Arcade Learning Environment (ALE, Bellemare et al. (2012)), which consists of Atari 2600 video games, is an important benchmark for deep RL due to its high-dimensional state space and wide variety of games. In order to demonstrate the effectiveness of the proposed exploration strategy, six games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma’s Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1 M steps (the TRPO batch size, corresponds to 0.4 M frames). Policies and value functions are neural networks with identical architectures to (Mnih et al., 2016). Although the policy and baseline take into account the previous four frames, the counting algorithm only looks at the latest frame. BASS To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see Bellemare et al. (2012)) as a static preprocessing function g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are large and monochrome, and 3) winning depends mostly on knowing object locations and motions. 
We designed an adapted version of BASS1, that divides the RGB screen into square cells, computes the average intensity of each color channel inside a cell, and assigns the resulting values to bins that uniformly partition the intensity range [0, 255]. Mathematically, let C be the cell size (width and height), B the number of bins, (i, j) cell location, (x, y) pixel location, and z the channel. feature(i, j, z) = ⌊ B 255C2 ∑ (x,y)∈ cell(i, j) I (x, y, z) ⌋ . (4) Afterwards, the resulting integer-valued feature tensor is converted to an integer hash code (φ(st ) in Line 6 of Algorithm 1). A BASS feature can be regarded as a miniature that efficiently encodes object locations, but remains invariant to negligible object motions. It is easy to implement and introduces little computation overhead. However, it is designed for generic Atari game images and may not capture the structure of each specific game very well. We compare our results to double DQN (van Hasselt et al., 2016b), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), double DQN with pseudo-counts (Bellemare et al., 2016), Gorila (Nair et al., 2015), and DQN Pop-Art (van Hasselt et al., 2016a) on the “null op” metric2. We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixelSimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on 1The original BASS exploits the fact that at most 128 colors can appear on the screen. Our adapted version does not make this assumption. 2The agent takes no action for a random number (within 30) of frames at the beginning of each episode. Montezuma’s Revenge and Venture, where it captures object locations better than other methods.3 TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris.4 As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma’s Revenge and Venture. Therefore, an static or adaptive preprocessing step can be important for a good hash function. In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results. 3.3 Granularity While our proposed method is able to achieve remarkable results without requiring much tuning, the granularity of the hash function should be chosen wisely. Granularity plays a critical role in count-based exploration, where the hash function should cluster states without under-generalizing or over-generalizing. Table 2 summarizes granularity parameters for our hash functions. In Table 3 we summarize the performance of TRPO-pixel-SimHash under different granularities. We choose Frostbite and Venture on which TRPO-pixel-SimHash outperforms the baseline, and choose as reward bonus coefficient β = 0.01 × 256k to keep average bonus rewards at approximately the same scale. k = 16 only corresponds to 65536 distinct hash codes, which is insufficient to distinguish between semantically distinct states and hence leads to worse performance. We observed that k = 512 tends to capture trivial image details in Frostbite, leading the agent to believe that every state is new and equally worth exploring. 
Similar results are observed while tuning the granularity parameters for TRPO-BASS-SimHash and TRPO-AE-SimHash. The best granularity depends on both the hash function and the MDP. While adjusting granularity parameter, we observed that it is important to lower the bonus coefficient as granularity is increased. This is because a higher granularity is likely to cause lower state counts, leading to higher bonus rewards that may overwhelm the true rewards. 3We provide videos of example game play and visualizations of the difference bewteen Pixel-SimHash and BASS-SimHash at https://www.youtube.com/playlist?list=PLAd-UMX6FkBQdLNWtY8nH1-pzYJA_1T55 4Note that some design choices in other algorithms also impact exploration, such as ε-greedy and entropy regularization. Nevertheless, it is still valuable to position our results within the current literature. 3.4 A Case Study of Montezuma’s Revenge Montezuma’s Revenge is widely known for its extremely sparse rewards and difficult exploration (Bellemare et al., 2016). While our method does not outperform Bellemare et al. (2016) on this game, we investigate the reasons behind this through various experiments. The experiment process below again demonstrates the importance of a hash function having the correct granularity and encoding relevant information for solving the MDP. Our first attempt is to use game RAM states instead of image observations as inputs to the policy (details in Appendix A.1), which leads to a game score of 2500 with TRPO-BASS-SimHash. Our second attempt is to manually design a hash function that incorporates domain knowledge, called SmartHash, which uses an integer-valued vector consisting of the agent’s (x, y) location, room number and other useful RAM information as the hash code (details in Appendix A.3). The best SmartHash agent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight change in the agent’s coordinates does not always result in a semantically distinct state, and thus the hash code may remain unchanged. Therefore we choose grid size s and replace the x coordinate by b(x − xmin)/sc (similarly for y). The bonus coefficient is chosen as β = 0.01 √ s to maintain the scale relative to the true reward5 (see Table 4). Finally, the best agent is able to obtain 6600 total rewards after training for 1000 iterations (1000 M time steps), with a grid size s = 10. During our pursuit, we had another interesting discovery that the ideal hash function should not simply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We 5The bonus scaling is chosen by assuming all states are visited uniformly and the average bonus reward should remain the same for any grid size. experimented with including enemy locations in the first two rooms into SmartHash (s = 10), and observed that average score dropped to 1672 (at iteration 1000). Though it is important for the agent to dodge enemies, the agent also erroneously “enjoys” watching enemy motions at distance (since new states are constantly observed) and “forgets” that his main objective is to enter other rooms. An alternative hash function keeps the same entry “enemy locations”, but instead only puts randomly sampled values in it, which surprisingly achieves better performance (3112). However, by ignoring enemy locations altogether, the agent achieves a much higher score (5661) (see Figure 5). 
In retrospect, we examine the hash codes generated by BASS-SimHash and find that codes clearly distinguish between visually different states (including various enemy locations), but fails to emphasize that the agent needs to explore different rooms. Again this example showcases the importance of encoding relevant information in designing hash functions. 4 Related Work Classic count-based methods such as MBIE (Strehl & Littman, 2005), MBIE-EB and (Kolter & Ng, 2009) solve an approximate Bellman equation as an inner loop before the agent takes an action (Strehl & Littman, 2008). As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with environments, with value-based (Mnih et al., 2015) or policy gradient-based (Schulman et al., 2015; Mnih et al., 2016) methods, at limited speed. In addition, our proposed method is intended to work with contemporary deep RL algorithms, it differs from classical count-based method in that our method relies on visiting unseen states first, before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gaps between our method and classic theories is an important direction of future research. A related line of classical explorationmethods is based on the idea of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002) but not restricted to using counting to implement “optimism”, e.g. R-Max (Brafman & Tennenholtz, 2002), UCRL (Jaksch et al., 2010), and E3 (Kearns & Singh, 2002). These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings. Bayesian RL methods (Kolter & Ng, 2009; Guez et al., 2014; Sun et al., 2011; Ghavamzadeh et al., 2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state space have been proposed by Pazis & Parr (2013) and Osband et al. (2016b). Another type of exploration is curiosity-based exploration. These methods try to capture the agent’s surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to Schmidhuber (2010) and Oudeyer & Kaplan (2007) for an extensive review on curiosity and intrinsic rewards. Several exploration strategies for deep RL have been proposed to handle high-dimensional state space recently. Houthooft et al. (2016) propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics, which is used an exploration bonus. Stadie et al. (2015) propose to use the prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through bootstrapping is proposed by Osband et al. (2016a), using bootstrapped Q-functions. The most related exploration strategy is proposed by Bellemare et al. (2016), in which an exploration bonus is added inversely proportional to the square root of a pseudo-count quantity. A state pseudocount is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to pseudo-count approach in the sense that both methods are performing approximate counting to have the necessary generalization over unseen states. 
The difference is that a density model has to be designed and learned to achieve good generalization for pseudo-count whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection is that our method also implies a density model ρ(s) = n(φ(s))N over all visited states, where N is the total number of states visited. Another method similar to hashing is proposed by Abel et al. (2016), which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems. 5 Conclusions This paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It provides a simple yet powerful baseline for solving MDPs that require informed exploration. Acknowledgments We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). A Appendices A.1 Hyperparameter Settings For the rllab experiments, we used batch size 5000 for all tasks except SwimmerGather, for which we used batch size 50000. CartpoleSwingup makes use of a neural network policy with one layer of 32 tanh units. The other tasks make use of a two layer neural network policy of 32 tanh units each for MountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs are modeled by a fully factorized Gaussian distributionN (µ, σ2I), in which µ is modeled as the network output, while σ is a parameter. CartPoleSwingup makes use of a neural network baseline with one layer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, we used TRPO step size 0.01 and discount factor γ = 0.99. We choose SimHash parameter k = 32 and bonus coefficient β = 0.01, found through a coarse grid search. For Atari experiments, a batch size of 100000 is used, while the KL divergence step size is set to 0.01. The policy and baseline both have the following architecture: 2 convolutional layers with respectively 16 and 32 filters, sizes 8 × 8 and 4 × 4, strides 4 and 2, using no padding, feeding into a single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The input frames are downsampled to 52 × 52. The input to policy and baseline consists of the 4 previous frames, corresponding to the frame skip of 4. The discount factor was set to γ = 0.995. All inputs are rescaled to [−1, 1] element-wise. All experiments used 5 different training seeds, except the experiments with the learned hash code, which uses 3 different training seeds. Batch normalization (Ioffe & Szegedy, 2015) is used at each policy and baseline layer. 
TRPO-pixel-SimHash uses binary codes of size k = 256; BASS (TRPO-BASS-SimHash) extracts features using cell size C = 20 and B = 20 bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidden layer of 512 bits, which is projected to 64 bits. RAM states in Atari 2600 games are integer-valued vectors of length 128 with entries in the range [0, 255]. Experiments on Montezuma’s Revenge with RAM observations use a policy consisting of 2 hidden layers, each of size 32. RAM states are rescaled to the range [−1, 1]. Unlike images, only the current RAM state is shown to the agent. Experiment results are averaged over 10 random seeds. In addition, we apply counting Bloom filters (Fan et al., 2000) to maintain a small hash table. Details can be found in Appendix A.5. The autoencoder used for the learned hash code has a 512-bit binary code layer, using sigmoid units, to which uniform noise U(−a, a) with a = 0.3 is added. The loss function Eq. (3), using λ = 10, is updated every j_update = 3 iterations. The architecture looks as follows: an input layer of size 52 × 52, representing the image luminance, is followed by 3 consecutive 6 × 6 convolutional layers with stride 2 and 96 filters, which feed into a fully connected layer of size 1024 that connects to the binary code layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to a fully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 × 6 transposed convolutional layers, of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the pixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the log-probability of each of the bins is increased by 0.003 before normalizing. The softmax weights are shared among all pixels. All output nonlinearities are ReLUs; Adam (Kingma & Ba, 2015) is used as the optimization scheme; batch normalization (Ioffe & Szegedy, 2015) is applied to each layer. The architecture was shown in Figure 1 of Section 2.3. A.2 Description of the Adapted rllab Tasks This section describes the continuous control environments used in the experiments. The tasks are implemented as described in Duan et al. (2016), following the sparse reward adaptation of Houthooft et al. (2016). The tasks have the following state and action dimensions: CartPoleSwingup, S ⊆ R^4, A ⊆ R^1; MountainCar, S ⊆ R^3, A ⊆ R^1; HalfCheetah, S ⊆ R^20, A ⊆ R^6; SwimmerGather, S ⊆ R^33, A ⊆ R^2. For the sparse reward experiments, the tasks have been modified as follows. In CartPoleSwingup, the agent receives a reward of +1 when cos(β) > 0.8, with β the pole angle; therefore, the agent has to figure out how to swing up the pole in the absence of any initial external reward. In MountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping the valley from the right side. In HalfCheetah, the agent receives a reward of +1 when x_body > 5. As such, it has to figure out how to move forward without any initial external reward. The time horizon is set to T = 500 for all tasks. A.3 Examples of Atari 2600 RAM Entries Table 5 lists the semantic interpretation of certain RAM entries in Montezuma’s Revenge. SmartHash, as described in Section 3.4, makes use of RAM indices 3, 42, 43, 27, and 67. “Beam walls” are deadly barriers that occur periodically in some rooms.
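To illustrate the BASS features used by TRPO-BASS-SimHash above (cell size C = 20 and B = 20 bins), here is a hedged sketch. It assumes the common BASS formulation — cell-averaged channel intensities quantized into B bins — which may differ in detail from the exact variant used in the experiments.

```python
import numpy as np

def bass_features(frame, cell_size=20, num_bins=20):
    """Sketch of BASS-style features: average each C x C cell per channel,
    then quantize the averages into B integer bins. Assumes frame values
    in [0, 1]; the paper's exact BASS variant may differ in details.
    """
    H, W, C = frame.shape
    h, w = H // cell_size, W // cell_size
    # Average intensity per cell and channel.
    cells = frame[:h * cell_size, :w * cell_size].reshape(
        h, cell_size, w, cell_size, C).mean(axis=(1, 3))
    # Discretize into num_bins integer bins.
    binned = np.floor(cells * num_bins).clip(0, num_bins - 1).astype(np.int64)
    return binned.ravel()  # feature vector fed to SimHash

# Usage: hash the discretized features instead of raw pixels.
frame = np.random.rand(210, 160, 3)   # stand-in for an Atari frame
features = bass_features(frame, cell_size=20, num_bins=20)
```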
A.4 Analysis of Learned Binary Representation Figure 6 shows the downsampled codes learned by the autoencoder for several Atari 2600 games (Frostbite, Freeway, and Montezuma’s Revenge). Each row depicts 50 consecutive frames (from 0 to 49, going from left to right, top to bottom). The pictures in the right column depict the binary codes that correspond to each of these frames (one frame per row). Figure 7 shows the reconstructions of several subsequent images according to the autoencoder. A.5 Counting Bloom Filter/Count-Min Sketch We experimented with directly building a hashing dictionary with keys φ(s) and values the state counts, but observed an unnecessary increase in computation time. Our implementation converts the integer hash codes into binary numbers and then into the “bytes” type in Python. The hash table is a dictionary using those bytes as keys. However, an alternative technique called Count-Min Sketch (Cormode & Muthukrishnan, 2005), with a data structure identical to counting Bloom filters (Fan et al., 2000), can count with a fixed integer array and thus reduce computation time. Specifically, let p_1, . . . , p_l be distinct large prime numbers and define φ_j(s) = φ(s) mod p_j. The count of state s is returned as min_{1 ≤ j ≤ l} n_j(φ_j(s)). To increase the count of s, we increment n_j(φ_j(s)) by 1 for all j. Intuitively, the method replaces φ by weaker hash functions, while it reduces the probability of over-counting by reporting only counts agreed upon by all such weaker hash functions. The final hash code is represented as (φ_1(s), . . . , φ_l(s)). Throughout all experiments above, the prime numbers for the counting Bloom filter are 999931, 999953, 999959, 999961, 999979, and 999983, which we abbreviate as “6 M”. In addition, we experimented with 6 other prime numbers, each approximately 15 M, which we abbreviate as “90 M”. As we can see in Figure 8, counting states with a dictionary or with Bloom filters leads to similar performance, but the computation time of the latter is lower. Moreover, there is little difference between direct counting and using a very large table for Bloom filters, as the average bonus rewards are almost the same, indicating the same degree of exploration–exploitation trade-off. On the other hand, Bloom filters require a fixed table size, which may not be known beforehand. Theory of Bloom Filters Bloom filters (Bloom, 1970) are popular for determining whether a data sample s′ belongs to a dataset D. Suppose we have l functions φ_j that independently assign each data sample to an integer between 1 and p uniformly at random. Initially 1, 2, . . . , p are marked as 0. Then every s ∈ D is “inserted” by marking φ_j(s) as 1 for all j. A new sample s′ is reported as a member of D only if φ_j(s′) is marked as 1 for all j. A Bloom filter has zero false negative rate (any s ∈ D is reported as a member), while the false positive rate (probability of reporting a nonmember as a member) decays exponentially in l. Though Bloom filters support data insertion, they do not allow data deletion. Counting Bloom filters (Fan et al., 2000) maintain a counter n(·) for each number between 1 and p. Inserting/deleting s corresponds to incrementing/decrementing n(φ_j(s)) by 1 for all j. Similarly, s is considered a member if ∀j : n(φ_j(s)) > 0. Count-Min sketch is designed to support memory-efficient counting without introducing too many over-counts: it maintains a separate counter array n_j for each hash function φ_j defined as φ_j(s) = φ(s) mod p_j, where p_j is a large prime number. A minimal implementation sketch follows.
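The sketch below implements the counting scheme just described with the paper's “6 M” primes; the class and method names are illustrative, not from the paper's code.

```python
import numpy as np

class CountMinSketch:
    """Count-Min sketch counter (a sketch of the scheme described above).

    One integer array per prime p_j; a state's hash code phi(s) is reduced
    modulo each prime, the corresponding cells are incremented on insert,
    and the returned count is the minimum over the l arrays.
    """
    # The "6 M" primes used in the paper's experiments.
    PRIMES = (999931, 999953, 999959, 999961, 999979, 999983)

    def __init__(self, primes=PRIMES):
        self.primes = primes
        self.tables = [np.zeros(p, dtype=np.int64) for p in primes]

    def increment(self, code):
        """Insert one occurrence of the (integer) hash code phi(s)."""
        for table, p in zip(self.tables, self.primes):
            table[code % p] += 1

    def count(self, code):
        """Return min_j n_j(phi(s) mod p_j), an upper bound on the true count."""
        return min(int(table[code % p])
                   for table, p in zip(self.tables, self.primes))

# Usage with a k-bit SimHash code interpreted as an integer:
cms = CountMinSketch()
code = int("10110", 2)
cms.increment(code)
assert cms.count(code) == 1
```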
For simplicity, we may assume that p_j ≈ p for all j and that each φ_j assigns s to any of 1, . . . , p with uniform probability. We now derive the probability of over-counting. Let s be a fixed data sample (not necessarily inserted yet) and suppose a dataset D of N samples is inserted. We assume that p^l ≫ N. Let n := min_{1 ≤ j ≤ l} n_j(φ_j(s)) be the count returned by the Bloom filter. We are interested in computing Prob(n > 0 | s ∉ D). Due to the assumptions about φ_j, we know n_j(φ_j(s)) ∼ Binomial(N, 1/p). Therefore,

Prob(n > 0 | s ∉ D) = Prob(n > 0, s ∉ D) / Prob(s ∉ D)
                    = (Prob(n > 0) − Prob(s ∈ D)) / Prob(s ∉ D)
                    ≈ Prob(n > 0) / Prob(s ∉ D)
                    = ∏_{j=1}^{l} Prob(n_j(φ_j(s)) > 0) / (1 − 1/p^l)^N
                    = (1 − (1 − 1/p)^N)^l / (1 − 1/p^l)^N
                    ≈ (1 − e^{−N/p})^l / e^{−N/p^l}
                    ≈ (1 − e^{−N/p})^l.   (5)

In particular, the probability of over-counting decays exponentially in l. For instance, with p ≈ 10^6, l = 6, and N = 10^5 inserted states, Eq. (5) gives roughly (1 − e^{−0.1})^6 ≈ 7 × 10^{−7}. We refer the readers to Cormode & Muthukrishnan (2005) for other properties of the Count-Min sketch. A.6 Robustness Analysis Apart from the experimental results shown in Table 1 and Table 3, additional experiments have been performed to study several properties of our algorithm. Hyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, we focus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a clear advantage over the baseline. Because the final scores can vary between different random seeds, we evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM states are used instead of image observations. The results are summarized in Table 6. Herein, k refers to the length of the binary code for hashing while β is the multiplicative coefficient for the reward bonus, as defined in Section 2.2. This table demonstrates that most hyperparameter settings outperform the baseline (β = 0) significantly. Moreover, the final scores show a clear pattern in response to changing hyperparameters. Small β-values lead to insufficient exploration, while large β-values cause the bonus rewards to overwhelm the true rewards. With a fixed k, the scores are roughly concave in β, peaking at around 0.2. Higher granularity k leads to better performance. Therefore, it can be concluded that the proposed exploration method is robust to hyperparameter changes in comparison to the baseline, and that the best parameter settings can be obtained from a relatively coarse-grained grid search. State and state-action counting Continuing the results in Table 6, the performance of state-action counting is studied using the same experimental setup and summarized in Table 7. In particular, a bonus reward r⁺ = β/√n(s, a) instead of r⁺ = β/√n(s) is assigned. These results show that the relative performance of state counting compared to state-action counting depends highly on the selected hyperparameter settings. However, we notice that the best performance is achieved using state counting with k = 256 and β = 0.2.
1. What is the main contribution of the paper in high-dimensional RL applications? 2. How does the proposed method work, and what are its key components? 3. How effective is the method in different Atari games, and how does it compare to other approaches? 4. What are the limitations of the proposed method, and how might it be improved? 5. What is the reviewer's concern regarding the update of the hash code during training, and how might the authors address this issue? 6. How can the authors improve the clarity of Section 2.3 (Learned Embedding) in future versions of the paper?
Review
Review This paper proposed to use a simple count-based exploration technique in high-dimensional RL applications (e.g., Atari games). The counting is based on state hashes, which implicitly group (quantize) similar states together. The hash is computed either via hand-designed features or learned features (unsupervisedly, with an auto-encoder). A new state to be explored receives a bonus similar to UCB (to encourage further exploration). Overall the paper is solid, with quite extensive experiments. I wonder how it generalizes to more Atari games. Montezuma’s Revenge may be particularly suitable for approaches that implicitly/explicitly cluster states together (like the proposed one), as it has multiple distinct scenarios, each with small variations in terms of visual appearance, showing clustering structure. On the other hand, such approaches might not work as well if the state space is fully continuous (e.g., in the rllab experiments). The authors did not answer my question about why the hash code needs to be updated during training. I think it is mainly because the code still needs to be adaptive for a particular game (to achieve lower reconstruction error) in the first few iterations. After that, stabilization is the most important. Sec. 2.3 (Learned embedding) is quite confusing (but very important). I hope that the authors could make it clearer (e.g., by writing an algorithm block) in the next version.
ICLR
Title Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs Abstract Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs. This limitation is one of the key challenges in the adoption of deep learning models in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model’s prediction cannot be trusted. These techniques use different statistical, geometric, or topological signatures. This paper presents a taxonomy of OOD inputs based on the source and nature of their uncertainty. We demonstrate how different existing detection approaches fail to detect certain types of outliers. We utilize these insights to develop a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers. Our results include experiments on CIFAR10, SVHN and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. 1 INTRODUCTION Deep neural networks (DNNs) have achieved remarkable performance levels in many areas such as computer vision (Gkioxari et al., 2015), speech recognition (Hannun et al., 2014), and text analysis (Majumder et al., 2017). But their deployment in safety-critical systems such as self-driving vehicles (Bojarski et al., 2016), aircraft collision avoidance (Julian & Kochenderfer, 2017), and medical diagnoses (De Fauw et al., 2018) is hindered by their brittleness. One major challenge is the inability of DNNs to be self-aware of when new inputs are outside the training distribution and likely to produce incorrect predictions. It has been widely reported in the literature (Guo et al., 2017a; Hendrycks & Gimpel, 2016) that deep neural networks exhibit overconfident incorrect predictions on inputs which are outside the training distribution. The responsible deployment of deep neural network models in high-assurance applications necessitates detection of out-of-distribution (OOD) data so that DNNs can abstain from making decisions on those. Recent approaches for OOD detection consider different statistical, geometric or topological signatures in data that differentiate OODs from the training distribution. For example, changes in the softmax scores due to input perturbations and temperature scaling have been used to detect OODs (Hendrycks & Gimpel, 2016; Liang et al., 2017; Guo et al., 2017b). Papernot & McDaniel (2018) use the conformance among the labels of the nearest neighbors, while Tack et al. (2020) use cosine similarity (modulated by the norm of the feature vector) to the nearest training sample for the detection of OODs. Lee et al. (2018) consider the Mahalanobis distance of an input from the in-distribution data to detect OODs.
Several other metrics such as reconstruction error (An & Cho, 2015), the likelihood ratio between the in-distribution and OOD samples (Ren et al., 2019), trust scores (the ratio of the distance to the nearest class different from the predicted class and the distance to the predicted class) (Jiang et al., 2018), density functions (Liu et al., 2020; Hendrycks et al., 2019a), and the probability distribution of the softmax scores (Lee et al., 2017; Hendrycks et al., 2019b; Tack et al., 2020; Hendrycks et al., 2019a) have also been used to detect OODs. All these methods attempt to develop a uniform approach with a single signature to detect all OODs, accompanied by empirical evaluations that use datasets such as CIFAR10 as in-distribution data and other datasets such as SVHN as OOD. Our study shows that OODs can be of diverse types with different defining characteristics. Consequently, an integrated approach that takes into account the diversity of these outliers is needed for effective OOD detection. We make the following three contributions in this paper: • Taxonomy of OODs. We define a taxonomy of OOD samples that classifies OODs into different types based on aleatoric vs epistemic uncertainty (Hüllermeier & Waegeman, 2019), distance from the predicted class vs distance from the tied training distribution, and uncertainty in the principal components vs uncertainty in non-principal components with low variance. • Incompleteness of existing uniform OOD detection approaches. We examine the limitations of the state-of-the-art approaches to detect various types of OOD samples. We observe that not all outliers are alike and existing approaches fail to detect particular types of OODs. We use a toy dataset comprising two half-moons as two different classes to demonstrate these limitations. • An integrated OOD detection approach. We propose an integrated approach that can detect different types of OOD inputs. We demonstrate the effectiveness of our approach on several benchmarks, and compare against state-of-the-art OOD detection approaches such as ODIN (Liang et al., 2017) and the Mahalanobis distance method (Lee et al., 2018). 2 OOD TAXONOMY AND EXISTING DETECTION METHODS DNNs predict the class of a new input based on the classification boundaries learned from the samples of the training distribution. Aleatoric uncertainty is high for inputs which are close to the classification boundaries, and epistemic uncertainty is high when the input is far from the learned distributions of all classes (Hora, 1996; Hüllermeier & Waegeman, 2019). Given the predicted class of a DNN model on a given input, we can observe the distance of the input from the distribution of this particular class and identify it as an OOD if this distance is high. We use this top-down inference approach to detect this type of OODs, which are characterized by an inconsistency between the model’s prediction and the input’s distance from the distribution of the predicted class. Further, typical inputs to DNNs are high-dimensional and can be decomposed into principal and non-principal components based on the direction of high variation; this yields another dimension for the classification of OODs. We thus categorize an OOD using the following three criteria. 1. Is the OOD associated with higher epistemic or aleatoric uncertainty, i.e., is the input away from in-distribution data or can it be confused between multiple classes? 2. Is the epistemic uncertainty of an OOD sample unconditional or is it conditioned on the class predicted by the DNN model?
3. Is the OOD an outlier due to unusually high deviation in the principal components of the data or due to small deviation in the non-principal (and hence, statistically invariant) components? Figure 1 demonstrates different types of OODs which differ along these criteria. Type 1 OODs have high epistemic uncertainty and are away from the in-distribution data. Type 2 OODs have high epistemic uncertainty with respect to each of the 3 classes, even though approximating all in-distribution (ID) data using a single Gaussian distribution will miss these outliers. Type 3 OODs have high aleatoric uncertainty as they are close to the decision boundary between class 0 and class 1. Types 4 and 5 have high epistemic uncertainty with respect to their closest classes. While Type 4 OODs are far from the distribution along the principal axis, Type 5 OODs vary along a relatively invariant axis, where even a small deviation indicates that the sample is an OOD. Limitations of Existing Detection Methods. We empirically demonstrate the limitations of existing OOD detection methods on a two-dimensional (2D) half-moon dataset with two classes. As shown in Figure 2, we consider three clusters of OOD samples: cluster A (black), B (brown), and C (red). Figure 2 (right) shows the 2D penultimate features of the classifier. Different approaches differ in their ability to detect different OOD types, as illustrated in Figure 3. • Figure 3(a) shows that the Mahalanobis distance (Lee et al., 2018) from the mean and tied covariance of all the training data in the feature space cannot detect OODs in clusters B and C, corresponding to class-conditional epistemic uncertainty and aleatoric uncertainty, respectively. It attains an overall true negative rate (TNR) of 39.09% at the 95% true positive rate (TPR). • Figure 3(b) shows that the softmax prediction probability (SPB) (Hendrycks & Gimpel, 2016) cannot detect the OODs in cluster A, corresponding to high epistemic uncertainty. The TNR (at 95% TPR) reported by the SPB technique is 60.91%. • Figure 3(c) shows that class-wise Principal Component Analysis (PCA) (Hoffmann, 2007) cannot detect OODs in cluster C, corresponding to high aleatoric uncertainty. We performed PCA of the two classes separately in the feature space and used the minimum reconstruction error to detect OODs. This obtained an overall TNR of 80.91% (at 95% TPR). • Figure 3(d) shows that K-Nearest Neighbor (kNN) (Papernot & McDaniel, 2018) non-conformance in the labels of the nearest neighbors cannot detect OODs in clusters A and B with high epistemic uncertainty. The overall TNR (at 95% TPR) reported by this technique is 15%. These observations can be explained by the focus of different detection techniques on measuring different forms of uncertainty. This motivates our integrated OOD detection method. 3 INTEGRATED OOD DETECTION METHOD Complementary information about different OOD types can be used to detect a wider range of OODs. Figure 4 shows the improvement in the TNR of the OOD detector composed with information about different classes of OODs on the two half-moons dataset. Non-conformity in the labels of the nearest neighbors captures OODs in cluster C. Mahalanobis distance from the tied in-distribution estimate detects OODs in cluster A. Reconstruction error from the PCA of the 2 class distributions captures OODs in cluster B. Softmax scores further strengthen the OOD detection by reporting OODs in cluster C that are undetected by the other three methods.
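The kind of failure shown in Figure 3(a) is easy to reproduce. Below is a minimal, hypothetical two-moons sketch (not the paper's code) that scores points by Mahalanobis distance under a single tied Gaussian fit to all ID data: far-away outliers (like cluster A) score high, while outliers lying between the classes (like clusters B and C) can score lower than ID points and slip through.

```python
import numpy as np
from sklearn.datasets import make_moons

# Fit a single tied Gaussian to all in-distribution (ID) points.
X, _ = make_moons(n_samples=2000, noise=0.05, random_state=0)
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(points):
    d = points - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Threshold at 95% TPR: 95% of ID points fall below it.
threshold = np.quantile(mahalanobis(X), 0.95)

far_ood = np.array([[3.0, 3.0]])        # like cluster A: detected
between_ood = np.array([[0.5, 0.25]])   # like cluster C: near the ID mean, missed
print(mahalanobis(far_ood) > threshold)      # [ True]
print(mahalanobis(between_ood) > threshold)  # [False]
```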
The integrated OOD detection approach thus uses the following attributes, each specialized in detecting a specific type (or a combination of types) of OODs: 1. Mahalanobis distance from the in-distribution density estimate, using either a tied (Lee et al., 2018) or class-wise covariance estimate. This attribute captures the overall or class-conditional epistemic uncertainty of an OOD. Our refinement to also use class-wise covariance significantly improves detection of OODs when coupled with the PCA approach described below. 2. Conformance measure among the variance of the Annoy (Bernhardsson, 2018) nearest neighbors, calculated as the Mahalanobis distance of the input’s conformance to the closest class conformance. Our experiments found this to be very effective in capturing aleatoric uncertainty. This new attribute is a fusion of the nearest-neighbor and Mahalanobis distance methods in the literature. 3. Prediction confidence of the classifier, i.e., the maximum softmax score on the perturbed input, where the perturbation used is the same as in the ODIN approach (Liang et al., 2017). This boosts the detection of high aleatoric uncertainty by sharpening the class-wise distributions. 4. Reconstruction error using the top 40% of PCA components, where the components are obtained via class-conditional PCA of the training data. This boosts the detection of high class-wise epistemic uncertainty by eliminating irrelevant features. This fusion of attributes from existing state-of-the-art detection methods and new attributes was found to be the most effective integrated approach capable of detecting the different types of OODs. We evaluated it on several benchmarks as discussed in Section 4, with an ablation study in the Appendix. 4 EXPERIMENTAL RESULTS Attributes forming the signature of the OOD detector used in the experiments The signature of the OOD detector used in the experiments is the weighted sum of four attributes, one from each of the following four categories (a sketch of the perturbation and conformance attributes follows this list): 1. Distance from the in-distribution density estimate: We use the Mahalanobis distance of the input with respect to the closest class-conditional distribution. The parameters of this distance are chosen from one of the following two categories: • empirical class means and tied empirical covariance of training samples • empirical class means and empirical class covariance of training samples 2. Reconstruction error: We perform class-conditional PCA empirically on the training samples. We use the minimum reconstruction error of the input from the top 40% eigenvectors of the class-conditional eigenspaces. 3. Prediction confidence of the classifier: We use the maximum value of the temperature-scaled softmax scores (S) on the perturbed input. Perturbations to the input (x) are made according to the following equation (Liang et al., 2017): x̃ = x − ε · sign(−∇_x log S_ŷ(x; T)) (1) The values of the magnitude of noise (ε) and the temperature scaling parameter (T) are chosen from one of the following three categories: • ε = 0 and T = 0 • ε = 0 and T = 10 • ε = 0.005 and T = 10 4. Conformance measure among the nearest neighbors: We compute an m-dimensional feature vector to capture the conformance among the input’s nearest neighbors in the training samples, where m is the dimension of the input. We call this m-dimensional feature vector the conformance vector. The conformance vector is calculated by taking the mean deviation along each dimension of the nearest neighbors from the input. We hypothesize that this deviation for in-distribution samples would differ from that for OODs due to aleatoric uncertainty. The value of the conformance measure is calculated by computing the Mahalanobis distance of the input’s conformance vector to the closest class conformance distribution. Similar to the distance from the in-distribution density estimate, the parameters of this Mahalanobis distance are chosen from the following two categories: • empirical class means and tied empirical covariance on the conformance vectors of the training samples • empirical class means and empirical class covariance on the conformance vectors of the training samples The value of the number of nearest neighbors is chosen from the set {10, 20, 30, 40, 50} via validation. We used Annoy (Approximate Nearest Neighbors Oh Yeah) (Bernhardsson, 2018) to compute the nearest neighbors.
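To make attributes 3 and 4 concrete, here is a minimal PyTorch sketch under stated assumptions: `model` is any classifier returning logits, `knn_index` is assumed to be an Annoy index built over the training features, and all names are illustrative rather than taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def odin_confidence(model, x, eps=0.005, T=10.0):
    """Attribute 3 (sketch): max temperature-scaled softmax score on the
    ODIN-perturbed input x~ = x - eps * sign(-grad_x log S_yhat(x; T))."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x) / T, dim=1)
    loss = -log_probs.max(dim=1).values.sum()   # -log S_yhat(x; T)
    loss.backward()                             # x.grad = -grad log S_yhat
    x_pert = x - eps * torch.sign(x.grad)
    with torch.no_grad():
        return F.softmax(model(x_pert) / T, dim=1).max(dim=1).values

def conformance_vector(x_feat, train_feats, knn_index, k=20):
    """Attribute 4 (sketch): mean per-dimension deviation of the k nearest
    training samples from the input; fed to a class-wise Mahalanobis score."""
    idx = knn_index.get_nns_by_vector(x_feat.tolist(), k)
    neighbors = train_feats[idx]                # (k, m)
    return (neighbors - x_feat).mean(dim=0)     # (m,)
```

The final detector score is a weighted combination of such attribute values; the weights are fit as described next.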
The weights of the four attributes forming the signature of the OOD detector are generated in the following manner. We use a small subset (1000 samples) of both the in-distribution and the generated OOD data to train a binary classifier using the logistic loss. The OOD data used to train the classifier is generated by perturbing the in-distribution data using the Fast Gradient Sign attack (FGSM) (Goodfellow et al., 2014). The trained classifier (or OOD detector) is then evaluated on the real OOD dataset at the True Positive Rate of 95%. The best result, in terms of the highest TNR on the validation dataset (from the training phase of the OOD detector), from the twelve combinations of the aforementioned sub-categories (one from each of the four attributes) is then reported on the test (or real) OOD datasets. Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as CIFAR10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision–recall curve (AUPR), and the detection accuracy (DTACC) to evaluate our performance. DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as ResNet (He et al., 2016), WideResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2017). Comparison with the state-of-the-art. We compare our approach with three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017), and Mahalanobis (Lee et al., 2018). For the ODIN method, the perturbation noise is chosen from the set {0, 0.0005, 0.001, 0.0014, 0.002, 0.0024, 0.005, 0.01, 0.05, 0.1, 0.2}, and the temperature T is chosen from the set {1, 10, 100, 1000}. These values are chosen using a validation set of adversarial samples of the in-distribution data generated by the FGSM attack. For the Mahalanobis method, we consider their best results, obtained after feature ensembling and input preprocessing, with the hyperparameters of their OOD detector tuned on the in-distribution and adversarial samples generated by the FGSM attack. The magnitude of the noise used in pre-processing of the inputs is chosen from the set {0.0, 0.01, 0.005, 0.002, 0.0014, 0.001, 0.0005}.
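Returning to the detector training described at the start of this section, the following is a minimal, hypothetical sketch of fitting the attribute weights with a logistic classifier, using FGSM-perturbed in-distribution samples as surrogate OODs; the attribute arrays here are random stand-ins, not real features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds the four attribute values (Mahalanobis distance, PCA
# reconstruction error, ODIN confidence, conformance measure) for one sample.
attrs_id = np.random.randn(1000, 4)          # stand-in: in-distribution
attrs_fgsm = np.random.randn(1000, 4) + 1.0  # stand-in: FGSM surrogate OODs

X = np.vstack([attrs_id, attrs_fgsm])
y = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = OOD

detector = LogisticRegression().fit(X, y)    # learns the weighted sum

# Threshold the OOD score so that 95% of in-distribution data is accepted
# (i.e., evaluation at 95% TPR), then apply it to test-time inputs.
threshold = np.quantile(detector.decision_function(attrs_id), 0.95)
is_ood = detector.decision_function(attrs_id) > threshold
```

A logistic classifier over the attributes is exactly a learned weighted sum plus a bias, which matches the "weighted sum of four attributes" signature described above.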
CIFAR10. With CIFAR10 as in-distribution, we consider SVHN (Netzer et al., 2011), TinyImagenet (Deng et al., 2009), and LSUN (Yu et al., 2015) as the OOD datasets. For CIFAR10, we consider two DNNs: ResNet50 and WideResNet. Table 1 shows the results. SVHN. With SVHN as in-distribution, we consider CIFAR10, Imagenet, and LSUN as the OOD datasets. For SVHN, we use the DenseNet classifier. Table 1 shows the results. Key observations. We do not consider pre-processing of the inputs in our integrated OOD detector. Even without input pre-processing, and with the exception of the CIFAR10 OOD dataset for SVHN in-distribution trained on DenseNet, we perform on par with (and in most cases even outperform) the Mahalanobis method relative to its best results, which are generated after pre-processing the input. We also consider a Subset-CIFAR100 as OODs for CIFAR10. Specifically, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs, which are visually similar to the ship, automobile, and bird classes in CIFAR10, respectively. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes OOD detection challenging. Figure 5 shows the t-SNE (Maaten & Hinton, 2008) plot of the penultimate features from the ResNet50 model trained on CIFAR10. We show 4 examples of OODs (2 due to epistemic and 2 due to aleatoric uncertainty) from Subset-CIFAR100. These OODs were detected by our integrated approach but missed by the Mahalanobis approach. These observations justify the effectiveness of integrating multiple attributes to detect OOD samples. Additional experimental results in the appendix. We also compare the performance of the integrated OOD detector with the SPB, ODIN and Mahalanobis detectors in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). These results include experiments on CIFAR10, SVHN and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. All these results, along with ablation studies on OOD detectors with single attributes, are included in the Appendix. In almost all of the reported results in the Appendix, our OOD detector outperforms the compared state-of-the-art methods, with improvements of even 2X higher TNR at 95% TPR in some cases. 5 DISCUSSION AND FUTURE WORK Recent techniques propose refinements in the training process of classifiers for OOD detection. Some of these techniques include fine-tuning the classifier’s training with an auxiliary cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020). Other techniques make use of self-supervised models for OOD detection (Tack et al., 2020; Hendrycks et al., 2019b). We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector, which makes use of the feature space of pre-trained classifiers to distinguish in-distribution samples from OODs. Our approach does not require modification of the training cost function of the original task. These results are reported in the Appendix. We consider making use of the feature space of such refined models in our OOD detection technique as a promising direction for future work. Another direction of future work is to explore the score functions used in these refined training processes for OOD detection (Liu et al., 2020; Hendrycks et al., 2019a; Tack et al., 2020; Hendrycks et al., 2019b) as attributes (or categories of attributes) forming the signature of the integrated OOD detector.
Another avenue of future work is to explore OOD generation techniques other than adversarial examples generated by the FGSM attack for training of the integrated OOD detector. 6 CONCLUSION We introduced a taxonomy of OODs and proposed an integrated approach to detect different types of OODs. Our taxonomy classifies OODs based on the nature of their uncertainty, and we demonstrated that no single state-of-the-art approach detects all these OOD types. Motivated by this observation, we formulated an integrated approach that fuses multiple attributes to target different types of OODs. We have performed extensive experiments on a synthetic dataset and several benchmark datasets (e.g., MNIST, CIFAR10, SVHN). Our experiments show that our approach can accurately detect various types of OODs coming from a wide range of OOD datasets such as KMNIST, FashionMNIST, SVHN, LSUN, and Imagenet. We have shown that our approach generalizes over multiple DNN architectures and performs robustly when the OOD samples are similar to in-distribution data. A APPENDIX A.1 DEFINING OODS DUE TO EPISTEMIC AND ALEATORIC UNCERTAINTY In general, let there be k classes c_1, c_2, . . . , c_k, and let the distribution of training data for each class be p(x|c_i). The overall training distribution is denoted by p(x). Now, given a new input x̂ to the trained DNN model M, let ĉ = M(x̂) denote the predicted class. The flowchart in Figure 6 shows different sources of uncertainty that could make x̂ an OOD. A.2 ADDITIONAL EXPERIMENTAL RESULTS We first present preliminary results for comparison with the OOD detection techniques based on fine-tuning of the classifiers (Hendrycks et al., 2019a; Liu et al., 2020; Tack et al., 2020; Hendrycks et al., 2019b). We then present our results on various vision datasets and different architectures of the pre-trained DNN-based classifiers for these datasets, in comparison to the ODIN, Mahalanobis, and SPB methods in supervised settings. Finally, we report results from the ablation study on OOD detection with individual attributes and compare it with our integrated approach to OOD detection. A.2.1 COMPARISON WITH THE OOD DETECTION TECHNIQUES BASED ON REFINEMENT OF THE TRAINING PROCESS FOR CLASSIFIERS Recent techniques propose refinements in the training process of classifiers for OOD detection. Some of these techniques include fine-tuning the training of classifiers with a trainable cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020), self-supervised training of the classifiers to enhance OOD detection (Tack et al., 2020; Hendrycks et al., 2019b), etc. We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector that uses features of the pre-trained classifiers to distinguish in-distribution samples from OODs. Table 2 compares TNR (at 95% TPR), AUROC and AUPR for the energy-based OOD detector (Liu et al., 2020) and our integrated OOD detector on CIFAR10 with a pretrained WideResNet model. The integrated OOD detector was trained on in-distribution and adversarial samples generated by the FGSM attack. Table 3 compares the results of the WideResNet model trained on CIFAR10 and fine-tuned with outlier exposure from the 80 Million Tiny Images dataset with our OOD detector that uses features from the pre-trained WideResNet model trained on CIFAR10.
Since the 80 Million Tiny Images dataset is no longer available for use, we used a small subset of ImageNet (treated as an OOD dataset for the CIFAR10 and SVHN datasets (Lee et al., 2018)) for generating OODs for training of the integrated OOD detector. Table 4 compares the OOD detection performance of the self-supervised-training-based OOD detector with our method. We trained our OOD detector with the in-distribution CIFAR10 as in-distribution samples and adversarial samples generated by the FGSM attack from the test dataset of CIFAR10 as OODs. The trained OOD detector was then tested on LSUN as OODs, and the results are reported in Table 4. With ResNet-50 as the classifier for CIFAR10, we trained our OOD detector with the in-distribution CIFAR10 as in-distribution samples and adversarial samples generated by FGSM from the test dataset of CIFAR10 as OODs. The trained OOD detector was then tested on SVHN as OODs, and these results are compared with contrastive-learning-based OOD detection (Tack et al., 2020) in Table 5. A.2.2 COMPARISON WITH THE STATE-OF-THE-ART OOD DETECTION METHODS IN SUPERVISED SETTINGS ON PRE-TRAINED CLASSIFIERS We compare our results with the state-of-the-art methods in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). In supervised settings, a small subset of the real OOD dataset is used in the training of the OOD detector. Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision–recall curve (AUPR) with both in-distribution and OODs as positive samples (AUPR IN and AUPR OUT, respectively), and the detection accuracy (DTACC) to evaluate our performance. DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as LeNet (LeCun et al., 1998), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017). Comparison with the state-of-the-art. We compare our approach with three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017) and Mahalanobis (Lee et al., 2018). Since these experiments are performed in supervised settings, we fix T = 10 and ε = 0.005 for generating results from the ODIN method. For the Mahalanobis distance, we consider the distance in the penultimate-layer feature space as well as features from all the layers of the DNN, without preprocessing of the input in either setting. MNIST. With MNIST as in-distribution, we consider KMNIST (Clanuwat et al., 2018) and FashionMNIST (F-MNIST) (Xiao et al., 2017) as OOD datasets. For MNIST, we use the LeNet5 (LeCun et al., 1998) DNN. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. In all these settings, our approach outperforms the state-of-the-art approaches for both OOD datasets.
Results in comparison to AUPR IN and AUPR OUT are shown in Table 9. Here also, our technique outperforms all three OOD detectors on all the test cases. CIFAR10. With CIFAR10 as in-distribution, we consider STL10 (Coates et al., 2011), SVHN (Netzer et al., 2011), Imagenet (Deng et al., 2009), LSUN (Yu et al., 2015), and a subset of CIFAR100 (SCIFAR100) (Krizhevsky et al., 2009) as OOD datasets. For CIFAR10, we consider three DNNs: DenseNet, ResNet34, and ResNet50. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in comparison to AUPR IN and AUPR OUT are shown in Tables 10, 11, and 12. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Note that images from STL10 and the subset of CIFAR100 are quite similar to CIFAR10 images. Furthermore, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs, which are visually similar to the ship, automobile, and bird classes in CIFAR10, respectively. SVHN. With SVHN as in-distribution, we consider STL10, CIFAR10, Imagenet, LSUN, and SCIFAR100 as OOD datasets. For SVHN, we consider two DNNs: DenseNet and ResNet34. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in comparison to AUPR IN and AUPR OUT are shown in Tables 13 and 14. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Key observations. As shown in Tables 6, 7, and 8, our approach outperforms the state-of-the-art on all three datasets and with various DNN architectures. On CIFAR10, in terms of the TNR metric, our approach with ResNet50 outperforms Mahalanobis by 56% when SVHN is OOD, and our approach with ResNet34 outperforms ODIN by 36% when LSUN is OOD. When considering STL10 and Subset-CIFAR100 as OODs for CIFAR10, the images from both these datasets are quite similar to CIFAR10 images. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes detection challenging. Although our performance is low on the STL10 dataset, it still outperforms the state-of-the-art. For instance, the proposed approach achieves a 27% better TNR score than the Mahalanobis method using ResNet50. On SVHN, in terms of the TNR metric, our approach outperforms ODIN and Mahalanobis by 63% and 13%, respectively, on SCIFAR100 using ResNet34. The above observations justify the effectiveness of integrating multiple attributes to detect OOD samples. A.2.3 ABLATION STUDY We report an ablation study on OOD detection with individual attributes and compare it with our integrated approach on the penultimate feature space of the classifier in the supervised settings described in the previous section.
We refer to the OOD detector with the Mahalanobis distance estimated using class means and a tied covariance (Lee et al., 2018) as Mahala-Tied. The detector based on the Mahalanobis distance estimated using class means and class covariances is referred to as Mahala-Class. Similarly, conformance among the K-nearest neighbors (KNN) measured by Mahala-Tied and Mahala-Class is referred to as KNN-Tied and KNN-Class, respectively, in these experiments. Results for this study on CIFAR10 with the DenseNet architecture, and on SVHN with the DenseNet and ResNet34 architectures, are shown in Tables 15, 16 and 17, respectively. The integrated approach outperforms all the single-attribute OOD detectors in all the tested cases due to its detection of diverse OODs. An important observation from these experiments is that the performance of the single-attribute methods can depend on the architecture of the classifier. For example, while the performance of PCA was poor for DenseNet (for both CIFAR10 and SVHN) compared to all other methods, it outperformed all but the integrated approach for SVHN on ResNet34.
1. What is the focus of the paper regarding outlier detection? 2. What are the strengths of the proposed approach in terms of combining different methods? 3. What are the weaknesses of the paper, particularly regarding the experimental setup and the choice of combination method? 4. Do you have any concerns about the applicability of the proposed method in real-world scenarios? 5. What are some potential improvements that could be made to the proposed approach?
Review
Review ########################################################################## Summary: The authors explore the different kinds of outliers and show that the previously proposed methods detect different kinds of OODs, and that no single one can detect them all. The authors propose an interesting study of the different kinds of outliers on synthetic data, which illustrates well the different characteristics of the outlier types. The authors then propose to combine different methods to increase the OOD detection rate. Experiments are conducted on 3 image classification datasets using different deep neural networks. For each dataset, samples from other databases are introduced as outliers and must be detected. The combination method yields better detection rates than baseline methods in almost all configurations. ########################################################################## Reasons for score: The main idea of the paper is simple: combine different OOD detection metrics to increase the detection rate on different types of outliers. The proposed method indeed increases the OOD detection rate for almost all the experimental settings tested by the authors. However, the method used to create the OOD samples is always the same: in-distribution samples come from one database whereas out-of-distribution samples are drawn from another database. It would be interesting to show that the method also increases the detection rate of outliers inside a given database. This could be done by reporting the classification rate of the DNN in an abstaining scheme: if the OOD metric is greater than a threshold, the sample is not classified (rejected). If the OOD detection method is useful, the classification rate of the DNN can be freely increased by increasing the threshold and rejecting more and more samples. The authors do not justify their choice of the combination method. Computing all the OOD metrics can be computationally expensive; is it necessary to compute them all? Is this combination of metrics the best? In which conditions? The combination method should be described in the body of the paper, not in the appendix. Guo 2017 appears twice in the bibliography.
ICLR
Title Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs Abstract Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs. This limitation is one of the key challenges in the adoption of deep learning models in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model’s prediction cannot be trusted. These techniques use different statistical, geometric, or topological signatures. This paper presents a taxonomy of OOD outlier inputs based on their source and nature of uncertainty. We demonstrate how different existing detection approaches fail to detect certain types of outliers. We utilize these insights to develop a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers. Our results include experiments on CIFAR10, SVHN and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. 1 INTRODUCTION Deep neural networks (DNNs) have achieved remarkable performance-levels in many areas such as computer vision (Gkioxari et al., 2015), speech recognition (Hannun et al., 2014), and text analysis (Majumder et al., 2017). But their deployment in the safety-critical systems such as self-driving vehicles (Bojarski et al., 2016), aircraft collision avoidance (Julian & Kochenderfer, 2017), and medical diagnoses (De Fauw et al., 2018) is hindered by their brittleness. One major challenge is the inability of DNNs to be self-aware of when new inputs are outside the training distribution and likely to produce incorrect predictions. It has been widely reported in literature (Guo et al., 2017a; Hendrycks & Gimpel, 2016) that deep neural networks exhibit overconfident incorrect predictions on inputs which are outside the training distribution. The responsible deployment of deep neural network models in high-assurance applications necessitates detection of out-of-distribution (OOD) data so that DNNs can abstain from making decisions on those. Recent approaches for OOD detection consider different statistical, geometric or topological signatures in data that differentiate OODs from the training distribution. For example, the changes in the softmax scores due to input perturbations and temperature scaling have been used to detect OODs (Hendrycks & Gimpel, 2016; Liang et al., 2017; Guo et al., 2017b). Papernot & McDaniel (2018) use the conformance among the labels of the nearest neighbors while Tack et al. (2020) use cosine similarity (modulated by the norm of the feature vector) to the nearest training sample for the detection of OODs. Lee et al. (2018) consider the Mahalanobis distance of an input from the in-distribution data to detect OODs. 
Several other metrics such as reconstruction error (An & Cho, 2015), likelihood-ratio between the in-distribution and OOD samples (Ren et al., 2019), trust scores (ratio of the distance to the nearest class different from the predicted class and the distance to the predicted class) (Jiang et al., 2018), density function (Liu et al., 2020; Hendrycks et al., 2019a), probability distribution of the softmax scores (Lee et al., 2017; Hendrycks et al., 2019b; Tack et al., 2020; Hendrycks et al., 2019a) have also been used to detect OODs. All these methods attempt to develop a uniform approach with a single signature to detect all OODs accompanied by empirical evaluations that use datasets such as CIFAR10 as in-distribution data and other datasets such as SVHN as OOD. Our study shows that OODs can be of diverse types with different defining characteristics. Consequently, an integrated approach that takes into account the diversity of these outliers is needed for effective OOD detection. We make the following three contributions in this paper: • Taxonomy of OODs. We define a taxonomy of OOD samples that classify OODs into different types based on aleatoric vs epistemic uncertainty (Hüllermeier & Waegeman, 2019), distance from the predicted class vs the distance from the tied training distribution, and uncertainty in the principal components vs uncertainty in non-principal components with low variance. • Incompleteness of existing uniform OOD detection approaches. We examine the limitations of the state-of-the-art approaches to detect various types of OOD samples. We observe that not all outliers are alike and existing approaches fail to detect particular types of OODs. We use a toy dataset comprising two halfmoons as two different classes to demonstrate these limitations. • An integrated OOD detection approach. We propose an integrated approach that can detect different types of OOD inputs. We demonstrate the effectiveness of our approach on several benchmarks, and compare against state-of-the-art OOD detection approaches such as the ODIN (Liang et al., 2017) and Mahalanobis distance method (Lee et al., 2018). 2 OOD TAXONOMY AND EXISTING DETECTION METHODS DNNs predict the class of a new input based on the classification boundaries learned from the samples of the training distribution. Aleatory uncertainty is high for inputs which are close to the classification boundaries, and epistemic uncertainty is high when the input is far from the learned distributions of all classes (Hora, 1996; Hüllermeier & Waegeman, 2019). Given the predicted class of a DNN model on a given input, we can observe the distance of the input from the distribution of this particular class and identify it as an OOD if this distance is high. We use this top-down inference approach to detect this type of OODs which are characterized by an inconsistency in model’s prediction and input’s distance from the distribution of the predicted class. Further, typical inputs to DNNs are high-dimensional and can be decomposed into principal and non-principal components based on the direction of high variation; this yields another dimension for classification of OODs. We, thus, categorize an OOD using the following three criteria. 1. Is the OOD associated with higher epistemic or aleatoric uncertainty, i.e., is the input away from in-distribution data or can it be confused between multiple classes? 2. Is the epistemic uncertainty of an OOD sample unconditional or is it conditioned on the class predicted by the DNN model? 3. 
Is the OOD an outlier due to unusually high deviation in the principal components of the data or due to small deviation in the non-principal (and hence, statistically invariant) components? Figure 1 demonstrates different types of OODs which differ along these criteria. Type 1 OODs have high epistemic uncertainty and are away from the indistribution data. Type 2 OODs have high epistemic uncertainty with respect to each of the 3 classes even though approximating all in-distribution (ID) data using a single Guassian distribution will miss these outliers. Type 3 OODs have high aleatoric uncertainty as they are close to the decision boundary between class 0 and class 1. Type 4 and 5 have high epistemic uncertainty with respect to their closest classes. While Type 4 OODs are far from the distribution along the principal axis, Type 5 OODs vary along a relatively invariant axis where even a small deviation indicates that the sample is an OOD. Limitations of Existing Detection Methods. We empirically demonstrate the limitations of existing OOD detection methods on a two-dimensional (2D) half-moon dataset with two classes. As shown in Figure 2, we consider three clusters of OOD samples: cluster A (black), B (brown) and C(red). Figure 2 (right) shows the 2D penultimate features of the classifier. Different approaches differ in their ability to detect different OOD types as illustrated in Figure 3. • Figure 3(a) shows that the Mahalanobis distance (Lee et al., 2018) from the mean and tied covariance of all the training data in the feature space cannot detect OODs in the clusters B and C corresponding to class-conditional epistemic uncertainty and aleatoric uncertainty, respectively. It attains the overall true negative rate (TNR) of 39.09% at the 95% true positive rate (TPR). • Figure 3(b) shows that the softmax prediction probability (SPB) (Hendrycks & Gimpel, 2016) cannot detect the OODs in cluster A corresponding to high epsitemic uncertainty. The TNR ( at 95% TPR) reported by the SPB technique is 60.91%. • Figure 3(c) shows that class-wise Principal Component Analysis (PCA) (Hoffmann, 2007) cannot detect OODs in cluster C corresponding to high aleatoric uncertainty. We performed PCA of the two classes separately in the feature space and used the minimum reconstruction error to detect OODs. This obtained overall TNR of 80.91% (at 95% TPR). • Figure 3(d) shows that K-Nearest Neighbor (kNN) (Papernot & McDaniel, 2018) nonconformance in the labels of the nearest neighbors cannot detect OODs in clusters A and B with high epistemic uncertainty. The overall TNR (at 95% TPR) reported by this technique is 15%. These observations can be explained by the focus of different detection techniques on measuring different forms of uncertainty. This motivates our integrated OOD detection method. 3 INTEGRATED OOD DETECTION METHOD Complementary information about different OOD types can be used to detect a wider range of OODs. Figure 4 shows the improvement in the TNR of the OOD detector composed with information about different classes of OODs on the two half-moons dataset. Non-conformity in the labels of the nearest neighbors captures OODs in cluster C. Mahalanobis distance from the tied in-distribution detects OODs in cluster A. Reconstruction error from the PCA of the 2 class distributions captures OODs in cluster B. Softmax scores further strengthens the OOD detection by reporting OODs in cluster C that are undetected by the other three methods. 
The integrated OOD detection approach, thus, uses the following attributes, each specialized in detecting a specific type (or a combination of types) of OODs: 1. Mahalanobis distance from the in-distribution density estimate that considers either tied (Lee et al., 2018) or class-wise covariance estimate. This attribute captures the overall or classconditional epistemic uncertainty of an OOD. Our refinement to also use class-wise covariance significantly improves detection of OODs when coupled with PCA approach described below. 2. Conformance measure among the variance of the Annoy (Bernhardsson, 2018) nearest neighbors calculated as the Mahalanobis distance of the input’s conformance to the closest class conformance. Our experiments found this to be very effective in capturing aleatoric uncertainty. This new attribute is a fusion of nearest-neighbor and Mahalanobis distance methods in literature. 3. Prediction confidence of the classifier as the maximum softmax score on the perturbed input where the perturbation used is the same as ODIN approach (Liang et al., 2017). This boosts the detection of high aleatoric uncertainty by sharpening the class-wise distributions. 4. Reconstruction error using top 40% of PCA components where the components are obtained via class conditional PCA of the training data. This boosts the detection of high class-wise epistemic uncertainty by eliminating irrelevant features. This fusion of attributes from existing state-of-the-art detection methods and new attributes was found to be the most effective integrated appraoch capable of detecting the different types of OODs. We evaluated it on several benchmarks as discussed in Section 4 with ablation study in Appendix. 4 EXPERIMENTAL RESULTS Attributes forming the signature of the OOD detector used in the experiments The signature of the OOD detector used in the experiments is the weighted sum of four attributes, one from each of the following four categories: 1. Distance from the in-distribution density estimate: We use mahalanobis distance of the input with respect to the closest class conditional distribution. The parameters of this distance are chosen from one of the following two categories: • empirical class means and tied empirical covariance of training samples • empirical class means and empirical class covariance of training samples 2. Reconstruction error: We perform class conditional PCA empirically from the training samples. We use minimum reconstruction error of the input from the top 40% eigen vectors of the class conditional eigen spaces. 3. Prediction confidence of the classifier: We use maximum value of the temperature scaled softmax scores (S) on the perturbed input. Perturbations to the input (x) are made according to the following equation (Liang et al., 2017) x̃ = x− sign(−∇xlogSŷ(x;T )) (1) The values of the magnitude of noise ( ) and the temperature scaling parameter (T ) are chosen from one of the following three categories: • = 0 and T = 0 • = 0 and T = 10 • = 0.005 and T = 10 4. Conformance measure among the nearest neighbors: We compute an m-dimensional feature vector to capture the conformance among the input’s nearest neighbors in the training samples, where m is the dimension of the input. We call this m-dimensional feature vector as the conformance vector. The conformance vector is calculated by taking the mean deviation along each dimension of the nearest neighbors from the input. 
We hypothesize that this deviation for the in-distribution samples would differ from that of the OODs due to aleatoric uncertainty. The value of the conformance measure is calculated by computing the Mahalanobis distance of the input's conformance vector to the closest class conformance distribution (sketched below). Similar to the distance for the in-distribution density estimate, the parameters of this Mahalanobis distance are chosen from the following two categories: • empirical class means and tied empirical covariance on the conformance vectors of the training samples • empirical class means and empirical class covariance on the conformance vectors of the training samples The number of nearest neighbors is chosen from the set {10, 20, 30, 40, 50} via validation. We used Annoy (Approximate Nearest Neighbors Oh Yeah) (Bernhardsson, 2018) to compute the nearest neighbors. The weights of the four attributes forming the signature of the OOD detector are generated in the following manner. We use a small subset (1000 samples) of both the in-distribution and the generated OOD data to train a binary classifier using the logistic loss. The OOD data used to train the classifier is generated by perturbing the in-distribution data using the Fast Gradient Sign Method (FGSM) attack (Goodfellow et al., 2014). The trained classifier (or OOD detector) is then evaluated on the real OOD dataset at a True Positive Rate of 95%. The best result, in terms of the highest TNR on the validation dataset (from the training phase of the OOD detector), from the twelve combinations of the aforementioned sub-categories (one from each of the four attributes) is then reported on the test (or real) OOD datasets. Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as CIFAR10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and the detection accuracy (DTACC) to evaluate our performance. DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as ResNet (He et al., 2016), WideResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2017). Comparison with the state-of-the-art. We compare our approach with three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017), and Mahalanobis (Lee et al., 2018). For the ODIN method, the perturbation noise ε is chosen from the set {0, 0.0005, 0.001, 0.0014, 0.002, 0.0024, 0.005, 0.01, 0.05, 0.1, 0.2}, and the temperature T is chosen from the set {1, 10, 100, 1000}. These values are selected on a validation set of adversarial samples generated from the in-distribution data by the FGSM attack. For the Mahalanobis method, we consider their best results obtained after feature ensembling and input pre-processing, with the hyperparameters of their OOD detector tuned on the in-distribution and adversarial samples generated by the FGSM attack. The magnitude of the noise used in pre-processing of the inputs is chosen from the set {0.0, 0.01, 0.005, 0.002, 0.0014, 0.001, 0.0005}. CIFAR10. With CIFAR10 as in-distribution, we consider SVHN (Netzer et al., 2011), TinyImagenet (Deng et al., 2009), and LSUN (Yu et al., 2015) as the OOD datasets.
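A minimal sketch of the conformance attribute and the logistic-regression fusion described above, using brute-force kNN in place of Annoy; all function and variable names here are ours, not the paper's, and the precomputed statistics are assumed given.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def conformance_vector(x, feats, k=10):
    # Mean deviation, per feature dimension, of the k nearest training
    # neighbors from the input (brute-force kNN stands in for Annoy here).
    idx = NearestNeighbors(n_neighbors=k).fit(feats).kneighbors(
        x[None, :], return_distance=False)[0]
    return (feats[idx] - x).mean(axis=0)

def conformance_score(x, feats, class_stats, k=10):
    # Mahalanobis distance of the input's conformance vector to the closest
    # class conformance distribution; class_stats holds one (mean, inverse
    # covariance) pair per class, estimated on training conformance vectors.
    v = conformance_vector(x, feats, k)
    return min(float((v - mu) @ cov_inv @ (v - mu)) for mu, cov_inv in class_stats)

def fit_detector(scores_id, scores_ood):
    # Weighted sum of the four attributes learned with the logistic loss;
    # scores_id / scores_ood are (n, 4) arrays of attribute values for the
    # ~1000 in-distribution samples and their FGSM-perturbed counterparts.
    X = np.vstack([scores_id, scores_ood])
    y = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    return LogisticRegression().fit(X, y)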
For CIFAR10, we consider two DNNs: ResNet50 and WideResNet. Table 1 shows the results. SVHN. With SVHN as in-distribution, we consider CIFAR10, Imagenet, and LSUN as the OOD datasets. For SVHN, we use the DenseNet classifier. Table 1 shows the results. Key observations. We do not consider pre-processing of the inputs in our integrated OOD detector. Even without input pre-processing, and with the exception of the CIFAR10 OOD dataset for SVHN in-distribution trained on DenseNet, we perform on par with (and even outperform in most of the cases) the Mahalanobis method on its best results generated after pre-processing the input. We also consider a Subset-CIFAR100 as OODs for CIFAR10. Specifically, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs; these are visually similar to the ship, automobile, and bird classes in CIFAR10. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes the OOD detection challenging. Figure 5 shows the t-SNE (Maaten & Hinton, 2008) plot of the penultimate features from the ResNet50 model trained on CIFAR10. We show 4 examples of OODs (2 due to epistemic and 2 due to aleatoric uncertainty) from Subset-CIFAR100. These OODs were detected by our integrated approach but missed by the Mahalanobis approach. These observations justify the effectiveness of integrating multiple attributes to detect OOD samples. Additional experimental results in the appendix. We also compare the performance of the integrated OOD detector with the SPB, ODIN, and Mahalanobis detectors in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). These results include experiments on CIFAR10, SVHN, and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. All these results, along with the ablation studies on OOD detectors with single attributes, are included in the Appendix. In almost all of the reported results in the Appendix, our OOD detector outperforms the compared state-of-the-art methods, with improvements of up to 2X higher TNR at 95% TPR in some cases. 5 DISCUSSION AND FUTURE WORK Recent techniques propose refinements in the training process of the classifiers for OOD detection. Some of these techniques include fine-tuning the classifier's training with an auxiliary cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020). Other techniques make use of self-supervised models for OOD detection (Tack et al., 2020; Hendrycks et al., 2019b). We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector that makes use of the feature space of the pre-trained classifiers to distinguish in-distribution samples from OODs. Our approach does not require modification of the training cost function of the original task. These results are reported in the Appendix. We consider making use of the feature space of such refined (e.g., self-supervised) classifiers in our OOD detection technique as a promising direction for future work. Another direction of future work is to explore the score functions used in these refined training processes for OOD detection (Liu et al., 2020; Hendrycks et al., 2019a; Tack et al., 2020; Hendrycks et al., 2019b) as attributes (or categories of attributes) forming the signature of the integrated OOD detector.
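The TNR at 95% TPR and AUROC figures quoted above can be computed as in the following minimal sketch (our naming; detector scores are assumed to be higher for OODs, and in-distribution samples are the positive class):

import numpy as np
from sklearn.metrics import roc_auc_score

def tnr_at_95_tpr(scores_id, scores_ood):
    # Threshold set so that 95% of in-distribution (positive) samples are
    # accepted; TNR is then the fraction of OOD samples rejected.
    thresh = np.percentile(scores_id, 95)
    return float((scores_ood > thresh).mean())

def auroc(scores_id, scores_ood):
    y = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    return roc_auc_score(y, np.concatenate([scores_id, scores_ood]))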
Another avenue of future work is to explore OOD generation techniques other than adversarial examples generated by the FGSM attack for the training of the integrated OOD detector. 6 CONCLUSION We introduced a taxonomy of OODs and proposed an integrated approach to detect different types of OODs. Our taxonomy classifies OODs based on the nature of their uncertainty, and we demonstrated that no single state-of-the-art approach detects all these OOD types. Motivated by this observation, we formulated an integrated approach that fuses multiple attributes to target different types of OODs. We have performed extensive experiments on a synthetic dataset and several benchmark datasets (e.g., MNIST, CIFAR10, SVHN). Our experiments show that our approach can accurately detect various types of OODs coming from a wide range of OOD datasets such as KMNIST, Fashion-MNIST, SVHN, LSUN, and Imagenet. We have shown that our approach generalizes over multiple DNN architectures and performs robustly when the OOD samples are similar to in-distribution data. A APPENDIX A.1 DEFINING OODS DUE TO EPISTEMIC AND ALEATORIC UNCERTAINTY In general, let there be k classes c1, c2, . . . , ck, and let the distribution of training data for each class be p(x|ci). The overall training distribution is denoted by p(x). Now, given a new input x̂ to the trained DNN model M, let ĉ = M(x̂) denote the predicted class. The flowchart in Figure 6 shows the different sources of uncertainty that could make x̂ an OOD. A.2 ADDITIONAL EXPERIMENTAL RESULTS We first present preliminary results for comparison with the OOD detection techniques based on fine-tuning of the classifiers (Hendrycks et al., 2019a; Liu et al., 2020; Tack et al., 2020; Hendrycks et al., 2019b). We then present our results on various vision datasets and different architectures of the pre-trained DNN-based classifiers for these datasets in comparison to the ODIN, Mahalanobis, and SPB methods in supervised settings. Finally, we report results from the ablation study on OOD detection with individual attributes and compare it with our integrated approach to OOD detection. A.2.1 COMPARISON WITH THE OOD DETECTION TECHNIQUES BASED ON REFINEMENT OF THE TRAINING PROCESS FOR CLASSIFIERS Recent techniques propose refinements in the training process of the classifiers for OOD detection. Some of these techniques include fine-tuning the training of classifiers with an auxiliary cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020), self-supervised training of the classifiers to enhance OOD detection (Tack et al., 2020; Hendrycks et al., 2019b), etc. We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector that uses features of the pre-trained classifiers to distinguish in-distribution samples from OODs. Table 2 compares the TNR (at 95% TPR), AUROC, and AUPR for the energy-based OOD detector (Liu et al., 2020) and our integrated OOD detector on CIFAR10 with a pre-trained WideResNet model. The integrated OOD detector was trained on in-distribution samples and adversarial samples generated by the FGSM attack. Table 3 compares the results of the WideResNet model trained on CIFAR10 and fine-tuned with outlier exposure on the 80 Million Tiny Images dataset against our OOD detector that uses features from the pre-trained WideResNet model trained on CIFAR10.
Since the 80 Million Tiny Images dataset is no longer available for use, we used a small subset of ImageNet (treated as an OOD dataset for the CIFAR10 and SVHN datasets (Lee et al., 2018)) for generating OODs for the training of the integrated OOD detector. Table 4 compares the OOD detection performance of the self-supervised-training-based OOD detector with our method. We trained our OOD detector with CIFAR10 as the in-distribution samples and FGSM adversarial samples generated from the CIFAR10 test set as OODs. The trained OOD detector was then tested on LSUN as OODs, and the results are reported in Table 4. With ResNet-50 as the classifier for CIFAR10, we trained our OOD detector with CIFAR10 as the in-distribution samples and FGSM adversarial samples generated from the CIFAR10 test set as OODs. The trained OOD detector was then tested on SVHN as OODs, and these results are compared with contrastive learning for OOD detection (Tack et al., 2020) in Table 5. A.2.2 COMPARISON WITH THE STATE-OF-THE-ART OOD DETECTION METHODS IN SUPERVISED SETTINGS ON PRE-TRAINED CLASSIFIERS We compare our results with the state-of-the-art methods in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). In supervised settings, a small subset of the real OOD dataset is used in the training of the OOD detector. Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR) with both in-distribution and OODs as positive samples (AUPR IN and AUPR OUT, respectively), and the detection accuracy (DTACC) to evaluate our performance. DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as LeNet (LeCun et al., 1998), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017). Comparison with the state-of-the-art. We compare our approach with three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017), and Mahalanobis (Lee et al., 2018). Since these experiments are performed in supervised settings, we fix T = 10 and ε = 0.005 for generating results from the ODIN method. For the Mahalanobis distance, we consider the distance in the penultimate-layer feature space as well as features from all the layers of the DNN, without pre-processing of the input in either setting. MNIST. With MNIST as in-distribution, we consider KMNIST (Clanuwat et al., 2018) and Fashion-MNIST (F-MNIST) (Xiao et al., 2017) as OOD datasets. For MNIST, we use the LeNet5 (LeCun et al., 1998) DNN. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. In all these settings, our approach outperforms the state-of-the-art approaches on both the OOD datasets.
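As a reference for the fixed ODIN setting above (T = 10, ε = 0.005), the perturbed, temperature-scaled confidence of Equation (1) can be sketched as follows; model is any classifier returning logits, and PyTorch is used purely for illustration:

import torch
import torch.nn.functional as F

def odin_confidence(model, x, T=10.0, eps=0.005):
    # Compute -log S_yhat(x; T), take its input gradient, and apply the
    # perturbation x_tilde = x - eps * sign(-grad_x log S_yhat(x; T)) of Eq. (1).
    x = x.clone().requires_grad_(True)
    loss = -F.log_softmax(model(x) / T, dim=1).max(dim=1).values.sum()
    loss.backward()
    x_tilde = x - eps * x.grad.sign()  # x.grad already equals -grad_x log S_yhat
    with torch.no_grad():
        return F.softmax(model(x_tilde) / T, dim=1).max(dim=1).values  # low => OOD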
Results in terms of AUPR IN and AUPR OUT are shown in Table 9. Here also, our technique outperforms all three OOD detectors on all the test cases. CIFAR10. With CIFAR10 as in-distribution, we consider STL10 (Coates et al., 2011), SVHN (Netzer et al., 2011), Imagenet (Deng et al., 2009), LSUN (Yu et al., 2015), and a subset of CIFAR100 (SCIFAR100) (Krizhevsky et al., 2009) as OOD datasets. For CIFAR10, we consider three DNNs: DenseNet, ResNet34, and ResNet50. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in terms of AUPR IN and AUPR OUT are shown in Tables 10, 11, and 12. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Note that images from STL10 and the subset of CIFAR100 are quite similar to CIFAR10 images. Furthermore, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs; these are visually similar to the ship, automobile, and bird classes in CIFAR10. SVHN. With SVHN as in-distribution, we consider STL10, CIFAR10, Imagenet, LSUN, and SCIFAR100 as OOD datasets. For SVHN, we consider two DNNs: DenseNet and ResNet34. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in terms of AUPR IN and AUPR OUT are shown in Tables 13 and 14. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Key observations. As shown in Tables 6, 7, and 8, our approach outperforms the state-of-the-art on all three datasets and with various DNN architectures. On CIFAR10, in terms of the TNR metric, our approach with ResNet50 outperforms Mahalanobis by 56% when SVHN is OOD, and our approach with ResNet34 outperforms ODIN by 36% when LSUN is OOD. When STL10 and Subset-CIFAR100 are used as OODs for CIFAR10, the images from both these datasets are quite similar to CIFAR10 images. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes detection challenging. Although our performance is lower on the STL10 dataset, it still outperforms the state-of-the-art. For instance, the proposed approach achieves a 27% better TNR score than Mahalanobis using ResNet50. On SVHN, in terms of the TNR metric, our approach outperforms ODIN and Mahalanobis by 63% and 13%, respectively, on SCIFAR100 using ResNet34. The above observations justify the effectiveness of integrating multiple attributes to detect OOD samples. A.2.3 ABLATION STUDY We report an ablation study on OOD detection with individual attributes and compare it with our integrated approach on the penultimate feature space of the classifier in the supervised settings as described in the previous section.
We refer to the OOD detector using the Mahalanobis distance estimated with class means and a tied covariance (Lee et al., 2018) as Mahala-Tied. The detector based on the Mahalanobis distance estimated with class means and class covariances is referred to as Mahala-Class. Similarly, the conformance among the K-nearest neighbors (KNN) measured with tied and class covariances is referred to as KNN-Tied and KNN-Class, respectively, in these experiments. Results for this study on CIFAR10 with the DenseNet architecture and on SVHN with the DenseNet and ResNet34 architectures are shown in Tables 15, 16, and 17, respectively. The integrated approach outperforms all the single-attribute OOD detectors in all the tested cases owing to its detection of diverse OOD types. An important observation from these experiments is that the performance of the single-attribute methods can depend on the architecture of the classifier. For example, while PCA performs considerably worse than all the other methods in the case of DenseNet (for both CIFAR10 and SVHN), it outperforms every method except the integrated approach for SVHN on ResNet34.
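Schematically, the ablation protocol amounts to evaluating each attribute alone and the fused score with the same metric, as in the following sketch; the per-attribute score arrays are illustrative assumptions standing in for the actual detector outputs, and tnr_at_95_tpr is the helper sketched earlier:

# Evaluate each attribute alone and the integrated score with the same metric.
single_attribute_scores = {
    "Mahala-Tied": (mahala_tied_id, mahala_tied_ood),      # assumed precomputed
    "Mahala-Class": (mahala_class_id, mahala_class_ood),
    "PCA": (pca_id, pca_ood),
    "KNN-Class": (knn_class_id, knn_class_ood),
}
for name, (s_id, s_ood) in single_attribute_scores.items():
    print(name, "TNR@95%TPR:", tnr_at_95_tpr(s_id, s_ood))
print("Integrated:", tnr_at_95_tpr(integrated_id, integrated_ood))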
1. What are the strengths and weaknesses of the paper regarding its contribution, comparison with other works, and discussion of the final detection score?
2. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
3. Do you have any questions or concerns about the taxonomy of OOD samples, the criteria for OOD categorization, and the visualization of figures?
4. Why did the authors choose a subset of CIFAR100 as an OOD test set but not the whole dataset?
5. Can the authors explain and discuss the final detection score and its hyperparameters?
6. How does the reviewer evaluate the relevance and effectiveness of the references cited in the review?
Review
Review -- Paper Summary: The paper presents the idea of fusing attributes from existing state-of-the-art OOD detection methods to achieve higher detection performance. -- Review: The three criteria presented in Section 2 are questions rather than criteria; they would be better re-worded as criteria. Figure 1 suggests the "tied distribution of all training data" is different from the combination of the "class distributions". I wish the authors could explain the difference between Type 4 and Type 5 in the OOD sample taxonomy. The relation between the five types of OODs and the three criteria for OOD categorization is not clear. The visualization in all figures could be improved. Figure 1: too many colors; better to use different shapes or numbers directly in the figure. Figure 5: not necessary to include; hard to see and comprehend. The total number of figures can be reduced by eliminating some and combining others. What was the reason to choose a subset of CIFAR100 as the OOD test set rather than the whole dataset? The authors emphasize reporting detection TNR in the manuscript while FNR is missing from the measurements. I suggest the authors either report both or use threshold-agnostic metrics like the area under the precision-recall curve (AUPR) or the area under the receiver operating characteristic curve (AUROC), as in the tables. I can't find an explanation and/or discussion of the final detection score and its hyperparameters. The results for the Mahalanobis technique [7] do not match the original paper. If the authors did not use a subset of OOD samples for tuning, it should be reported in the paper. -- Strengths: An interesting taxonomy of OOD samples and the conclusion that follows for an integrated detection score. -- Weaknesses: Limited contribution; no discussion of the final detection score and its hyperparameters. Comparisons with more recent techniques, including Outlier Exposure [1], the self-supervised reject classifier [2], geometric self-supervised learning [3,4], and contrastive learning [5,6], are missing from this paper. [1] Hendrycks, D., Mazeika, M., & Dietterich, T. (2019). Deep anomaly detection with outlier exposure. ICLR 2019. [2] Mohseni, S., et al. (2020). Self-supervised learning for generalizable out-of-distribution detection. AAAI 2020. [3] Hendrycks, D., Mazeika, M., Kadavath, S., & Song, D. (2019). Using self-supervised learning can improve model robustness and uncertainty. In Advances in Neural Information Processing Systems (pp. 15663-15674). [4] Golan, I., & El-Yaniv, R. (2018). Deep anomaly detection using geometric transformations. In Advances in Neural Information Processing Systems. [5] Tack, J., Mo, S., Jeong, J., & Shin, J. (2020). CSI: Novelty detection via contrastive learning on distributionally shifted instances. arXiv preprint arXiv:2007.08176. [6] Winkens, J., Bunel, R., Roy, A. G., Stanforth, R., Natarajan, V., Ledsam, J. R., ... & Cemgil, T. (2020). Contrastive training for improved out-of-distribution detection. arXiv preprint arXiv:2007.05566. [7] Lee, K., Lee, K., Lee, H., & Shin, J. (2018). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems (pp. 7167-7177). [8] Liu, H., & Abbeel, P. (2020). Hybrid discriminative-generative training via contrastive learning. arXiv preprint arXiv:2007.09070.
ICLR
Title Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs Abstract Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs. This limitation is one of the key challenges in the adoption of deep learning models in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model's prediction cannot be trusted. These techniques use different statistical, geometric, or topological signatures. This paper presents a taxonomy of OOD outlier inputs based on their source and nature of uncertainty. We demonstrate how different existing detection approaches fail to detect certain types of outliers. We utilize these insights to develop a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers. Our results include experiments on CIFAR10, SVHN, and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. 1 INTRODUCTION Deep neural networks (DNNs) have achieved remarkable performance levels in many areas such as computer vision (Gkioxari et al., 2015), speech recognition (Hannun et al., 2014), and text analysis (Majumder et al., 2017). But their deployment in safety-critical systems such as self-driving vehicles (Bojarski et al., 2016), aircraft collision avoidance (Julian & Kochenderfer, 2017), and medical diagnosis (De Fauw et al., 2018) is hindered by their brittleness. One major challenge is the inability of DNNs to be self-aware of when new inputs are outside the training distribution and likely to produce incorrect predictions. It has been widely reported in the literature (Guo et al., 2017a; Hendrycks & Gimpel, 2016) that deep neural networks exhibit overconfident incorrect predictions on inputs which are outside the training distribution. The responsible deployment of deep neural network models in high-assurance applications necessitates the detection of out-of-distribution (OOD) data so that DNNs can abstain from making decisions on such inputs. Recent approaches for OOD detection consider different statistical, geometric, or topological signatures in data that differentiate OODs from the training distribution. For example, changes in the softmax scores due to input perturbations and temperature scaling have been used to detect OODs (Hendrycks & Gimpel, 2016; Liang et al., 2017; Guo et al., 2017b). Papernot & McDaniel (2018) use the conformance among the labels of the nearest neighbors, while Tack et al. (2020) use the cosine similarity (modulated by the norm of the feature vector) to the nearest training sample for the detection of OODs. Lee et al. (2018) consider the Mahalanobis distance of an input from the in-distribution data to detect OODs.
Several other metrics such as the reconstruction error (An & Cho, 2015), the likelihood ratio between the in-distribution and OOD samples (Ren et al., 2019), trust scores (the ratio of the distance to the nearest class different from the predicted class and the distance to the predicted class) (Jiang et al., 2018), density functions (Liu et al., 2020; Hendrycks et al., 2019a), and the probability distribution of the softmax scores (Lee et al., 2017; Hendrycks et al., 2019b; Tack et al., 2020; Hendrycks et al., 2019a) have also been used to detect OODs. All these methods attempt to develop a uniform approach with a single signature to detect all OODs, accompanied by empirical evaluations that use datasets such as CIFAR10 as in-distribution data and other datasets such as SVHN as OOD. Our study shows that OODs can be of diverse types with different defining characteristics. Consequently, an integrated approach that takes into account the diversity of these outliers is needed for effective OOD detection. We make the following three contributions in this paper: • Taxonomy of OODs. We define a taxonomy of OOD samples that classifies OODs into different types based on aleatoric vs. epistemic uncertainty (Hüllermeier & Waegeman, 2019), distance from the predicted class vs. distance from the tied training distribution, and uncertainty in the principal components vs. uncertainty in non-principal components with low variance. • Incompleteness of existing uniform OOD detection approaches. We examine the limitations of the state-of-the-art approaches in detecting various types of OOD samples. We observe that not all outliers are alike and that existing approaches fail to detect particular types of OODs. We use a toy dataset comprising two half-moons as two different classes to demonstrate these limitations. • An integrated OOD detection approach. We propose an integrated approach that can detect different types of OOD inputs. We demonstrate the effectiveness of our approach on several benchmarks, and compare against state-of-the-art OOD detection approaches such as ODIN (Liang et al., 2017) and the Mahalanobis distance method (Lee et al., 2018). 2 OOD TAXONOMY AND EXISTING DETECTION METHODS DNNs predict the class of a new input based on the classification boundaries learned from the samples of the training distribution. Aleatoric uncertainty is high for inputs which are close to the classification boundaries, and epistemic uncertainty is high when the input is far from the learned distributions of all classes (Hora, 1996; Hüllermeier & Waegeman, 2019). Given the predicted class of a DNN model on a given input, we can observe the distance of the input from the distribution of this particular class and identify it as an OOD if this distance is high. We use this top-down inference approach to detect the type of OODs that are characterized by an inconsistency between the model's prediction and the input's distance from the distribution of the predicted class. Further, typical inputs to DNNs are high-dimensional and can be decomposed into principal and non-principal components based on the directions of high variation; this yields another dimension for the classification of OODs. We, thus, categorize an OOD using the following three criteria. 1. Is the OOD associated with higher epistemic or aleatoric uncertainty, i.e., is the input away from the in-distribution data, or can it be confused between multiple classes? 2. Is the epistemic uncertainty of an OOD sample unconditional, or is it conditioned on the class predicted by the DNN model? 3.
Is the OOD an outlier due to unusually high deviation in the principal components of the data or due to small deviation in the non-principal (and hence, statistically invariant) components? Figure 1 demonstrates different types of OODs which differ along these criteria. Type 1 OODs have high epistemic uncertainty and are away from the in-distribution data. Type 2 OODs have high epistemic uncertainty with respect to each of the 3 classes, even though approximating all in-distribution (ID) data using a single Gaussian distribution will miss these outliers. Type 3 OODs have high aleatoric uncertainty as they are close to the decision boundary between class 0 and class 1. Types 4 and 5 have high epistemic uncertainty with respect to their closest classes. While Type 4 OODs are far from the distribution along the principal axis, Type 5 OODs vary along a relatively invariant axis where even a small deviation indicates that the sample is an OOD. Limitations of Existing Detection Methods. We empirically demonstrate the limitations of existing OOD detection methods on a two-dimensional (2D) half-moon dataset with two classes. As shown in Figure 2, we consider three clusters of OOD samples: clusters A (black), B (brown), and C (red). Figure 2 (right) shows the 2D penultimate features of the classifier. Different approaches differ in their ability to detect different OOD types, as illustrated in Figure 3. • Figure 3(a) shows that the Mahalanobis distance (Lee et al., 2018) from the mean and tied covariance of all the training data in the feature space cannot detect OODs in clusters B and C, corresponding to class-conditional epistemic uncertainty and aleatoric uncertainty, respectively. It attains an overall true negative rate (TNR) of 39.09% at the 95% true positive rate (TPR). • Figure 3(b) shows that the softmax prediction probability (SPB) (Hendrycks & Gimpel, 2016) cannot detect the OODs in cluster A, corresponding to high epistemic uncertainty. The TNR (at 95% TPR) reported by the SPB technique is 60.91%. • Figure 3(c) shows that class-wise Principal Component Analysis (PCA) (Hoffmann, 2007) cannot detect OODs in cluster C, corresponding to high aleatoric uncertainty. We performed PCA of the two classes separately in the feature space and used the minimum reconstruction error to detect OODs. This obtains an overall TNR of 80.91% (at 95% TPR). • Figure 3(d) shows that K-Nearest Neighbor (kNN) (Papernot & McDaniel, 2018) non-conformance in the labels of the nearest neighbors cannot detect OODs in clusters A and B with high epistemic uncertainty. The overall TNR (at 95% TPR) reported by this technique is 15%. These observations can be explained by the focus of different detection techniques on measuring different forms of uncertainty. This motivates our integrated OOD detection method. 3 INTEGRATED OOD DETECTION METHOD Complementary information about different OOD types can be used to detect a wider range of OODs. Figure 4 shows the improvement in the TNR of the OOD detector as it is composed with information about different classes of OODs on the two half-moons dataset. Non-conformity in the labels of the nearest neighbors captures OODs in cluster C. Mahalanobis distance from the tied in-distribution estimate detects OODs in cluster A. Reconstruction error from the PCA of the 2 class distributions captures OODs in cluster B. Softmax scores further strengthen the OOD detection by reporting OODs in cluster C that are undetected by the other three methods.
The integrated OOD detection approach, thus, uses the following attributes, each specialized in detecting a specific type (or a combination of types) of OODs: 1. Mahalanobis distance from the in-distribution density estimate that considers either a tied (Lee et al., 2018) or a class-wise covariance estimate. This attribute captures the overall or class-conditional epistemic uncertainty of an OOD. Our refinement to also use class-wise covariance significantly improves detection of OODs when coupled with the PCA approach described below. 2. Conformance measure among the Annoy (Bernhardsson, 2018) nearest neighbors, calculated as the Mahalanobis distance of the input's conformance vector to the closest class conformance distribution. Our experiments found this to be very effective in capturing aleatoric uncertainty. This new attribute is a fusion of the nearest-neighbor and Mahalanobis distance methods in the literature. 3. Prediction confidence of the classifier, i.e., the maximum softmax score on the perturbed input, where the perturbation used is the same as in the ODIN approach (Liang et al., 2017). This boosts the detection of high aleatoric uncertainty by sharpening the class-wise distributions. 4. Reconstruction error using the top 40% of PCA components, where the components are obtained via class-conditional PCA of the training data. This boosts the detection of high class-wise epistemic uncertainty by eliminating irrelevant features. This fusion of attributes from existing state-of-the-art detection methods and new attributes was found to be the most effective integrated approach capable of detecting the different types of OODs. We evaluated it on several benchmarks as discussed in Section 4, with an ablation study in the Appendix. 4 EXPERIMENTAL RESULTS Attributes forming the signature of the OOD detector used in the experiments. The signature of the OOD detector used in the experiments is the weighted sum of four attributes, one from each of the following four categories: 1. Distance from the in-distribution density estimate: We use the Mahalanobis distance of the input with respect to the closest class-conditional distribution. The parameters of this distance are chosen from one of the following two categories: • empirical class means and tied empirical covariance of training samples • empirical class means and empirical class covariance of training samples 2. Reconstruction error: We perform class-conditional PCA empirically from the training samples. We use the minimum reconstruction error of the input from the top 40% eigenvectors of the class-conditional eigenspaces. 3. Prediction confidence of the classifier: We use the maximum value of the temperature-scaled softmax scores (S) on the perturbed input. Perturbations to the input (x) are made according to the following equation (Liang et al., 2017): x̃ = x − ε · sign(−∇x log Sŷ(x; T)) (1) The values of the magnitude of the noise (ε) and the temperature scaling parameter (T) are chosen from one of the following three categories: • ε = 0 and T = 1 • ε = 0 and T = 10 • ε = 0.005 and T = 10 4. Conformance measure among the nearest neighbors: We compute an m-dimensional feature vector to capture the conformance among the input's nearest neighbors in the training samples, where m is the dimension of the input. We call this m-dimensional feature vector the conformance vector. The conformance vector is calculated by taking the mean deviation along each dimension of the nearest neighbors from the input.
We hypothesize that this deviation for the in-distribution samples would differ from that of the OODs due to aleatoric uncertainty. The value of the conformance measure is calculated by computing the Mahalanobis distance of the input's conformance vector to the closest class conformance distribution. Similar to the distance for the in-distribution density estimate, the parameters of this Mahalanobis distance are chosen from the following two categories: • empirical class means and tied empirical covariance on the conformance vectors of the training samples • empirical class means and empirical class covariance on the conformance vectors of the training samples The number of nearest neighbors is chosen from the set {10, 20, 30, 40, 50} via validation. We used Annoy (Approximate Nearest Neighbors Oh Yeah) (Bernhardsson, 2018) to compute the nearest neighbors. The weights of the four attributes forming the signature of the OOD detector are generated in the following manner. We use a small subset (1000 samples) of both the in-distribution and the generated OOD data to train a binary classifier using the logistic loss. The OOD data used to train the classifier is generated by perturbing the in-distribution data using the Fast Gradient Sign Method (FGSM) attack (Goodfellow et al., 2014), sketched below. The trained classifier (or OOD detector) is then evaluated on the real OOD dataset at a True Positive Rate of 95%. The best result, in terms of the highest TNR on the validation dataset (from the training phase of the OOD detector), from the twelve combinations of the aforementioned sub-categories (one from each of the four attributes) is then reported on the test (or real) OOD datasets. Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as CIFAR10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and the detection accuracy (DTACC) to evaluate our performance. DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as ResNet (He et al., 2016), WideResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2017). Comparison with the state-of-the-art. We compare our approach with three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017), and Mahalanobis (Lee et al., 2018). For the ODIN method, the perturbation noise ε is chosen from the set {0, 0.0005, 0.001, 0.0014, 0.002, 0.0024, 0.005, 0.01, 0.05, 0.1, 0.2}, and the temperature T is chosen from the set {1, 10, 100, 1000}. These values are selected on a validation set of adversarial samples generated from the in-distribution data by the FGSM attack. For the Mahalanobis method, we consider their best results obtained after feature ensembling and input pre-processing, with the hyperparameters of their OOD detector tuned on the in-distribution and adversarial samples generated by the FGSM attack. The magnitude of the noise used in pre-processing of the inputs is chosen from the set {0.0, 0.01, 0.005, 0.002, 0.0014, 0.001, 0.0005}. CIFAR10. With CIFAR10 as in-distribution, we consider SVHN (Netzer et al., 2011), TinyImagenet (Deng et al., 2009), and LSUN (Yu et al., 2015) as the OOD datasets.
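A minimal FGSM sketch for generating these surrogate training OODs; model, x, y, and eps are illustrative assumptions rather than the paper's exact configuration, and PyTorch is used only for concreteness:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    # One-step sign-gradient perturbation of in-distribution inputs,
    # used here as surrogate OODs for training the logistic detector.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()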
For CIFAR10, we consider two DNNs: ResNet50 and WideResNet. Table 1 shows the results. SVHN. With SVHN as in-distribution, we consider CIFAR10, Imagenet, and LSUN as the OOD datasets. For SVHN, we use the DenseNet classifier. Table 1 shows the results. Key observations. We do not consider pre-processing of the inputs in our integrated OOD detector. Even without input pre-processing, and with the exception of the CIFAR10 OOD dataset for SVHN in-distribution trained on DenseNet, we perform on par with (and even outperform in most of the cases) the Mahalanobis method on its best results generated after pre-processing the input. We also consider a Subset-CIFAR100 as OODs for CIFAR10. Specifically, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs; these are visually similar to the ship, automobile, and bird classes in CIFAR10. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes the OOD detection challenging. Figure 5 shows the t-SNE (Maaten & Hinton, 2008) plot of the penultimate features from the ResNet50 model trained on CIFAR10. We show 4 examples of OODs (2 due to epistemic and 2 due to aleatoric uncertainty) from Subset-CIFAR100. These OODs were detected by our integrated approach but missed by the Mahalanobis approach. These observations justify the effectiveness of integrating multiple attributes to detect OOD samples. Additional experimental results in the appendix. We also compare the performance of the integrated OOD detector with the SPB, ODIN, and Mahalanobis detectors in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). These results include experiments on CIFAR10, SVHN, and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. All these results, along with the ablation studies on OOD detectors with single attributes, are included in the Appendix. In almost all of the reported results in the Appendix, our OOD detector outperforms the compared state-of-the-art methods, with improvements of up to 2X higher TNR at 95% TPR in some cases. 5 DISCUSSION AND FUTURE WORK Recent techniques propose refinements in the training process of the classifiers for OOD detection. Some of these techniques include fine-tuning the classifier's training with an auxiliary cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020). Other techniques make use of self-supervised models for OOD detection (Tack et al., 2020; Hendrycks et al., 2019b). We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector that makes use of the feature space of the pre-trained classifiers to distinguish in-distribution samples from OODs. Our approach does not require modification of the training cost function of the original task. These results are reported in the Appendix. We consider making use of the feature space of such refined (e.g., self-supervised) classifiers in our OOD detection technique as a promising direction for future work. Another direction of future work is to explore the score functions used in these refined training processes for OOD detection (Liu et al., 2020; Hendrycks et al., 2019a; Tack et al., 2020; Hendrycks et al., 2019b) as attributes (or categories of attributes) forming the signature of the integrated OOD detector.
Another avenue of future work is to explore OOD generation techniques other than adversarial examples generated by the FGSM attack for the training of the integrated OOD detector. 6 CONCLUSION We introduced a taxonomy of OODs and proposed an integrated approach to detect different types of OODs. Our taxonomy classifies OODs based on the nature of their uncertainty, and we demonstrated that no single state-of-the-art approach detects all these OOD types. Motivated by this observation, we formulated an integrated approach that fuses multiple attributes to target different types of OODs. We have performed extensive experiments on a synthetic dataset and several benchmark datasets (e.g., MNIST, CIFAR10, SVHN). Our experiments show that our approach can accurately detect various types of OODs coming from a wide range of OOD datasets such as KMNIST, Fashion-MNIST, SVHN, LSUN, and Imagenet. We have shown that our approach generalizes over multiple DNN architectures and performs robustly when the OOD samples are similar to in-distribution data. A APPENDIX A.1 DEFINING OODS DUE TO EPISTEMIC AND ALEATORIC UNCERTAINTY In general, let there be k classes c1, c2, . . . , ck, and let the distribution of training data for each class be p(x|ci). The overall training distribution is denoted by p(x). Now, given a new input x̂ to the trained DNN model M, let ĉ = M(x̂) denote the predicted class. The flowchart in Figure 6 shows the different sources of uncertainty that could make x̂ an OOD. A.2 ADDITIONAL EXPERIMENTAL RESULTS We first present preliminary results for comparison with the OOD detection techniques based on fine-tuning of the classifiers (Hendrycks et al., 2019a; Liu et al., 2020; Tack et al., 2020; Hendrycks et al., 2019b). We then present our results on various vision datasets and different architectures of the pre-trained DNN-based classifiers for these datasets in comparison to the ODIN, Mahalanobis, and SPB methods in supervised settings. Finally, we report results from the ablation study on OOD detection with individual attributes and compare it with our integrated approach to OOD detection. A.2.1 COMPARISON WITH THE OOD DETECTION TECHNIQUES BASED ON REFINEMENT OF THE TRAINING PROCESS FOR CLASSIFIERS Recent techniques propose refinements in the training process of the classifiers for OOD detection. Some of these techniques include fine-tuning the training of classifiers with an auxiliary cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020), self-supervised training of the classifiers to enhance OOD detection (Tack et al., 2020; Hendrycks et al., 2019b), etc. We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector that uses features of the pre-trained classifiers to distinguish in-distribution samples from OODs. Table 2 compares the TNR (at 95% TPR), AUROC, and AUPR for the energy-based OOD detector (Liu et al., 2020) and our integrated OOD detector on CIFAR10 with a pre-trained WideResNet model. The integrated OOD detector was trained on in-distribution samples and adversarial samples generated by the FGSM attack. Table 3 compares the results of the WideResNet model trained on CIFAR10 and fine-tuned with outlier exposure on the 80 Million Tiny Images dataset against our OOD detector that uses features from the pre-trained WideResNet model trained on CIFAR10.
Since the 80 Million Tiny Images dataset is no longer available for use, we used a small subset of ImageNet (treated as an OOD dataset for the CIFAR10 and SVHN datasets (Lee et al., 2018)) for generating OODs for the training of the integrated OOD detector. Table 4 compares the OOD detection performance of the self-supervised-training-based OOD detector with our method. We trained our OOD detector with CIFAR10 as the in-distribution samples and FGSM adversarial samples generated from the CIFAR10 test set as OODs. The trained OOD detector was then tested on LSUN as OODs, and the results are reported in Table 4. With ResNet-50 as the classifier for CIFAR10, we trained our OOD detector with CIFAR10 as the in-distribution samples and FGSM adversarial samples generated from the CIFAR10 test set as OODs. The trained OOD detector was then tested on SVHN as OODs, and these results are compared with contrastive learning for OOD detection (Tack et al., 2020) in Table 5. A.2.2 COMPARISON WITH THE STATE-OF-THE-ART OOD DETECTION METHODS IN SUPERVISED SETTINGS ON PRE-TRAINED CLASSIFIERS We compare our results with the state-of-the-art methods in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). In supervised settings, a small subset of the real OOD dataset is used in the training of the OOD detector. Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR) with both in-distribution and OODs as positive samples (AUPR IN and AUPR OUT, respectively), and the detection accuracy (DTACC) to evaluate our performance. DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as LeNet (LeCun et al., 1998), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017). Comparison with the state-of-the-art. We compare our approach with three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017), and Mahalanobis (Lee et al., 2018). Since these experiments are performed in supervised settings, we fix T = 10 and ε = 0.005 for generating results from the ODIN method. For the Mahalanobis distance, we consider the distance in the penultimate-layer feature space as well as features from all the layers of the DNN, without pre-processing of the input in either setting. MNIST. With MNIST as in-distribution, we consider KMNIST (Clanuwat et al., 2018) and Fashion-MNIST (F-MNIST) (Xiao et al., 2017) as OOD datasets. For MNIST, we use the LeNet5 (LeCun et al., 1998) DNN. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. In all these settings, our approach outperforms the state-of-the-art approaches on both the OOD datasets.
Results in terms of AUPR IN and AUPR OUT are shown in Table 9. Here also, our technique outperforms all three OOD detectors on all the test cases. CIFAR10. With CIFAR10 as in-distribution, we consider STL10 (Coates et al., 2011), SVHN (Netzer et al., 2011), Imagenet (Deng et al., 2009), LSUN (Yu et al., 2015), and a subset of CIFAR100 (SCIFAR100) (Krizhevsky et al., 2009) as OOD datasets. For CIFAR10, we consider three DNNs: DenseNet, ResNet34, and ResNet50. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in terms of AUPR IN and AUPR OUT are shown in Tables 10, 11, and 12. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Note that images from STL10 and the subset of CIFAR100 are quite similar to CIFAR10 images. Furthermore, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs; these are visually similar to the ship, automobile, and bird classes in CIFAR10. SVHN. With SVHN as in-distribution, we consider STL10, CIFAR10, Imagenet, LSUN, and SCIFAR100 as OOD datasets. For SVHN, we consider two DNNs: DenseNet and ResNet34. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in terms of AUPR IN and AUPR OUT are shown in Tables 13 and 14. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Key observations. As shown in Tables 6, 7, and 8, our approach outperforms the state-of-the-art on all three datasets and with various DNN architectures. On CIFAR10, in terms of the TNR metric, our approach with ResNet50 outperforms Mahalanobis by 56% when SVHN is OOD, and our approach with ResNet34 outperforms ODIN by 36% when LSUN is OOD. When STL10 and Subset-CIFAR100 are used as OODs for CIFAR10, the images from both these datasets are quite similar to CIFAR10 images. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes detection challenging. Although our performance is lower on the STL10 dataset, it still outperforms the state-of-the-art. For instance, the proposed approach achieves a 27% better TNR score than Mahalanobis using ResNet50. On SVHN, in terms of the TNR metric, our approach outperforms ODIN and Mahalanobis by 63% and 13%, respectively, on SCIFAR100 using ResNet34. The above observations justify the effectiveness of integrating multiple attributes to detect OOD samples. A.2.3 ABLATION STUDY We report an ablation study on OOD detection with individual attributes and compare it with our integrated approach on the penultimate feature space of the classifier in the supervised settings as described in the previous section.
We refer to the OOD detector using the Mahalanobis distance estimated with class means and a tied covariance (Lee et al., 2018) as Mahala-Tied. The detector based on the Mahalanobis distance estimated with class means and class covariances is referred to as Mahala-Class. Similarly, the conformance among the K-nearest neighbors (KNN) measured with tied and class covariances is referred to as KNN-Tied and KNN-Class, respectively, in these experiments. Results for this study on CIFAR10 with the DenseNet architecture and on SVHN with the DenseNet and ResNet34 architectures are shown in Tables 15, 16, and 17, respectively. The integrated approach outperforms all the single-attribute OOD detectors in all the tested cases owing to its detection of diverse OOD types. An important observation from these experiments is that the performance of the single-attribute methods can depend on the architecture of the classifier. For example, while PCA performs considerably worse than all the other methods in the case of DenseNet (for both CIFAR10 and SVHN), it outperforms every method except the integrated approach for SVHN on ResNet34.
1. What are the contributions of the paper regarding OOD detection?
2. What are the strengths of the proposed integrated OOD detection approach?
3. What are the weaknesses of the paper, particularly in terms of theoretical analysis?
4. How does the reviewer assess the significance of the novel OOD taxonomy introduced by the authors?
5. Can the authors provide further explanations or clarifications regarding the limitations of current OOD detection algorithms and the proposed integrated approach?
Review
Review ########################################################################## Summary: This paper introduces a novel taxonomy for OOD outliers. The authors analyze current OOD detection approaches and uncover their limitations. They propose to fuse several existing approaches into a combined one and extensively evaluate it on various data sets (CIFAR10, SVHN, MNIST, STL10, ImageNet, etc.). The proposed integrated OOD detection approach clearly shows superior performance. ########################################################################## Reasons: Overall, I vote for accepting. The authors make several key contributions: They introduce a novel OOD taxonomy, analyze current OOD detection approaches on a toy data set, and propose an integrated OOD detection approach, which shows superior performance in their extensive evaluation. ########################################################################## Pros: Introduction of a sound and helpful OOD taxonomy. Limitation analysis of state-of-the-art OOD detection algorithms. Proposal of a new integrated approach to detect different kinds of OOD inputs that unifies the advantages of the underlying algorithms. Extensive evaluation of the new approach shows clearly superior performance. On a variety of data sets (CIFAR10, SVHN, MNIST, STL10, ImageNet, etc.), the proposed approach outperforms the baselines on all evaluation criteria (TNR, AUROC, DTACC, AUPR IN, AUPR OUT) for various classifier neural network architectures (LeNet, ResNet, DenseNet). ########################################################################## Cons: The demonstration of the limitations of current OOD detection algorithms is solely empirical (based on a toy data set). Theoretical motivations (if possible) would be a great addition. Similarly, a sound theoretical derivation for the proposed integrated approach is lacking. Further toy data sets beyond the two half-moons data set would be helpful to better understand the implications of all algorithms. ########################################################################## Questions during rebuttal period: Please address and clarify the cons above
ICLR
Title
Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs
Abstract
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs. This limitation is one of the key challenges in the adoption of deep learning models in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model's prediction cannot be trusted. These techniques use different statistical, geometric, or topological signatures. This paper presents a taxonomy of OOD outlier inputs based on their source and nature of uncertainty. We demonstrate how different existing detection approaches fail to detect certain types of outliers. We utilize these insights to develop a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers. Our results include experiments on CIFAR10, SVHN and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved remarkable performance levels in many areas such as computer vision (Gkioxari et al., 2015), speech recognition (Hannun et al., 2014), and text analysis (Majumder et al., 2017). But their deployment in safety-critical systems such as self-driving vehicles (Bojarski et al., 2016), aircraft collision avoidance (Julian & Kochenderfer, 2017), and medical diagnoses (De Fauw et al., 2018) is hindered by their brittleness. One major challenge is the inability of DNNs to be self-aware of when new inputs are outside the training distribution and likely to produce incorrect predictions. It has been widely reported in the literature (Guo et al., 2017a; Hendrycks & Gimpel, 2016) that deep neural networks exhibit overconfident incorrect predictions on inputs which are outside the training distribution. The responsible deployment of deep neural network models in high-assurance applications necessitates detection of out-of-distribution (OOD) data so that DNNs can abstain from making decisions on such inputs. Recent approaches for OOD detection consider different statistical, geometric or topological signatures in data that differentiate OODs from the training distribution. For example, the changes in the softmax scores due to input perturbations and temperature scaling have been used to detect OODs (Hendrycks & Gimpel, 2016; Liang et al., 2017; Guo et al., 2017b). Papernot & McDaniel (2018) use the conformance among the labels of the nearest neighbors, while Tack et al. (2020) use cosine similarity (modulated by the norm of the feature vector) to the nearest training sample for the detection of OODs. Lee et al. (2018) consider the Mahalanobis distance of an input from the in-distribution data to detect OODs.
Several other metrics, such as reconstruction error (An & Cho, 2015), the likelihood ratio between in-distribution and OOD samples (Ren et al., 2019), trust scores (the ratio of the distance to the nearest class different from the predicted class and the distance to the predicted class) (Jiang et al., 2018), density functions (Liu et al., 2020; Hendrycks et al., 2019a), and the probability distribution of the softmax scores (Lee et al., 2017; Hendrycks et al., 2019b; Tack et al., 2020; Hendrycks et al., 2019a), have also been used to detect OODs. All these methods attempt to develop a uniform approach with a single signature to detect all OODs, accompanied by empirical evaluations that use datasets such as CIFAR10 as in-distribution data and other datasets such as SVHN as OOD. Our study shows that OODs can be of diverse types with different defining characteristics. Consequently, an integrated approach that takes into account the diversity of these outliers is needed for effective OOD detection. We make the following three contributions in this paper:
• Taxonomy of OODs. We define a taxonomy of OOD samples that classifies OODs into different types based on aleatoric vs. epistemic uncertainty (Hüllermeier & Waegeman, 2019), distance from the predicted class vs. distance from the tied training distribution, and uncertainty in the principal components vs. uncertainty in non-principal components with low variance.
• Incompleteness of existing uniform OOD detection approaches. We examine the limitations of the state-of-the-art approaches to detect various types of OOD samples. We observe that not all outliers are alike and existing approaches fail to detect particular types of OODs. We use a toy dataset comprising two half-moons as two different classes to demonstrate these limitations.
• An integrated OOD detection approach. We propose an integrated approach that can detect different types of OOD inputs. We demonstrate the effectiveness of our approach on several benchmarks, and compare against state-of-the-art OOD detection approaches such as ODIN (Liang et al., 2017) and the Mahalanobis distance method (Lee et al., 2018).
2 OOD TAXONOMY AND EXISTING DETECTION METHODS
DNNs predict the class of a new input based on the classification boundaries learned from the samples of the training distribution. Aleatoric uncertainty is high for inputs which are close to the classification boundaries, and epistemic uncertainty is high when the input is far from the learned distributions of all classes (Hora, 1996; Hüllermeier & Waegeman, 2019). Given the predicted class of a DNN model on a given input, we can observe the distance of the input from the distribution of this particular class and identify it as an OOD if this distance is high. We use this top-down inference approach to detect this type of OOD, which is characterized by an inconsistency between the model's prediction and the input's distance from the distribution of the predicted class. Further, typical inputs to DNNs are high-dimensional and can be decomposed into principal and non-principal components based on the direction of high variation; this yields another dimension for the classification of OODs. We thus categorize an OOD using the following three criteria.
1. Is the OOD associated with higher epistemic or aleatoric uncertainty, i.e., is the input away from the in-distribution data, or can it be confused between multiple classes?
2. Is the epistemic uncertainty of an OOD sample unconditional, or is it conditioned on the class predicted by the DNN model?
3.
Is the OOD an outlier due to unusually high deviation in the principal components of the data, or due to small deviation in the non-principal (and hence, statistically invariant) components?
Figure 1 demonstrates different types of OODs, which differ along these criteria. Type 1 OODs have high epistemic uncertainty and are far from the in-distribution data. Type 2 OODs have high epistemic uncertainty with respect to each of the 3 classes, even though approximating all in-distribution (ID) data using a single Gaussian distribution will miss these outliers. Type 3 OODs have high aleatoric uncertainty, as they are close to the decision boundary between class 0 and class 1. Types 4 and 5 have high epistemic uncertainty with respect to their closest classes. While Type 4 OODs are far from the distribution along the principal axis, Type 5 OODs vary along a relatively invariant axis, where even a small deviation indicates that the sample is an OOD.
Limitations of Existing Detection Methods. We empirically demonstrate the limitations of existing OOD detection methods on a two-dimensional (2D) half-moons dataset with two classes. As shown in Figure 2, we consider three clusters of OOD samples: cluster A (black), B (brown), and C (red). Figure 2 (right) shows the 2D penultimate features of the classifier. Different approaches differ in their ability to detect different OOD types, as illustrated in Figure 3.
• Figure 3(a) shows that the Mahalanobis distance (Lee et al., 2018) from the mean and tied covariance of all the training data in the feature space cannot detect OODs in clusters B and C, corresponding to class-conditional epistemic uncertainty and aleatoric uncertainty, respectively. It attains an overall true negative rate (TNR) of 39.09% at the 95% true positive rate (TPR).
• Figure 3(b) shows that the softmax prediction probability (SPB) (Hendrycks & Gimpel, 2016) cannot detect the OODs in cluster A, corresponding to high epistemic uncertainty. The TNR (at 95% TPR) reported by the SPB technique is 60.91%.
• Figure 3(c) shows that class-wise Principal Component Analysis (PCA) (Hoffmann, 2007) cannot detect OODs in cluster C, corresponding to high aleatoric uncertainty. We performed PCA of the two classes separately in the feature space and used the minimum reconstruction error to detect OODs. This obtained an overall TNR of 80.91% (at 95% TPR).
• Figure 3(d) shows that K-Nearest Neighbor (kNN) (Papernot & McDaniel, 2018) non-conformance in the labels of the nearest neighbors cannot detect OODs in clusters A and B, which have high epistemic uncertainty. The overall TNR (at 95% TPR) reported by this technique is 15%.
These observations can be explained by the focus of different detection techniques on measuring different forms of uncertainty. This motivates our integrated OOD detection method.
3 INTEGRATED OOD DETECTION METHOD
Complementary information about different OOD types can be used to detect a wider range of OODs. Figure 4 shows the improvement in the TNR of the OOD detector as it is composed with information about different classes of OODs on the two-half-moons dataset. Non-conformity in the labels of the nearest neighbors captures OODs in cluster C. Mahalanobis distance from the tied in-distribution estimate detects OODs in cluster A. Reconstruction error from the PCA of the two class distributions captures OODs in cluster B. Softmax scores further strengthen the OOD detection by reporting OODs in cluster C that are undetected by the other three methods.
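To make this complementarity concrete, the following minimal sketch (ours, not the authors' code) reproduces the flavour of the half-moons study with scikit-learn; for brevity it scores raw 2D inputs rather than penultimate-layer features, and all names are our own:

```python
# Minimal sketch: two attributes disagree on which half-moons OODs they flag,
# motivating the integrated detector.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.05, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def mahala_tied(points, X_train, y_train):
    """Distance to the closest class mean under one tied covariance."""
    prec = np.linalg.inv(np.cov(X_train.T) + 1e-6 * np.eye(2))
    per_class = []
    for c in np.unique(y_train):
        d = points - X_train[y_train == c].mean(axis=0)
        per_class.append(np.einsum('ij,jk,ik->i', d, prec, d))
    return np.min(per_class, axis=0)   # large => cluster-A-style epistemic OOD

def max_softmax(points):
    """SPB-style score; small => cluster-C-style aleatoric OOD."""
    return clf.predict_proba(points).max(axis=1)
```

Each score alone misses the cluster that the other flags, which is exactly the gap the integrated approach below targets.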
The integrated OOD detection approach, thus, uses the following attributes, each specialized in detecting a specific type (or a combination of types) of OODs:
1. Mahalanobis distance from the in-distribution density estimate, using either a tied (Lee et al., 2018) or a class-wise covariance estimate. This attribute captures the overall or class-conditional epistemic uncertainty of an OOD. Our refinement to also use class-wise covariance significantly improves detection of OODs when coupled with the PCA approach described below.
2. Conformance measure among the variance of the Annoy (Bernhardsson, 2018) nearest neighbors, calculated as the Mahalanobis distance of the input's conformance to the closest class conformance. Our experiments found this to be very effective in capturing aleatoric uncertainty. This new attribute is a fusion of the nearest-neighbor and Mahalanobis distance methods in the literature.
3. Prediction confidence of the classifier, i.e., the maximum softmax score on the perturbed input, where the perturbation used is the same as in the ODIN approach (Liang et al., 2017). This boosts the detection of high aleatoric uncertainty by sharpening the class-wise distributions.
4. Reconstruction error using the top 40% of PCA components, where the components are obtained via class-conditional PCA of the training data. This boosts the detection of high class-wise epistemic uncertainty by eliminating irrelevant features.
This fusion of attributes from existing state-of-the-art detection methods and new attributes was found to be the most effective integrated approach capable of detecting the different types of OODs. We evaluated it on several benchmarks as discussed in Section 4, with an ablation study in the Appendix.
4 EXPERIMENTAL RESULTS
Attributes forming the signature of the OOD detector used in the experiments. The signature of the OOD detector used in the experiments is the weighted sum of four attributes, one from each of the following four categories:
1. Distance from the in-distribution density estimate: We use the Mahalanobis distance of the input with respect to the closest class-conditional distribution. The parameters of this distance are chosen from one of the following two categories:
• empirical class means and tied empirical covariance of training samples
• empirical class means and empirical class covariance of training samples
2. Reconstruction error: We perform class-conditional PCA empirically from the training samples. We use the minimum reconstruction error of the input from the top 40% eigenvectors of the class-conditional eigenspaces.
3. Prediction confidence of the classifier: We use the maximum value of the temperature-scaled softmax scores (S) on the perturbed input. Perturbations to the input (x) are made according to the following equation (Liang et al., 2017):
$$\tilde{x} = x - \epsilon \cdot \mathrm{sign}\left(-\nabla_x \log S_{\hat{y}}(x; T)\right) \tag{1}$$
The values of the magnitude of noise (ε) and the temperature scaling parameter (T) are chosen from one of the following three categories:
• ε = 0 and T = 0
• ε = 0 and T = 10
• ε = 0.005 and T = 10
4. Conformance measure among the nearest neighbors: We compute an m-dimensional feature vector to capture the conformance among the input's nearest neighbors in the training samples, where m is the dimension of the input. We call this m-dimensional feature vector the conformance vector. The conformance vector is calculated by taking the mean deviation along each dimension of the nearest neighbors from the input.
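As an illustration, here is a minimal sketch (our reading of the description above, not the authors' code) of the conformance vector for a single input; `X_train`, the fitted Annoy index `nn_index`, and the use of the absolute deviation are all our assumptions:

```python
# Hedged sketch of the conformance vector (attribute 4); assumes a fitted
# Annoy index over the training features and the feature matrix X_train.
import numpy as np

def conformance_vector(x, X_train, nn_index, k=30):
    """Mean (absolute) deviation, per dimension, of the k nearest neighbors from x."""
    idx = nn_index.get_nns_by_vector(x, k)       # Annoy: k approximate neighbors
    neighbors = X_train[idx]                     # shape (k, m)
    return np.abs(neighbors - x).mean(axis=0)    # m-dimensional conformance vector
```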
We hypothesize that this deviation for the in-distribution samples would differ from that of the OODs due to aleatoric uncertainty. The value of the conformance measure is calculated by computing the Mahalanobis distance of the input's conformance vector to the closest class conformance distribution. Similar to the distance for the in-distribution density estimate, the parameters of this Mahalanobis distance are chosen from the following two categories:
• empirical class means and tied empirical covariance on the conformance vectors of the training samples
• empirical class means and empirical class covariance on the conformance vectors of the training samples
The number of nearest neighbors is chosen from the set {10, 20, 30, 40, 50} via validation. We used Annoy (Approximate Nearest Neighbors Oh Yeah) (Bernhardsson, 2018) to compute the nearest neighbors. The weights of the four attributes forming the signature of the OOD detector are generated in the following manner. We use a small subset (1000 samples) of both the in-distribution and the generated OOD data to train a binary classifier using the logistic loss. The OOD data used to train the classifier is generated by perturbing the in-distribution data using the Fast Gradient Sign Method (FGSM) attack (Goodfellow et al., 2014). The trained classifier (or OOD detector) is then evaluated on the real OOD dataset at a true positive rate of 95%. The best result, in terms of the highest TNR on the validation dataset (from the training phase of the OOD detector), from the twelve combinations of the aforementioned sub-categories (one from each of the four attributes) is then reported on the test (or real) OOD datasets.
Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as CIFAR10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and the detection accuracy (DTACC) to evaluate our performance.
DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as ResNet (He et al., 2016), WideResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2017).
Comparison with the state-of-the-art. We compare our approach with the three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017), and Mahalanobis (Lee et al., 2018). For the ODIN method, the perturbation noise is chosen from the set {0, 0.0005, 0.001, 0.0014, 0.002, 0.0024, 0.005, 0.01, 0.05, 0.1, 0.2}, and the temperature T is chosen from the set {1, 10, 100, 1000}. These values are chosen on a validation set of adversarial samples of the in-distribution data generated by the FGSM attack. For the Mahalanobis method, we consider their best results obtained after feature ensembling and input pre-processing, with the hyperparameters of their OOD detector tuned on the in-distribution and adversarial samples generated by the FGSM attack. The magnitude of the noise used in pre-processing of the inputs is chosen from the set {0.0, 0.01, 0.005, 0.002, 0.0014, 0.001, 0.0005}.
CIFAR10. With CIFAR10 as in-distribution, we consider SVHN (Netzer et al., 2011), TinyImagenet (Deng et al., 2009), and LSUN (Yu et al., 2015) as the OOD datasets.
For CIFAR10, we consider two DNNs: ResNet50 and WideResNet. Table 1 shows the results.
SVHN. With SVHN as in-distribution, we consider CIFAR10, Imagenet, and LSUN as the OOD datasets. For SVHN, we use the DenseNet classifier. Table 1 shows the results.
Key observations. We do not consider pre-processing of the inputs in our integrated OOD detector. Even without input pre-processing, and with the exception of the CIFAR10 OOD dataset for SVHN in-distribution trained on DenseNet, we perform on par with (and in most cases outperform) the Mahalanobis method on its best results, which are generated after pre-processing the input. We also consider a Subset-CIFAR100 as OODs for CIFAR10. Specifically, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs, which are visually similar to the ship, automobile, and bird classes in CIFAR10, respectively. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes OOD detection challenging. Figure 5 shows the t-SNE (Maaten & Hinton, 2008) plot of the penultimate features from the ResNet50 model trained on CIFAR10. We show 4 examples of OODs (2 due to epistemic and 2 due to aleatoric uncertainty) from Subset-CIFAR100. These OODs were detected by our integrated approach but missed by the Mahalanobis approach. These observations demonstrate the effectiveness of integrating multiple attributes to detect OOD samples.
Additional experimental results in the appendix. We also compare the performance of the integrated OOD detector with the SPB, ODIN and Mahalanobis detectors in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). These results include experiments on CIFAR10, SVHN and MNIST as in-distribution data and Imagenet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5. All these results, along with the ablation studies on OOD detectors with single attributes, are included in the Appendix. In almost all of the reported results in the Appendix, our OOD detector outperforms the compared state-of-the-art methods, with improvements of up to 2X higher TNR at 95% TPR in some cases.
5 DISCUSSION AND FUTURE WORK
Recent techniques propose refinements to the training process of the classifiers for OOD detection. Some of these techniques include fine-tuning the classifier's training with an auxiliary cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020). Other techniques make use of self-supervised models for OOD detection (Tack et al., 2020; Hendrycks et al., 2019b). We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector, which makes use of the feature space of the pre-trained classifiers to distinguish in-distribution samples from OODs. Our approach does not require modification of the training cost function of the original task. These results are reported in the Appendix. We consider making use of the feature space of such models in our OOD detection technique a promising direction of future work. Another direction of future work is to explore the score functions used in these refined training processes for OOD detection (Liu et al., 2020; Hendrycks et al., 2019a; Tack et al., 2020; Hendrycks et al., 2019b) as attributes (or categories of attributes) forming the signature of the integrated OOD detector.
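For reference, the FGSM-based OOD generation used throughout to train the detector could be sketched as follows (a hedged sketch assuming a PyTorch classifier `model`; the ε value is illustrative, not taken from the paper):

```python
# Hedged sketch of FGSM-based surrogate-OOD generation (Goodfellow et al., 2014);
# `model` and `epsilon` are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_oods(model, x, y, epsilon=0.05):
    """Perturb in-distribution samples so they act as surrogate OODs."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```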
Another avenue of future work is to explore OOD generation techniques other than adversarial examples generated by the FGSM attack for training the integrated OOD detector.
6 CONCLUSION
We introduced a taxonomy of OODs and proposed an integrated approach to detect different types of OODs. Our taxonomy classifies OODs based on the nature of their uncertainty, and we demonstrated that no single state-of-the-art approach detects all these OOD types. Motivated by this observation, we formulated an integrated approach that fuses multiple attributes to target different types of OODs. We have performed extensive experiments on a synthetic dataset and several benchmark datasets (e.g., MNIST, CIFAR10, SVHN). Our experiments show that our approach can accurately detect various types of OODs coming from a wide range of OOD datasets such as KMNIST, Fashion-MNIST, SVHN, LSUN, and Imagenet. We have shown that our approach generalizes over multiple DNN architectures and performs robustly when the OOD samples are similar to in-distribution data.
A APPENDIX
A.1 DEFINING OODS DUE TO EPISTEMIC AND ALEATORIC UNCERTAINTY
In general, let there be k classes c_1, c_2, ..., c_k, where the distribution of training data for each class is p(x|c_i). The overall training distribution is denoted by p(x). Now, given a new input x̂ to the trained DNN model M, let ĉ = M(x̂) denote the predicted class. The flowchart in Figure 6 shows different sources of uncertainty that could make x̂ an OOD.
A.2 ADDITIONAL EXPERIMENTAL RESULTS
We first present preliminary results for comparison with the OOD detection techniques based on fine-tuning of the classifiers (Hendrycks et al., 2019a; Liu et al., 2020; Tack et al., 2020; Hendrycks et al., 2019b). We then present our results on various vision datasets and different architectures of the pre-trained DNN-based classifiers for these datasets, in comparison to the ODIN, Mahalanobis, and SPB methods in supervised settings. Finally, we report results from the ablation study on OOD detection with individual attributes and compare them with our integrated approach to OOD detection.
A.2.1 COMPARISON WITH THE OOD DETECTION TECHNIQUES BASED ON REFINEMENT OF THE TRAINING PROCESS FOR CLASSIFIERS
Recent techniques propose refinements to the training process of the classifiers for OOD detection. These include fine-tuning the training of classifiers with a trainable cost function for OOD detection (Hendrycks et al., 2019a; Liu et al., 2020) and self-supervised training of the classifiers to enhance OOD detection (Tack et al., 2020; Hendrycks et al., 2019b). We perform preliminary experiments to compare the performance of these techniques with our integrated OOD detector, which uses features of the pre-trained classifiers to distinguish in-distribution samples from OODs. Table 2 compares TNR (at 95% TPR), AUROC and AUPR for the energy-based OOD detector (Liu et al., 2020) and our integrated OOD detector on CIFAR10 with a pre-trained WideResNet model. The integrated OOD detector was trained on in-distribution samples and adversarial samples generated by the FGSM attack. Table 3 compares the results of the WideResNet model trained on CIFAR10 and fine-tuned with outlier exposure from the 80 Million Tiny Images with our OOD detector that uses features from the pre-trained WideResNet model trained on CIFAR10.
Since the 80 Million Tiny Images dataset is no longer available for use, we used a small subset of ImageNet (treated as an OOD dataset for the CIFAR10 and SVHN datasets (Lee et al., 2018)) for generating OODs for training the integrated OOD detector. Table 4 compares the OOD detection performance of the self-supervised-training-based OOD detector with our method. We trained our OOD detector with CIFAR10 as in-distribution samples and adversarial samples generated by the FGSM attack from the test dataset of CIFAR10 as OODs. The trained OOD detector was then tested on LSUN as OODs, and the results are reported in Table 4. With ResNet-50 as the classifier for CIFAR10, we trained our OOD detector with CIFAR10 as in-distribution samples and adversarial samples generated by FGSM from the test dataset of CIFAR10 as OODs. The trained OOD detector was then tested on SVHN as OODs, and these results are compared with contrastive-learning-based OOD detection (Tack et al., 2020) in Table 5.
A.2.2 COMPARISON WITH THE STATE-OF-THE-ART OOD DETECTION METHODS IN SUPERVISED SETTINGS ON PRE-TRAINED CLASSIFIERS
We compare our results with the state-of-the-art methods in supervised settings, as reported by the Mahalanobis method for OOD detection (Lee et al., 2018). In supervised settings, a small subset of the real OOD dataset is used in the training of the OOD detector.
Datasets and metrics. We evaluate the proposed integrated OOD detection on benchmarks such as MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). We consider standard metrics (Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2018) such as the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR) with both in-distribution and OODs as positive samples (AUPR IN and AUPR OUT, respectively), and the detection accuracy (DTACC) to evaluate our performance.
DNN-based classifier architectures. To demonstrate that the proposed approach generalizes across various network architectures, we consider a wide range of DNN models such as LeNet (LeCun et al., 1998), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017).
Comparison with the state-of-the-art. We compare our approach with the three state-of-the-art approaches: SPB (Hendrycks & Gimpel, 2016), ODIN (Liang et al., 2017) and Mahalanobis (Lee et al., 2018). Since these experiments are performed in supervised settings, we fix T = 10 and ε = 0.005 for generating results from the ODIN method. For the Mahalanobis distance, we consider the distance in the penultimate-layer feature space as well as features from all the layers of the DNN, without pre-processing of the input in either setting.
MNIST. With MNIST as in-distribution, we consider KMNIST (Clanuwat et al., 2018) and Fashion-MNIST (F-MNIST) (Xiao et al., 2017) as OOD datasets. For MNIST, we use the LeNet5 (LeCun et al., 1998) DNN. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. In all these settings, our approach outperforms the state-of-the-art approaches for both OOD datasets.
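As an aside, the TNR-at-95%-TPR numbers reported in these tables can be computed as in the following hedged sketch, under the (assumed) convention that higher scores mean more in-distribution:

```python
# Sketch of the TNR at 95% TPR metric: pick the score threshold that keeps 95%
# of in-distribution (positive) samples, then measure the OOD rejection rate.
import numpy as np

def tnr_at_95_tpr(scores_in, scores_ood):
    threshold = np.percentile(scores_in, 5)        # 95% of ID scores lie above
    return float(np.mean(scores_ood < threshold))  # OODs correctly rejected
```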
Results in terms of AUPR IN and AUPR OUT are shown in Table 9. Here also, our technique outperforms all three OOD detectors on all the test cases.
CIFAR10. With CIFAR10 as in-distribution, we consider STL10 (Coates et al., 2011), SVHN (Netzer et al., 2011), Imagenet (Deng et al., 2009), LSUN (Yu et al., 2015), and a subset of CIFAR100 (SCIFAR100) (Krizhevsky et al., 2009) as OOD datasets. For CIFAR10, we consider three DNNs: DenseNet, ResNet34, and ResNet50. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in terms of AUPR IN and AUPR OUT are shown in Tables 10, 11, and 12. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases. Note that images from STL10 and the subset of CIFAR100 are quite similar to CIFAR10 images. Furthermore, from the CIFAR100 classes, we select sea, road, bee, and butterfly as OODs, which are visually similar to the ship, automobile, and bird classes in CIFAR10, respectively.
SVHN. With SVHN as in-distribution, we consider STL10, CIFAR10, Imagenet, LSUN, and SCIFAR100 as OOD datasets. For SVHN, we consider two DNNs: DenseNet and ResNet34. Results in terms of TNR (at 95% TPR), AUROC, and DTACC are reported in Tables 6, 7, and 8. Table 6 shows the results with the features from the penultimate layer in comparison to the ODIN and Mahalanobis methods. Table 7 shows the results with the features from all the layers in comparison to the Mahalanobis method. Table 8 shows the results with the features from the penultimate layer in comparison to the SPB method. Results in terms of AUPR IN and AUPR OUT are shown in Tables 13 and 14. Here also, the integrated OOD detection technique outperforms the other three detectors on most of the test cases.
Key observations. As shown in Tables 6, 7, and 8, our approach outperforms the state-of-the-art on all three datasets and with various DNN architectures. On CIFAR10, in terms of the TNR metric, our approach with ResNet50 outperforms Mahalanobis by 56% when SVHN is OOD, and our approach with ResNet34 outperforms ODIN by 36% when LSUN is OOD. When considering STL10 and Subset-CIFAR100 as OODs for CIFAR10, the images from both these datasets are quite similar to CIFAR10 images. Thus, there can be numerous OOD samples due to aleatoric and class-conditional epistemic uncertainty, which makes detection challenging. Although our performance is low on the STL10 dataset, it still outperforms the state-of-the-art; for instance, the proposed approach achieves a 27% better TNR score than the Mahalanobis method using ResNet50. On SVHN, in terms of the TNR metric, our approach outperforms ODIN and Mahalanobis by 63% and 13%, respectively, on SCIFAR100 using ResNet34. The above observations demonstrate the effectiveness of integrating multiple attributes to detect OOD samples.
A.2.3 ABLATION STUDY
We report an ablation study on OOD detection with individual attributes and compare it with our integrated approach on the penultimate feature space of the classifier, in the supervised settings described in the previous section.
We refer to the OOD detector based on Mahalanobis distance estimated with class means and a tied covariance (Lee et al., 2018) as Mahala-Tied, and to the detector based on Mahalanobis distance estimated with class means and per-class covariances as Mahala-Class. Similarly, conformance among the K-nearest neighbors (KNN) measured by Mahala-Tied and Mahala-Class is referred to as KNN-Tied and KNN-Class, respectively, in these experiments. Results for this study on CIFAR10 with the DenseNet architecture, and on SVHN with the DenseNet and ResNet34 architectures, are shown in Tables 15, 16 and 17, respectively. The integrated approach outperforms all single-attribute OOD detectors in all tested cases because it detects diverse types of OODs. An important observation from these experiments is that the performance of single-attribute methods can depend on the architecture of the classifier. For example, while PCA performed markedly worse than all other methods on DenseNet (for both CIFAR10 and SVHN), it outperformed all but the integrated approach for SVHN on ResNet34.
1. What are the strengths and weaknesses of the proposed approach in comparison to other state-of-the-art methods?
2. How does the taxonomy of OODs compare to previous works like ODIN and Mahalanobis? Are there any differences or similarities between them?
3. Can you explain why certain examples that are close to the in-distribution should be treated as OOD? Is it because they represent a different type of uncertainty than traditional OOD inputs?
4. How do you respond to the concern about treating STL10 images as OOD when they contain CIFAR10-like images? Shouldn't we expect the classifier trained on CIFAR10 to have correct predictions on some of those images?
5. How can the analysis for the simple two-dimensional dataset be applied to high-dimensional datasets? Are there any limitations or challenges in doing so?
6. Could you provide more details about how the best results from the twelve combinations of sub-categories were selected? Did you use test OOD data to select the best results?
7. Can you describe how existing state-of-the-art detection methods are integrated into the proposed method? Is it a straightforward combination, or is there something novel in the way these methods are combined?
8. What are the major concerns regarding the effectiveness of the proposed method based on the current experiments performed? Do the new experimental results alleviate some of these concerns?
9. Are there any missing experimental details about the method that need to be addressed? For example, what validation dataset was used to select the best binary classifier?
10. How would you address the concern that the proposed approach needs too many hyperparameters and lacks rigorous analysis for why integrating different attributes improves OOD detection?
Review
Review
This paper introduces a taxonomy of OODs and proposes an integrated approach to detect different types of OODs. Their taxonomy classifies OODs based on the nature of their uncertainty, and they show that no single state-of-the-art approach detects all these OOD types. Motivated by this observation, they combine multiple existing OOD detection methods to detect various types of OODs. In general, this paper is easy to understand. But I have the following concerns:
- Lack of discussion of some important related work. They only compare their method to the ODIN and Mahalanobis methods. But there are other OOD detection methods which also achieve state-of-the-art results, such as [1][2][3]. Could the authors compare their method to these methods?
- In their taxonomy, they consider examples that are very close to the in-distribution as OOD. I am wondering whether we should treat those examples as OOD, since they are so close to the in-distribution. I think previous works like ODIN and Mahalanobis all assume that OOD inputs are far away from the in-distribution.
- In the experimental setup, they consider STL10 as an OOD dataset for CIFAR10. But STL10 contains CIFAR10-like images. It is unconvincing that we should treat those images as OOD, and I think the classifier trained on CIFAR10 may have correct predictions on some of those images. Could the authors explain why we should treat those images as OOD?
- I am wondering whether the analysis for the simple two-dimensional dataset can be applied to high-dimensional datasets. In a high-dimensional space, their conclusion about which method detects which type of OOD may not hold. Could the authors explain this?
- In Appendix A.2.1, they mention that the best results from the twelve combinations of the aforementioned sub-categories (one from each of the four attributes) are reported. Could the authors explain how they select the best results? Do they use the test OOD data to select the best results?
- Could the authors describe in detail how they integrate the existing state-of-the-art detection methods? It is hard for me to understand what exactly they do in their proposed method.
--------- After Reading the Updated Paper ----------
Thanks for the update. After reading the revised paper, I still have some major concerns:
- The current experiments are not enough to demonstrate the effectiveness of the proposed method. The old experimental results (Tables 6, 7, 8) are not convincing, since the authors train a binary classifier as an OOD detector using a subset of the test OOD data, which is not realizable in practice. We should assume that the test OOD data are unknown while learning the OOD detector. The new experimental results in Table 1, where they train the binary classifier using adversarial examples generated on in-distribution data (following the Mahalanobis method), are limited. For example, on CIFAR10 they only report results for ResNet50 and WideResNet, but I also want to know the results for DenseNet (the Mahalanobis method [4] performs very well on CIFAR10/SVHN using DenseNet under the same setting).
- Some experimental details about their method are missing. The authors mention that they train 12 binary classifiers and then select the best one on the validation dataset, but they don't provide the details about the validation dataset, which is critical for their results. Based on their previous response, it seems they use a subset of the test OOD data to select the best classifier, which I think is not allowed.
- Based on the current description of the experimental settings, it is hard for me to evaluate the reported results. The proposed approach needs a lot of hyper-parameters (4 attributes, 12 combinations, the weights of the binary classifier, etc.), and it is unclear how to tune these hyper-parameters and how they would affect the results. The current ablation study is limited, I think.
- This paper does not provide a rigorous analysis of why integrating different attributes would improve OOD detection. I think this is an empirical paper, but the experiments provided are not sufficient to demonstrate the effectiveness of the proposed method.
To clarify, I didn't agree to raise the score previously. What I said was that the previous paper needed significant revision and I could not recommend acceptance. I still have some major concerns after reading the revised paper. Thus, I keep the same rating and think the paper is not ready for publication. I hope the authors will keep improving their paper.
[1] Hendrycks, Dan, Mantas Mazeika, and Thomas Dietterich. "Deep anomaly detection with outlier exposure." arXiv preprint arXiv:1812.04606 (2018).
[2] Liu, Weitang, et al. "Energy-based Out-of-distribution Detection." arXiv preprint arXiv:2010.03759 (2020).
[3] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. "Simple and scalable predictive uncertainty estimation using deep ensembles." Advances in Neural Information Processing Systems. 2017.
[4] Lee, Kimin, et al. "A simple unified framework for detecting out-of-distribution samples and adversarial attacks." Advances in Neural Information Processing Systems. 2018.
ICLR
Title
Deep Graph Infomax
Abstract
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.
1 INTRODUCTION
Generalizing neural networks to graph-structured inputs is one of the current major challenges of machine learning (Bronstein et al., 2017; Hamilton et al., 2017b; Battaglia et al., 2018). While significant strides have recently been made, notably with graph convolutional networks (Kipf & Welling, 2016a; Gilmer et al., 2017; Veličković et al., 2018), most successful methods use supervised learning, which is often not possible as most graph data in the wild is unlabeled. In addition, it is often desirable to discover novel or interesting structure from large-scale graphs, and as such, unsupervised graph learning is essential for many important tasks. Currently, the dominant algorithms for unsupervised representation learning with graph-structured data rely on random walk-based objectives (Grover & Leskovec, 2016; Perozzi et al., 2014; Tang et al., 2015; Hamilton et al., 2017a), sometimes further simplified to reconstruct adjacency information (Kipf & Welling, 2016b; Duran & Niepert, 2017). The underlying intuition is to train an encoder network so that nodes that are “close” in the input graph are also “close” in the representation space. While powerful—and related to traditional metrics such as the personalized PageRank score (Jeh & Widom, 2003)—random walk methods suffer from known limitations. Most prominently, the random-walk objective is known to over-emphasize proximity information at the expense of structural information (Ribeiro et al., 2017), and performance is highly dependent on hyperparameter choice (Grover & Leskovec, 2016; Perozzi et al., 2014). Moreover, with the introduction of stronger encoder models based on graph convolutions (Gilmer et al., 2017), it is unclear whether random-walk objectives actually provide any useful signal, as these encoders already enforce an inductive bias that neighboring nodes have similar representations. In this work, we propose an alternative objective for unsupervised graph learning that is based upon mutual information, rather than random walks. Recently, scalable estimation of mutual information was made both possible and practical through Mutual Information Neural Estimation (MINE, Belghazi et al., 2018), which relies on training a statistics network as a classifier of samples coming from the joint distribution of two random variables and their product of marginals. Following on MINE, Hjelm et al. (2018) introduced Deep InfoMax (DIM) for learning representations of high-dimensional data.
DIM trains an encoder model to maximize the mutual information between a high-level “global” representation and “local” parts of the input (such as patches of an image). This encourages the encoder to carry the type of information that is present in all locations (and thus is globally relevant), such as would be the case of a class label. DIM relies heavily on convolutional neural network structure in the context of image data, and to our knowledge, no work has applied mutual information maximization to graph-structured inputs. Here, we adapt ideas from DIM to the graph domain, which can be thought of as having a more general type of structure than the ones captured by convolutional neural networks. In the following sections, we introduce our method called Deep Graph Infomax (DGI). We demonstrate that the representation learned by DGI is consistently competitive on both transductive and inductive classification tasks, often outperforming both supervised and unsupervised strong baselines in our experiments.
2 RELATED WORK
Contrastive methods. An important approach for unsupervised learning of representations is to train an encoder to be contrastive between representations that capture statistical dependencies of interest and those that do not. For example, a contrastive approach may employ a scoring function, training the encoder to increase the score on “real” input (a.k.a., positive examples) and decrease the score on “fake” input (a.k.a., negative samples). Contrastive methods are central to many popular word-embedding methods (Collobert & Weston, 2008; Mnih & Kavukcuoglu, 2013; Mikolov et al., 2013), but they are found in many unsupervised algorithms for learning representations of graph-structured input as well. There are many ways to score a representation, but in the graph literature the most common techniques use classification (Perozzi et al., 2014; Grover & Leskovec, 2016; Kipf & Welling, 2016b; Hamilton et al., 2017b), though other scoring functions are used (Duran & Niepert, 2017; Bojchevski & Günnemann, 2018). DGI is also contrastive in this respect, as our objective is based on classifying local-global pairs and negative-sampled counterparts.
Sampling strategies. A key implementation detail to contrastive methods is how to draw positive and negative samples. The prior work above on unsupervised graph representation learning relies on a local contrastive loss (enforcing proximal nodes to have similar embeddings). Positive samples typically correspond to pairs of nodes that appear together within short random walks in the graph—from a language modelling perspective, effectively treating nodes as words and random walks as sentences. Recent work by Bojchevski & Günnemann (2018) uses node-anchored sampling as an alternative. The negative sampling for these methods is primarily based on sampling of random pairs, with recent work adapting this approach to use a curriculum-based negative sampling scheme (with progressively “closer” negative examples; Ying et al., 2018a) or introducing an adversary to select the negative examples (Bose et al., 2018).
Predictive coding. Contrastive predictive coding (CPC, Oord et al., 2018) is another method for learning deep representations based on mutual information maximization. Like the models above, CPC is also contrastive, in this case using an estimate of the conditional density (in the form of noise contrastive estimation, Gutmann & Hyvärinen, 2010) as the scoring function.
However, unlike our approach, CPC and the graph methods above are all predictive: the contrastive objective effectively trains a predictor between structurally-specified parts of the input (e.g., between neighboring node pairs or between a node and its neighborhood). Our approach differs in that we contrast global / local parts of a graph simultaneously, where the global variable is computed from all local variables. To the best of our knowledge, the sole prior works that instead focus on contrasting “global” and “local” representations on graphs do so via (auto-)encoding objectives on the adjacency matrix (Wang et al., 2016) and incorporation of community-level constraints into node embeddings (Wang et al., 2017). Both methods rely on matrix factorization-style losses and are thus not scalable to larger graphs.
3 DGI METHODOLOGY
In this section, we will present the Deep Graph Infomax method in a top-down fashion: starting with an abstract overview of our specific unsupervised learning setup, followed by an exposition of the objective function optimized by our method, and concluding by enumerating all the steps of our procedure in a single-graph setting.
3.1 GRAPH-BASED UNSUPERVISED LEARNING
We assume a generic graph-based unsupervised machine learning setup: we are provided with a set of node features, $\mathbf{X} = \{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_N\}$, where $N$ is the number of nodes in the graph and $\vec{x}_i \in \mathbb{R}^F$ represents the features of node $i$. We are also provided with relational information between these nodes in the form of an adjacency matrix, $\mathbf{A} \in \mathbb{R}^{N \times N}$. While $\mathbf{A}$ may consist of arbitrary real numbers (or even arbitrary edge features), in all our experiments we will assume the graphs to be unweighted, i.e. $A_{ij} = 1$ if there exists an edge $i \rightarrow j$ in the graph and $A_{ij} = 0$ otherwise. Our objective is to learn an encoder, $\mathcal{E} : \mathbb{R}^{N \times F} \times \mathbb{R}^{N \times N} \rightarrow \mathbb{R}^{N \times F'}$, such that $\mathcal{E}(\mathbf{X}, \mathbf{A}) = \mathbf{H} = \{\vec{h}_1, \vec{h}_2, \ldots, \vec{h}_N\}$ represents high-level representations $\vec{h}_i \in \mathbb{R}^{F'}$ for each node $i$. These representations may then be retrieved and used for downstream tasks, such as node classification. Here we will focus on graph convolutional encoders—a flexible class of node embedding architectures, which generate node representations by repeated aggregation over local node neighborhoods (Gilmer et al., 2017). A key consequence is that the produced node embeddings, $\vec{h}_i$, summarize a patch of the graph centered around node $i$ rather than just the node itself. In what follows, we will often refer to $\vec{h}_i$ as patch representations to emphasize this point.
3.2 LOCAL-GLOBAL MUTUAL INFORMATION MAXIMIZATION
Our approach to learning the encoder relies on maximizing local mutual information—that is, we seek to obtain node (i.e., local) representations that capture the global information content of the entire graph, represented by a summary vector, $\vec{s}$. In order to obtain the graph-level summary vectors, $\vec{s}$, we leverage a readout function, $\mathcal{R} : \mathbb{R}^{N \times F} \rightarrow \mathbb{R}^F$, and use it to summarize the obtained patch representations into a graph-level representation; i.e., $\vec{s} = \mathcal{R}(\mathcal{E}(\mathbf{X}, \mathbf{A}))$. As a proxy for maximizing the local mutual information, we employ a discriminator, $\mathcal{D} : \mathbb{R}^F \times \mathbb{R}^F \rightarrow \mathbb{R}$, such that $\mathcal{D}(\vec{h}_i, \vec{s})$ represents the probability scores assigned to this patch-summary pair (which should be higher for patches contained within the summary). Negative samples for $\mathcal{D}$ are provided by pairing the summary $\vec{s}$ from $(\mathbf{X}, \mathbf{A})$ with patch representations $\tilde{\vec{h}}_j$ of an alternative graph, $(\tilde{\mathbf{X}}, \tilde{\mathbf{A}})$. In a multi-graph setting, such graphs may be obtained as other elements of a training set.
However, for a single graph, an explicit (stochastic) corruption function, $\mathcal{C} : \mathbb{R}^{N \times F} \times \mathbb{R}^{N \times N} \rightarrow \mathbb{R}^{M \times F} \times \mathbb{R}^{M \times M}$, is required to obtain a negative example from the original graph, i.e. $(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) = \mathcal{C}(\mathbf{X}, \mathbf{A})$. The choice of the negative sampling procedure will govern the specific kinds of structural information that is desirable to be captured as a byproduct of this maximization. For the objective, we follow the intuitions from Deep InfoMax (DIM, Hjelm et al., 2018) and use a noise-contrastive type objective with a standard binary cross-entropy (BCE) loss between the samples from the joint (positive examples) and the product of marginals (negative examples). Following their work, we use the following objective (note that Hjelm et al. (2018) use a softplus version of the binary cross-entropy):
$$\mathcal{L} = \frac{1}{N + M}\left(\sum_{i=1}^{N} \mathbb{E}_{(\mathbf{X}, \mathbf{A})}\left[\log \mathcal{D}\left(\vec{h}_i, \vec{s}\right)\right] + \sum_{j=1}^{M} \mathbb{E}_{(\tilde{\mathbf{X}}, \tilde{\mathbf{A}})}\left[\log\left(1 - \mathcal{D}\left(\tilde{\vec{h}}_j, \vec{s}\right)\right)\right]\right) \tag{1}$$
This approach effectively maximizes mutual information between $\vec{h}_i$ and $\vec{s}$, based on the Jensen-Shannon divergence between the joint and the product of marginals (the “GAN” distance—as per Goodfellow et al. (2014) and Nowozin et al. (2016)—and the Jensen-Shannon divergence can be related by $D_{\mathrm{GAN}} = 2 D_{\mathrm{JS}} - \log 4$; therefore, any parameters that optimize one also optimize the other). As all of the derived patch representations are driven to preserve mutual information with the global graph summary, this allows for discovering and preserving similarities on the patch level—for example, distant nodes with similar structural roles (which are known to be a strong predictor for many node classification tasks; Donnat et al., 2018). Note that this is a “reversed” version of the argument given by Hjelm et al. (2018): for node classification, our aim is for the patches to establish links to similar patches across the graph, rather than enforcing the summary to contain all of these similarities (however, both of these effects should in principle occur simultaneously).
3.3 THEORETICAL MOTIVATION
We now provide some intuition that connects the classification error of our discriminator to mutual information maximization on graph representations.
Lemma 1. Let $\{\mathbf{X}^{(k)}\}_{k=1}^{|\mathbf{X}|}$ be a set of node representations drawn from an empirical probability distribution of graphs, $p(\mathbf{X})$, with a finite number of elements, $|\mathbf{X}|$, such that $p(\mathbf{X}^{(k)}) = p(\mathbf{X}^{(k')})\ \forall k, k'$. Let $\mathcal{R}(\cdot)$ be a deterministic readout function on graphs and $\vec{s}^{(k)} = \mathcal{R}(\mathbf{X}^{(k)})$ be the summary vector of the $k$-th graph, with marginal distribution $p(\vec{s})$. The optimal classifier between the joint distribution $p(\mathbf{X}, \vec{s})$ and the product of marginals $p(\mathbf{X})p(\vec{s})$, assuming class balance, has an error rate upper bounded by $\mathrm{Err}^* = \frac{1}{2}\sum_{k=1}^{|\mathbf{X}|} p(\vec{s}^{(k)})^2$. This upper bound is achieved if $\mathcal{R}$ is injective.
Proof. Denote by $Q^{(k)}$ the set of all graphs in the input set that are mapped to $\vec{s}^{(k)}$ by $\mathcal{R}$, i.e. $Q^{(k)} = \{\mathbf{X}^{(j)} \mid \mathcal{R}(\mathbf{X}^{(j)}) = \vec{s}^{(k)}\}$. As $\mathcal{R}(\cdot)$ is deterministic, samples from the joint, $(\mathbf{X}^{(k)}, \vec{s}^{(k)})$, are drawn from the product of marginals with probability $p(\vec{s}^{(k)})p(\mathbf{X}^{(k)})$, which decomposes into:
$$p(\vec{s}^{(k)}) \sum_{\vec{s}} p(\mathbf{X}^{(k)}, \vec{s}) = p(\vec{s}^{(k)})\, p(\mathbf{X}^{(k)} \mid \vec{s}^{(k)})\, p(\vec{s}^{(k)}) = \frac{p(\mathbf{X}^{(k)})}{\sum_{\mathbf{X}' \in Q^{(k)}} p(\mathbf{X}')}\, p(\vec{s}^{(k)})^2 \tag{2}$$
For convenience, let $\rho^{(k)} = \frac{p(\mathbf{X}^{(k)})}{\sum_{\mathbf{X}' \in Q^{(k)}} p(\mathbf{X}')}$. As, by definition, $\mathbf{X}^{(k)} \in Q^{(k)}$, it holds that $\rho^{(k)} \leq 1$. This probability ratio is maximized at 1 when $Q^{(k)} = \{\mathbf{X}^{(k)}\}$, i.e. when $\mathcal{R}$ is injective for $\mathbf{X}^{(k)}$. The probability of drawing any sample of the joint from the product of marginals is then bounded above by $\sum_{k=1}^{|\mathbf{X}|} p(\vec{s}^{(k)})^2$. As the probability of drawing $(\mathbf{X}^{(k)}, \vec{s}^{(k)})$ from the joint is $\rho^{(k)} p(\vec{s}^{(k)}) \geq \rho^{(k)} p(\vec{s}^{(k)})^2$, we know that classifying these samples as coming from the joint has a lower error than classifying them as coming from the product of marginals.
The error rate of such a classifier is then the probability of drawing a sample from the joint as a sample from the product of marginals under the mixture probability, which we can bound by $\mathrm{Err} \leq \frac{1}{2}\sum_{k=1}^{|\mathbf{X}|} p(\vec{s}^{(k)})^2$, with the upper bound achieved, as above, when $\mathcal{R}(\cdot)$ is injective for all elements of $\{\mathbf{X}^{(k)}\}$. It may be useful to note that $\frac{1}{2|\mathbf{X}|} \leq \mathrm{Err}^* \leq \frac{1}{2}$. The first result is obtained via a trivial application of Jensen's inequality, while the other extreme is reached only in the edge case of a constant readout function (when every example from the joint is also an example from the product of marginals, so no classifier performs better than chance).
Corollary 1. From now on, assume that the readout function used, $\mathcal{R}$, is injective. Assume the number of allowable states in the space of $\vec{s}$, $|\vec{s}|$, is greater than or equal to $|\mathbf{X}|$. Then, for $\vec{s}^*$, the optimal summary under the classification error of an optimal classifier between the joint and the product of marginals, it holds that $|\vec{s}^*| = |\mathbf{X}|$.
Proof. By injectivity of $\mathcal{R}$, we know that $\vec{s}^* = \arg\min_{\vec{s}} \mathrm{Err}^*$. As the upper error bound, $\mathrm{Err}^*$, is a simple geometric sum, we know that it is minimized when $p(\vec{s}^{(k)})$ is uniform. As $\mathcal{R}(\cdot)$ is deterministic, this implies that each potential summary state would need to be used at least once. Combined with the condition $|\vec{s}| \geq |\mathbf{X}|$, we conclude that the optimum has $|\vec{s}^*| = |\mathbf{X}|$.
Theorem 1. $\vec{s}^* = \arg\max_{\vec{s}} \mathrm{MI}(\mathbf{X}; \vec{s})$, where $\mathrm{MI}$ is mutual information.
Proof. This follows from the fact that the mutual information is invariant under invertible transforms. As $|\vec{s}^*| = |\mathbf{X}|$ and $\mathcal{R}$ is injective, it has an inverse function, $\mathcal{R}^{-1}$. It follows then that, for any $\vec{s}$, $\mathrm{MI}(\mathbf{X}; \vec{s}) \leq H(\mathbf{X}) = \mathrm{MI}(\mathbf{X}; \mathbf{X}) = \mathrm{MI}(\mathbf{X}; \mathcal{R}(\mathbf{X})) = \mathrm{MI}(\mathbf{X}; \vec{s}^*)$, where $H$ is entropy.
Theorem 1 shows that for finite input sets and suitable deterministic functions, minimizing the classification error in the discriminator can be used to maximize the mutual information between the input and output. However, as was shown in Hjelm et al. (2018), this objective alone is not enough to learn useful representations. As in their work, we discriminate between the global summary vector and local high-level representations.
Theorem 2. Let $\mathbf{X}_i^{(k)} = \{\vec{x}_j\}_{j \in n(\mathbf{X}^{(k)}, i)}$ be the neighborhood of the node $i$ in the $k$-th graph that collectively maps to its high-level features, $\vec{h}_i = \mathcal{E}(\mathbf{X}_i^{(k)})$, where $n$ is the neighborhood function that returns the set of neighborhood indices of node $i$ for graph $\mathbf{X}^{(k)}$, and $\mathcal{E}$ is a deterministic encoder function. Let us assume that $|\mathbf{X}_i| = |\mathbf{X}| = |\vec{s}| \geq |\vec{h}_i|$. Then, the $\vec{h}_i$ that minimizes the classification error between $p(\vec{h}_i, \vec{s})$ and $p(\vec{h}_i)p(\vec{s})$ also maximizes $\mathrm{MI}(\mathbf{X}_i^{(k)}; \vec{h}_i)$.
Proof. Given our assumption of $|\mathbf{X}_i| = |\vec{s}|$, there exists an inverse $\mathbf{X}_i = \mathcal{R}^{-1}(\vec{s})$, and therefore $\vec{h}_i = \mathcal{E}(\mathcal{R}^{-1}(\vec{s}))$, i.e. there exists a deterministic function $(\mathcal{E} \circ \mathcal{R}^{-1})$ mapping $\vec{s}$ to $\vec{h}_i$. The optimal classifier between the joint $p(\vec{h}_i, \vec{s})$ and the product of marginals $p(\vec{h}_i)p(\vec{s})$ then has (by Lemma 1) an error rate upper bound of $\mathrm{Err}^* = \frac{1}{2}\sum_{k=1}^{|\mathbf{X}|} p(\vec{h}_i^{(k)})^2$. Therefore (as in Corollary 1), for the optimal $\vec{h}_i$, $|\vec{h}_i| = |\mathbf{X}_i|$, which by the same arguments as in Theorem 1 maximizes the mutual information between the neighborhood and high-level features, $\mathrm{MI}(\mathbf{X}_i^{(k)}; \vec{h}_i)$.
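As a quick worked instance of Lemma 1's bound (our illustrative example, not from the paper): with $|\mathbf{X}| = 4$ graphs mapped injectively to summaries with uniform marginal $p(\vec{s}^{(k)}) = \frac{1}{4}$,
$$\mathrm{Err}^* = \frac{1}{2}\sum_{k=1}^{4}\left(\frac{1}{4}\right)^2 = \frac{1}{2} \cdot 4 \cdot \frac{1}{16} = \frac{1}{8} = \frac{1}{2|\mathbf{X}|},$$
so the lower extreme of the range $\frac{1}{2|\mathbf{X}|} \leq \mathrm{Err}^* \leq \frac{1}{2}$ is attained exactly when $p(\vec{s})$ is uniform, which is the case Corollary 1 exploits.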
This motivates our use of a classifier between samples from the joint and the product of marginals, and using the binary cross-entropy (BCE) loss to optimize this classifier is well understood in the context of neural network optimization.
3.4 OVERVIEW OF DGI
Assuming the single-graph setup (i.e., $(\mathbf{X}, \mathbf{A})$ provided as input), we will now summarize the steps of the Deep Graph Infomax procedure:
1. Sample a negative example by using the corruption function: $(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) \sim \mathcal{C}(\mathbf{X}, \mathbf{A})$.
2. Obtain patch representations, $\vec{h}_i$, for the input graph by passing it through the encoder: $\mathbf{H} = \mathcal{E}(\mathbf{X}, \mathbf{A}) = \{\vec{h}_1, \vec{h}_2, \ldots, \vec{h}_N\}$.
3. Obtain patch representations, $\tilde{\vec{h}}_j$, for the negative example by passing it through the encoder: $\tilde{\mathbf{H}} = \mathcal{E}(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) = \{\tilde{\vec{h}}_1, \tilde{\vec{h}}_2, \ldots, \tilde{\vec{h}}_M\}$.
4. Summarize the input graph by passing its patch representations through the readout function: $\vec{s} = \mathcal{R}(\mathbf{H})$.
5. Update parameters of $\mathcal{E}$, $\mathcal{R}$ and $\mathcal{D}$ by applying gradient descent to maximize Equation 1.
This algorithm is fully summarized by Figure 1.
4 CLASSIFICATION PERFORMANCE
We have assessed the benefits of the representation learnt by the DGI encoder on a variety of node classification tasks (transductive as well as inductive), obtaining competitive results. In each case, DGI was used to learn patch representations in a fully unsupervised manner, followed by evaluating the node-level classification utility of these representations. This was performed by directly using these representations to train and test a simple linear (logistic regression) classifier.
4.1 DATASETS
We follow the experimental setup described in Kipf & Welling (2016a) and Hamilton et al. (2017a) on the following benchmark tasks: (1) classifying research papers into topics on the Cora, Citeseer and Pubmed citation networks (Sen et al., 2008); (2) predicting the community structure of a social network modeled with Reddit posts; and (3) classifying protein roles within protein-protein interaction (PPI) networks (Zitnik & Leskovec, 2017), requiring generalisation to unseen networks. Further information on the datasets may be found in Table 1 and Appendix A.
4.2 EXPERIMENTAL SETUP
For each of the three experimental settings (transductive learning, inductive learning on large graphs, and multiple graphs), we employed distinct encoders and corruption functions appropriate to that setting (described below).
Transductive learning. For the transductive learning tasks (Cora, Citeseer and Pubmed), our encoder is a one-layer Graph Convolutional Network (GCN) model (Kipf & Welling, 2016a), with the following propagation rule:
$$\mathcal{E}(\mathbf{X}, \mathbf{A}) = \sigma\left(\hat{\mathbf{D}}^{-\frac{1}{2}} \hat{\mathbf{A}} \hat{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta}\right) \tag{3}$$
where $\hat{\mathbf{A}} = \mathbf{A} + \mathbf{I}_N$ is the adjacency matrix with inserted self-loops and $\hat{\mathbf{D}}$ is its corresponding degree matrix, i.e. $\hat{D}_{ii} = \sum_j \hat{A}_{ij}$. For the nonlinearity, $\sigma$, we have applied the parametric ReLU (PReLU) function (He et al., 2015), and $\mathbf{\Theta} \in \mathbb{R}^{F \times F'}$ is a learnable linear transformation applied to every node, with $F' = 512$ features being computed (specifically, $F' = 256$ on Pubmed due to memory limitations). The corruption function used in this setting is designed to encourage the representations to properly encode structural similarities of different nodes in the graph; for this purpose, $\mathcal{C}$ preserves the original adjacency matrix ($\tilde{\mathbf{A}} = \mathbf{A}$), whereas the corrupted features, $\tilde{\mathbf{X}}$, are obtained by row-wise shuffling of $\mathbf{X}$. That is, the corrupted graph consists of exactly the same nodes as the original graph, but they are located in different places in the graph, and will therefore receive different patch representations.
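A minimal sketch of this transductive setup (ours; the reference implementation at https://github.com/PetarV-/DGI, cited later in the paper, differs in details) combining the one-layer GCN encoder of Equation 3 with the feature-shuffling corruption:

```python
# Sketch of the one-layer GCN encoder (Eq. 3) and the row-shuffle corruption;
# a dense, precomputed normalized adjacency is assumed for brevity.
import torch
import torch.nn as nn

class GCNEncoder(nn.Module):
    def __init__(self, in_feats, out_feats=512):
        super().__init__()
        self.theta = nn.Linear(in_feats, out_feats, bias=False)
        self.act = nn.PReLU()

    def forward(self, x, adj_norm):
        # adj_norm = D_hat^{-1/2} (A + I) D_hat^{-1/2}, precomputed
        return self.act(adj_norm @ self.theta(x))

def corruption(x, adj_norm):
    # Row-wise shuffle of features; graph structure is preserved (A_tilde = A).
    perm = torch.randperm(x.size(0))
    return x[perm], adj_norm
```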
We demonstrate DGI is stable to other choices of corruption functions in Appendix C, but we find those that preserve the graph structure result in the strongest features.

Inductive learning on large graphs. For inductive learning, we may no longer use the GCN update rule in our encoder (as the learned filters rely on a fixed and known adjacency matrix); instead, we apply the mean-pooling propagation rule, as used by GraphSAGE-GCN (Hamilton et al., 2017a):

$$\mathrm{MP}(X, A) = \hat{D}^{-1} \hat{A} X \Theta \qquad (4)$$

with parameters defined as in Equation 3. Note that multiplying by $\hat{D}^{-1}$ actually performs a normalized sum (hence the mean-pooling). While Equation 4 explicitly specifies the adjacency and degree matrices, they are not needed: identical inductive behaviour may be observed by a constant attention mechanism across the node's neighbors, as used by the Const-GAT model (Veličković et al., 2018).

For Reddit, our encoder is a three-layer mean-pooling model with skip connections (He et al., 2016):

$$\widetilde{\mathrm{MP}}(X, A) = \sigma\left(X\Theta' \,\|\, \mathrm{MP}(X, A)\right)$$
$$\mathcal{E}(X, A) = \widetilde{\mathrm{MP}}_3(\widetilde{\mathrm{MP}}_2(\widetilde{\mathrm{MP}}_1(X, A), A), A) \qquad (5)$$

where $\|$ is featurewise concatenation (i.e. the central node and its neighborhood are handled separately). We compute $F' = 512$ features in each MP layer, with the PReLU activation for $\sigma$.

Given the large scale of the dataset, it will not fit into GPU memory entirely. Therefore, we use the subsampling approach of Hamilton et al. (2017a), where a minibatch of nodes is first selected, and then a subgraph centered around each of them is obtained by sampling node neighborhoods with replacement. Specifically, we sample 10, 10 and 25 neighbors at the first, second and third level, respectively—thus, each subsampled patch has $1 + 10 + 100 + 2500 = 2611$ nodes. Only the computations necessary for deriving the central node $i$'s patch representation, $\vec{h}_i$, are performed. These representations are then used to derive the summary vector, $\vec{s}$, for the minibatch (Figure 2). We used minibatches of 256 nodes throughout training.

To define our corruption function in this setting, we use a similar approach as in the transductive tasks, but treat each subsampled patch as a separate graph to be corrupted (i.e., we row-wise shuffle the feature matrices within a subsampled patch). Note that this may very likely cause the central node's features to be swapped out for a sampled neighbor's features, further encouraging diversity in the negative samples. The patch representation obtained in the central node is then submitted to the discriminator.

Inductive learning on multiple graphs. For the PPI dataset, inspired by previous successful supervised architectures (Veličković et al., 2018), our encoder is a three-layer mean-pooling model with dense skip connections (He et al., 2016; Huang et al., 2017):

$$H_1 = \sigma\left(\mathrm{MP}_1(X, A)\right) \qquad (6)$$
$$H_2 = \sigma\left(\mathrm{MP}_2(H_1 + XW_{\mathrm{skip}}, A)\right) \qquad (7)$$
$$\mathcal{E}(X, A) = \sigma\left(\mathrm{MP}_3(H_2 + H_1 + XW_{\mathrm{skip}}, A)\right) \qquad (8)$$

where $W_{\mathrm{skip}}$ is a learnable projection matrix, and $\mathrm{MP}$ is as defined in Equation 4. We compute $F' = 512$ features in each MP layer, using the PReLU activation for $\sigma$.

In this multiple-graph setting, we opted to use randomly sampled training graphs as negative examples (i.e., our corruption function simply samples a different graph from the training set). We found this method to be the most stable, considering that over 40% of the nodes have all-zero features in this dataset. To further expand the pool of negative examples, we also apply dropout (Srivastava et al., 2014) to the input features of the sampled graph.
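The following is a hedged PyTorch sketch of the mean-pooling rule (Equation 4) and the dense-skip PPI encoder (Equations 6-8). Dense tensors, the helper name mean_pool, and a single PReLU shared across layers are simplifying assumptions; the real pipeline operates on the subsampled patches described above.

import torch
import torch.nn as nn

def mean_pool(x, a_hat, theta):
    # Eq. 4: MP(X, A) = D^{-1} A_hat X Theta, a degree-normalized (mean) aggregation.
    # a_hat: adjacency with self-loops; clamp avoids division by zero on isolated nodes.
    deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (a_hat @ theta(x)) / deg

class PPIEncoder(nn.Module):
    """Three mean-pooling layers with dense skip connections (Eqs. 6-8)."""
    def __init__(self, in_feats, out_feats=512):
        super().__init__()
        self.theta1 = nn.Linear(in_feats, out_feats, bias=False)
        self.theta2 = nn.Linear(out_feats, out_feats, bias=False)
        self.theta3 = nn.Linear(out_feats, out_feats, bias=False)
        self.w_skip = nn.Linear(in_feats, out_feats, bias=False)  # W_skip
        self.act = nn.PReLU(out_feats)  # shared across layers for brevity

    def forward(self, x, a_hat):
        skip = self.w_skip(x)                                            # X W_skip
        h1 = self.act(mean_pool(x, a_hat, self.theta1))                  # Eq. 6
        h2 = self.act(mean_pool(h1 + skip, a_hat, self.theta2))          # Eq. 7
        return self.act(mean_pool(h2 + h1 + skip, a_hat, self.theta3))  # Eq. 8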
We found it beneficial to standardize the learnt embeddings across the training set prior to providing them to the logistic regression model.

Readout, discriminator, and additional training details. Across all three experimental settings, we employed identical readout functions and discriminator architectures. For the readout function, we use a simple averaging of all the nodes' features:

$$\mathcal{R}(H) = \sigma\left(\frac{1}{N}\sum_{i=1}^{N} \vec{h}_i\right) \qquad (9)$$

where $\sigma$ is the logistic sigmoid nonlinearity. While we have found this readout to perform the best across all our experiments, we assume that its power will diminish with the increase in graph size, and in those cases, more sophisticated readout architectures such as set2vec (Vinyals et al., 2015) or DiffPool (Ying et al., 2018b) are likely to be more appropriate.

The discriminator scores summary-patch representation pairs by applying a simple bilinear scoring function (similar to the scoring used by Oord et al. (2018)):

$$\mathcal{D}(\vec{h}_i, \vec{s}) = \sigma\left(\vec{h}_i^{\top} W \vec{s}\right) \qquad (10)$$

Here, $W$ is a learnable scoring matrix and $\sigma$ is the logistic sigmoid nonlinearity, used to convert scores into probabilities of $(\vec{h}_i, \vec{s})$ being a positive example.

All models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to maximize the mutual information provided in Equation 1 on the available nodes (all nodes for the transductive, and training nodes only in the inductive setup) using the Adam SGD optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.001 (specifically, $10^{-5}$ on Reddit). On the transductive datasets, we use an early stopping strategy on the observed training loss, with a patience of 20 epochs.³ On the inductive datasets we train for a fixed number of epochs (150 on Reddit, 20 on PPI).

4.3 RESULTS

The results of our comparative evaluation experiments are summarized in Table 2.

For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 50 runs of training (followed by logistic regression), and reuse the metrics already reported in Kipf & Welling (2016a) for the performance of DeepWalk and GCN, as well as Label Propagation (LP) (Zhu et al., 2003) and Planetoid (Yang et al., 2016)—a representative supervised random walk method. Specifically, we provide results for training the logistic regression on raw input features, as well as DeepWalk with the input features concatenated.

³ A reference DGI implementation may be found at https://github.com/PetarV-/DGI.

For the inductive tasks, we report the micro-averaged F1 score on the (unseen) test nodes, averaged after 50 runs of training, and reuse the metrics already reported in Hamilton et al. (2017a) for the other techniques. Specifically, as our setup is unsupervised, we compare against the unsupervised GraphSAGE approaches. We also provide supervised results for two related architectures—FastGCN (Chen et al., 2018) and Avg. pooling (Zhang et al., 2018).

Our results demonstrate strong performance being achieved across all five datasets. We particularly note that the DGI approach is competitive with the results reported for the GCN model with the supervised loss, even exceeding its performance on the Cora and Citeseer datasets. We assume that these benefits stem from the fact that, indirectly, the DGI approach allows for every node to have access to structural properties of the entire graph, whereas the supervised GCN is limited to only two-layer neighborhoods (by the extreme sparsity of the training signal and the corresponding threat of overfitting).
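As a concrete illustration of the linear evaluation protocol used throughout this section (frozen DGI embeddings, standardized across the training set, then fed to a logistic regression classifier), here is a hedged scikit-learn sketch; the array names H, y, train_idx and test_idx are hypothetical stand-ins for the embeddings, labels and split indices.

from sklearn.linear_model import LogisticRegression

# H: (N, F') NumPy array of frozen DGI embeddings; y: (N,) class labels;
# train_idx / test_idx: index arrays defining the dataset's split.
mu = H[train_idx].mean(axis=0)
sd = H[train_idx].std(axis=0) + 1e-8
H_std = (H - mu) / sd                      # standardize across the training set

clf = LogisticRegression(max_iter=1000)    # the "simple linear classifier"
clf.fit(H_std[train_idx], y[train_idx])
print("test accuracy:", clf.score(H_std[test_idx], y[test_idx]))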
It should be noted that, while we are capable of outperforming equivalent supervised encoder architectures, our performance still does not surpass the current supervised transductive state of the art (which is held by methods such as GraphSGAN (Ding et al., 2018)). We further observe that the DGI method successfully outperformed all the competing unsupervised GraphSAGE approaches on the Reddit and PPI datasets—thus verifying the potential of methods based on local mutual information maximization in the inductive node classification domain. Our Reddit results are competitive with the supervised state of the art, whereas on PPI the gap is still large—we believe this can be attributed to the extreme sparsity of available node features (over 40% of the nodes having all-zero features), on which our encoder heavily relies.

We note that a randomly initialized graph convolutional network may already extract highly useful features and represents a strong baseline—a well-known fact, considering its links to the Weisfeiler-Lehman graph isomorphism test (Weisfeiler & Lehman, 1968), that have already been highlighted and analyzed by Kipf & Welling (2016a) and Hamilton et al. (2017a). As such, we also provide, as Random-Init, the logistic regression performance on embeddings obtained from a randomly initialized encoder. Besides demonstrating that DGI is able to further improve on this strong baseline, this comparison particularly reveals that, on the inductive datasets, previous random walk-based negative sampling methods may have been ineffective for learning appropriate features for the classification task.

Lastly, it should be noted that deeper encoders correspond to more pronounced mixing between recovered patch representations, reducing the effective variability of our pool of positive and negative examples. We believe that this is the reason why shallower architectures performed better on some of the datasets. While we cannot say that these trends will hold in general, with the DGI loss function we generally found benefits from employing wider, rather than deeper, models.

5 QUALITATIVE ANALYSIS

We performed a diverse set of analyses on the embeddings learnt by the DGI algorithm in order to better understand the properties of DGI. We focus our analysis exclusively on the Cora dataset (as it has the smallest number of nodes, significantly aiding clarity).

A standard set of "evolving" t-SNE plots (Maaten & Hinton, 2008) of the embeddings is given in Figure 3. As expected given the quantitative results, the learnt embeddings exhibit discernible clustering in the 2D projected space (especially compared to the raw features and Random-Init), which respects the seven topic classes of Cora. The projection obtains a Silhouette score (Rousseeuw, 1987) of 0.234, which compares favorably with the previously reported score of 0.158 for Embedding Propagation (Duran & Niepert, 2017).

We ran further analyses revealing insights into DGI's mechanism of learning, isolating biased embedding dimensions used for pushing the negative example scores down, with the remainder used to encode useful information about positive examples. We leverage these insights to retain performance competitive with the supervised GCN even after half the dimensions are removed from the patch representations provided by the encoder. These—and several other—qualitative and ablation studies can be found in Appendix B.
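A hedged sketch of the t-SNE plus Silhouette analysis above, using scikit-learn (H and y are hypothetical arrays of the learnt Cora embeddings and topic labels; hyperparameters are library defaults, not necessarily the paper's exact settings):

from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score

# H: (N, F') learnt DGI embeddings for Cora; y: (N,) topic labels (7 classes)
emb_2d = TSNE(n_components=2, random_state=0).fit_transform(H)
print("Silhouette score of the 2D projection:", silhouette_score(emb_2d, y))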
6 CONCLUSIONS

We have presented Deep Graph Infomax (DGI), a new approach for learning unsupervised representations on graph-structured data. By leveraging local mutual information maximization across the graph's patch representations, obtained by powerful graph convolutional architectures, we are able to obtain node embeddings that are mindful of the global structural properties of the graph. This enables competitive performance across a variety of both transductive and inductive classification tasks, at times even outperforming relevant supervised architectures.

ACKNOWLEDGMENTS

We would like to thank the developers of PyTorch (Paszke et al., 2017). PV and PL have received funding from the European Union's Horizon 2020 research and innovation programme PROPAG-AGEING under grant agreement No 634821. We specially thank Hugo Larochelle and Jian Tang for the extremely useful discussions, and Andreea Deac, Arantxa Casanova, Ben Poole, Graham Taylor, Guillem Cucurull, Justin Gilmer, Nithium Thain and Zhaocheng Zhu for reviewing the paper prior to submission.

A FURTHER DATASET DETAILS

Transductive learning. We utilize three standard citation network benchmark datasets—Cora, Citeseer and Pubmed (Sen et al., 2008)—and closely follow the transductive experimental setup of Yang et al. (2016). In all of these datasets, nodes correspond to documents and edges to (undirected) citations. Node features correspond to elements of a bag-of-words representation of a document. Each node has a class label. We allow for only 20 nodes per class to be used for training—however, honouring the transductive setup, the unsupervised learning algorithm has access to all of the nodes' feature vectors. The predictive power of the learned representations is evaluated on 1000 test nodes.

Inductive learning on large graphs. We use a large graph dataset (231,443 nodes and 11,606,919 edges) of Reddit posts created during September 2014 (derived and preprocessed as in Hamilton et al. (2017a)). The objective is to predict the posts' community ("subreddit"), based on the GloVe embeddings of their content and comments (Pennington et al., 2014), as well as metrics such as score or number of comments. Posts are linked together in the graph if the same user has commented on both. Reusing the inductive setup of Hamilton et al. (2017a), posts made in the first 20 days of the month are used for training, while the remaining posts are used for validation or testing and are invisible to the training algorithm.

Inductive learning on multiple graphs. We make use of a protein-protein interaction (PPI) dataset that consists of graphs corresponding to different human tissues (Zitnik & Leskovec, 2017). The dataset contains 20 graphs for training, 2 for validation and 2 for testing. Critically, testing graphs remain completely unobserved during training. To construct the graphs, we used the preprocessed data provided by Hamilton et al. (2017a). Each node has 50 features that are composed of positional gene sets, motif gene sets and immunological signatures. There are 121 labels for each node, set from gene ontology and collected from the Molecular Signatures Database (Subramanian et al., 2005), and a node can possess several labels simultaneously.

B FURTHER QUALITATIVE ANALYSIS

Visualizing discriminator scores. After obtaining the t-SNE visualizations, we turned our attention to the discriminator—and visualized the scores it attached to various nodes, for both the positive and a (randomly sampled) negative example (Figure 4).
From here we can make an interesting observation—within the "clusters" of the learnt embeddings on the positive Cora graph, only a handful of "hot" nodes are selected to receive high discriminator scores. This suggests that there may be a clear distinction between embedding dimensions used for discrimination and classification, which we investigate more thoroughly in the next paragraph. In addition, we may observe that, as expected, the model is unable to find any strong structure within a negative example. Lastly, a few negative examples achieve high discriminator scores—a phenomenon caused by the existence of low-degree nodes in Cora (making it non-negligibly probable that a node ends up in a context identical to the one it had in the positive graph).

Impact and role of embedding dimensions. Guided by the previous result, we have visualized the embeddings for the top-scoring positive and negative examples (Figure 5). The analysis revealed the existence of distinct dimensions in which both the positive and negative examples are strongly biased. We hypothesize that, given the random shuffling, the average expected activation of a negative example is zero, and therefore strong biases are required to "push" the example down in the discriminator. The positive examples may then use the remaining dimensions to both counteract this bias and encode patch similarity.

To substantiate this claim, we order the 512 dimensions based on how distinguishable the positive and negative examples are in them (using p-values obtained from a t-test as a proxy). We then remove these dimensions from the embedding, respecting this order—either starting from the most distinguishable (p ↑) or least distinguishable (p ↓) dimensions—monitoring how this affects both classification and discriminator performance (Figure 6). The observed trends largely support our hypothesis: if we start by removing the biased dimensions first (p ↓), the classification performance holds up for much longer (allowing us to remove over half of the embedding dimensions while remaining competitive with the supervised GCN), and the positive examples mostly remain correctly discriminated until well over half the dimensions are removed.

C ROBUSTNESS TO CHOICE OF CORRUPTION FUNCTION

Here, we consider alternatives to our corruption function, $\mathcal{C}$, used to produce negative graphs. We generally find that, for the node classification task, DGI is stable and robust to different strategies. However, for learning graph features towards other kinds of tasks, the design of appropriate corruption strategies remains an area of open research.

Our corruption function described in Section 4.2 preserves the original adjacency matrix ($\tilde{A} = A$) but corrupts the features, $\tilde{X}$, via row-wise shuffling of $X$. In this case, the negative graph is constrained to be isomorphic to the positive graph, a constraint that need not be mandatory. We can instead produce a negative graph by directly corrupting the adjacency matrix.

Therefore, we first consider an alternative corruption function $\mathcal{C}$ which preserves the features ($\tilde{X} = X$) but instead adds or removes edges from the adjacency matrix ($\tilde{A} \neq A$). This is done by sampling, i.i.d., a switch parameter $\Sigma_{ij}$, which determines whether to corrupt the adjacency matrix at position $(i, j)$. Assuming a given corruption rate, $\rho$, we may define $\mathcal{C}$ as performing the following operations:

$$\Sigma_{ij} \sim \mathrm{Bernoulli}(\rho) \qquad (11)$$
$$\tilde{A} = A \oplus \Sigma \qquad (12)$$

where $\oplus$ is the XOR (exclusive OR) operation.
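A minimal PyTorch sketch of this edge-corruption strategy (dense adjacency assumed; the function names are illustrative, not from the reference implementation):

import torch

def corrupt_adjacency(a, rho):
    """Eqs. 11-12: flip each entry of A, i.i.d., with probability rho (A_tilde = A XOR Sigma)."""
    sigma = torch.bernoulli(torch.full_like(a, rho))         # switch parameters Sigma_ij
    return torch.logical_xor(a.bool(), sigma.bool()).float()

def corrupt_features(x):
    """The baseline corruption from Section 4.2: row-wise feature shuffling."""
    return x[torch.randperm(x.size(0))]

Both corruptions can be composed to reproduce the compound strategy considered below.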
This alternative strategy produces a negative graph with the same features, but different connectivity. Here, a corruption rate of $\rho = 0$ corresponds to an unchanged adjacency matrix (i.e. the positive and negative graphs are identical in this case). In this regime, learning is impossible for the discriminator, and the performance of DGI is in line with a randomly initialized DGI model. At higher rates of noise, however, DGI produces competitive embeddings.

We also consider simultaneous feature shuffling ($\tilde{X} \neq X$) and adjacency matrix perturbation ($\tilde{A} \neq A$), both as described before. We find that DGI still learns useful features under this compound corruption strategy—as expected, given that feature shuffling is already equivalent to an (isomorphic) adjacency matrix perturbation.

From both studies, we may observe that a certain lower bound on the positive graph perturbation rate is required to obtain competitive node embeddings for the classification task on Cora. Furthermore, the features learned for downstream node classification tasks are most powerful when the negative graph has levels of connectivity similar to the positive graph. The classification performance peaks when the graph is perturbed to a reasonably high level but remains sparse; i.e. the mixing between the separate one-step patches is not substantial, and therefore the pool of negative examples is still diverse enough. Classification performance is impacted only marginally at higher rates of corruption—corresponding to dense negative graphs, and thus a less rich negative example pool—but DGI still considerably outperforms the unsupervised baselines we have considered. This could be seen as further motivation for relying solely on feature shuffling, without adjacency perturbations—given that feature shuffling is a trivial way to guarantee a diverse set of negative examples, without incurring significant computational costs per epoch.

The results of this study are visualized in Figures 7 and 8.
1. What is the main contribution of the paper in adapting Deep InfoMax for the graph domain?
2. What are the strengths and weaknesses of the proposed method regarding its objective and performance?
3. Do you have any concerns about the theoretical foundation of the method, specifically regarding mutual information and Jensen-Shannon MI estimation?
4. How does the reviewer assess the novelty and significance of the work compared to prior unsupervised graph encoders?
5. Are there any questions or suggestions regarding the experimental setup, such as the choice of mini-batch selection or the reliance on random walks?
Review
This paper adapts the Deep InfoMax (DIM; Hjelm et al. 2018) method, which was used on image data, to the graph domain. The architecture of the neural network and the learning cost function are given by figure 1 and eq.(1), respectively. The idea is to maximize the mutual information between a local representation (of a "patch" defined by graph adjacency) and a global representation (of the entire graph), so that different local patches are encouraged to carry some shared global information. This is in contrast to most unsupervised graph encoders, where the objective is to fit the random walk similarities (node adjacency on the graph). In an unsupervised learning scenario, where the graph structure and node features are given, the authors achieved state-of-the-art performance on transductive and inductive node classification tasks, in some cases even better than supervised baselines. The paper is well written. I recommend acceptance and have the following concerns.

Main comment 1

The title suggests that there are some information theory contents. However, section 3 does not include much information theory. Rather, the authors directly give eq.(1) with pointers to references and informal discussions. This is not so helpful. It is not straightforward for the reader to relate eq.(1) to the definition of mutual information. Ideally, before eq.(1) there should be one or two equations (with text) to introduce the Jensen-Shannon MI estimation, information-theoretic bounds, etc. Overall, due to this, the contribution is mainly in adapting the DIM method to the graph domain. Although the experimental results are good, there is not much theoretical insight or "recreative" introduction of the DIM method from the authors' perspectives. This is the main reason why it is not a strong accept.

Main comment 2

A motivation of the proposition is to "not rely on random walks", or graph node adjacency. Notice that random walks can be intuitively regarded as higher-order node adjacency. However, the encoder, which is based on GCN, does rely on the adjacency matrix, as the convolution is done in local neighborhoods (which can also be defined based on random-walk similarities). The authors are therefore encouraged to make it clear in the relevant places that it is the cost function which is not based on node adjacency, although the neural network structure does rely on it. As a related question: in the inductive experiments, is the mini-batch of 256 nodes randomly selected, or selected as a local patch of the graph which is connected or nearby? If it is the latter case, the cost function does rely on random-walk similarities, as the summary vector will be a local patch average.

Questions:
- The summary vector is the average of all node features. On large graphs, the average may carry less information as compared to small graphs. It can be observed that on Pubmed and Reddit, the performance improvement is not as high as on the other, smaller graphs. Could you comment on this?
- In the baseline "DeepWalk + features", are the two different types of features directly concatenated?
- Is it straightforward to apply DGI to link prediction tasks?
- Is it a concern that the random corruption function will cause a high variance of the gradient?
The error rate of such a classifier is then the probability of drawing a sample from the joint as a sample from product of marginals under the mixture probability, which we can bound by Err ≤ 12 ∑|X| k=1 p(~s (k))2, with the upper bound achieved, as above, whenR(·) is injective for all elements of {X(k)}. It may be useful to note that 12|X| ≤ Err ∗ ≤ 12 . The first result is obtained via a trivial application of Jensen’s inequality, while the other extreme is reached only in the edge case of a constant readout function (when every example from the joint is also an example from the product of marginals, so no classifier performs better than chance). Corollary 1. From now on, assume that the readout function used, R, is injective. Assume the number of allowable states in the space of ~s, |~s|, is greater than or equal to |X|. Then, for ~s?, the 1Note that Hjelm et al. (2018) use a softplus version of the binary cross-entropy. 2The “GAN” distance defined here—as per Goodfellow et al. (2014) and Nowozin et al. (2016)—and Jensen-Shannon divergence can be related by DGAN = 2DJS − log 4. Therefore, any parameters that optimize one also optimize the other. optimal summary under the classification error of an optimal classifier between the joint and the product of marginals, it holds that |~s?| = |X|. Proof. By injectivity of R, we know that ~s? = argmin~s Err∗. As the upper error bound, Err∗, is a simple geometric sum, we know that this is minimized when p(~s(k)) is uniform. As R(·) is deterministic, this implies that each potential summary state would need to be used at least once. Combined with the condition |~s| ≥ |X|, we conclude that the optimum has |~s?| = |X|. Theorem 1. ~s? = argmax~s MI(X;~s), where MI is mutual information. Proof. This follows from the fact that the mutual information is invariant under invertible transforms. As |~s?| = |X| and R is injective, it has an inverse function, R−1. It follows then that, for any ~s, MI(X;~s) ≤ H(X) = MI(X;X) = MI(X;R(X)) = MI(X;~s?), where H is entropy. Theorem 1 shows that for finite input sets and suitable deterministic functions, minimizing the classification error in the discriminator can be used to maximize the mutual information between the input and output. However, as was shown in Hjelm et al. (2018), this objective alone is not enough to learn useful representations. As in their work, we discriminate between the global summary vector and local high-level representations. Theorem 2. Let X(k)i = {~xj}j∈n(X(k),i) be the neighborhood of the node i in the k-th graph that collectively maps to its high-level features, ~hi = E(X(k)i ), where n is the neighborhood function that returns the set of neighborhood indices of node i for graph X(k), and E is a deterministic encoder function. Let us assume that |Xi| = |X| = |~s| ≥ |~hi|. Then, the ~hi that minimizes the classification error between p(~hi, ~s) and p(~hi)p(~s) also maximizes MI(X (k) i ; ~hi). Proof. Given our assumption of |Xi| = |~s|, there exists an inverse Xi = R−1(~s), and therefore ~hi = E(R−1(~s)), i.e. there exists a deterministic function (E ◦ R−1) mapping ~s to ~hi. The optimal classifier between the joint p(~hi, ~s) and the product of marginals p(~hi)p(~s) then has (by Lemma 1) an error rate upper bound of Err∗ = 12 ∑|X| k=1 p( ~h (k) i ) 2. Therefore (as in Corollary 1), for the optimal ~hi, |~hi| = |Xi|, which by the same arguments as in Theorem 1 maximizes the mutual information between the neighborhood and high-level features, MI(X(k)i ;~hi). 
This motivates our use of a classifier between samples from the joint and the product of marginals, and using the binary cross-entropy (BCE) loss to optimize this classifier is well-understood in the context of neural network optimization. 3.4 OVERVIEW OF DGI Assuming the single-graph setup (i.e., (X,A) provided as input), we will now summarize the steps of the Deep Graph Infomax procedure: 1. Sample a negative example by using the corruption function: (X̃, Ã) ∼ C(X,A). 2. Obtain patch representations, ~hi for the input graph by passing it through the encoder: H = E(X,A) = {~h1,~h2, . . . ,~hN}. 3. Obtain patch representations, ~̃hj for the negative example by passing it through the encoder: H̃ = E(X̃, Ã) = {~̃h1, ~̃h2, . . . , ~̃hM}. 4. Summarize the input graph by passing its patch representations through the readout func- tion: ~s = R(H). 5. Update parameters of E ,R and D by applying gradient descent to maximize Equation 1. This algorithm is fully summarized by Figure 1. 4 CLASSIFICATION PERFORMANCE We have assessed the benefits of the representation learnt by the DGI encoder on a variety of node classification tasks (transductive as well as inductive), obtaining competitive results. In each case, DGI was used to learn patch representations in a fully unsupervised manner, followed by evaluating the node-level classification utility of these representations. This was performed by directly using these representations to train and test a simple linear (logistic regression) classifier. 4.1 DATASETS We follow the experimental setup described in Kipf & Welling (2016a) and Hamilton et al. (2017a) on the following benchmark tasks: (1) classifying research papers into topics on the Cora, Citeseer and Pubmed citation networks (Sen et al., 2008); (2) predicting the community structure of a social network modeled with Reddit posts; and (3) classifying protein roles within protein-protein interaction (PPI) networks (Zitnik & Leskovec, 2017), requiring generalisation to unseen networks. Further information on the datasets may be found in Table 1 and Appendix A. 4.2 EXPERIMENTAL SETUP For each of three experimental settings (transductive learning, inductive learning on large graphs, and multiple graphs), we employed distinct encoders and corruption functions appropriate to that setting (described below). Transductive learning. For the transductive learning tasks (Cora, Citeseer and Pubmed), our encoder is a one-layer Graph Convolutional Network (GCN) model (Kipf & Welling, 2016a), with the following propagation rule: E(X,A) = σ ( D̂− 1 2 ÂD̂− 1 2 XΘ ) (3) where  = A + IN is the adjacency matrix with inserted self-loops and D̂ is its corresponding degree matrix; i.e. D̂ii = ∑ j Âij . For the nonlinearity, σ, we have applied the parametric ReLU (PReLU) function (He et al., 2015), and Θ ∈ RF×F ′ is a learnable linear transformation applied to every node, with F ′ = 512 features being computed (specially, F ′ = 256 on Pubmed due to memory limitations). The corruption function used in this setting is designed to encourage the representations to properly encode structural similarities of different nodes in the graph; for this purpose, C preserves the original adjacency matrix (à = A), whereas the corrupted features, X̃, are obtained by row-wise shuffling of X. That is, the corrupted graph consists of exactly the same nodes as the original graph, but they are located in different places in the graph, and will therefore receive different patch representations. 
We demonstrate DGI is stable to other choices of corruption functions in Appendix C, but we find those that preserve the graph structure result in the strongest features. Inductive learning on large graphs. For inductive learning, we may no longer use the GCN update rule in our encoder (as the learned filters rely on a fixed and known adjacency matrix); instead, we apply the mean-pooling propagation rule, as used by GraphSAGE-GCN (Hamilton et al., 2017a): MP(X,A) = D̂−1ÂXΘ (4) with parameters defined as in Equation 3. Note that multiplying by D̂−1 actually performs a normalized sum (hence the mean-pooling). While Equation 4 explicitly specifies the adjacency and degree matrices, they are not needed: identical inductive behaviour may be observed by a constant attention mechanism across the node’s neighbors, as used by the Const-GAT model (Veličković et al., 2018). For Reddit, our encoder is a three-layer mean-pooling model with skip connections (He et al., 2016): M̃P(X,A) = σ (XΘ′‖MP(X,A)) E(X,A) = M̃P3(M̃P2(M̃P1(X,A),A),A) (5) where ‖ is featurewise concatenation (i.e. the central node and its neighborhood are handled separately). We compute F ′ = 512 features in each MP layer, with the PReLU activation for σ. Given the large scale of the dataset, it will not fit into GPU memory entirely. Therefore, we use the subsampling approach of Hamilton et al. (2017a), where a minibatch of nodes is first selected, and then a subgraph centered around each of them is obtained by sampling node neighborhoods with replacement. Specifically, we sample 10, 10 and 25 neighbors at the first, second and third level, respectively—thus, each subsampled patch has 1 + 10 + 100 + 2500 = 2611 nodes. Only the computations necessary for deriving the central node i’s patch representation, ~hi, are performed. These representations are then used to derive the summary vector, ~s, for the minibatch (Figure 2). We used minibatches of 256 nodes throughout training. To define our corruption function in this setting, we use a similar approach as in the transductive tasks, but treat each subsampled patch as a separate graph to be corrupted (i.e., we row-wise shuffle the feature matrices within a subsampled patch). Note that this may very likely cause the central node’s features to be swapped out for a sampled neighbor’s features, further encouraging diversity in the negative samples. The patch representation obtained in the central node is then submitted to the discriminator. Inductive learning on multiple graphs. For the PPI dataset, inspired by previous successful supervised architectures (Veličković et al., 2018), our encoder is a three-layer mean-pooling model with dense skip connections (He et al., 2016; Huang et al., 2017): H1 = σ (MP1(X,A)) (6) H2 = σ (MP2(H1 + XWskip,A)) (7) E(X,A) = σ (MP3(H2 + H1 + XWskip,A)) (8) where Wskip is a learnable projection matrix, and MP is as defined in Equation 4. We compute F ′ = 512 features in each MP layer, using the PReLU activation for σ. In this multiple-graph setting, we opted to use randomly sampled training graphs as negative examples (i.e., our corruption function simply samples a different graph from the training set). We found this method to be the most stable, considering that over 40% of the nodes have all-zero features in this dataset. To further expand the pool of negative examples, we also apply dropout (Srivastava et al., 2014) to the input features of the sampled graph. 
We found it beneficial to standardize the learnt embeddings across the training set prior to providing them to the logistic regression model. Readout, discriminator, and additional training details. Across all three experimental settings, we employed identical readout functions and discriminator architectures. For the readout function, we use a simple averaging of all the nodes’ features: R(H) = σ ( 1 N N∑ i=1 ~hi ) (9) where σ is the logistic sigmoid nonlinearity. While we have found this readout to perform the best across all our experiments, we assume that its power will diminish with the increase in graph size, and in those cases, more sophisticated readout architectures such as set2vec (Vinyals et al., 2015) or DiffPool (Ying et al., 2018b) are likely to be more appropriate. The discriminator scores summary-patch representation pairs by applying a simple bilinear scoring function (similar to the scoring used by Oord et al. (2018)): D(~hi, ~s) = σ ( ~hTi W~s ) (10) Here, W is a learnable scoring matrix and σ is the logistic sigmoid nonlinearity, used to convert scores into probabilities of (~hi, ~s) being a positive example. All models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to maximize the mutual information provided in Equation 1 on the available nodes (all nodes for the transductive, and training nodes only in the inductive setup) using the Adam SGD optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.001 (specially, 10−5 on Reddit). On the transductive datasets, we use an early stopping strategy on the observed training loss, with a patience of 20 epochs3. On the inductive datasets we train for a fixed number of epochs (150 on Reddit, 20 on PPI). 4.3 RESULTS The results of our comparative evaluation experiments are summarized in Table 2. For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 50 runs of training (followed by logistic regression), and reuse the metrics already reported in Kipf & Welling (2016a) for the performance of DeepWalk and GCN, as well as Label Propagation (LP) (Zhu et al., 2003) and Planetoid (Yang et al., 2016)—a representative supervised random walk method. Specially, we provide results for training the logistic regression on raw input features, as well as DeepWalk with the input features concatenated. 3 A reference DGI implementation may be found at https://github.com/PetarV-/DGI. For the inductive tasks, we report the micro-averaged F1 score on the (unseen) test nodes, averaged after 50 runs of training, and reuse the metrics already reported in Hamilton et al. (2017a) for the other techniques. Specifically, as our setup is unsupervised, we compare against the unsupervised GraphSAGE approaches. We also provide supervised results for two related architectures— FastGCN (Chen et al., 2018) and Avg. pooling (Zhang et al., 2018). Our results demonstrate strong performance being achieved across all five datasets. We particularly note that the DGI approach is competitive with the results reported for the GCN model with the supervised loss, even exceeding its performance on the Cora and Citeseer datasets. We assume that these benefits stem from the fact that, indirectly, the DGI approach allows for every node to have access to structural properties of the entire graph, whereas the supervised GCN is limited to only two-layer neighborhoods (by the extreme sparsity of the training signal and the corresponding threat of overfitting). 
It should be noted that, while we are capable of outperforming equivalent supervised encoder architectures, our performance still does not surpass the current supervised transductive state of the art (which is held by methods such as GraphSGAN (Ding et al., 2018)). We further observe that the DGI method successfully outperformed all the competing unsupervised GraphSAGE approaches on the Reddit and PPI datasets—thus verifying the potential of methods based on local mutual information maximization in the inductive node classification domain. Our Reddit results are competitive with the supervised state of the art, whereas on PPI the gap is still large—we believe this can be attributed to the extreme sparsity of available node features (over 40% of the nodes having all-zero features), that our encoder heavily relies on. We note that a randomly initialized graph convolutional network may already extract highly useful features and represents a strong baseline—a well-known fact, considering its links to the Weisfeiler- Lehman graph isomorphism test (Weisfeiler & Lehman, 1968), that have already been highlighted and analyzed by Kipf & Welling (2016a) and Hamilton et al. (2017a). As such, we also provide, as Random-Init, the logistic regression performance on embeddings obtained from a randomly initialized encoder. Besides demonstrating that DGI is able to further improve on this strong baseline, it particularly reveals that, on the inductive datasets, previous random walk-based negative sampling methods may have been ineffective for learning appropriate features for the classification task. Lastly, it should be noted that deeper encoders correspond to more pronounced mixing between recovered patch representations, reducing the effective variability of our positive/negative examples’ pool. We believe that this is the reason why shallower architectures performed better on some of the datasets. While we cannot say that these trends will hold in general, with the DGI loss function we generally found benefits from employing wider, rather than deeper models. 5 QUALITATIVE ANALYSIS We performed a diverse set of analyses on the embeddings learnt by the DGI algorithm in order to better understand the properties of DGI. We focus our analysis exclusively on the Cora dataset (as it has the smallest number of nodes, significantly aiding clarity). A standard set of “evolving” t-SNE plots (Maaten & Hinton, 2008) of the embeddings is given in Figure 3. As expected given the quantitative results, the learnt embeddings’ 2D projections exhibit discernible clustering in the 2D projected space (especially compared to the raw features and Random-Init), which respects the seven topic classes of Cora. The projection obtains a Silhouette score (Rousseeuw, 1987) of 0.234, which compares favorably with the previous reported score of 0.158 for Embedding Propagation (Duran & Niepert, 2017). We ran further analyses, revealing insights into DGI’s mechanism of learning, isolating biased embedding dimensions for pushing the negative example scores down and using the remainder to encode useful information about positive examples. We leverage these insights to retain competitive performance to the supervised GCN even after half the dimensions are removed from the patch representations provided by the encoder. These—and several other—qualitative and ablation studies can be found in Appendix B. 6 CONCLUSIONS We have presented Deep Graph Infomax (DGI), a new approach for learning unsupervised representations on graph-structured data. 
By leveraging local mutual information maximization across the graph’s patch representations, obtained by powerful graph convolutional architectures, we are able to obtain node embeddings that are mindful of the global structural properties of the graph. This enables competitive performance across a variety of both transductive and inductive classification tasks, at times even outperforming relevant supervised architectures. ACKNOWLEDGMENTS We would like to thank the developers of PyTorch (Paszke et al., 2017). PV and PL have received funding from the European Union’s Horizon 2020 research and innovation programme PROPAGAGEING under grant agreement No 634821. We specially thank Hugo Larochelle and Jian Tang for the extremely useful discussions, and Andreea Deac, Arantxa Casanova, Ben Poole, Graham Taylor, Guillem Cucurull, Justin Gilmer, Nithium Thain and Zhaocheng Zhu for reviewing the paper prior to submission. A FURTHER DATASET DETAILS Transductive learning. We utilize three standard citation network benchmark datasets—Cora, Citeseer and Pubmed (Sen et al., 2008)—and closely follow the transductive experimental setup of Yang et al. (2016). In all of these datasets, nodes correspond to documents and edges to (undirected) citations. Node features correspond to elements of a bag-of-words representation of a document. Each node has a class label. We allow for only 20 nodes per class to be used for training—however, honouring the transductive setup, the unsupervised learning algorithm has access to all of the nodes’ feature vectors. The predictive power of the learned representations is evaluated on 1000 test nodes. Inductive learning on large graphs. We use a large graph dataset (231,443 nodes and 11,606,919 edges) of Reddit posts created during September 2014 (derived and preprocessed as in Hamilton et al. (2017a)). The objective is to predict the posts’ community (“subreddit”), based on the GloVe embeddings of their content and comments (Pennington et al., 2014), as well as metrics such as score or number of comments. Posts are linked together in the graph if the same user has commented on both. Reusing the inductive setup of Hamilton et al. (2017a), posts made in the first 20 days of the month are used for training, while the remaining posts are used for validation or testing and are invisible to the training algorithm. Inductive learning on multiple graphs. We make use of a protein-protein interaction (PPI) dataset that consists of graphs corresponding to different human tissues (Zitnik & Leskovec, 2017). The dataset contains 20 graphs for training, 2 for validation and 2 for testing. Critically, testing graphs remain completely unobserved during training. To construct the graphs, we used the preprocessed data provided by Hamilton et al. (2017a). Each node has 50 features that are composed of positional gene sets, motif gene sets and immunological signatures. There are 121 labels for each node set from gene ontology, collected from the Molecular Signatures Database (Subramanian et al., 2005), and a node can possess several labels simultaneously. B FURTHER QUALITATIVE ANALYSIS Visualizing discriminator scores. After obtaining the t-SNE visualizations, we turned our attention to the discriminator—and visualized the scores it attached to various nodes, for both the positive and a (randomly sampled) negative example (Figure 4). 
From here we can make an interesting observation: within the "clusters" of the learnt embeddings on the positive Cora graph, only a handful of "hot" nodes are selected to receive high discriminator scores. This suggests that there may be a clear distinction between the embedding dimensions used for discrimination and those used for classification, which we investigate more thoroughly in the next paragraph. In addition, we may observe that, as expected, the model is unable to find any strong structure within a negative example. Lastly, a few negative examples achieve high discriminator scores; this phenomenon is caused by the existence of low-degree nodes in Cora (making the probability of a node ending up in an identical context it had in the positive graph non-negligible).

Impact and role of embedding dimensions. Guided by the previous result, we have visualized the embeddings for the top-scoring positive and negative examples (Figure 5). The analysis revealed the existence of distinct dimensions in which both the positive and negative examples are strongly biased. We hypothesize that, given the random shuffling, the average expected activation of a negative example is zero, and therefore strong biases are required to "push" the example down in the discriminator. The positive examples may then use the remaining dimensions to both counteract this bias and encode patch similarity. To substantiate this claim, we order the 512 dimensions based on how distinguishable the positive and negative examples are in them (using p-values obtained from a t-test as a proxy). We then remove these dimensions from the embedding, respecting this order, either starting from the most distinguishable (p ↑) or least distinguishable dimensions (p ↓), and monitor how this affects both classification and discriminator performance (Figure 6). The observed trends largely support our hypothesis: if we start by removing the biased dimensions first (p ↓), the classification performance holds up for much longer (allowing us to remove over half of the embedding dimensions while remaining competitive with the supervised GCN), and the positive examples mostly remain correctly discriminated until well over half the dimensions are removed.

C ROBUSTNESS TO CHOICE OF CORRUPTION FUNCTION

Here, we consider alternatives to our corruption function, $\mathcal{C}$, used to produce negative graphs. We generally find that, for the node classification task, DGI is stable and robust to different strategies. However, for learning graph features towards other kinds of tasks, the design of appropriate corruption strategies remains an area of open research.

Our corruption function described in Section 4.2 preserves the original adjacency matrix ($\tilde{\mathbf{A}} = \mathbf{A}$) but corrupts the features, $\tilde{\mathbf{X}}$, via row-wise shuffling of $\mathbf{X}$. In this case, the negative graph is constrained to be isomorphic to the positive graph, which does not have to be mandatory. We can instead produce a negative graph by directly corrupting the adjacency matrix.

Therefore, we first consider an alternative corruption function $\mathcal{C}$ which preserves the features ($\tilde{\mathbf{X}} = \mathbf{X}$) but instead adds or removes edges from the adjacency matrix ($\tilde{\mathbf{A}} \neq \mathbf{A}$). This is done by sampling, i.i.d., a switch parameter $\Sigma_{ij}$, which determines whether to corrupt the adjacency matrix at position $(i, j)$. Assuming a given corruption rate, $\rho$, we may define $\mathcal{C}$ as performing the following operations:

$$\Sigma_{ij} \sim \mathrm{Bernoulli}(\rho) \quad (11)$$

$$\tilde{\mathbf{A}} = \mathbf{A} \oplus \mathbf{\Sigma} \quad (12)$$

where $\oplus$ is the XOR (exclusive OR) operation.
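To make this sampling concrete, below is a minimal numpy sketch of such a corruption function. The function name is ours, and the symmetrization of the switch matrix (so that the negative graph remains undirected, like the datasets used here) is our assumption rather than a detail stated above.

```python
import numpy as np

def corrupt_adjacency(A, rho, seed=None):
    """Negative-graph sampler of Equations (11)-(12):
    Sigma_ij ~ Bernoulli(rho), A_tilde = A XOR Sigma."""
    rng = np.random.default_rng(seed)
    sigma = rng.random(A.shape) < rho        # switch matrix Sigma
    sigma = np.triu(sigma, k=1)              # sample above the diagonal only...
    sigma = sigma | sigma.T                  # ...and mirror it (assumed undirected graph)
    return (A.astype(bool) ^ sigma).astype(A.dtype)
```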
This alternative strategy produces a negative graph with the same features, but different connectivity. Here, a corruption rate of $\rho = 0$ corresponds to an unchanged adjacency matrix (i.e. the positive and negative graphs are identical in this case). In this regime, learning is impossible for the discriminator, and the performance of DGI is in line with a randomly initialized DGI model. At higher rates of noise, however, DGI produces competitive embeddings.

We also consider simultaneous feature shuffling ($\tilde{\mathbf{X}} \neq \mathbf{X}$) and adjacency matrix perturbation ($\tilde{\mathbf{A}} \neq \mathbf{A}$), both as described before. We find that DGI still learns useful features under this compound corruption strategy; this is expected, given that feature shuffling is already equivalent to an (isomorphic) adjacency matrix perturbation.

From both studies, we may observe that a certain lower bound on the positive graph perturbation rate is required to obtain competitive node embeddings for the classification task on Cora. Furthermore, the features learned for downstream node classification tasks are most powerful when the negative graph has similar levels of connectivity to the positive graph. The classification performance peaks when the graph is perturbed to a reasonably high level but remains sparse; i.e. the mixing between the separate 1-step patches is not substantial, and therefore the pool of negative examples is still diverse enough. Classification performance is impacted only marginally at higher rates of corruption (corresponding to dense negative graphs, and thus a less rich negative example pool), but still considerably outperforms the unsupervised baselines we have considered. This could be seen as further motivation for relying solely on feature shuffling, without adjacency perturbations, given that feature shuffling is a trivial way to guarantee a diverse set of negative examples without incurring significant computational costs per epoch. The results of this study are visualized in Figures 7 and 8.
1. What is the main contribution of the paper in the field of unsupervised learning?
2. How does the proposed approach utilize InfoMax and GCNs for node feature learning?
3. Can the method be applied to various types of graphs, or is it limited to certain structures?
4. How do the experimental results compare to supervised learning methods for node classification?
5. Are there any limitations or potential drawbacks to the proposed approach?
Review
This paper describes an approach for unsupervised learning of node features on a graph (with known structure), so that learned local representations capture community information that has high mutual information with a graph-level summary. The general idea is that they apply InfoMax to graphs via graph convolutional networks (GCNs), and they report impressive results, including rivaling supervised learning methods for node classification. The three experiments are on paper topic classification, social network modeling, and protein classification. The idea of using InfoMax with GCNs for unsupervised node learning is clever and timely, the technical contribution is solid, the experiments are executed well, and the paper is clear and easy to read.
ICLR
Title Deep Graph Infomax

Abstract

We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs, both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.

1 INTRODUCTION

Generalizing neural networks to graph-structured inputs is one of the current major challenges of machine learning (Bronstein et al., 2017; Hamilton et al., 2017b; Battaglia et al., 2018). While significant strides have recently been made, notably with graph convolutional networks (Kipf & Welling, 2016a; Gilmer et al., 2017; Veličković et al., 2018), most successful methods use supervised learning, which is often not possible as most graph data in the wild is unlabeled. In addition, it is often desirable to discover novel or interesting structure from large-scale graphs, and as such, unsupervised graph learning is essential for many important tasks.

Currently, the dominant algorithms for unsupervised representation learning with graph-structured data rely on random walk-based objectives (Grover & Leskovec, 2016; Perozzi et al., 2014; Tang et al., 2015; Hamilton et al., 2017a), sometimes further simplified to reconstruct adjacency information (Kipf & Welling, 2016b; Duran & Niepert, 2017). The underlying intuition is to train an encoder network so that nodes that are "close" in the input graph are also "close" in the representation space. While powerful, and related to traditional metrics such as the personalized PageRank score (Jeh & Widom, 2003), random walk methods suffer from known limitations. Most prominently, the random-walk objective is known to over-emphasize proximity information at the expense of structural information (Ribeiro et al., 2017), and performance is highly dependent on hyperparameter choice (Grover & Leskovec, 2016; Perozzi et al., 2014). Moreover, with the introduction of stronger encoder models based on graph convolutions (Gilmer et al., 2017), it is unclear whether random-walk objectives actually provide any useful signal, as these encoders already enforce an inductive bias that neighboring nodes have similar representations.

In this work, we propose an alternative objective for unsupervised graph learning that is based upon mutual information, rather than random walks. Recently, scalable estimation of mutual information was made both possible and practical through Mutual Information Neural Estimation (MINE; Belghazi et al., 2018), which relies on training a statistics network as a classifier of samples coming from the joint distribution of two random variables and their product of marginals. Following MINE, Hjelm et al. (2018) introduced Deep InfoMax (DIM) for learning representations of high-dimensional data.
DIM trains an encoder model to maximize the mutual information between a high-level "global" representation and "local" parts of the input (such as patches of an image). This encourages the encoder to carry the type of information that is present in all locations (and thus is globally relevant), such as would be the case of a class label. DIM relies heavily on convolutional neural network structure in the context of image data, and to our knowledge, no work has applied mutual information maximization to graph-structured inputs. Here, we adapt ideas from DIM to the graph domain, which can be thought of as having a more general type of structure than the ones captured by convolutional neural networks.

In the following sections, we introduce our method, called Deep Graph Infomax (DGI). We demonstrate that the representation learned by DGI is consistently competitive on both transductive and inductive classification tasks, often outperforming both supervised and unsupervised strong baselines in our experiments.

2 RELATED WORK

Contrastive methods. An important approach for unsupervised learning of representations is to train an encoder to be contrastive between representations that capture statistical dependencies of interest and those that do not. For example, a contrastive approach may employ a scoring function, training the encoder to increase the score on "real" input (a.k.a. positive examples) and decrease the score on "fake" input (a.k.a. negative samples). Contrastive methods are central to many popular word-embedding methods (Collobert & Weston, 2008; Mnih & Kavukcuoglu, 2013; Mikolov et al., 2013), but they are found in many unsupervised algorithms for learning representations of graph-structured input as well. There are many ways to score a representation, but in the graph literature the most common techniques use classification (Perozzi et al., 2014; Grover & Leskovec, 2016; Kipf & Welling, 2016b; Hamilton et al., 2017b), though other scoring functions are used (Duran & Niepert, 2017; Bojchevski & Günnemann, 2018). DGI is also contrastive in this respect, as our objective is based on classifying local-global pairs and negative-sampled counterparts.

Sampling strategies. A key implementation detail for contrastive methods is how to draw positive and negative samples. The prior work above on unsupervised graph representation learning relies on a local contrastive loss (enforcing proximal nodes to have similar embeddings). Positive samples typically correspond to pairs of nodes that appear together within short random walks in the graph; from a language modelling perspective, this effectively treats nodes as words and random walks as sentences. Recent work by Bojchevski & Günnemann (2018) uses node-anchored sampling as an alternative. The negative sampling for these methods is primarily based on sampling random pairs, with recent work adapting this approach to use a curriculum-based negative sampling scheme (with progressively "closer" negative examples; Ying et al., 2018a) or introducing an adversary to select the negative examples (Bose et al., 2018).

Predictive coding. Contrastive predictive coding (CPC; Oord et al., 2018) is another method for learning deep representations based on mutual information maximization. Like the models above, CPC is also contrastive, in this case using an estimate of the conditional density (in the form of noise-contrastive estimation; Gutmann & Hyvärinen, 2010) as the scoring function.
However, unlike our approach, CPC and the graph methods above are all predictive: the contrastive objective effectively trains a predictor between structurally-specified parts of the input (e.g., between neighboring node pairs or between a node and its neighborhood). Our approach differs in that we contrast global/local parts of a graph simultaneously, where the global variable is computed from all local variables. To the best of our knowledge, the sole prior works that instead focus on contrasting "global" and "local" representations on graphs do so via (auto-)encoding objectives on the adjacency matrix (Wang et al., 2016) and incorporation of community-level constraints into node embeddings (Wang et al., 2017). Both methods rely on matrix factorization-style losses and are thus not scalable to larger graphs.

3 DGI METHODOLOGY

In this section, we will present the Deep Graph Infomax method in a top-down fashion: starting with an abstract overview of our specific unsupervised learning setup, followed by an exposition of the objective function optimized by our method, and concluding by enumerating all the steps of our procedure in a single-graph setting.

3.1 GRAPH-BASED UNSUPERVISED LEARNING

We assume a generic graph-based unsupervised machine learning setup: we are provided with a set of node features, $\mathbf{X} = \{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_N\}$, where $N$ is the number of nodes in the graph and $\vec{x}_i \in \mathbb{R}^F$ represents the features of node $i$. We are also provided with relational information between these nodes in the form of an adjacency matrix, $\mathbf{A} \in \mathbb{R}^{N \times N}$. While $\mathbf{A}$ may consist of arbitrary real numbers (or even arbitrary edge features), in all our experiments we will assume the graphs to be unweighted, i.e. $A_{ij} = 1$ if there exists an edge $i \to j$ in the graph and $A_{ij} = 0$ otherwise.

Our objective is to learn an encoder, $\mathcal{E} : \mathbb{R}^{N \times F} \times \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times F'}$, such that $\mathcal{E}(\mathbf{X}, \mathbf{A}) = \mathbf{H} = \{\vec{h}_1, \vec{h}_2, \ldots, \vec{h}_N\}$ represents high-level representations $\vec{h}_i \in \mathbb{R}^{F'}$ for each node $i$. These representations may then be retrieved and used for downstream tasks, such as node classification.

Here we will focus on graph convolutional encoders, a flexible class of node embedding architectures which generate node representations by repeated aggregation over local node neighborhoods (Gilmer et al., 2017). A key consequence is that the produced node embeddings, $\vec{h}_i$, summarize a patch of the graph centered around node $i$ rather than just the node itself. In what follows, we will often refer to $\vec{h}_i$ as patch representations to emphasize this point.

3.2 LOCAL-GLOBAL MUTUAL INFORMATION MAXIMIZATION

Our approach to learning the encoder relies on maximizing local mutual information; that is, we seek to obtain node (i.e., local) representations that capture the global information content of the entire graph, represented by a summary vector, $\vec{s}$.

In order to obtain the graph-level summary vectors, $\vec{s}$, we leverage a readout function, $\mathcal{R} : \mathbb{R}^{N \times F} \to \mathbb{R}^{F}$, and use it to summarize the obtained patch representations into a graph-level representation; i.e., $\vec{s} = \mathcal{R}(\mathcal{E}(\mathbf{X}, \mathbf{A}))$.

As a proxy for maximizing the local mutual information, we employ a discriminator, $\mathcal{D} : \mathbb{R}^{F} \times \mathbb{R}^{F} \to \mathbb{R}$, such that $\mathcal{D}(\vec{h}_i, \vec{s})$ represents the probability score assigned to this patch-summary pair (which should be higher for patches contained within the summary).

Negative samples for $\mathcal{D}$ are provided by pairing the summary $\vec{s}$ from $(\mathbf{X}, \mathbf{A})$ with patch representations $\tilde{h}_j$ of an alternative graph, $(\tilde{\mathbf{X}}, \tilde{\mathbf{A}})$. In a multi-graph setting, such graphs may be obtained as other elements of a training set.
However, for a single graph, an explicit (stochastic) corruption function, $\mathcal{C} : \mathbb{R}^{N \times F} \times \mathbb{R}^{N \times N} \to \mathbb{R}^{M \times F} \times \mathbb{R}^{M \times M}$, is required to obtain a negative example from the original graph, i.e. $(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) = \mathcal{C}(\mathbf{X}, \mathbf{A})$. The choice of the negative sampling procedure will govern the specific kinds of structural information that are desirable to be captured as a byproduct of this maximization.

For the objective, we follow the intuitions from Deep InfoMax (DIM; Hjelm et al., 2018) and use a noise-contrastive type objective with a standard binary cross-entropy (BCE) loss between the samples from the joint (positive examples) and the product of marginals (negative examples). Following their work, we use the following objective¹:

$$\mathcal{L} = \frac{1}{N + M} \left( \sum_{i=1}^{N} \mathbb{E}_{(\mathbf{X}, \mathbf{A})} \left[ \log \mathcal{D}\left(\vec{h}_i, \vec{s}\right) \right] + \sum_{j=1}^{M} \mathbb{E}_{(\tilde{\mathbf{X}}, \tilde{\mathbf{A}})} \left[ \log \left( 1 - \mathcal{D}\left(\tilde{h}_j, \vec{s}\right) \right) \right] \right) \quad (1)$$

This approach effectively maximizes mutual information between $\vec{h}_i$ and $\vec{s}$, based on the Jensen-Shannon divergence² between the joint and the product of marginals.

As all of the derived patch representations are driven to preserve mutual information with the global graph summary, this allows for discovering and preserving similarities on the patch level; for example, distant nodes with similar structural roles (which are known to be a strong predictor for many node classification tasks; Donnat et al., 2018). Note that this is a "reversed" version of the argument given by Hjelm et al. (2018): for node classification, our aim is for the patches to establish links to similar patches across the graph, rather than enforcing the summary to contain all of these similarities (however, both of these effects should in principle occur simultaneously).

3.3 THEORETICAL MOTIVATION

We now provide some intuition that connects the classification error of our discriminator to mutual information maximization on graph representations.

Lemma 1. Let $\{\mathbf{X}^{(k)}\}_{k=1}^{|\mathbf{X}|}$ be a set of node representations drawn from an empirical probability distribution of graphs, $p(\mathbf{X})$, with a finite number of elements, $|\mathbf{X}|$, such that $p(\mathbf{X}^{(k)}) = p(\mathbf{X}^{(k')})$ for all $k, k'$. Let $\mathcal{R}(\cdot)$ be a deterministic readout function on graphs and $\vec{s}^{(k)} = \mathcal{R}(\mathbf{X}^{(k)})$ be the summary vector of the $k$-th graph, with marginal distribution $p(\vec{s})$. The optimal classifier between the joint distribution $p(\mathbf{X}, \vec{s})$ and the product of marginals $p(\mathbf{X})p(\vec{s})$, assuming class balance, has an error rate upper bounded by $\mathrm{Err}^* = \frac{1}{2} \sum_{k=1}^{|\mathbf{X}|} p(\vec{s}^{(k)})^2$. This upper bound is achieved if $\mathcal{R}$ is injective.

Proof. Denote by $Q^{(k)}$ the set of all graphs in the input set that are mapped to $\vec{s}^{(k)}$ by $\mathcal{R}$, i.e. $Q^{(k)} = \{\mathbf{X}^{(j)} \mid \mathcal{R}(\mathbf{X}^{(j)}) = \vec{s}^{(k)}\}$. As $\mathcal{R}(\cdot)$ is deterministic, samples from the joint, $(\mathbf{X}^{(k)}, \vec{s}^{(k)})$, are drawn from the product of marginals with probability $p(\vec{s}^{(k)})\, p(\mathbf{X}^{(k)})$, which decomposes into:

$$p(\vec{s}^{(k)}) \sum_{\vec{s}} p(\mathbf{X}^{(k)}, \vec{s}) = p(\vec{s}^{(k)})\, p(\mathbf{X}^{(k)} \mid \vec{s}^{(k)})\, p(\vec{s}^{(k)}) = \frac{p(\mathbf{X}^{(k)})}{\sum_{\mathbf{X}' \in Q^{(k)}} p(\mathbf{X}')}\, p(\vec{s}^{(k)})^2 \quad (2)$$

For convenience, let $\rho^{(k)} = \frac{p(\mathbf{X}^{(k)})}{\sum_{\mathbf{X}' \in Q^{(k)}} p(\mathbf{X}')}$. As, by definition, $\mathbf{X}^{(k)} \in Q^{(k)}$, it holds that $\rho^{(k)} \leq 1$. This probability ratio is maximized at 1 when $Q^{(k)} = \{\mathbf{X}^{(k)}\}$, i.e. when $\mathcal{R}$ is injective for $\mathbf{X}^{(k)}$. The probability of drawing any sample of the joint from the product of marginals is then bounded above by $\sum_{k=1}^{|\mathbf{X}|} p(\vec{s}^{(k)})^2$. As the probability of drawing $(\mathbf{X}^{(k)}, \vec{s}^{(k)})$ from the joint is $\rho^{(k)} p(\vec{s}^{(k)}) \geq \rho^{(k)} p(\vec{s}^{(k)})^2$, we know that classifying these samples as coming from the joint has a lower error than classifying them as coming from the product of marginals.
The error rate of such a classifier is then the probability of drawing a sample from the joint as a sample from the product of marginals under the mixture probability, which we can bound by $\mathrm{Err} \leq \frac{1}{2} \sum_{k=1}^{|\mathbf{X}|} p(\vec{s}^{(k)})^2$, with the upper bound achieved, as above, when $\mathcal{R}(\cdot)$ is injective for all elements of $\{\mathbf{X}^{(k)}\}$.

It may be useful to note that $\frac{1}{2|\mathbf{X}|} \leq \mathrm{Err}^* \leq \frac{1}{2}$. The first result is obtained via a trivial application of Jensen's inequality, while the other extreme is reached only in the edge case of a constant readout function (when every example from the joint is also an example from the product of marginals, so no classifier performs better than chance).

Corollary 1. From now on, assume that the readout function used, $\mathcal{R}$, is injective. Assume the number of allowable states in the space of $\vec{s}$, $|\vec{s}|$, is greater than or equal to $|\mathbf{X}|$. Then, for $\vec{s}^*$, the optimal summary under the classification error of an optimal classifier between the joint and the product of marginals, it holds that $|\vec{s}^*| = |\mathbf{X}|$.

Proof. By injectivity of $\mathcal{R}$, we know that $\vec{s}^* = \arg\min_{\vec{s}} \mathrm{Err}^*$. As the upper error bound, $\mathrm{Err}^*$, is a simple geometric sum, we know that it is minimized when $p(\vec{s}^{(k)})$ is uniform. As $\mathcal{R}(\cdot)$ is deterministic, this implies that each potential summary state would need to be used at least once. Combined with the condition $|\vec{s}| \geq |\mathbf{X}|$, we conclude that the optimum has $|\vec{s}^*| = |\mathbf{X}|$.

Theorem 1. $\vec{s}^* = \arg\max_{\vec{s}} \mathrm{MI}(\mathbf{X}; \vec{s})$, where $\mathrm{MI}$ is mutual information.

Proof. This follows from the fact that mutual information is invariant under invertible transforms. As $|\vec{s}^*| = |\mathbf{X}|$ and $\mathcal{R}$ is injective, it has an inverse function, $\mathcal{R}^{-1}$. It follows then that, for any $\vec{s}$, $\mathrm{MI}(\mathbf{X}; \vec{s}) \leq H(\mathbf{X}) = \mathrm{MI}(\mathbf{X}; \mathbf{X}) = \mathrm{MI}(\mathbf{X}; \mathcal{R}(\mathbf{X})) = \mathrm{MI}(\mathbf{X}; \vec{s}^*)$, where $H$ is entropy.

Theorem 1 shows that for finite input sets and suitable deterministic functions, minimizing the classification error in the discriminator can be used to maximize the mutual information between the input and output. However, as was shown in Hjelm et al. (2018), this objective alone is not enough to learn useful representations. As in their work, we discriminate between the global summary vector and local high-level representations.

Theorem 2. Let $\mathbf{X}_i^{(k)} = \{\vec{x}_j\}_{j \in n(\mathbf{X}^{(k)}, i)}$ be the neighborhood of node $i$ in the $k$-th graph that collectively maps to its high-level features, $\vec{h}_i = \mathcal{E}(\mathbf{X}_i^{(k)})$, where $n$ is the neighborhood function that returns the set of neighborhood indices of node $i$ for graph $\mathbf{X}^{(k)}$, and $\mathcal{E}$ is a deterministic encoder function. Let us assume that $|\mathbf{X}_i| = |\mathbf{X}| = |\vec{s}| \geq |\vec{h}_i|$. Then, the $\vec{h}_i$ that minimizes the classification error between $p(\vec{h}_i, \vec{s})$ and $p(\vec{h}_i)p(\vec{s})$ also maximizes $\mathrm{MI}(\mathbf{X}_i^{(k)}; \vec{h}_i)$.

Proof. Given our assumption of $|\mathbf{X}_i| = |\vec{s}|$, there exists an inverse $\mathbf{X}_i = \mathcal{R}^{-1}(\vec{s})$, and therefore $\vec{h}_i = \mathcal{E}(\mathcal{R}^{-1}(\vec{s}))$, i.e. there exists a deterministic function $(\mathcal{E} \circ \mathcal{R}^{-1})$ mapping $\vec{s}$ to $\vec{h}_i$. The optimal classifier between the joint $p(\vec{h}_i, \vec{s})$ and the product of marginals $p(\vec{h}_i)p(\vec{s})$ then has (by Lemma 1) an error rate upper bound of $\mathrm{Err}^* = \frac{1}{2} \sum_{k=1}^{|\mathbf{X}|} p(\vec{h}_i^{(k)})^2$. Therefore (as in Corollary 1), for the optimal $\vec{h}_i$, $|\vec{h}_i| = |\mathbf{X}_i|$, which by the same arguments as in Theorem 1 maximizes the mutual information between the neighborhood and high-level features, $\mathrm{MI}(\mathbf{X}_i^{(k)}; \vec{h}_i)$.

¹ Note that Hjelm et al. (2018) use a softplus version of the binary cross-entropy.
² The "GAN" distance defined here (as per Goodfellow et al. (2014) and Nowozin et al. (2016)) and the Jensen-Shannon divergence can be related by $D_{\mathrm{GAN}} = 2 D_{\mathrm{JS}} - \log 4$. Therefore, any parameters that optimize one also optimize the other.
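Having connected the discriminator's error to mutual information, it may help to see the objective of Equation 1 in code. The following is a minimal numpy sketch, assuming the discriminator outputs (probabilities in (0, 1)) for the positive and negative pairs have already been computed; the function name and the epsilon guard are ours.

```python
import numpy as np

def dgi_objective(pos_scores, neg_scores, eps=1e-8):
    """Equation 1: BCE-style contrast between positive pairs D(h_i, s)
    and negative pairs D(h~_j, s), averaged over all N + M pairs."""
    n, m = len(pos_scores), len(neg_scores)
    value = (np.log(pos_scores + eps).sum()
             + np.log(1.0 - neg_scores + eps).sum()) / (n + m)
    return value  # maximized in training (equivalently, minimize -value)
```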
The theory above motivates our use of a classifier between samples from the joint and the product of marginals, and using the binary cross-entropy (BCE) loss to optimize this classifier is well understood in the context of neural network optimization.

3.4 OVERVIEW OF DGI

Assuming the single-graph setup (i.e., $(\mathbf{X}, \mathbf{A})$ provided as input), we will now summarize the steps of the Deep Graph Infomax procedure:

1. Sample a negative example by using the corruption function: $(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) \sim \mathcal{C}(\mathbf{X}, \mathbf{A})$.
2. Obtain patch representations, $\vec{h}_i$, for the input graph by passing it through the encoder: $\mathbf{H} = \mathcal{E}(\mathbf{X}, \mathbf{A}) = \{\vec{h}_1, \vec{h}_2, \ldots, \vec{h}_N\}$.
3. Obtain patch representations, $\tilde{h}_j$, for the negative example by passing it through the encoder: $\tilde{\mathbf{H}} = \mathcal{E}(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) = \{\tilde{h}_1, \tilde{h}_2, \ldots, \tilde{h}_M\}$.
4. Summarize the input graph by passing its patch representations through the readout function: $\vec{s} = \mathcal{R}(\mathbf{H})$.
5. Update the parameters of $\mathcal{E}$, $\mathcal{R}$ and $\mathcal{D}$ by applying gradient descent to maximize Equation 1.

This algorithm is fully summarized by Figure 1.

4 CLASSIFICATION PERFORMANCE

We have assessed the benefits of the representation learnt by the DGI encoder on a variety of node classification tasks (transductive as well as inductive), obtaining competitive results. In each case, DGI was used to learn patch representations in a fully unsupervised manner, followed by evaluating the node-level classification utility of these representations. This was performed by directly using these representations to train and test a simple linear (logistic regression) classifier.

4.1 DATASETS

We follow the experimental setup described in Kipf & Welling (2016a) and Hamilton et al. (2017a) on the following benchmark tasks: (1) classifying research papers into topics on the Cora, Citeseer and Pubmed citation networks (Sen et al., 2008); (2) predicting the community structure of a social network modeled with Reddit posts; and (3) classifying protein roles within protein-protein interaction (PPI) networks (Zitnik & Leskovec, 2017), requiring generalisation to unseen networks. Further information on the datasets may be found in Table 1 and Appendix A.

4.2 EXPERIMENTAL SETUP

For each of the three experimental settings (transductive learning, inductive learning on large graphs, and multiple graphs), we employed distinct encoders and corruption functions appropriate to that setting (described below).

Transductive learning. For the transductive learning tasks (Cora, Citeseer and Pubmed), our encoder is a one-layer Graph Convolutional Network (GCN) model (Kipf & Welling, 2016a), with the following propagation rule:

$$\mathcal{E}(\mathbf{X}, \mathbf{A}) = \sigma\left(\hat{\mathbf{D}}^{-\frac{1}{2}} \hat{\mathbf{A}} \hat{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta}\right) \quad (3)$$

where $\hat{\mathbf{A}} = \mathbf{A} + \mathbf{I}_N$ is the adjacency matrix with inserted self-loops and $\hat{\mathbf{D}}$ is its corresponding degree matrix, i.e. $\hat{D}_{ii} = \sum_j \hat{A}_{ij}$. For the nonlinearity, $\sigma$, we have applied the parametric ReLU (PReLU) function (He et al., 2015), and $\mathbf{\Theta} \in \mathbb{R}^{F \times F'}$ is a learnable linear transformation applied to every node, with $F' = 512$ features being computed (specifically, $F' = 256$ on Pubmed due to memory limitations).

The corruption function used in this setting is designed to encourage the representations to properly encode structural similarities of different nodes in the graph; for this purpose, $\mathcal{C}$ preserves the original adjacency matrix ($\tilde{\mathbf{A}} = \mathbf{A}$), whereas the corrupted features, $\tilde{\mathbf{X}}$, are obtained by row-wise shuffling of $\mathbf{X}$. That is, the corrupted graph consists of exactly the same nodes as the original graph, but they are located in different places in the graph, and will therefore receive different patch representations.
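For illustration, a minimal dense numpy sketch of this transductive encoder and corruption function follows; it assumes graphs small enough for dense matrices, and it fixes the PReLU slope to a constant even though the slope is a learnable parameter in the actual model.

```python
import numpy as np

def gcn_encoder(X, A, Theta, alpha=0.25):
    """Equation 3: E(X, A) = PReLU(D_hat^{-1/2} A_hat D_hat^{-1/2} X Theta),
    with A_hat = A + I_N and D_hat its degree matrix."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))      # D_hat^{-1/2} A_hat D_hat^{-1/2}
    H = S @ X @ Theta
    return np.where(H > 0, H, alpha * H)     # PReLU (fixed slope here)

def corrupt_features(X, seed=None):
    """Transductive corruption: A_tilde = A, X_tilde = row-shuffled X."""
    rng = np.random.default_rng(seed)
    return X[rng.permutation(X.shape[0])]
```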
We demonstrate that DGI is stable under other choices of corruption function in Appendix C, but we find that those which preserve the graph structure result in the strongest features.

Inductive learning on large graphs. For inductive learning, we may no longer use the GCN update rule in our encoder (as the learned filters rely on a fixed and known adjacency matrix); instead, we apply the mean-pooling propagation rule, as used by GraphSAGE-GCN (Hamilton et al., 2017a):

$$\mathrm{MP}(\mathbf{X}, \mathbf{A}) = \hat{\mathbf{D}}^{-1} \hat{\mathbf{A}} \mathbf{X} \mathbf{\Theta} \quad (4)$$

with parameters defined as in Equation 3. Note that multiplying by $\hat{\mathbf{D}}^{-1}$ actually performs a normalized sum (hence the mean-pooling). While Equation 4 explicitly specifies the adjacency and degree matrices, they are not needed: identical inductive behaviour may be obtained by a constant attention mechanism across the node's neighbors, as used by the Const-GAT model (Veličković et al., 2018).

For Reddit, our encoder is a three-layer mean-pooling model with skip connections (He et al., 2016):

$$\widetilde{\mathrm{MP}}(\mathbf{X}, \mathbf{A}) = \sigma\left(\mathbf{X}\mathbf{\Theta}' \,\|\, \mathrm{MP}(\mathbf{X}, \mathbf{A})\right), \qquad \mathcal{E}(\mathbf{X}, \mathbf{A}) = \widetilde{\mathrm{MP}}_3(\widetilde{\mathrm{MP}}_2(\widetilde{\mathrm{MP}}_1(\mathbf{X}, \mathbf{A}), \mathbf{A}), \mathbf{A}) \quad (5)$$

where $\|$ is featurewise concatenation (i.e. the central node and its neighborhood are handled separately). We compute $F' = 512$ features in each $\mathrm{MP}$ layer, with the PReLU activation for $\sigma$.

Given the large scale of the dataset, it will not fit into GPU memory entirely. Therefore, we use the subsampling approach of Hamilton et al. (2017a), where a minibatch of nodes is first selected, and then a subgraph centered around each of them is obtained by sampling node neighborhoods with replacement. Specifically, we sample 10, 10 and 25 neighbors at the first, second and third level, respectively; thus, each subsampled patch has 1 + 10 + 100 + 2500 = 2611 nodes. Only the computations necessary for deriving the central node $i$'s patch representation, $\vec{h}_i$, are performed. These representations are then used to derive the summary vector, $\vec{s}$, for the minibatch (Figure 2). We used minibatches of 256 nodes throughout training.

To define our corruption function in this setting, we use a similar approach as in the transductive tasks, but treat each subsampled patch as a separate graph to be corrupted (i.e., we row-wise shuffle the feature matrices within a subsampled patch). Note that this may very likely cause the central node's features to be swapped out for a sampled neighbor's features, further encouraging diversity in the negative samples. The patch representation obtained in the central node is then submitted to the discriminator.

Inductive learning on multiple graphs. For the PPI dataset, inspired by previous successful supervised architectures (Veličković et al., 2018), our encoder is a three-layer mean-pooling model with dense skip connections (He et al., 2016; Huang et al., 2017):

$$\mathbf{H}_1 = \sigma\left(\mathrm{MP}_1(\mathbf{X}, \mathbf{A})\right) \quad (6)$$

$$\mathbf{H}_2 = \sigma\left(\mathrm{MP}_2(\mathbf{H}_1 + \mathbf{X}\mathbf{W}_{\mathrm{skip}}, \mathbf{A})\right) \quad (7)$$

$$\mathcal{E}(\mathbf{X}, \mathbf{A}) = \sigma\left(\mathrm{MP}_3(\mathbf{H}_2 + \mathbf{H}_1 + \mathbf{X}\mathbf{W}_{\mathrm{skip}}, \mathbf{A})\right) \quad (8)$$

where $\mathbf{W}_{\mathrm{skip}}$ is a learnable projection matrix and $\mathrm{MP}$ is as defined in Equation 4. We compute $F' = 512$ features in each $\mathrm{MP}$ layer, using the PReLU activation for $\sigma$.

In this multiple-graph setting, we opted to use randomly sampled training graphs as negative examples (i.e., our corruption function simply samples a different graph from the training set). We found this method to be the most stable, considering that over 40% of the nodes have all-zero features in this dataset. To further expand the pool of negative examples, we also apply dropout (Srivastava et al., 2014) to the input features of the sampled graph.
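The following is a small numpy sketch of the mean-pooling rule of Equation 4 and the skip-connected layer of Equation 5; the helper names are ours, the adjacency is assumed to be self-loop-augmented as in Equation 3, and the nonlinearity sigma is left to the caller.

```python
import numpy as np

def mean_pool_prop(X, A, Theta):
    """Equation 4: MP(X, A) = D_hat^{-1} A_hat X Theta; dividing by the
    degree turns the neighbourhood sum into a mean."""
    A_hat = A + np.eye(A.shape[0])
    return (A_hat / A_hat.sum(axis=1, keepdims=True)) @ X @ Theta

def mp_with_skip(X, A, Theta, Theta_skip):
    """Pre-activation of Equation 5: the central node (X Theta') is handled
    separately and concatenated featurewise with its pooled neighbourhood."""
    return np.concatenate([X @ Theta_skip, mean_pool_prop(X, A, Theta)], axis=1)
```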
We found it beneficial to standardize the learnt embeddings across the training set prior to providing them to the logistic regression model.

Readout, discriminator, and additional training details. Across all three experimental settings, we employed identical readout functions and discriminator architectures. For the readout function, we use a simple averaging of all the nodes' features:

$$\mathcal{R}(\mathbf{H}) = \sigma\left(\frac{1}{N} \sum_{i=1}^{N} \vec{h}_i\right) \quad (9)$$

where $\sigma$ is the logistic sigmoid nonlinearity. While we have found this readout to perform the best across all our experiments, we assume that its power will diminish with an increase in graph size; in those cases, more sophisticated readout architectures such as set2vec (Vinyals et al., 2015) or DiffPool (Ying et al., 2018b) are likely to be more appropriate.

The discriminator scores summary-patch representation pairs by applying a simple bilinear scoring function (similar to the scoring used by Oord et al. (2018)):

$$\mathcal{D}(\vec{h}_i, \vec{s}) = \sigma\left(\vec{h}_i^{\top} \mathbf{W} \vec{s}\right) \quad (10)$$

Here, $\mathbf{W}$ is a learnable scoring matrix and $\sigma$ is the logistic sigmoid nonlinearity, used to convert scores into probabilities of $(\vec{h}_i, \vec{s})$ being a positive example.

All models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to maximize the objective provided in Equation 1 on the available nodes (all nodes in the transductive setup, and training nodes only in the inductive setup) using the Adam SGD optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.001 (specifically, $10^{-5}$ on Reddit). On the transductive datasets, we use an early stopping strategy on the observed training loss, with a patience of 20 epochs. (A reference DGI implementation may be found at https://github.com/PetarV-/DGI.) On the inductive datasets we train for a fixed number of epochs (150 on Reddit, 20 on PPI).

4.3 RESULTS

The results of our comparative evaluation experiments are summarized in Table 2.

For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 50 runs of training (followed by logistic regression), and reuse the metrics already reported in Kipf & Welling (2016a) for the performance of DeepWalk and GCN, as well as Label Propagation (LP; Zhu et al., 2003) and Planetoid (Yang et al., 2016), a representative supervised random walk method. Additionally, we provide results for training the logistic regression on raw input features, as well as on DeepWalk with the input features concatenated.

For the inductive tasks, we report the micro-averaged F1 score on the (unseen) test nodes, averaged after 50 runs of training, and reuse the metrics already reported in Hamilton et al. (2017a) for the other techniques. Specifically, as our setup is unsupervised, we compare against the unsupervised GraphSAGE approaches. We also provide supervised results for two related architectures: FastGCN (Chen et al., 2018) and Avg. pooling (Zhang et al., 2018).

Our results demonstrate strong performance being achieved across all five datasets. We particularly note that the DGI approach is competitive with the results reported for the GCN model with the supervised loss, even exceeding its performance on the Cora and Citeseer datasets. We assume that these benefits stem from the fact that, indirectly, the DGI approach allows for every node to have access to structural properties of the entire graph, whereas the supervised GCN is limited to only two-layer neighborhoods (by the extreme sparsity of the training signal and the corresponding threat of overfitting).
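Putting Equations 9 and 10 together with the pieces sketched earlier, one DGI scoring pass can be written as follows; this is a sketch only, and in the model the matrix W is learned jointly with the encoder.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def readout(H):
    """Equation 9: summary s = sigmoid of the mean patch representation."""
    return sigmoid(H.mean(axis=0))

def discriminate(H, s, W):
    """Equation 10: bilinear probability sigma(h_i^T W s) for every row of H."""
    return sigmoid(H @ (W @ s))

# One scoring pass (steps 2-4 of Section 3.4), given encoder outputs H, H_neg:
# s = readout(H); pos = discriminate(H, s, W); neg = discriminate(H_neg, s, W)
```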
1. What is the main contribution of the paper, and how does it propose to learn node representations?
2. What are some potential issues with the presentation and naming used in the paper?
3. How does the proposed approach differ from other methods, specifically regarding its use of an encoder and a readout function?
4. What are some concerns regarding the scoring function used in the paper, and how might it impact the results?
5. Can you provide more details on the downstream classification results mentioned in the review?
6. How do the learned node representations compare to those obtained using supervised methods?
7. What is the difference between the GCN model used in the fully supervised setting and the semi-supervised method proposed in the paper?
8. Can you explain the concept of a "randomly initialized graph convolutional network" and how it differs from the author's proposal?
Review
This paper proposes an unsupervised approach to learning node representations. The basic steps are: (1) use an encoder E to learn node vectors, (2) use a readout function R to summarize the node vectors into a graph vector, (3) use a scoring function D to score how well the node vectors are aligned with the graph vector, and (4) maximize the scores for the given graph while minimizing those from the negative distribution. I feel that the idea is interesting; however, the paper is not well written, and the realization of the idea has drawbacks as well.

1. The presentation of Section 3.2 can be improved. The proposed approach becomes clear only toward the end.
2. Naming and wording are misleading. The title and the whole paper use the wording "mutual information", whereas in reality, the loss function is a cross-entropy.
3. In equation (1), it is unclear why the authors take the expectation with respect to the distribution of graphs before summing the scores for one particular graph. Should the order of the expectation and summation be swapped?
4. The proposal is more like a framework than a specific method. The encoder and the negative distribution need to be separately designed for different graphs.

Good things about the proposal:

5. The downstream classification results are quite comparable to those of supervised methods (except for the PPI data).
6. The learned node representations possess a clear clustering structure (Figure 3).

Minor comments:

7. In the third paragraph of Section 4.3, the authors state that "... for the GCN model in the fully supervised setting". GCN should be a semi-supervised method rather than a fully-supervised one.
8. In the last paragraph of Section 4.3, what is a "randomly initialized graph convolutional network" and how is it different from the proposal?
ICLR
Title LanczosNet: Multi-Scale Deep Graph Convolutional Networks

Abstract

We propose the Lanczos network (LanczosNet), which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximate computation of matrix powers but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning and learning node embeddings. We show the connection between our LanczosNet and graph-based manifold learning methods, especially diffusion maps. We benchmark our model against several recent deep graph networks on citation networks and the QM8 quantum chemistry dataset. Experimental results show that our model achieves state-of-the-art performance in most tasks.

1 INTRODUCTION

Graph-structured data is ubiquitous in real-world applications, such as social networks, gene expression regulatory networks, protein-protein interactions, and many other physical systems. How to model such data using machine learning, especially deep learning, has become a central research question [1]. For supervised and semi-supervised tasks such as graph or node classification and regression, learning based models can be roughly categorized into two classes, formulated either in terms of graph convolutions [2] or recurrent neural networks [3].

Methods based on recurrent neural networks (RNNs), especially graph neural networks (GNNs) [3], repeatedly unroll a message passing process over the graph by exchanging information between the nodes. In theory, a GNN can have as large a model capacity as its convolutional counterpart. However, due to the instability of RNN dynamics and the difficulty of optimization, GNNs and their variants are generally slower and harder to train. In this paper we focus on graph convolution based methods. Built on top of graph signal processing (GSP) approaches [4], these methods extend convolution operators to graphs by leveraging spectral graph theory, graph wavelet theory, etc. Graph convolutions can be stacked and combined with nonlinear activation functions to build deep models, just as in regular convolutional neural networks (CNNs). They often have large model capacity and achieve promising results. Also, graph convolution can be easily implemented with modern scientific computing libraries.

There are two main issues with current graph convolution approaches. First, it is not clear how to efficiently leverage multi-scale information except by directly stacking multiple layers. Having an effective multi-scale scheme is key for enabling the model to be invariant to scale changes and to capture many intrinsic regularities [5, 6]. Graph coarsening methods have been proposed to form a hierarchy of multi-scale graphs [7], but this coarsening process is fixed during both inference and learning, which may introduce bias. Alternatively, the graph signal can be multiplied by the exponentiated graph Laplacian, where the exponent indicates the scale of the diffusion process on the graph [8]. Unfortunately, the computation and memory costs increase linearly with the exponent, which prohibits the exploitation of long scale diffusion in practice. Other fast methods for computing matrix powers, such as exponentiation by squaring, are very memory intensive, even for moderately large graphs. Second, spectral filters within current graph convolution based models are mostly fixed.
In the context of image processing, using a Gaussian kernel along with a spectral filter $f(\lambda) = 2\lambda - \lambda^2$ corresponds to running the heat equation forward (blurring) followed by running it backwards (sharpening) [9]. The multi-scale kernels introduced in [10] extend the idea of the forward-backward diffusion process and can be represented as polynomials of matrices related to a Gaussian kernel. Learning the spectral filters is thus beneficial, since it learns the stochastic processes on the graph which produce useful representations for particular tasks. However, how to learn spectral filters with large model capacity is largely underexplored.

In this paper, we propose the Lanczos network (LanczosNet) to overcome the aforementioned issues. First, based on the tridiagonal decomposition implied by the Lanczos algorithm, our model exploits a low rank approximation of the graph Laplacian. This approximation facilitates efficient computation of matrix powers, thus gathering multi-scale information easily. Second, we design learnable spectral filters based on this approximation, which effectively increase model capacity. In scenarios where one wants to learn the graph kernel and/or node embeddings, we propose another variant, the adaptive Lanczos network (AdaLanczosNet), which back-propagates through the Lanczos algorithm. We show that our proposed model is closely related to graph-based manifold learning approaches, such as diffusion maps, which could potentially inspire more work at the intersection of deep graph networks and manifold learning. We benchmark against 9 recent deep graph networks, including both convolutional and RNN based methods, on citation networks and a quantum chemistry graph regression problem, and achieve state-of-the-art results in most tasks.

2 BACKGROUND

In this section, we introduce some background material. A graph $G$ with $N$ nodes is denoted as $G = (\mathcal{V}, \mathcal{E}, A)$, where $A \in \mathbb{R}^{N \times N}$ is an adjacency matrix which could be either binary or real valued. $X \in \mathbb{R}^{N \times F}$ is the compact representation of node features (or the graph signal in the GSP literature). For any node $v \in \mathcal{V}$, we denote its feature as a row vector $X_{v,:} \in \mathbb{R}^{1 \times F}$. We use $X_{:,i}$ to denote the $i$-th column of $X$.

Graph Fourier Transform. Given input node features $X$, we now discuss how to perform a graph convolution. Based on the adjacency matrix $A$, we compute the graph Laplacian $L$, which can be defined in different ways: (1) $L = D - A$; (2) $L = I - D^{-1}A$; (3) $L = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, where $D$ is the diagonal degree matrix with $D_{i,i} = \sum_{j=1}^{N} A_{i,j}$. Definition (3) is often used in the GSP literature, due to the fact that it is real symmetric, positive semi-definite (PSD) and has eigenvalues lying in $[0, 2]$. In certain applications [11], it was found that adding self-loops, i.e., changing $A$ to $A + I$, and using the affinity matrix $S = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ instead of $L$ gives better results. Since $S$ is real symmetric, by spectral decomposition we have $S = U \Lambda U^{\top}$, where $U$ is an orthogonal matrix whose column vectors are the eigenvectors of $S$. The diagonal matrix $\Lambda$ contains the sorted eigenvalues, where $\Lambda_{i,i} = \lambda_i$ and $1 \geq \lambda_1 \geq \cdots \geq \lambda_N \geq -1$. Based on this eigenbasis, we can define the graph Fourier transform $Y = U^{\top} X$ and its inverse transform $\hat{X} = U Y$, following [12]. Note that $L = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ shares the same eigenvectors with $S = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, and the eigenvalues of $L$ are $\mu_i = 1 - \lambda_i$. Therefore, $L$ and $S$ share the same graph Fourier transform, which justifies the usage of $S$.
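A small numpy sketch of these definitions (assuming no isolated nodes, so all degrees are positive); note that np.linalg.eigh returns eigenvalues in ascending order, whereas the text sorts them in descending order.

```python
import numpy as np

def normalized_affinity(A):
    """S = D^{-1/2} A D^{-1/2}; apply to A + I first if self-loops are wanted."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_fourier(S, X):
    """Spectral decomposition S = U diag(lambda) U^T and the graph
    Fourier transform Y = U^T X."""
    lam, U = np.linalg.eigh(S)
    return U.T @ X, lam, U
```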
Different forms of filters can be further constructed in the spectral domain.

Localized Polynomial Filter. A $\tau$-localized polynomial filter is typically adopted in the GSP literature [12]: $g_w(\Lambda) = \sum_{t=0}^{\tau-1} w_t \Lambda^t$, where $w = [w_0, w_1, \ldots, w_{\tau-1}] \in \mathbb{R}^{\tau \times 1}$ is the filter coefficient, i.e., the learnable parameter. The filter is $\tau$-localized in the sense that the filtering leverages information from nodes which are at most $\tau$ hops away. One prominent example of this class is the Chebyshev polynomial filter [7]. Here the graph Laplacian is modified to $\tilde{L} = 2L/\lambda_{\max} - I$ such that its eigenvalues fall into $[-1, 1]$. Then the Chebyshev polynomial recursion is applied: $\tilde{X}(t) = 2\tilde{L}\tilde{X}(t-1) - \tilde{X}(t-2)$, where $\tilde{X}(0) = X$ and $\tilde{X}(1) = \tilde{L}X$. For a pair of input and output channels $(i, j)$, the final filtering becomes $y_{i,j} = [\tilde{X}(0)_{:,i}, \ldots, \tilde{X}(\tau-1)_{:,i}]\, w_{i,j}$, where $[\cdot]$ means concatenation along columns and $w_{i,j} \in \mathbb{R}^{\tau \times 1}$. Chebyshev polynomials provide two benefits: they form an orthogonal basis of $L^2([-1, 1], dy/\sqrt{1-y^2})$, and one avoids the spectral decomposition of $\tilde{L}$ in the filtering. However, the functional form of the spectral filter is not learnable and cannot adapt to the data.

In this paper, instead of using the modified graph Laplacian $\tilde{L}$, we use the aforementioned $S$. Therefore, we can write the localized polynomial filtering in a more general form as

$$Y = \sum_{t=0}^{\tau-1} g_t(S, \ldots, S^t, X)\, W_t, \quad (1)$$

where $g_t$ is a function that takes the node features $X$ and powers of the affinity matrix up to the $t$-th order as input and outputs an $N \times F$ matrix. $W_t \in \mathbb{R}^{F \times O}$ is the corresponding filter coefficient and $Y \in \mathbb{R}^{N \times O}$ is the output. One can easily verify that in the Chebyshev polynomial filter, any $i$-th column of the corresponding $g_t(X, S, \ldots, S^t)$ lies in the Krylov subspace $\mathcal{K}_{t+1}(S, X_{:,i}) \equiv \mathrm{span}\{X_{:,i}, SX_{:,i}, \ldots, S^t X_{:,i}\}$. This naturally motivates the usage of Krylov subspace methods, like the Lanczos algorithm [13], since it provides an orthonormal basis for the above Krylov subspace, thus making the filter coefficients compact.

3 LANCZOS NETWORKS

In this section, we first introduce the Lanczos algorithm, which approximates the affinity matrix $S$. We present our first model, called the Lanczos network (LanczosNet), in which we execute the Lanczos algorithm once per graph and fix the basis throughout inference and learning. Then we introduce the adaptive Lanczos network (AdaLanczosNet), in which we learn the graph kernel and/or node embeddings by back-propagating through the Lanczos algorithm.

Algorithm 1: Lanczos Algorithm
1: Input: $S$, $x$, $K$, $\epsilon$
2: Initialization: $\beta_0 = 0$, $q_0 = 0$, and $q_1 = x/\|x\|$
3: For $j = 1, 2, \ldots, K$:
4:   $z = S q_j$
5:   $\gamma_j = q_j^{\top} z$
6:   $z = z - \gamma_j q_j - \beta_{j-1} q_{j-1}$
7:   $\beta_j = \|z\|_2$
8:   If $\beta_j < \epsilon$, quit
9:   $q_{j+1} = z / \beta_j$
10: $Q = [q_1, q_2, \cdots, q_K]$
11: Construct $T$ following Eq. (2)
12: Eigendecomposition $T = BRB^{\top}$
13: Return $V = QB$ and $R$.

Algorithm 2: LanczosNet
1: Input: signal $X$, Lanczos outputs $V$ and $R$, scale index sets $\mathcal{S}$ and $\mathcal{I}$
2: Initialization: $Y_0 = X$
3: For $\ell = 1, 2, \ldots, \ell_c$:
4:   $Z = Y_{\ell-1}$, $\mathcal{Z} = \emptyset$
5:   For $j = 1, 2, \ldots, \max(\mathcal{S})$:
6:     $Z = SZ$
7:     If $j \in \mathcal{S}$: $\mathcal{Z} = \mathcal{Z} \cup \{Z\}$
8:   For $i = 1, 2, \ldots, |\mathcal{I}|$:
9:     $\mathcal{Z} = \mathcal{Z} \cup \{V \hat{R}(\mathcal{I}_i) V^{\top} Y_{\ell-1}\}$
10:  $Y_{\ell} = \mathrm{concat}(\mathcal{Z})\, W_{\ell}$
11:  If $\ell < \ell_c$: $Y_{\ell} = \mathrm{Dropout}(\sigma(Y_{\ell}))$
12: Return $Y_{\ell_c}$.

3.1 LANCZOS ALGORITHM

Given the aforementioned affinity matrix $S$ (when faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm) and node features $x \in \mathbb{R}^{N \times 1}$, the $N$-step Lanczos algorithm computes an orthogonal matrix $Q$ and a symmetric tridiagonal matrix $T$, such that $Q^{\top} S Q = T$. We denote $Q = [q_1, \cdots, q_N]$, where the column vector $q_i$ is the $i$-th Lanczos vector. $T$ is illustrated below:

$$T = \begin{bmatrix} \gamma_1 & \beta_1 & & \\ \beta_1 & \gamma_2 & \ddots & \\ & \ddots & \ddots & \beta_{N-1} \\ & & \beta_{N-1} & \gamma_N \end{bmatrix} \quad (2)$$
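Below is a minimal dense numpy translation of Algorithm 1, for illustration on small graphs; it assumes a symmetric S and, like the pseudocode, performs no reorthogonalization (which a careful implementation may need for numerical stability).

```python
import numpy as np

def lanczos(S, x, K, eps=1e-8):
    """Algorithm 1: K-step Lanczos. Returns Ritz vectors V = QB and Ritz
    values R, so that S is approximated by V diag(R) V^T."""
    n = S.shape[0]
    Q = np.zeros((n, K))
    gamma, beta = np.zeros(K), np.zeros(K)
    q_prev, q, beta_prev = np.zeros(n), x / np.linalg.norm(x), 0.0
    for j in range(K):
        Q[:, j] = q
        z = S @ q
        gamma[j] = q @ z
        z = z - gamma[j] * q - beta_prev * q_prev
        beta[j] = np.linalg.norm(z)
        if beta[j] < eps:                       # invariant subspace found: quit early
            Q, gamma, beta = Q[:, :j + 1], gamma[:j + 1], beta[:j + 1]
            break
        q_prev, q, beta_prev = q, z / beta[j], beta[j]
    # Tridiagonal T of Eq. (2): gamma on the diagonal, beta off the diagonal.
    T = np.diag(gamma) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    R, B = np.linalg.eigh(T)                    # T = B diag(R) B^T
    return Q @ B, R
```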
3.2 LANCZOSNET

In this section, we first show the construction of the localized polynomial filter based on the Lanczos algorithm's output and discuss its limitations. Then we explain how to construct the spectral filter using a particular low rank approximation and how to further make the filter learnable. Lastly, we elaborate how to construct the multi-scale graph convolution and build a deep network.

Localized Polynomial Filter For ease of demonstrating the concept of the Krylov subspace, we consider a pair of input and output channels (i, j). We denote the input as X_{:,i} ∈ R^{N×1} and the output as Y_{:,j} ∈ R^{N×1}. Executing the Lanczos algorithm for K steps with the normalized X_{:,i} as the starting vector, one can obtain the orthonormal basis Q̃ of K_K(S, X_{:,i}) and the corresponding tridiagonal matrix T̃. Recall that in localized polynomial filtering, given the orthonormal basis of K_K(S, X_{:,i}), one can write the graph convolution as

$$Y_{:,j} = \tilde{Q} w_{i,j}, \qquad (3)$$

where Q̃ ∈ R^{N×K} depends on X_{:,i} and w_{i,j} ∈ R^{K×1} is the learnable parameter. This filter has the benefit that the corresponding learnable coefficients are compact due to the orthonormal basis. However, if one wants to stack multiple graph convolution layers, the dependency of Q̃ on X_{:,i} implies that a separate run of the Lanczos algorithm is necessary for each graph convolution layer, which is computationally demanding.

Spectral Filter Ideally, we would like to compute the Lanczos vectors only once during the inference of a deep graph convolutional network. Luckily, this can be achieved if we take an alternative view of the Lanczos algorithm. In particular, we can choose a random starting vector with unit norm and treat the K-step Lanczos algorithm's output as the low rank approximation S ≈ QTQ^⊤. Note that here Q ∈ R^{N×K} has orthonormal columns and does not depend on the node features X_{:,i}, and T is a K×K tridiagonal matrix. Following [14], we prove the theorem below to bound the approximation error.

Theorem 1. Let UΛU^⊤ be the eigendecomposition of an N×N symmetric matrix S with Λ_{i,i} = λ_i, λ_1 ≥ · · · ≥ λ_N and U = [u_1, . . . , u_N]. Let U_j ≡ span{u_1, . . . , u_j}. Assume the K-step Lanczos algorithm starts with vector v and outputs the orthogonal Q ∈ R^{N×K} and tridiagonal T ∈ R^{K×K}. For any j with 1 < j < N and K > j, we have,

$$\|S - QTQ^\top\|_F^2 \leq \sum_{i=1}^{j} \lambda_i^2 \left( \frac{\sin\angle(v, \mathcal{U}_i) \prod_{k=1}^{j-1} (\lambda_k - \lambda_N)/(\lambda_k - \lambda_j)}{\cos\angle(v, u_i)\, T_{K-i}(1 + 2\gamma_i)} \right)^2 + \sum_{i=j+1}^{N} \lambda_i^2,$$

where T_{K−i}(x) is the Chebyshev polynomial of degree K − i and γ_i = (λ_i − λ_{i+1})/(λ_{i+1} − λ_N). We leave the proof to the appendix. Note that the term (∑_{i=j+1}^{N} λ_i²)^{1/2} is the Frobenius norm of the error between S and its best rank-j approximation S_j. We decompose the tridiagonal matrix T = BRB^⊤, where the K×K diagonal matrix R contains the Ritz values and B ∈ R^{K×K} is an orthogonal matrix. We thus have a low rank approximation of the affinity matrix, S ≈ V RV^⊤, where V = QB.
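As a quick, hedged sanity check of this decomposition (a toy illustration of ours, reusing the hypothetical lanczos() sketch from Sec. 3.1), note that raising S to a power t reduces to raising the Ritz values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 100)); A = (A + A.T) / 2      # toy symmetric adjacency
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))                    # S = D^{-1/2} A D^{-1/2}
V, ritz = lanczos(S, rng.standard_normal(100), K=20)

def approx_power_times(V, ritz, t, X):
    """S^t X  ~  V diag(ritz**t) V^T X: only the Ritz values are powered."""
    return V @ ((ritz ** t)[:, None] * (V.T @ X))

X = rng.standard_normal((100, 8))
Y_exact = np.linalg.matrix_power(S, 30) @ X        # repeated dense mat-mats
Y_lowrank = approx_power_times(V, ritz, 30, X)     # cheap rank-K surrogate
print(np.linalg.norm(Y_exact - Y_lowrank) / np.linalg.norm(Y_exact))
```

Since large powers of S are dominated by its extremal eigenvalues, which the Lanczos iteration approximates first, the printed relative error is typically small even for K much smaller than N.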
With this approximation, we can rewrite the graph convolution as,

$$Y_{:,j} = [X_{:,i}, SX_{:,i}, \ldots, S^{K-1}X_{:,i}]\, w_{i,j} \approx [X_{:,i}, VRV^\top X_{:,i}, \ldots, VR^{K-1}V^\top X_{:,i}]\, w_{i,j}. \qquad (4)$$

The difference between Eq. (3) and Eq. (4) is that the former uses the orthonormal basis while the latter uses an approximation of the direct basis of K_K(S, X_{:,i}). Since we explicitly operate on the approximation of the spectrum, i.e., the Ritz values, this is a spectral filter. Such a filtering form has significant computational benefits when considering long range/scale dependencies, due to the fact that the t-th power of S can be approximated as S^t ≈ V R^t V^⊤, where we only need to raise the diagonal entries of R to the power t.

Learning the Spectral Filter Following the previous filter, one can naturally design learnable spectral filters. Specifically, we use K different spectral filters, of which the k-th output is R̂(k) = f_k([R, R^1, . . . , R^{K−1}]), where f_k is a multi-layer perceptron (MLP) and R denotes the diagonal vector of the corresponding diagonal matrix. We then construct a diagonal matrix R̂(k) based on the vector output of f_k. Therefore, we have the following filtering,

$$Y_{:,j} = [X_{:,i}, V\hat{R}(1)V^\top X_{:,i}, \ldots, V\hat{R}(K-1)V^\top X_{:,i}]\, w_{i,j}. \qquad (5)$$

Note that it includes the polynomial filter as a special case. When positive semi-definiteness is a concern, one can apply an activation function like ReLU to the outputs of the MLPs.

Multi-scale Graph Convolution Using any of the above filters, one can construct a deep graph convolutional network which leverages multi-scale information. Taking the learnable spectral filter as an example, we can write one graph convolution layer compactly as,

$$Y = \left[ S^{\mathcal{S}_1}X, \ldots, S^{\mathcal{S}_M}X, V\hat{R}(\mathcal{I}_1)V^\top X, \ldots, V\hat{R}(\mathcal{I}_E)V^\top X \right] W, \qquad (6)$$

where the weight W ∈ R^{(M+E)F×O}, 𝒮 is a set of M short scale parameters and ℐ is a set of E long scale parameters. We consider non-negative integers as scale parameters, e.g., 𝒮 = {0, 1, . . . , 5}, ℐ = {10, 20, . . . , 50}. Note that the convolution corresponding to short scales is similar to [8], where the number of matrix-vector multiplications is tied to the maximum scale in 𝒮. In contrast, the convolution of long scales decouples the Lanczos step K from the scale parameters ℐ, thus permitting great freedom in tuning the scales as hyperparameters. One can choose K properly to balance the computational cost and the accuracy of the low rank approximation. In our experiments, short scales are typically less than 10, which keeps the computational cost reasonable. Moreover, the short scale part can sometimes remedy cases where the low rank approximation is crude. We set the long scales no larger than 100 in our experiments. If the maximum eigenvalue of S is 1, we can even raise the power to infinity, which corresponds to the equilibrium state of the diffusion process on the graph. To build a deep network, we can stack multiple such graph convolution layers, where each layer has its own spectral filter weights. Nonlinear activation functions, e.g., ReLU, and/or Dropout can be added between layers. The inference algorithm of such a deep network is shown in Alg. 2. With the top layer representation, one can use a softmax to perform classification or a fully connected layer to perform regression. The Lanczos algorithm is run beforehand, once per graph, to construct the network and is not invoked during inference and learning.
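A minimal PyTorch sketch of one such layer (Eq. 6) is given below. It is our own simplified rendering, not the released implementation: we assume a non-empty short scale list, a dense S, and one MLP per long scale playing the role of f_k in Eq. (5), fed with element-wise powers of the Ritz values.

```python
import torch
import torch.nn as nn

class LanczosConv(nn.Module):
    """Sketch of one LanczosNet graph convolution layer (Eq. 6)."""

    def __init__(self, in_dim, out_dim, short_scales, long_scales, K, hidden=128):
        super().__init__()
        self.short_scales, self.long_scales = short_scales, long_scales
        branches = len(short_scales) + len(long_scales)
        self.W = nn.Linear(branches * in_dim, out_dim, bias=False)
        # One learnable spectral filter per long scale: maps the powers
        # [r, r^2, ..., r^K] of each Ritz value to a learned response.
        self.filters = nn.ModuleList([
            nn.Sequential(nn.Linear(K, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in long_scales])

    def forward(self, X, S, V, r):
        # X: [N, F] features, S: [N, N] affinity, V: [N, K], r: [K] Ritz values
        feats, Z, t_max = [], X, max(self.short_scales)
        for t in range(t_max + 1):
            if t in self.short_scales:
                feats.append(Z)               # Z equals S^t X at this point
            if t < t_max:
                Z = S @ Z
        r_pow = torch.stack([r ** t for t in range(1, len(r) + 1)], dim=1)
        for f in self.filters:
            r_hat = f(r_pow).squeeze(-1)      # learned Ritz values, shape [K]
            feats.append(V @ (r_hat[:, None] * (V.T @ X)))
        return self.W(torch.cat(feats, dim=1))
```

For example, LanczosConv(64, 64, short_scales=[0, 1, 2], long_scales=[10, 20, 30], K=20) gives a scale scheme in the style of Sec. 6; nonlinearities and dropout between stacked layers follow Alg. 2.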
3.3 ADALANCZOSNET

In this section, we explain another variant which back-propagates through the Lanczos algorithm. This facilitates learning the graph kernel and/or the node embeddings.

Graph Kernel Assume we are given node features X and a graph G. We are interested in learning a graph kernel function, with the hope that it can capture the intrinsic geometry of the node representations. Given data points x_i, x_j ∈ X, we define the anisotropic graph kernel k : X × X → R as,

$$k(x_i, x_j) = \exp\left( -\frac{\|f_\theta(x_i) - f_\theta(x_j)\|^2}{\epsilon} \right), \qquad (7)$$

where f_θ is an MLP. This class of anisotropic kernels is very expressive and includes the self-tuning kernel [15] and the Gaussian kernel with Mahalanobis distances [16]. Moreover, for different kernel functions, the resulting graph Laplacians converge to different limiting operators asymptotically. For example, even for isotropic Gaussian kernels, the graph Laplacian can converge pointwise to the Laplace-Beltrami operator, the Fokker-Planck operator and the heat kernel under different normalizations [17, 18]. In practice, we notice that choosing ε = ∑_{(p,q)∈E} ‖f_θ(x_p) − f_θ(x_q)‖² / |E| helps normalize the pairwise distances, thus avoiding the gradient vanishing issue caused by the exponential function. This type of learnable anisotropic diffusion is useful in two ways. First, it increases model capacity, thus potentially gaining better performance. Second, it can better adapt to the non-uniform density of the data points on the manifold, or to nonlinear measurements of the underlying data points on a manifold. We can construct an adjacency matrix A such that A_{i,j} = k(x_i, x_j) if (i, j) ∈ E and A_{i,j} = 0 otherwise. Then we can obtain the affinity matrix S = D^{−1/2} A D^{−1/2}.

Node Embedding In some applications, we do not observe the node features X but only the graph G itself, so we may need to learn an embedding vector per node. For example, this scenario applies to quantum chemistry tasks, where a node, i.e., an atom within a molecule, rarely has observed features. We can still use the above graph kernel to construct the affinity matrix, which results in the same form except that f is discarded. Learning the embedding X naturally amounts to learning the similarities between nodes.

Tridiagonal Decomposition Although all operations in LanczosNet are differentiable, we empirically observe that back-propagation through the eigendecomposition of the tridiagonal matrix is numerically unstable. The situation would be even worse if multiple eigenvalues are numerically close or if one takes a large power in Eq. (6). Therefore, we instead directly leverage the approximate tridiagonal decomposition S ≈ QTQ^⊤, which is obtained by running the Lanczos algorithm for K steps. Then we can rewrite the graph convolution layer with learnable spectral filters as follows,

$$Y = \left[ S^{\mathcal{S}_1}X, \ldots, S^{\mathcal{S}_M}X, Qf_1(T^{\mathcal{I}_1})Q^\top X, \ldots, Qf_E(T^{\mathcal{I}_E})Q^\top X \right] W, \qquad (8)$$

where each f_i is a learnable spectral filter. Each f_i is constructed from a separate MLP, denoted g, which takes T ∈ R^{K×K} as input and outputs a matrix of the same size. To ensure that f outputs a symmetric matrix, we define f(T) = g(T) + g(T)^⊤. With the above parameterization of the graph Laplacian and the tridiagonal decomposition, we can back-propagate the loss through the Lanczos algorithm to either the graph kernel parameters θ or the node embedding X. The overall model is similar to LanczosNet except that the Lanczos algorithm needs to be invoked for each inference pass.
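The kernel of Eq. (7), including the ε normalization, can be sketched as follows. This is a simplified dense PyTorch version of ours; the helper name and the symmetrization step are our own choices, and a practical implementation would use sparse operations.

```python
import torch
import torch.nn as nn

def kernel_affinity(X, edges, f_theta):
    """Anisotropic kernel of Eq. (7) -> affinity S = D^-1/2 A D^-1/2.
    X: [N, F] node features or learnable embeddings; edges: list of (i, j)."""
    src = torch.tensor([i for i, _ in edges])
    dst = torch.tensor([j for _, j in edges])
    H = f_theta(X)                                     # f_theta: an MLP
    d2 = ((H[src] - H[dst]) ** 2).sum(dim=1)           # squared edge distances
    eps = d2.mean()                                    # eps = mean over |E| edges
    w = torch.exp(-d2 / eps)
    A = torch.zeros(X.shape[0], X.shape[0]).index_put((src, dst), w)
    A = torch.maximum(A, A.T)                          # keep A symmetric
    deg = A.sum(dim=1).clamp(min=1e-8)
    return A / torch.sqrt(deg[:, None] * deg[None, :])

# Hypothetical usage: gradients flow into f_theta (graph kernel learning) or,
# if X is an nn.Parameter, into the node embeddings themselves.
f_theta = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
S = kernel_affinity(torch.randn(10, 32), [(0, 1), (1, 2), (2, 0)], f_theta)
```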
4 LANCZOS NETWORK AND DIFFUSION MAPS

In this section, we highlight the relationship between LanczosNet and an important example of graph based manifold learning algorithms, diffusion maps [17].

Diffusion Maps In diffusion maps, the weights in the adjacency matrix define a discrete random walk over the graph, where the Markov transition matrix P = D^{−1}A gives the transition probabilities for a single time step. Therefore, P^t_{i,j} sums the probability of all paths of length t that start at node i and end at node j. It is shown in [17] that P can be used to define an inner product in a Hilbert space. Specifically, we use the eigenvalues and right eigenvectors {λ_l, ψ_l}_{l=1}^{N} of P to define a diffusion mapping Φ_t as,

$$\Phi_t(i) = \left( \lambda_1^t \psi_1(i), \lambda_2^t \psi_2(i), \ldots, \lambda_N^t \psi_N(i) \right), \qquad (9)$$

where ψ_l(i) is the i-th entry of the eigenvector ψ_l. Since the row-stochastic matrix P is similar to S, i.e., P = D^{−1/2} S D^{1/2}, we have ψ_l = D^{−1/2} u_l. The mapping Φ_t satisfies ∑_{k=1}^{N} P^t_{i,k} P^t_{j,k} / D_{k,k} = ⟨Φ_t(i), Φ_t(j)⟩, where ⟨·, ·⟩ is the inner product over Euclidean space. The diffusion distance between i and j,

$$d^2_{\mathrm{DM},t}(i, j) = \|\Phi_t(i) - \Phi_t(j)\|^2 = \sum_{k=1}^{N} (P^t_{i,k} - P^t_{j,k})^2 / D_{k,k},$$

is the weighted-ℓ2 proximity between the probability clouds of random walkers starting at i and at j after t steps. Since all eigenvalues of S reside in the interval [−1, 1], for some large t, λ_l^t in Eq. (9) is close to zero, and d_{DM,t} can be well approximated using only a few of the largest eigenvalues and their eigenvectors.

Connection to Graph Convolution Apart from using diffusion maps to embed the node features X at different time scales, one can use them to compute frequency representations of X as below,

$$\hat{X} = \Lambda^t U^\top X, \qquad (10)$$

where U contains the eigenvectors of S and defines the graph Fourier transform. The frequency representation X̂ is weighted by the powers of the eigenvalues, λ_l^t, suppressing entries whose eigenvalues have small magnitude. Recall that in the convolution layer Eq. (5) of LanczosNet, we use multiple such frequency representations with different scales t and replace the eigenvalues Λ in Eq. (10) with their approximations, i.e., the Ritz values. Therefore, in LanczosNet, spectral filters are actually applied to frequency representations which are obtained by projecting the node features X onto multiple diffusion maps with different scales.
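For concreteness, a small NumPy sketch of the truncated diffusion map of Eq. (9) follows. It is our own illustration; the function name and the truncation rule are assumptions.

```python
import numpy as np

def diffusion_map(A, t, m):
    """Truncated diffusion map Phi_t (Eq. 9), keeping the m largest eigenvalues.
    Works through the symmetric S = D^-1/2 A D^-1/2: since P = D^-1 A is
    similar to S, the right eigenvectors of P are psi_l = D^-1/2 u_l."""
    d = A.sum(axis=1)                          # assumes positive row sums
    S = A / np.sqrt(np.outer(d, d))
    lam, U = np.linalg.eigh(S)                 # ascending eigenvalues of S
    idx = np.argsort(-lam)[:m]                 # indices of the m largest
    psi = U[:, idx] / np.sqrt(d)[:, None]      # psi_l = D^-1/2 u_l
    return (lam[idx] ** t) * psi               # row i is Phi_t(i), truncated

# The diffusion distance d_{DM,t}(i, j) is then the Euclidean distance
# between rows i and j of the returned embedding.
```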
5 RELATED WORK

We can roughly categorize the application of machine learning, especially deep learning, to graph structured data into supervised/semi-supervised and unsupervised scenarios. For the former, a majority of work focuses on node/graph classification and regression [19, 20, 21, 1]. For the latter, unsupervised node/graph embedding learning [22, 23] is common. Recently, generative models for graphs, e.g., for molecule generation, have drawn some attention [24, 25].

Graph Convolution Based Models The first class of learning models on graphs stems from graph signal processing (GSP) [12, 4], which tries to generalize convolution operators from traditional signal processing to graphs. Relying on spectral graph theory [26] and graph wavelet theory [27], several definitions of frequency representations of graph signals have been proposed [4]. Among these, the spectral graph theory based definition is popular, where the graph Fourier transform and its inverse are defined based on the eigenbasis of the graph Laplacian. Following this line, many graph convolution based deep network models have emerged. [2, 28] are among the first to explore Laplacian based graph convolution within the context of deep networks. Meanwhile, [29] performs graph convolution directly based on the adjacency matrix to predict molecule fingerprints. [30] proposes a strategy to form same-sized local neighborhoods and then applies convolutions as in regular CNNs. Chebyshev polynomials are exploited by [7] to construct localized polynomial filters for graph convolution, which are later simplified in graph convolutional networks (GCN) [11]. Further accelerations of GCN based on importance sampling and control variate techniques have been proposed by [31, 32]. Several attention mechanisms have been introduced in [33, 34] to learn the weights over edges for GCNs. Notably, [8] proposes diffusion convolutional neural networks (DCNN), which use a diffusion operator for graph convolution. The Lanczos method has been explored for graph convolution in [35] for the purpose of acceleration. Specifically, they only consider the localized polynomial filter case of our LanczosNet variant and do not explore the low rank decomposition, learnable spectral filters and graph kernel/node embedding learning as we do.

Recurrent Neural Networks Based Models The second class of models dates back to recursive neural networks [36], which recurrently apply neural networks to trees following a particular order. Graph neural networks (GNN) [3] generalize recursive neural networks to arbitrary graphs and exploit a synchronous schedule to propagate information on graphs. [37] later proposes gated graph neural networks (GGNN), which improve GNN by adding gated recurrent units and training the network with back-propagation through time. [38] learns graph embeddings by unrolling variational inference algorithms over a graph as an RNN. [39] introduces random subgraph sampling and explores different aggregation functions to scale GNN to large graphs. [40] proposes asynchronous propagation schedules based on graph partitions to improve GNN. Moreover, many applications of GNNs have recently emerged, including community detection [41], situation recognition [42], RGBD semantic segmentation [43], few-shot learning [21], probabilistic inference [44], continuous control in reinforcement learning [45, 46] and so on.

Graph Based Manifold Learning Non-linear dimensionality reduction methods, such as locally linear embedding (LLE) [47], ISOMAP [48], Hessian LLE [49], Laplacian eigenmaps [50], and diffusion maps [17], assume that the high-dimensional data lie on or close to a low dimensional manifold and use the local affinities in the weighted graph to learn the global features of the data. They are invaluable tools for embedding complex data in a low dimensional space and for regressing functions over graphs. Spectral clustering [51, 52], semi-supervised learning [53], and out-of-sample extension [54] share similar geometric considerations of the associated graphs. Anisotropic graph kernels are useful in many applications. For example, [15] improves spectral clustering results with a self-tuning diffusion kernel that takes into account the local variance at each node in the Gaussian kernel function. Similarly, [55] uses the anisotropic Gaussian kernel defined by local Mahalanobis distances to extract independent components from nonlinear measurements of independent stochastic Itô processes. Manifold learning with anisotropic kernels is also useful for data-driven dynamical system analysis, for example, detecting intrinsically slow variables of a stochastic dynamical system [56], filtering dynamical processes [57], and long range climate forecasting [58, 59].
Anisotropic diffusion is able to use the local statistics of the measurements to convey geometric information about the underlying factors rather than about the specific realization or measurements at hand [60, 61].

6 EXPERIMENTS

In this section, we compare our two model variants against 9 recent graph networks: graph convolution networks for fingerprints (GCN-FP) [29], gated graph neural networks (GGNN) [37], diffusion convolutional neural networks (DCNN) [8], Chebyshev networks (ChebyNet) [7], graph convolutional networks (GCN) [11], message passing neural networks (MPNN) [62], graph sample and aggregate (GraphSAGE) [39], graph partition neural networks (GPNN) [40], and graph attention networks (GAT) [33]. We test them on two sets of tasks: (1) semi-supervised document classification on 3 citation networks [63]; (2) supervised regression of molecule properties on the QM8 quantum chemistry dataset [64]. For a fair comparison, we only tune model-related hyperparameters in all our experiments and share the others, e.g., using the same batch size. We carefully tune hyperparameters based on cross-validation and report the best performance of each competitor. Please refer to the appendix for more details on hyperparameters. We implement all methods using PyTorch [65] and release the code at https://github.com/lrjconan/LanczosNetwork.

6.1 CITATION NETWORKS

The three citation networks used in this experiment are Cora, Citeseer and Pubmed. For each network, nodes are documents which are connected based on their citation links. Each node is associated with a bag-of-words feature vector. We use the same pre-processing procedure and follow the transductive setting of [63]. In particular, given a portion of the nodes and their labeled content categories, e.g., history or science, the task is to predict the category of the remaining unlabeled nodes within the same graph. The statistics of these datasets are summarized in the appendix. All experiments are repeated 10 times with different random seeds. During each run, all methods share the same random seed. We first experiment with the public data split and observe severe overfitting for almost all algorithms. To mitigate overfitting and test the robustness of the models, we then increase the difficulty of the task by reducing the portion of training examples to several levels and splitting the data randomly. Experimental results and the exact portions of training examples are shown in Table 1. We use the reported best hyperparameters when available for the public split and perform cross-validation otherwise. Hyperparameters are reported in the appendix. From the table, we see that for random splits with different portions of training examples, since each run of the experiment uses a separate random split, the overall variance is larger than for the public counterpart. GAT achieves the best performance on the public split but performs poorly on the random splits with different portions of training examples. This is partly due to the fact that GAT applies dropout in multiple places throughout the model, which helps only if there is overfitting. Either LanczosNet or AdaLanczosNet achieves state-of-the-art accuracy on the difficult random splits, and both perform comparably to GAT on the public split. This may be attributed to the fact that, with fewer training examples, the model requires longer scale schemes to spread supervised information over the graph. Our model provides an efficient way of leveraging such long scale information.
6.2 QUANTUM CHEMISTRY

We then benchmark all algorithms on the QM8 quantum chemistry dataset, which comes from a recent study on modeling quantum mechanical calculations of electronic spectra and excited state energies of small molecules [64]. The setup of QM8 is as follows. Atoms are treated as nodes and are connected to each other following the structure of the corresponding molecule. Each edge is labeled with a chemical bond. Note that two atoms in one molecule can have multiple edges belonging to different chemical bonds; therefore a molecule is actually modeled as a multigraph. We also use explicit hydrogens in the molecule graphs, as suggested in [62]. Since some models cannot easily leverage features on edges, we use the molecule graph itself as the only input information for all models so that the comparison is fair. As demonstrated in our ablation studies, learning node embeddings for atoms is very helpful. Therefore, we augment all competitors and our models with this component. The task is to predict 16 different quantities of electronic spectra and energy per molecule graph, which boils down to a regression problem. There are 21786 molecule graphs in total, with on average around 16 nodes and 21 edges per graph. There are 6 different chemical bonds and 70 different atoms throughout the dataset. We use the split provided by DeepChem (https://deepchem.io/), which has 17428, 2179 and 2179 graphs for training, validation and testing respectively. Following [62, 66], we use the mean squared error (MSE) as the loss for training and the weighted mean absolute error (MAE) as the evaluation metric. We repeat all experiments 3 times with different random seeds and report the average performance and standard deviation. The same random seed is shared by all methods per run. Hyperparameters are reported in the appendix. The validation and test MAE of all methods are shown in Table 2. LanczosNet and AdaLanczosNet achieve better performance than all other competitors. Note that DCNN also achieves good performance with carefully chosen scale parameters, since it is somewhat similar to our model in terms of leveraging multi-scale information.

6.3 ABLATION STUDY

We also conduct a thorough ablation study of our modeling components on the validation set of QM8.

Multi-Scale Graph Convolution: We first study the effect of multi-scale graph convolution. In order to rule out the impact of other factors, we use LanczosNet, do not employ the learnable spectral filter, and use the one-hot encoding as the node embedding. The results are shown in the first row of Table 3. Using long scales for graph convolution clearly helps on this task. Combining both short and long scales further improves the results.

Lanczos Step: We then investigate the Lanczos step, since it affects the accuracy of the low rank approximation induced by the Lanczos algorithm. The results are shown in the second row of Table 3. We can see that the performance is better with a relatively small Lanczos step, like 10 or 20, which makes sense since the average number of nodes in this dataset is around 16.

Learning Spectral Filter: We then study whether learning the spectral filter improves performance. The results are shown in the third row of Table 3. Adding a 3-layer MLP does help reduce the error compared to not using any learnable spectral filter. Note that the MLP consists of 128 hidden units per layer and uses ReLU as the nonlinearity.
However, using a deeper MLP does not seem to be helpful, which might be caused by challenges in optimization.

Graph Kernel/Node Embedding: Lastly, we study the usefulness of adding the graph kernel and the node embeddings. We first fix the node embedding as a one-hot encoding and learn a 3-layer MLP, which is the function f_θ in Eq. (7). Next, we learn the node embeddings directly. Intuitively, learning embeddings amounts to learning a separate function f per node, whereas our graph kernel learning enforces that f is shared across all nodes and is thus more restrictive. As shown in the 3rd and 4th rows of the table, learning node embeddings significantly improves the performance of both LanczosNet and AdaLanczosNet and is more effective than learning graph kernels. Also, tuning the scale parameters further boosts the performance.

7 CONCLUSION

In this paper, we propose LanczosNet, which leverages the Lanczos algorithm to construct a low rank approximation of the graph Laplacian. It not only provides an efficient way to gather multi-scale information for graph convolution but also enables learnable spectral filters. Additionally, we propose a model variant, AdaLanczosNet, which facilitates graph kernel and node embedding learning. We show that our model has a close relationship with graph based manifold learning, especially diffusion maps. Experimental results demonstrate that our model outperforms a range of other graph networks on challenging graph problems. We are currently exploring customized eigendecomposition methods for tridiagonal matrices, which will potentially further improve our AdaLanczosNet. Overall, work in this direction holds promise for allowing deep learning to scale up to very large graph problems.

ACKNOWLEDGMENTS

RL thanks Roger Grosse for introducing the Lanczos algorithm to him. RL was supported by Connaught International Scholarships. RL, RU and RZ were supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

8 APPENDIX

8.1 LOW RANK APPROXIMATION

We first state the following lemma from [67] without proof and then prove our Theorem 1 following [14].

Lemma 1. Let A ∈ R^{N×N} be symmetric and v an arbitrary vector. Define the Krylov subspace K_m ≡ span{v, Av, . . . , A^{m−1}v}. Let A = UΛU^⊤ be the eigendecomposition of A with Λ_{i,i} = λ_i and λ_1 ≥ · · · ≥ λ_N. Denoting U = [u_1, . . . , u_N] and U_j = span{u_1, . . . , u_j}, we have

$$\tan\angle(u_j, \mathcal{K}_m) \leq \frac{\sin\angle(v, \mathcal{U}_j) \prod_{k=1}^{j-1} (\lambda_k - \lambda_N)/(\lambda_k - \lambda_j)}{\cos\angle(v, u_j)\, T_{m-j}(1 + 2\gamma)},$$

where T_{m−j}(x) is the Chebyshev polynomial of degree m − j and γ = (λ_j − λ_{j+1})/(λ_{j+1} − λ_N).

Theorem 1. Let UΛU^⊤ be the eigendecomposition of an N×N symmetric matrix S with Λ_{i,i} = λ_i, λ_1 ≥ · · · ≥ λ_N and U = [u_1, . . . , u_N]. Let U_j ≡ span{u_1, . . . , u_j}. Assume the K-step Lanczos algorithm starts with vector v and outputs the orthogonal Q ∈ R^{N×K} and tridiagonal T ∈ R^{K×K}.
For any j with 1 < j < N and K > j, we have,

$$\|S - QTQ^\top\|_F^2 \leq \sum_{i=1}^{j} \lambda_i^2 \left( \frac{\sin\angle(v, \mathcal{U}_i) \prod_{k=1}^{j-1} (\lambda_k - \lambda_N)/(\lambda_k - \lambda_j)}{\cos\angle(v, u_i)\, T_{K-i}(1 + 2\gamma_i)} \right)^2 + \sum_{i=j+1}^{N} \lambda_i^2,$$

where T_{K−i}(x) is the Chebyshev polynomial of degree K − i and γ_i = (λ_i − λ_{i+1})/(λ_{i+1} − λ_N).

Proof. From the Lanczos algorithm, we have SQ = QT. Therefore,

$$\|S - QTQ^\top\|_F^2 = \|S - SQQ^\top\|_F^2 = \|S(I - QQ^\top)\|_F^2. \qquad (11)$$

Let P^⊥_Q ≡ I − QQ^⊤, the orthogonal projection onto the orthogonal complement of the subspace span{Q}. Relying on the eigendecomposition, we have,

$$\|S - QTQ^\top\|_F^2 = \|U\Lambda U^\top (I - QQ^\top)\|_F^2 = \|\Lambda U^\top (I - QQ^\top)\|_F^2 = \|(I - QQ^\top)U\Lambda\|_F^2 = \left\| \left[ \lambda_1 P^\perp_Q u_1, \ldots, \lambda_N P^\perp_Q u_N \right] \right\|_F^2, \qquad (12)$$

where we use the facts that ‖RA‖²_F = ‖A‖²_F for any orthogonal matrix R and ‖A^⊤‖²_F = ‖A‖²_F. Note that for any j we have,

$$\left\| \left[ \lambda_1 P^\perp_Q u_1, \ldots, \lambda_N P^\perp_Q u_N \right] \right\|_F^2 = \sum_{i=1}^{N} \lambda_i^2 \|P^\perp_Q u_i\|^2 \leq \sum_{i=1}^{j} \lambda_i^2 \|P^\perp_Q u_i\|^2 + \sum_{i=j+1}^{N} \lambda_i^2, \qquad (13)$$

where we use the fact that for any i, ‖P^⊥_Q u_i‖² = ‖u_i‖² − ‖u_i − P^⊥_Q u_i‖² ≤ ‖u_i‖² = 1. Note that we have span{Q} = span{v, Sv, . . . , S^{K−1}v} ≡ K_K from the Lanczos algorithm. Therefore, we have,

$$\|P^\perp_Q u_i\| = |\sin\angle(u_i, \mathcal{K}_K)| \leq |\tan\angle(u_i, \mathcal{K}_K)|. \qquad (14)$$

Applying the above lemma with A = S, we finish the proof.

8.2 LANCZOS ALGORITHM

Using exact arithmetic, the Lanczos vectors are orthogonal to each other. However, in floating point arithmetic, it is well known that round-off error makes the Lanczos vectors lose orthogonality as the iteration proceeds. One could apply a full Gram-Schmidt (GS) process, z = z − ∑_{i=1}^{j−1} (z^⊤ q_i) q_i, after line 6 of Alg. 1 to ensure orthogonality. Other partial or selective re-orthogonalization schemes could also be explored. However, since we found that the loss of orthogonality does not hurt overall performance with a small iteration number, e.g., K = 20, and the full GS process is computationally expensive, we do not add such a step. Although some customized eigendecomposition methods, e.g., implicit QL [68], exist for tridiagonal matrices, we leave them for future exploration due to their complicated implementation.
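For reference, the full GS re-orthogonalization step mentioned above would look like the following sketch (our own illustration, written against the NumPy version of Alg. 1 given in Sec. 3; it adds O(Nj) work per iteration, which is why we omit it for small K):

```python
import numpy as np

def reorthogonalize(z, Q, j):
    """Full Gram-Schmidt pass inserted after line 6 of Alg. 1: project z
    against the first j Lanczos vectors (columns of Q) to restore the
    orthogonality lost to floating point round-off."""
    for i in range(j):
        z = z - (z @ Q[:, i]) * Q[:, i]
    return z
```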
8.3 EXPERIMENTS

For ChebyNet, we do not use graph coarsening in any experiment due to its demanding computational cost for large graphs. Also, for small molecule graphs, coarsening generally does not help, since it loses information compared to directly stacking another layer on the original graph.

Citation Networks The statistics of the three citation networks are summarized in Table 4. We now report the important hyperparameters chosen via cross-validation for each method. All methods are trained with Adam with learning rate 1.0e−2 and weight decay 5.0e−4. The maximum number of epochs is set to 200. Early stopping with window size 10 is also adopted. We tune hyperparameters on Cora alone and fix them for Citeseer and Pubmed. For convolution based methods, we found 2 layers work best. In GCN-FP, we set the hidden dimension to 64 and dropout to 0.5. In GGNN, we set the hidden dimension to 64, the number of propagation steps to 2 and the aggregation function to summation. In DCNN, we set the hidden dimension to 64, dropout to 0.5 and use diffusion scales {1, 2, 5}. In ChebyNet, we set the polynomial order to 5, the hidden dimension to 64 and dropout to 0.5. In GCN, we set the hidden dimension to 64 and dropout to 0.5. In MPNN, we use a GRU as the update function and set the hidden dimension to 64 and dropout to 0.5. No edge embedding is used as there is just one edge type. In GraphSAGE, we set the number of sampled neighbors to 500, the hidden dimension to 64, dropout to 0.5 and the aggregation function to average. In GAT, we set the numbers of heads of the two layers to 8 and 1, the hidden dimension per head to 8 and dropout to 0.6. In LanczosNet, we set the short and long diffusion scales to {1, 2, 5, 7} and {10, 20, 30} respectively. The hidden dimension is 64 and dropout is 0.5. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter. In AdaLanczosNet, we set the short and long diffusion scales to {1, 2, 5} and {10, 20} respectively. The hidden dimension is 64 and dropout is 0.5. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter.

Quantum Chemistry We now report the important hyperparameters chosen via cross-validation for each method. All methods are trained with Adam with learning rate 1.0e−4 and no weight decay. The maximum number of epochs is set to 200. Early stopping with window size 10 is also adopted. For convolution based methods, we found 7 layers work best. We augment all methods with a 64-dimensional node embedding and add edge types by either feeding a multi-channel graph Laplacian matrix or directly adding a separate message function per edge type. For all methods, no dropout is used, since it slightly hurts performance. In GCN-FP, we set the hidden dimension to 128. In GGNN, we set the hidden dimension to 128, the number of propagation steps to 15 and the aggregation function to average. In DCNN, we set the hidden dimension to 128 and use diffusion scales {3, 5, 7, 10, 20, 30}. In ChebyNet, we set the polynomial order to 5 and the hidden dimension to 128. In GCN, we set the hidden dimension to 128. In MPNN, we use a GRU as the update function, set the number of propagation steps to 7, set the hidden dimension to 128, use a 1-layer MLP with 1024 hidden units and ReLU nonlinearity as the message function and set the number of unroll steps of Set2Vec to 10. In GraphSAGE, we set the number of sampled neighbors to 40, the hidden dimension to 128 and the aggregation function to average. In GAT, we set the number of heads of all 7 layers to 8 and the hidden dimension per head to 16. In LanczosNet, we do not use short diffusion scales and set the long ones to {1, 2, 3, 5, 7, 10, 20, 30}. The hidden dimension is 128. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter. In AdaLanczosNet, we set the short and long diffusion scales to {1, 2, 3} and {5, 7, 10, 20, 30} respectively. The hidden dimension is 128. The Lanczos step is 20. A 3-layer MLP with 4096 hidden units and ReLU nonlinearity is used as the spectral filter.
1. What are the novel contributions and insights provided by the paper regarding graph convolutional networks?
2. How does the paper's approach differ from previous work in utilizing the Lanczos algorithm?
3. Can you explain how the proposed method benefits from exploring the low rank decomposition underlying the Lanczos algorithm?
4. How does the paper connect graph diffusion methods to the matrix power computation inherent to the Lanczos iteration?
5. What advantages does the proposed method offer over existing approaches in semi-supervised learning and molecule property prediction tasks?
6. Do you have any concerns or suggestions regarding the presentation quality or minor details in the paper?
Review
The paper under review builds useful insights and novel methods for graph convolutional networks based on the Lanczos algorithm for efficient computations involving the graph Laplacian matrices induced by the neighbor edge structure of graph networks. While previous work [35] has explored the Lanczos algorithm from numerical linear algebra as a means to accelerate computations in graph convolutional networks, the current paper goes further by: (1) exploring in significantly more depth the low rank decomposition underlying the Lanczos algorithm; (2) learning the spectral filter (beyond the Chebyshev design) and potentially also the graph kernel and node embedding; (3) drawing interesting connections with graph diffusion methods, which naturally arise from the matrix power computation inherent to the Lanczos iteration. The paper includes a systematic evaluation of the proposed approach and comparison with existing methods on two tasks: semi-supervised learning in citation networks and molecule property prediction from interactions in atom networks. The main advantage of the proposed method, as illustrated in particular by the experimental results in the citation network domain, is its ability to generalize well in the presence of a small amount of training data, which the authors attribute to its efficient capturing of both short- and long-range interactions. In terms of presentation quality, the paper is clearly written, the proposed methods are well explained, and the notation is consistent. Overall, a good paper. Minor comment: page 3, footnote: "When faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm." I was wondering if the authors have tried that? I think that the Arnoldi algorithm for non-symmetric matrices is significantly less stable than its Lanczos counterpart for symmetric matrices.
ICLR
Title LanczosNet: Multi-Scale Deep Graph Convolutional Networks Abstract We propose the Lanczos network (LanczosNet), which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximated computation of matrix power but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning as well as learning node embeddings. We show the connection between our LanczosNet and graph based manifold learning methods, especially the diffusion maps. We benchmark our model against several recent deep graph networks on citation networks and QM8 quantum chemistry dataset. Experimental results show that our model achieves the state-of-the-art performance in most tasks. 1 INTRODUCTION Graph-structured data is ubiquitous in real world applications, social networks, gene expression regulatory networks, protein-protein interactions, and many other physical systems. How to model such data using machine learning, especially deep learning, has become a central research question [1]. For supervised and semi-supervised tasks such as graph or node classification and regression, learning based models can be roughly categorized into two classes, formulated either in terms of graph convolutions [2] or recurrent neural networks [3]. Methods based on recurrent neural networks (RNN), especially graph neural networks (GNN) [3], repeatedly unroll a message passing process over the graph by exchanging information between the nodes. In theory, a GNN can have as large a model capacity as its convolutional counterpart. However, due to the instability of RNN dynamics and difficulty of optimization, GNN and its variants are generally slower and harder to train. In this paper we focus on graph convolution based methods. Built on top of the graph signal processing (GSP) approaches [4], these methods extend convolution operators to graphs by leveraging spectral graph theory, graph wavelet theory, etc. Graph convolutions can be stacked and combined with nonlinear activation functions to build deep models, just as in regular convolutional neural networks (CNN). They often have large model capacity and achieve promising results. Also, graph convolution can be easily implemented with modern scientific computing libraries. There are two main issues with current graph convolution approaches. First, it is not clear how to efficiently leverage multi-scale information except by directly stacking multiple layers. Having an effective multi-scale scheme is key for enabling the model to be invariant to scale changes, and to capture many intrinsic regularities [5, 6]. Graph coarsening methods have been proposed to form a hierarchy of multi-scale graphs [7], but this coarsening process is fixed during both inference and learning which may cause some bias. Alternatively, the graph signal can be multiplied by the exponentiated graph Laplacian, where the exponent indicates the scale of the diffusion process on the graph [8]. Unfortunately, the computation and memory cost increases linearly with the exponent, which prohibits the exploitation of long scale diffusion in practice. Other fast methods for computing matrix power such as exponentiating by squaring are very memory intensive, even for moderately large graphs. Second, spectral filters within current graph convolution based models are mostly fixed. 
In the context of image processing, using a Gaussian kernel along with a spectral filter f(λ) = 2λ − λ2 corresponds to running forward the heat equation (blurring) followed by running it backwards (sharpening) [9]. Multi-scale kernels introduced in [10] extends the idea of forward-backward diffusion process and can be represented as polynomials of matrices related to a Gaussian kernel. Learning the spectral filters is thus beneficial since it learns the stochastic processes on the graph which produce useful representations for particular tasks. However, how to learn spectral filters which have large model capacity is largely underexplored. In this paper, we propose the Lanczos network (LanczosNet) to overcome the aforementioned issues. First, based on the tridiagonal decomposition implied by the Lanczos algorithm, our model exploits the low rank approximation of the graph Laplacian. This approximation facilitates efficient computation of matrix powers thus gathering multi-scale information easily. Second, we design learnable spectral filters based on the approximation which effectively increase model capacity. In scenarios where one wants to learn the graph kernel and/or node embeddings, we propose another variant, i.e., adaptive Lanczos network (AdaLanczosNet), which back-propagates through the Lanczos algorithm. We show that our proposed model is closely related to graph based manifold learning approaches such as diffusion maps which could potentially inspire more work from the intersection between deep graph networks and manifold learning. We benchmark against 9 recent deep graph networks, including both convolutional and RNN based methods, on citation networks and a quantum chemistry graph regression problem, and achieve state-of-the-art results in most tasks. 2 BACKGROUND In this section, we introduce some background material. A graph G with N nodes is denoted as G = (V, E , A), where A ∈ RN×N is an adjacency matrix which could either be binary or real valued. X ∈ RN×F is the compact representation of node features (or graph signal in the GSP literature). For any node v ∈ V , we denote its feature as a row vector Xv,: ∈ R1×F . We use X:,i to denote the i-th column of X . Graph Fourier Transform Given input node features X , we now discuss how to perform a graph convolution. Based on the adjacency matrix A, we compute the graph Laplacian L which can be defined in different ways: (1) L = D − A; (2) L = I −D−1A; (3) L = I −D− 12AD− 12 , where D is a diagonal degree matrix and Di,i = ∑N j=1Ai,j . The definition (3) is often used in the GSP literature due to the fact that it is real symmetric, positive semi-definite (PSD) and has eigenvalues lying in [0, 2]. In certain applications [11], it was found that adding self-loops, i.e., changing A to A + I , and using the affinity matrix S = D− 1 2AD− 1 2 instead of L gives better results. Since S is real symmetric, based on spectral decomposition, we have S = UΛU> where U is an orthogonal matrix and its column vectors are the eigenvectors of S. The diagonal matrix Λ contains the sorted eigenvalues where Λi,i = λi and 1 ≥ λ1 ≥ · · · ≥ λN ≥ −1. Based on the eigenbasis, we can define the graph Fourier transform Y = U>X and its inverse transform X̂ = UY following [12]. Note that L = I −D− 12AD− 12 shares the same eigenvectors with S = D− 12AD− 12 and the eigenvalues of L are µi = 1− λi. Therefore, L and S share the same graph Fourier transform which justifies the usages of S. 
Different forms of filters can be further constructed in the spectral domain. Localized Polynomial Filter A τ -localized polynomial filter is typically adopted in GSP literature [12], gw(Λ) = ∑τ−1 t=0 wtΛ t, where w = [w0, w1, . . . , wτ−1] ∈ Rτ×1 is the filter coefficient, i.e., learnable parameter. The filter is τ -localized in the sense that the filtering leverages information from nodes which are at most τ -hops away. One prominent example of this class is the Chebyshev polynomial filter [7]. Here the graph Laplacian is modified to L̃ = 2L/λmax− I such that its eigenvalues fall into [−1, 1]. Then the Chebyshev polynomial recursion is applied: X̃(t) = 2L̃X̃(t− 1)− X̃(t− 2) where X̃(0) = X and X̃(1) = L̃X . For a pair of input and output channels (i, j), the final filtering becomes, yi,j = [X̃(0):,i, . . . , X̃(τ − 1):,i]wi,j , where [·] means concatenation along columns and wi,j ∈ Rτ×1. Chebyshev polynomials provide two benefits: they form an orthogonal basis of L2([−1, 1], dy/ √ 1− y2) and one avoids the spectral decomposition of L̃ in the filtering. However, the functional form of the spectral filter is not learnable, and cannot adapt to the data. In this paper, instead of using the modified graph Laplacian L̃, we use the aforementioned S. Therefore, we can write the localized polynomial filtering in a more general form as, Y = τ−1∑ t=0 gt(S, . . . , S t, X)Wt, (1) where gt is a function that takes node features X and powers of the affinity matrices up to the t-th order as input and outputs a N × F matrix. Wt ∈ RF×O is the corresponding filter coefficient and Y ∈ RN×O is the output. One can easily verify that in the Chebyshev polynomial filter, any i-th column of the corresponding gt(X,S, . . . , St) lies in the Krylov subspace Kt+1(S,X:,i) ≡ span{X:,i, SX:,i, . . . , StX:,i}. This naturally motivates the usage of Krylov subspace methods, like the Lanczos algorithm [13], since it provides an orthonormal basis for the above Krylov subspace, thus making the filter coefficients compact. 3 LANCZOS NETWORKS In this section, we first introduce the Lanczos algorithm which approximates the affinity matrix S. We present our first model, called Lanczos network (LanczosNet), in which we execute the Lanczos algorithm once per graph and fix the basis throughout inference and learning. Then we introduce the adaptive Lanczos network (AdaLanczosNet) in which we learn the graph kernel and/or node embedding by back-propagating through the Lanczos algorithm. Algorithm 1 : Lanczos Algorithm 1: Input: S, x,K, 2: Initialization: β0 = 0, q0 = 0, and q1 = x/‖x‖ 3: For j = 1, 2, . . . ,K: 4: z = Sqj 5: γj = q>j z 6: z = z − γjqj − βj−1qj−1 7: βj = ‖z‖2 8: If βj < , quit 9: qj+1 = z/βj 10: 11: Q = [q1, q2, · · · , qK ] 12: Construct T following Eq. (2) 13: Eigen decomposition T = BRB> 14: Return V = QB and R. Algorithm 2 : LanczosNet 1: Input: Signal X , Lanczos output V and R, scale index sets S and I, 2: Initialization: Y0 = X 3: For ` = 1, 2, . . . , `c: 4: Z = Y`−1, Z = {∅} 5: For j = 1, 2, . . . ,max(S): 6: Z = SZ 7: If j ∈ S: 8: Z = Z ∪ Z 9: For i ∈ I: 10: Z = Z ∪ V R̂(Ii)V >Y`−1 11: Y` = concat(Z)W` 12: If ` < L 13: Y` = Dropout(σ(Y`)) 14: Return Y`c . 3.1 LANCZOS ALGORITHM Given the aforementioned affinity matrix S1 and node features x ∈ RN×1, the N -step Lanczos algorithm computes an orthogonal matrix Q and a symmetric tridiagonal matrix T , such that Q>SQ = T . We denote Q = [q1, · · · , qN ] where column vector qi is the i-th Lanczos vector. T is illustrated as below, T = γ1 β1 β1 . . . . . . . . . 
. . . βN−1 βN−1 γN . (2) One can verify that Q forms an orthonormal basis of the Krylov subspace KN (S, x) and the first K columns of Q forms the orthonormal basis of KK(S, x). The Lanczos algorithm is shown in detail in Alg. 1. Intuitively, if we investigate the j-th column of the system SQ = QT and rearrange terms, 1When faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm. we obtain βjqj+1 = Sqj − βj−1qj−1 − γjqj , which clearly explains lines 4 to 6 of the pseudocode, i.e., it tries to solve the system in an iterative manner. Note that the most expensive operation in the algorithm is the matrix-vector multiplication in line 4. After obtaining the tridiagonal matrix T , we can compute the Ritz values and Ritz vectors which approximate the eigenvalues and eigenvectors of S by diagonalizing the matrix T . We only add this step in LanczosNet as we found back-propagating through the eigendecomposition in AdaLanczosNet is not numerically stable. 3.2 LANCZOSNET In this section, we first show the construction of the localized polynomial filter based on the Lanczos algorithm’s output and discuss its limitations. Then we explain how to construct the spectral filter using a particular low rank approximation and how to further make the filter learnable. At last, we elaborate how to construct multi-scale graph convolution and build a deep network. Localized Polynomial Filter For the ease of demonstrating the concept of Krylov subspace, we consider a pair of input and output channels (i, j). We denote the input as X:,i ∈ RN×1 and the output as Y:,j ∈ RN×1. Executing the Lanczos algorithm for K steps with the normalized X:,i as the starting vector, one can obtain the orthonormal basis Q̃ of KK(S,X:,i) and the corresponding tridiagonal matrix T̃ . Recall that in the localized polynomial filtering, given the orthonormal basis of KK(S,X:,i), one can write the graph convolution as Yj = Q̃wi,j , (3) where Q̃ ∈ RN×K depends on the X:,i and wi,j ∈ RK×1 is the learnable parameter. This filter has the benefit that the corresponding learnable coefficients are compact due to the orthonormal basis. However, if one wants to stack multiple graph convolution layers, the dependency of Q̃ on X:,i implies that a separate run of Lanczos algorithm is necessary for each graph convolution layer which is computationally demanding. Spectral Filter Ideally, we would like to compute Lanczos vectors only once during the inference of a deep graph convolutional network. Luckily, this can be achieved if we take an alternative view of Lanczos algorithm. In particular, we can choose a random starting vector with unit norm and treat the K step Lanczos layer’s output as the low rank approximation S ≈ QTQ>. Note that here Q ∈ RN×K has orthonormal columns and does not depend on the node featuresXi and T is aK×K tridiagonal matrix. Following [14], we prove the theorem below to bound the approximation error. Theorem 1. Let UΛU> be the eigendecomposition of an N ×N symmetric matrix S with Λi,i = λi, λ1 ≥ · · · ≥ λN and U = [u1, . . . , uN ]. Let Uj ≡ span{u1, . . . , uj}. Assume K-step Lanczos algorithm starts with vector v and outputs the orthogonal Q ∈ RN×K and tridiagonal T ∈ RK×K . For any j with 1 < j < N and K > j, we have, ‖S −QTQ>‖2F ≤ j∑ i=1 λ2i ( sin (v,Ui) ∏j−1 k=1(λk − λN )/(λk − λj) cos (v, ui)TK−i(1 + 2γi) )2 + N∑ i=j+1 λ2i , where TK−i(x) is the Chebyshev Polynomial of degree K − i and γi = (λi − λi+1)/(λi+1 − λN ). We leave the proof to the appendix. 
Note that the term ( ∑N i=j+1 λ 2 i ) 1/2 is the Frobenius norm of the error between S and the best rank-j approximation Sj . We decompose the tridiagonal matrix T = BRB>, where the K ×K diagonal matrix R contains the Ritz values and B ∈ RK×K is an orthogonal matrix. We have a low rank approximation of the affinity matrix S ≈ V RV >, where V = QB. Therefore, we can rewrite the graph convolution as, Yj = [Xi, SXi, . . . , S K−1Xi]wi,j ≈ [Xi, V RV >Xi, . . . , V RK−1V >Xi]wi,j , (4) The difference between Eq. (3) and Eq. (4) is that the former uses the orthonormal basis while the latter uses the approximation of the direct basis of KK(S,X:,i). Since we explicitly operate on the approximation of spectrum, i.e., Ritz value, it is a spectral filter. Such a filtering form will have significant computational benefits while considering the long range/scale dependency due to the fact that the t-th power of S can be approximated as St ≈ V RtV >, where we only need to raise the diagonal entries of R to the power t. Learning the Spectral Filter Following the previous filter, one can naturally design learnable spectral filters. Specifically, we use K different spectral filters of which the k-th output R̂(k) = fk([R,R 1, . . . , RK−1]), where fk is a multi-layer perceptron (MLP) and R is the diagonal vector of the corresponding diagonal matrix. We then construct a diagonal matrix R̂(k) based on the vector output of fk. Therefore, we have the following filtering, Yj = [Xi, V R̂(1)V >Xi, . . . , V R̂(K − 1)V >Xi]wi,j . (5) Note that it includes the polynomial filter as a special case. When positive semi-definiteness is a concern, one can apply an activation function like ReLU to the output of the MLPs. Multi-scale Graph Convolution Using any above filter, one can construct a deep graph convolutional network which leverages multi-scale information. Taking the learnable spectral filter as an example, we can write one graph convolution layer in a compact way as below, Y = [ LS1X, . . . , LSMX,V R̂(I1)V >X, . . . , V R̂(IN )V >X ] W, (6) where weight W ∈ R(M+E)D×O, S is a set of M short scale parameters and I is a set of E long scale parameters. We consider a non-negative integer as scale parameter, e.g., S = {0, 1, . . . , 5}, I = {10, 20, . . . , 50}. Note that the convolution corresponding to short scales is similar to [8] where the number of matrix-vector multiplications is tied to the maximum scale of S. In contrast, the convolution of long scales decouples the Lanczos step K and scale parameters I, thus permitting great freedom in tuning scales as hyperparameters. One can choose K properly to balance the computation cost and the accuracy of the low rank approximation. In our experiments, short scales are typically less than 10 which have reasonable computation cost. Moreover, the short scale part could sometimes remedy cases where the low rank approximation is crude. We set the long scale no larger than 100 in our experiments. If the maximum eigenvalue of S is 1, we can even raise the power to infinity, which corresponds to the equilibrium state of diffusion process on the graph. To build a deep network, we can stack multiple such graph convolution layers where each layer has its own spectral filter weights. Nonlinear activation functions, e.g., ReLU, and/or Dropout can be added between layers. The inference algorithm of such a deep network is shown in Alg. 2. With the top layer representation, one can use softmax to perform classification or a fully connected layer to perform regression. 
The Lanczos algorithm is run beforehand once per graph to construct the network and will not be invoked during inference and learning. 3.3 ADALANCZOSNET In this section, we explain another variant which back-propagates through the Lanczos algorithm. This facilitates learning the graph kernel and/or node embeddings. Graph Kernel Assume we are given node featuresX and a graph G. We are interested in learning a graph kernel function with the hope that it can capture the intrinsic geometry of node representations. Given data points xi, xj ∈ X , we define the anisotropic graph kernel, k : X × X 7→ R as, k(xi, xj) = exp ( −‖(fθ(xi)− fθ(xj))‖ 2 ) . (7) where fθ is a MLP. This class of anisotropic kernels is very expressive and includes self-tuning kernel [15] and the Gaussian kernel with Mahalanobis distances [16]. Moreover, for different kernel functions, the resulted graph Laplacians will converge to different limiting operators asymptotically. For example, even for isotropic Gaussian kernels, the graph Laplacian can converge pointwise to the Laplace-Beltrami, Fokker-Planck operator and heat kernel under different normalizations [17, 18]. In practice, we notice that choosing = ∑ (p,q)∈E ‖(fθ(xp)− fθ(xq))‖2/|E| helps normalizing the pairwise distances, thus avoiding the gradient vanishing issue due to the exponential function. This type of learnable anisotropic diffusion is useful in two ways. First, it increases model capacity, thus potentially gaining better performance. Second, it can better adapt to the non-uniform density of the data points on the manifold or nonlinear measurements of the underlying data points on a maninfold. We can construct an adjacency matrix A such that Ai,j = k(xi, xj) if (i, j) ∈ E and Ai,j = 0 otherwise. Then we can obtain the affinity matrix S = D− 1 2AD− 1 2 . Node Embedding In some applications, we do not observe the node features X but only the graph itself G, so we may need to learn an embedding vector per node. For example, this scenario applies in the quantum chemistry tasks where a node, i.e., an atom within a molecule, has rarely observed features. We can still use the above graph kernel to construct the affinity matrix which results in the same form except f is discarded. Learning embedding X naturally amounts to learning the similarities between nodes. Tridiagonal Decomposition Although all operations in LanczosNet are differentiable, we empirically observe that backpropagation through the eigendecomposition of the tridiagonal matrix is numerically instable. The situation would be even worse if multiple eigenvalues are numerically close or one takes a large power in Eq. (6). Therefore, we instead directly leverage the approximated tridiagonal decomposition S ≈ QTQ> which is obtained by running the Lanczos algorithm K steps. Then we can rewrite the graph convolution layer with learnable spectral filter as following, Y = [ SS1X, . . . , SSMX,Qf1 ( T I1 ) Q>X, . . . , QfN ( T IN ) Q>X ] W, (8) where fi is a learnable spectral filter. Each f is constructed from a separate MLP denoting as g which takes T ∈ RK×K as input and outputs a same sized matrix. To ensure f outputs a symmetric matrix, we define f(T ) = g(T ) + g(T )>. With the above parameterization of the graph Laplacian and tridiagonal decomposition, we can back-propagate the loss through the Lanczos algorithm to either the graph kernel parameters θ or the node embedding X . The overall model is similar to the LanczosNet except that the Lanczos algorithm needs to be invoked for each inference pass. 
4 LANCZOS NETWORK AND DIFFUSION MAPS
In this section, we highlight the relationship between LanczosNet and an important example of graph based manifold learning algorithms, diffusion maps [17].
Diffusion Maps: In diffusion maps, the weights in the adjacency matrix define a discrete random walk over the graph, where the Markov transition matrix $P = D^{-1}A$ gives the transition probabilities for a single time step. Therefore, $P^t_{i,j}$ sums the probabilities of all paths of length $t$ that start at node $i$ and end at node $j$. It is shown in [17] that $P$ can be used to define an inner product in a Hilbert space. Specifically, we use the eigenvalues and right eigenvectors $\{\lambda_l, \psi_l\}_{l=1}^{N}$ of $P$ to define a diffusion mapping $\Phi_t$ as,
$$\Phi_t(i) = \left(\lambda_1^t \psi_1(i), \lambda_2^t \psi_2(i), \ldots, \lambda_N^t \psi_N(i)\right), \quad (9)$$
where $\psi_l(i)$ is the $i$-th entry of the eigenvector $\psi_l$. Since the row stochastic matrix $P$ is similar to $S$, i.e., $P = D^{-1/2} S D^{1/2}$, we have $\psi_l = D^{-1/2} u_l$. The mapping $\Phi_t$ satisfies $\sum_{k=1}^{N} P^t_{i,k} P^t_{j,k} / D_{k,k} = \langle \Phi_t(i), \Phi_t(j) \rangle$, where $\langle \cdot, \cdot \rangle$ is the inner product over Euclidean space. The diffusion distance between $i$ and $j$,
$$d^2_{\mathrm{DM},t}(i,j) = \|\Phi_t(i) - \Phi_t(j)\|^2 = \sum_{k=1}^{N} \left(P^t_{i,k} - P^t_{j,k}\right)^2 / D_{k,k},$$
is the weighted-$\ell_2$ proximity between the probability clouds of random walkers starting at $i$ and at $j$ after $t$ steps. Since all eigenvalues of $S$ lie in the interval $[-1, 1]$, for some large $t$, $\lambda_l^t$ in Eq. (9) is close to zero, and $d_{\mathrm{DM},t}$ can be well approximated using only the few largest eigenvalues and their eigenvectors.
Connection to Graph Convolution: Apart from using diffusion maps to embed the node features $X$ at different time scales, one can use them to compute frequency representations of $X$ as below,
$$\hat{X} = \Lambda^t U^\top X, \quad (10)$$
where $U$ contains the eigenvectors of $S$ and defines the graph Fourier transform. The frequency representation $\hat{X}$ is weighted by the powers of the eigenvalues $\lambda_l^t$, suppressing entries whose eigenvalues have small magnitude. Recall that in the convolution layer Eq. (5) of LanczosNet, we use multiple such frequency representations with different scales $t$ and replace the eigenvalues $\Lambda$ in Eq. (10) with their approximations, i.e., the Ritz values. Therefore, in LanczosNet, the spectral filters are actually applied to frequency representations obtained by projecting the node features $X$ onto multiple diffusion maps with different scales.
5 RELATED WORK
We can roughly categorize the applications of machine learning, especially deep learning, to graph structured data into supervised/semi-supervised and unsupervised scenarios. For the former, a majority of work focuses on node/graph classification and regression [19, 20, 21, 1]. For the latter, unsupervised node/graph embedding learning [22, 23] is common. Recently, generative models for graphs, e.g., for molecule generation, have drawn some attention [24, 25].
Graph Convolution Based Models: The first class of learning models on graphs stems from graph signal processing (GSP) [12, 4], which generalizes convolution operators from traditional signal processing to graphs. Relying on spectral graph theory [26] and graph wavelet theory [27], several definitions of frequency representations of graph signals have been proposed [4]. Among these, the definition based on spectral graph theory is popular: the graph Fourier transform and its inverse are defined via the eigenbasis of the graph Laplacian. Following this line, many graph convolution based deep network models have emerged. [2, 28] are among the first to explore Laplacian based graph convolution in the context of deep networks.
Meanwhile, [29] performs graph convolution directly based on the adjacency matrix to predict molecule fingerprints. [30] proposes a strategy to form same-sized local neighborhoods and then apply convolutions like regular CNNs. Chebyshev polynomials are exploited by [7] to construct localized polynomial filters for graph convolution, which were later simplified in graph convolutional networks (GCN) [11]. Further accelerations of GCN based on importance sampling and control variate techniques have been proposed by [31, 32]. Several attention mechanisms have been introduced in [33, 34] to learn the weights over edges for GCNs. Notably, [8] proposes diffusion convolutional neural networks (DCNN), which use a diffusion operator for graph convolution. The Lanczos method has been explored for graph convolution in [35] for the purpose of acceleration. Specifically, they only consider the localized polynomial filter case of our LanczosNet variant and do not explore the low rank decomposition, learnable spectral filters, or graph kernel/node embedding learning as we do.
Recurrent Neural Networks based Models: The second class of models dates back to recursive neural networks [36], which recurrently apply neural networks to trees following a particular order. Graph neural networks (GNN) [3] generalize recursive neural networks to arbitrary graphs and exploit a synchronous schedule to propagate information on graphs. [37] later proposes gated graph neural networks (GGNN), which improve GNN by adding gated recurrent units and training the network with back-propagation through time. [38] learns graph embeddings by unrolling variational inference algorithms over a graph as an RNN. [39] introduces random subgraph sampling and explores different aggregation functions to scale GNN to large graphs. [40] proposes asynchronous propagation schedules based on graph partitions to improve GNN. Moreover, many applications of GNNs have recently emerged, including community detection [41], situation recognition [42], RGBD semantic segmentation [43], few-shot learning [21], probabilistic inference [44], continuous control in reinforcement learning [45, 46], and so on.
Graph based Manifold Learning: Non-linear dimensionality reduction methods, such as locally linear embedding (LLE) [47], ISOMAP [48], Hessian LLE [49], Laplacian eigenmaps [50], and diffusion maps [17], assume that high-dimensional data lie on or close to a low dimensional manifold and use the local affinities in a weighted graph to learn the global features of the data. They are invaluable tools for embedding complex data in a low dimensional space and for regressing functions over graphs. Spectral clustering [51, 52], semi-supervised learning [53], and out-of-sample extension [54] share similar geometrical considerations about the associated graphs. Anisotropic graph kernels are useful in many applications. For example, [15] improves spectral clustering results with a self-tuning diffusion kernel that takes into account the local variance at each node in the Gaussian kernel function. Similarly, [55] uses the anisotropic Gaussian kernel defined by local Mahalanobis distances to extract independent components from nonlinear measurements of independent stochastic Itô processes. Manifold learning with anisotropic kernels is also useful for data-driven dynamical system analysis, for example, detecting intrinsically slow variables of a stochastic dynamical system [56], filtering dynamical processes [57], and long range climate forecasting [58, 59].
The anisotropic diffusion is able to use the local statistics of the measurements to convey the geometric information of the underlying factors, rather than the specific realization or measurements at hand [60, 61].
6 EXPERIMENTS
In this section, we compare our two model variants against 9 recent graph networks: graph convolution networks for fingerprint (GCN-FP) [29], gated graph neural networks (GGNN) [37], diffusion convolutional neural networks (DCNN) [8], Chebyshev networks (ChebyNet) [7], graph convolutional networks (GCN) [11], message passing neural networks (MPNN) [62], graph sample and aggregate (GraphSAGE) [39], graph partition neural networks (GPNN) [40], and graph attention networks (GAT) [33]. We test them on two sets of tasks: (1) semi-supervised document classification on 3 citation networks [63], and (2) supervised regression of molecule properties on the QM8 quantum chemistry dataset [64]. For a fair comparison, we only tune model-related hyperparameters in all our experiments and share the others, e.g., using the same batch size. We carefully tune hyperparameters based on cross-validation and report the best performance of each competitor. Please refer to the appendix for more details on hyperparameters. We implement all methods using PyTorch [65] and release the code at https://github.com/lrjconan/LanczosNetwork.
6.1 CITATION NETWORKS
The three citation networks used in this experiment are Cora, Citeseer and Pubmed. In each network, nodes are documents connected by their citation links, and each node is associated with a bag-of-words feature vector. We use the same pre-processing procedure and follow the transductive setting of [63]. In particular, given a portion of the nodes and their labeled content categories, e.g., history or science, the task is to predict the category of the remaining unlabeled nodes within the same graph. The statistics of these datasets are summarized in the appendix. All experiments are repeated 10 times with different random seeds, and during each run all methods share the same random seed. We first experiment with the public data split and observe severe overfitting for almost all algorithms. To mitigate overfitting and test the robustness of the models, we then increase the difficulty of the task by reducing the portion of training examples to several levels and randomly splitting the data. Experimental results and the exact portions of training examples are shown in Table 1. We use the reported best hyperparameters when available for the public split and do cross-validation otherwise. Hyperparameters are reported in the appendix. From the table, we see that for random splits with different portions of training examples, since each run of the experiment uses a separate random split, the overall variance is larger than for the public counterpart. GAT achieves the best performance on the public split but performs poorly on random splits with different portions of training examples. This is partly due to the fact that GAT uses multiple dropout layers throughout the model, which helps only if there is overfitting. Either LanczosNet or AdaLanczosNet achieves state-of-the-art accuracy on the difficult random splits, and both perform comparably to GAT on the public split. This may be attributed to the fact that with fewer training examples, the model requires longer scales to spread supervised information over the graph. Our model provides an efficient way of leveraging such long scale information.
6.2 QUANTUM CHEMISTRY
We then benchmark all algorithms on the QM8 quantum chemistry dataset, which comes from a recent study on modeling quantum mechanical calculations of electronic spectra and excited state energies of small molecules [64]. The setup of QM8 is as follows. Atoms are treated as nodes and are connected to each other following the structure of the corresponding molecule. Each edge is labeled with a chemical bond. Note that two atoms in one molecule can have multiple edges belonging to different chemical bonds; a molecule is therefore modeled as a multigraph. We also use explicit hydrogens in the molecule graphs, as suggested in [62]. Since some models cannot easily leverage edge features, we use the molecule graph itself as the only input information for all models so that the comparison is fair. As demonstrated in our ablation studies, learning node embeddings for atoms is very helpful; we therefore augment all competitors and our models with this component. The task is to predict 16 different quantities of electronic spectra and energy per molecule graph, which boils down to a regression problem. There are 21786 molecule graphs in total, with on average around 16 nodes and 21 edges each. There are 6 different chemical bonds and 70 different atoms throughout the dataset. We use the split provided by DeepChem (https://deepchem.io/), which has 17428, 2179 and 2179 graphs for training, validation and testing respectively. Following [62, 66], we use mean squared error (MSE) as the training loss and weighted mean absolute error (MAE) as the evaluation metric. We repeat all experiments 3 times with different random seeds and report the average performance and standard deviation. The same random seed is shared across all methods per run. Hyperparameters are reported in the appendix. The validation and test MAE of all methods are shown in Table 2. LanczosNet and AdaLanczosNet achieve better performance than all other competitors. Note that DCNN also achieves good performance with carefully chosen scale parameters, since it is somewhat similar to our model in terms of leveraging multi-scale information.
6.3 ABLATION STUDY
We also conducted a thorough ablation study of our modeling components on the validation set of QM8.
Multi-Scale Graph Convolution: We first study the effect of multi-scale graph convolution. In order to rule out the impact of other factors, we use LanczosNet, do not employ the learnable spectral filter, and use the one-hot encoding as the node embedding. The results are shown in the first row of Table 3. Using long scales for graph convolution clearly helps on this task, and combining both short and long scales further improves results.
Lanczos Step: We then investigate the Lanczos step, since it affects the accuracy of the low rank approximation induced by the Lanczos algorithm. The results are shown in the second row of Table 3. The performance is better with a relatively small Lanczos step such as 10 or 20, which makes sense since the average number of nodes in this dataset is around 16.
Learning Spectral Filter: We then study whether learning the spectral filter improves performance. The results are shown in the third row of Table 3. Adding a 3-layer MLP does help reduce the error compared to not using any learnable spectral filter. Note that the MLP consists of 128 hidden units per layer and uses ReLU as the nonlinearity.
However, using a deeper MLP does not seem to help, which might be caused by optimization challenges.
Graph Kernel/Node Embedding: Finally, we study the usefulness of adding the graph kernel and node embeddings. We first fix the node embedding as the one-hot encoding and learn a 3-layer MLP, which is the function $f_\theta$ in Eq. (7). Next, we learn the node embeddings directly. Intuitively, learning embeddings amounts to learning a separate function $f$ per node, whereas our graph kernel learning enforces that $f$ is shared across all nodes and is thus more restrictive. As shown in the 3rd and 4th rows of the table, learning node embeddings significantly improves the performance of both LanczosNet and AdaLanczosNet and is more effective than learning graph kernels. Also, tuning the scale parameters further boosts the performance.
7 CONCLUSION
In this paper, we propose LanczosNet, which leverages the Lanczos algorithm to construct a low rank approximation of the graph Laplacian. It not only provides an efficient way to gather multi-scale information for graph convolution but also enables learning spectral filters. Additionally, we propose a model variant, AdaLanczosNet, which facilitates graph kernel and node embedding learning. We show that our model has a close relationship with graph based manifold learning, especially diffusion maps. Experimental results demonstrate that our model outperforms a range of other graph networks on challenging graph problems. We are currently exploring customized eigendecomposition methods for tridiagonal matrices, which could further improve AdaLanczosNet. Overall, work in this direction holds promise for allowing deep learning to scale up to very large graph problems.
ACKNOWLEDGMENTS
RL thanks Roger Grosse for introducing the Lanczos algorithm to him. RL was supported by Connaught International Scholarships. RL, RU and RZ were supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
8 APPENDIX
8.1 LOW RANK APPROXIMATION
We first state the following lemma from [67] without proof and then prove our Theorem 1 following [14].
Lemma 1. Let $A \in \mathbb{R}^{N \times N}$ be symmetric and $v$ an arbitrary vector. Define the Krylov subspace $\mathcal{K}_m \equiv \mathrm{span}\{v, Av, \ldots, A^{m-1}v\}$. Let $A = U\Lambda U^\top$ be the eigendecomposition of $A$ with $\Lambda_{i,i} = \lambda_i$ and $\lambda_1 \geq \cdots \geq \lambda_N$. Denoting $U = [u_1, \ldots, u_N]$ and $\mathcal{U}_j = \mathrm{span}\{u_1, \ldots, u_j\}$, then
$$\tan\angle(u_j, \mathcal{K}_m) \leq \frac{\sin\angle(v, \mathcal{U}_j) \prod_{k=1}^{j-1} (\lambda_k - \lambda_N)/(\lambda_k - \lambda_j)}{\cos\angle(v, u_j)\, T_{m-j}(1 + 2\gamma)},$$
where $T_{m-j}(x)$ is the Chebyshev polynomial of degree $m - j$ and $\gamma = (\lambda_j - \lambda_{j+1})/(\lambda_{j+1} - \lambda_N)$.
Theorem 1. Let $U\Lambda U^\top$ be the eigendecomposition of an $N \times N$ symmetric matrix $S$ with $\Lambda_{i,i} = \lambda_i$, $\lambda_1 \geq \cdots \geq \lambda_N$ and $U = [u_1, \ldots, u_N]$. Let $\mathcal{U}_j \equiv \mathrm{span}\{u_1, \ldots, u_j\}$. Assume the $K$-step Lanczos algorithm starts with vector $v$ and outputs the orthogonal $Q \in \mathbb{R}^{N \times K}$ and tridiagonal $T \in \mathbb{R}^{K \times K}$.
For any $j$ with $1 < j < N$ and $K > j$, we have,
$$\|S - QTQ^\top\|_F^2 \leq \sum_{i=1}^{j} \lambda_i^2 \left(\frac{\sin\angle(v, \mathcal{U}_i) \prod_{k=1}^{j-1} (\lambda_k - \lambda_N)/(\lambda_k - \lambda_j)}{\cos\angle(v, u_i)\, T_{K-i}(1 + 2\gamma_i)}\right)^2 + \sum_{i=j+1}^{N} \lambda_i^2,$$
where $T_{K-i}(x)$ is the Chebyshev polynomial of degree $K - i$ and $\gamma_i = (\lambda_i - \lambda_{i+1})/(\lambda_{i+1} - \lambda_N)$.
Proof. From the Lanczos algorithm, we have $SQ = QT$. Therefore,
$$\|S - QTQ^\top\|_F^2 = \|S - SQQ^\top\|_F^2 = \|S(I - QQ^\top)\|_F^2. \quad (11)$$
Let $P^\perp_Q \equiv I - QQ^\top$, the orthogonal projection onto the orthogonal complement of the subspace $\mathrm{span}\{Q\}$. Relying on the eigendecomposition, we have,
$$\|S - QTQ^\top\|_F^2 = \|U\Lambda U^\top(I - QQ^\top)\|_F^2 = \|\Lambda U^\top(I - QQ^\top)\|_F^2 = \|(I - QQ^\top)U\Lambda\|_F^2 = \left\|\left[\lambda_1 P^\perp_Q u_1, \ldots, \lambda_N P^\perp_Q u_N\right]\right\|_F^2, \quad (12)$$
where we use the facts that $\|RA\|_F^2 = \|A\|_F^2$ for any orthogonal matrix $R$ and $\|A^\top\|_F^2 = \|A\|_F^2$. Note that for any $j$ we have,
$$\left\|\left[\lambda_1 P^\perp_Q u_1, \ldots, \lambda_N P^\perp_Q u_N\right]\right\|_F^2 = \sum_{i=1}^{N} \lambda_i^2 \|P^\perp_Q u_i\|^2 \leq \sum_{i=1}^{j} \lambda_i^2 \|P^\perp_Q u_i\|^2 + \sum_{i=j+1}^{N} \lambda_i^2, \quad (13)$$
where we use the fact that for any $i$, $\|P^\perp_Q u_i\|^2 = \|u_i\|^2 - \|u_i - P^\perp_Q u_i\|^2 \leq \|u_i\|^2 = 1$. Note that $\mathrm{span}\{Q\} = \mathrm{span}\{v, Sv, \ldots, S^{K-1}v\} \equiv \mathcal{K}_K$ from the Lanczos algorithm. Therefore, we have,
$$\|P^\perp_Q u_i\| = |\sin\angle(u_i, \mathcal{K}_K)| \leq |\tan\angle(u_i, \mathcal{K}_K)|. \quad (14)$$
Applying the above lemma with $A = S$ completes the proof.
8.2 LANCZOS ALGORITHM
In exact arithmetic, the Lanczos vectors are orthogonal to each other. In floating point arithmetic, however, it is well known that round-off error makes the Lanczos vectors lose orthogonality as the iteration proceeds. One could apply a full Gram-Schmidt (GS) process $z = z - \sum_{i=1}^{j-1} (z^\top q_i) q_i$ after line 6 of Alg. 1 to ensure orthogonality. Other partial or selective re-orthogonalization schemes could also be explored. However, since we found that the orthogonality issue does not hurt overall performance with a small iteration number, e.g., $K = 20$, and the full GS process is computationally expensive, we do not add such a step. Although customized eigendecomposition methods, e.g., implicit QL [68], exist for tridiagonal matrices, we leave them for future exploration due to their complicated implementation.
8.3 EXPERIMENTS
For ChebyNet, we do not use graph coarsening in any experiment due to its demanding computational cost for large graphs. Also, for small molecule graphs, coarsening generally does not help, since it loses information compared to directly stacking another layer on the original graph.
Citation Networks: The statistics of the three citation networks are summarized in Table 4. We now report the important hyperparameters chosen via cross-validation for each method. All methods are trained with Adam with learning rate 1.0e−2 and weight decay 5.0e−4. The maximum number of epochs is set to 200. Early stopping with window size 10 is also adopted. We tune hyperparameters on Cora alone and fix them for Citeseer and Pubmed. For convolution based methods, we found 2 layers to work best. In GCN-FP, we set the hidden dimension to 64 and dropout to 0.5. In GGNN, we set the hidden dimension to 64, the number of propagation steps to 2 and the aggregation function to summation. In DCNN, we set the hidden dimension to 64, dropout to 0.5 and use diffusion scales {1, 2, 5}. In ChebyNet, we set the polynomial order to 5, the hidden dimension to 64 and dropout to 0.5. In GCN, we set the hidden dimension to 64 and dropout to 0.5. In MPNN, we use a GRU as the update function and set the hidden dimension to 64 and dropout to 0.5; no edge embedding is used, as there is just one edge type. In GraphSAGE, we set the number of sampled neighbors to 500, the hidden dimension to 64, dropout to 0.5 and the aggregation function to average.
In GAT, we set the numbers of heads of the two layers to 8 and 1, the hidden dimension per head to 8, and dropout to 0.6. In LanczosNet, we set the short and long diffusion scales to {1, 2, 5, 7} and {10, 20, 30} respectively. The hidden dimension is 64 and dropout is 0.5. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter. In AdaLanczosNet, we set the short and long diffusion scales to {1, 2, 5} and {10, 20} respectively. The hidden dimension is 64 and dropout is 0.5. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter.
Quantum Chemistry: We now report the important hyperparameters chosen via cross-validation for each method. All methods are trained with Adam with learning rate 1.0e−4 and no weight decay. The maximum number of epochs is set to 200. Early stopping with window size 10 is also adopted. For convolution based methods, we found 7 layers to work best. We augment all methods with 64-dimensional node embeddings and handle edge types by either feeding a multi-channel graph Laplacian matrix or directly adding a separate message function per edge type. For all methods, no dropout is used, since it slightly hurts performance. In GCN-FP, we set the hidden dimension to 128. In GGNN, we set the hidden dimension to 128, the number of propagation steps to 15 and the aggregation function to average. In DCNN, we set the hidden dimension to 128 and use diffusion scales {3, 5, 7, 10, 20, 30}. In ChebyNet, we set the polynomial order to 5 and the hidden dimension to 128. In GCN, we set the hidden dimension to 128. In MPNN, we use a GRU as the update function, set the number of propagation steps to 7, set the hidden dimension to 128, use a 1-layer MLP with 1024 hidden units and ReLU nonlinearity as the message function, and set the number of unroll steps of Set2Vec to 10. In GraphSAGE, we set the number of sampled neighbors to 40, the hidden dimension to 128 and the aggregation function to average. In GAT, we set the number of heads of all 7 layers to 8 and the hidden dimension per head to 16. In LanczosNet, we do not use short diffusion scales and set the long ones to {1, 2, 3, 5, 7, 10, 20, 30}. The hidden dimension is 128. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter. In AdaLanczosNet, we set the short and long diffusion scales to {1, 2, 3} and {5, 7, 10, 20, 30} respectively. The hidden dimension is 128. The Lanczos step is 20. A 3-layer MLP with 4096 hidden units and ReLU nonlinearity is used as the spectral filter.
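For reference, the QM8 LanczosNet settings above can be collected into a single configuration. The dictionary below is our own illustrative grouping of the stated values; the key names are hypothetical and are not taken from the released repository.

```python
# Illustrative grouping of the QM8 LanczosNet hyperparameters listed above;
# key names are our own, not those used in the released code.
lanczosnet_qm8_config = {
    "short_scales": [],                          # no short diffusion scales
    "long_scales": [1, 2, 3, 5, 7, 10, 20, 30],
    "hidden_dim": 128,
    "lanczos_step": 20,
    "spectral_filter_mlp": {"layers": 1, "hidden": 128, "act": "relu"},
    "node_embedding_dim": 64,
    "num_conv_layers": 7,
    "optimizer": {"name": "adam", "lr": 1.0e-4, "weight_decay": 0.0},
    "max_epochs": 200,
    "early_stop_window": 10,
    "dropout": 0.0,                              # dropout slightly hurts on QM8
}
```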
1. What is the novelty of the proposed method in using the Lanczos algorithm for graph convolutional networks?
2. How does the reviewer assess the clarity and completeness of the algorithm presentation?
3. What are the concerns regarding the learned features' ability to generalize to different graphs?
4. How does the reviewer evaluate the complexity of the proposed methods and their learning process?
5. Are the results impressive, and what kind of tradeoffs could be shown between complexity and performance?
6. How does the reviewer view the relation between the proposed method and other graph convolutional network architectures?
7. What suggestions does the reviewer have for improving the paper, particularly in terms of focusing on the core aspects?
Review
Review This paper proposes to use a Lanczos algorithm to get approximate decompositions of the graph Laplacian, which would facilitate the computation and learning of spectral features in graph convnets. It further proposes an extension with back propagation through the Lanczos algorithm, in order to train end to end models. Overall, the idea of using the Lanczos algorithm to bypass the computation of the eigendecomposition, and thus simplify filtering operations in graph signal processing, is not new [e.g., 35]. However, using this algorithm in the framework of graph convnets is new, and certainly interesting. The authors seem to claim that their method permits to learn spectral filters, which other methods could not do - this is not completely true and should probably be rephrased more clearly: many graph convnets actually learn features. The general construction and presentation of the algorithms are generally clear, and pretty complete. A few things that could be clarified are the following:
- in the spectral filters of Eq (4), what gets fundamentally different from the polynomial filters proposed in other graph convnet architectures?
- what happens when the graph changes? Do the learned features make sense on different graphs? And if yes, why? If not, the authors should be more explicit in their presentation
- what is the complexity of the proposed methods? That should be minimally discussed (at least), as it is part of the key motivations for the proposed algorithms
- how is the learning done in 3.2? If there is any learning at all? (btw, S below Eq (6) is a poor notation choice, as S is used earlier for something else)
- the results are not very impressive - they are good, but not stellar, and could benefit from showing an explicit tradeoff in terms of complexity too?
The discussion in the related work, and the analogy with manifold learning, are interesting. However, that brings us to probably one of the main issues with the paper - the authors are obviously very knowledgeable in graph convnets, graph signal processing, and optimisation. However, there are really too many things in this paper, which leads to numerous shortcuts, and sometimes confusion. Given the page limits, not everything can be treated with the level of detail that it would deserve. It might be good to consider trimming down the paper to its main and core aspects for the next version.
ICLR
Title LanczosNet: Multi-Scale Deep Graph Convolutional Networks
Abstract
We propose the Lanczos network (LanczosNet), which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximate computation of matrix powers but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning and node embedding learning. We show the connection between our LanczosNet and graph based manifold learning methods, especially diffusion maps. We benchmark our model against several recent deep graph networks on citation networks and the QM8 quantum chemistry dataset. Experimental results show that our model achieves state-of-the-art performance in most tasks.
1 INTRODUCTION
Graph-structured data is ubiquitous in real world applications, e.g., social networks, gene expression regulatory networks, protein-protein interactions, and many other physical systems. How to model such data using machine learning, especially deep learning, has become a central research question [1]. For supervised and semi-supervised tasks such as graph or node classification and regression, learning based models can be roughly categorized into two classes, formulated either in terms of graph convolutions [2] or recurrent neural networks [3]. Methods based on recurrent neural networks (RNN), especially graph neural networks (GNN) [3], repeatedly unroll a message passing process over the graph by exchanging information between the nodes. In theory, a GNN can have as large a model capacity as its convolutional counterpart. However, due to the instability of RNN dynamics and the difficulty of optimization, GNN and its variants are generally slower and harder to train. In this paper we focus on graph convolution based methods. Built on top of graph signal processing (GSP) approaches [4], these methods extend convolution operators to graphs by leveraging spectral graph theory, graph wavelet theory, etc. Graph convolutions can be stacked and combined with nonlinear activation functions to build deep models, just as in regular convolutional neural networks (CNN). They often have large model capacity and achieve promising results. Also, graph convolution can be easily implemented with modern scientific computing libraries. There are two main issues with current graph convolution approaches. First, it is not clear how to efficiently leverage multi-scale information except by directly stacking multiple layers. Having an effective multi-scale scheme is key for enabling the model to be invariant to scale changes and to capture many intrinsic regularities [5, 6]. Graph coarsening methods have been proposed to form a hierarchy of multi-scale graphs [7], but this coarsening process is fixed during both inference and learning, which may introduce bias. Alternatively, the graph signal can be multiplied by the exponentiated graph Laplacian, where the exponent indicates the scale of the diffusion process on the graph [8]. Unfortunately, the computation and memory costs increase linearly with the exponent, which prohibits the exploitation of long scale diffusion in practice. Other fast methods for computing matrix powers, such as exponentiation by squaring, are very memory intensive, even for moderately large graphs. Second, spectral filters within current graph convolution based models are mostly fixed.
In the context of image processing, using a Gaussian kernel along with a spectral filter $f(\lambda) = 2\lambda - \lambda^2$ corresponds to running the heat equation forward (blurring) and then running it backwards (sharpening) [9]. The multi-scale kernels introduced in [10] extend the idea of this forward-backward diffusion process and can be represented as polynomials of matrices related to a Gaussian kernel. Learning the spectral filters is thus beneficial, since it amounts to learning the stochastic processes on the graph which produce useful representations for particular tasks. However, how to learn spectral filters with large model capacity is largely underexplored.
In this paper, we propose the Lanczos network (LanczosNet) to overcome the aforementioned issues. First, based on the tridiagonal decomposition implied by the Lanczos algorithm, our model exploits a low rank approximation of the graph Laplacian. This approximation facilitates efficient computation of matrix powers, thus gathering multi-scale information easily. Second, we design learnable spectral filters based on the approximation, which effectively increase model capacity. In scenarios where one wants to learn the graph kernel and/or node embeddings, we propose another variant, the adaptive Lanczos network (AdaLanczosNet), which back-propagates through the Lanczos algorithm. We show that our proposed model is closely related to graph based manifold learning approaches such as diffusion maps, which could potentially inspire more work at the intersection of deep graph networks and manifold learning. We benchmark against 9 recent deep graph networks, including both convolutional and RNN based methods, on citation networks and a quantum chemistry graph regression problem, and achieve state-of-the-art results in most tasks.
2 BACKGROUND
In this section, we introduce some background material. A graph $\mathcal{G}$ with $N$ nodes is denoted as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $A \in \mathbb{R}^{N \times N}$ is an adjacency matrix which can be either binary or real valued. $X \in \mathbb{R}^{N \times F}$ is the compact representation of the node features (the graph signal in the GSP literature). For any node $v \in \mathcal{V}$, we denote its features as a row vector $X_{v,:} \in \mathbb{R}^{1 \times F}$, and we use $X_{:,i}$ to denote the $i$-th column of $X$.
Graph Fourier Transform: Given input node features $X$, we now discuss how to perform graph convolution. Based on the adjacency matrix $A$, we compute the graph Laplacian $L$, which can be defined in different ways: (1) $L = D - A$; (2) $L = I - D^{-1}A$; (3) $L = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, where $D$ is the diagonal degree matrix with $D_{i,i} = \sum_{j=1}^{N} A_{i,j}$. Definition (3) is often used in the GSP literature because it is real symmetric, positive semi-definite (PSD) and has eigenvalues lying in $[0, 2]$. In certain applications [11], it was found that adding self-loops, i.e., changing $A$ to $A + I$, and using the affinity matrix $S = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ instead of $L$ gives better results. Since $S$ is real symmetric, by spectral decomposition we have $S = U\Lambda U^\top$, where $U$ is an orthogonal matrix whose column vectors are the eigenvectors of $S$. The diagonal matrix $\Lambda$ contains the sorted eigenvalues, with $\Lambda_{i,i} = \lambda_i$ and $1 \geq \lambda_1 \geq \cdots \geq \lambda_N \geq -1$. Based on this eigenbasis, we can define the graph Fourier transform $Y = U^\top X$ and its inverse transform $\hat{X} = UY$ following [12]. Note that $L = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ shares the same eigenvectors with $S = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, and the eigenvalues of $L$ are $\mu_i = 1 - \lambda_i$. Therefore, $L$ and $S$ share the same graph Fourier transform, which justifies the usage of $S$.
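To ground these definitions, here is a small NumPy sketch that builds the normalized Laplacian $L$, the affinity matrix $S$, and the graph Fourier transform of a signal. The toy adjacency matrix and variable names are our own illustrative choices.

```python
import numpy as np

# Toy symmetric adjacency matrix with self-loops added, as in A -> A + I.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float) + np.eye(4)

d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
S = d_inv_sqrt @ A @ d_inv_sqrt          # affinity matrix S = D^-1/2 A D^-1/2
L = np.eye(4) - S                        # normalized Laplacian L = I - S

# S is symmetric, so S = U Lambda U^T; eigh sorts eigenvalues in ascending order.
lam, U = np.linalg.eigh(S)

X = np.random.randn(4, 2)                # a two-channel graph signal
Y = U.T @ X                              # graph Fourier transform Y = U^T X
X_rec = U @ Y                            # inverse transform recovers X
assert np.allclose(X, X_rec)

# L shares the eigenvectors of S, with eigenvalues mu_i = 1 - lambda_i.
assert np.allclose(np.sort(1.0 - lam), np.linalg.eigvalsh(L))
```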
Different forms of filters can then be constructed in the spectral domain.
Localized Polynomial Filter: A $\tau$-localized polynomial filter is typically adopted in the GSP literature [12], $g_w(\Lambda) = \sum_{t=0}^{\tau-1} w_t \Lambda^t$, where $w = [w_0, w_1, \ldots, w_{\tau-1}] \in \mathbb{R}^{\tau \times 1}$ contains the filter coefficients, i.e., the learnable parameters. The filter is $\tau$-localized in the sense that the filtering leverages information from nodes which are at most $\tau$ hops away. One prominent example of this class is the Chebyshev polynomial filter [7]. Here the graph Laplacian is modified to $\tilde{L} = 2L/\lambda_{\max} - I$ such that its eigenvalues fall into $[-1, 1]$. Then the Chebyshev polynomial recursion is applied: $\tilde{X}(t) = 2\tilde{L}\tilde{X}(t-1) - \tilde{X}(t-2)$, where $\tilde{X}(0) = X$ and $\tilde{X}(1) = \tilde{L}X$. For a pair of input and output channels $(i, j)$, the final filtering becomes $y_{i,j} = [\tilde{X}(0)_{:,i}, \ldots, \tilde{X}(\tau-1)_{:,i}]\, w_{i,j}$, where $[\cdot]$ denotes concatenation along columns and $w_{i,j} \in \mathbb{R}^{\tau \times 1}$. Chebyshev polynomials provide two benefits: they form an orthogonal basis of $L^2([-1,1], dy/\sqrt{1-y^2})$, and one avoids the spectral decomposition of $\tilde{L}$ in the filtering. However, the functional form of the spectral filter is not learnable and cannot adapt to the data. In this paper, instead of using the modified graph Laplacian $\tilde{L}$, we use the aforementioned $S$. Therefore, we can write the localized polynomial filtering in a more general form as,
$$Y = \sum_{t=0}^{\tau-1} g_t(X, S, \ldots, S^t)\, W_t, \quad (1)$$
where $g_t$ is a function that takes the node features $X$ and powers of the affinity matrix up to the $t$-th order as input and outputs an $N \times F$ matrix. $W_t \in \mathbb{R}^{F \times O}$ is the corresponding filter coefficient and $Y \in \mathbb{R}^{N \times O}$ is the output. One can easily verify that in the Chebyshev polynomial filter, any $i$-th column of the corresponding $g_t(X, S, \ldots, S^t)$ lies in the Krylov subspace $\mathcal{K}_{t+1}(S, X_{:,i}) \equiv \mathrm{span}\{X_{:,i}, SX_{:,i}, \ldots, S^t X_{:,i}\}$. This naturally motivates the usage of Krylov subspace methods such as the Lanczos algorithm [13], since it provides an orthonormal basis for the above Krylov subspace, thus making the filter coefficients compact.
3 LANCZOS NETWORKS
In this section, we first introduce the Lanczos algorithm, which approximates the affinity matrix $S$. We present our first model, called the Lanczos network (LanczosNet), in which we execute the Lanczos algorithm once per graph and fix the basis throughout inference and learning. We then introduce the adaptive Lanczos network (AdaLanczosNet), in which we learn the graph kernel and/or node embeddings by back-propagating through the Lanczos algorithm.
Algorithm 1: Lanczos Algorithm
1: Input: $S$, $x$, $K$, $\epsilon$
2: Initialization: $\beta_0 = 0$, $q_0 = 0$, and $q_1 = x/\|x\|$
3: For $j = 1, 2, \ldots, K$:
4:   $z = Sq_j$
5:   $\gamma_j = q_j^\top z$
6:   $z = z - \gamma_j q_j - \beta_{j-1} q_{j-1}$
7:   $\beta_j = \|z\|_2$
8:   If $\beta_j < \epsilon$, quit
9:   $q_{j+1} = z/\beta_j$
10: End for
11: $Q = [q_1, q_2, \cdots, q_K]$
12: Construct $T$ following Eq. (2)
13: Eigendecomposition $T = BRB^\top$
14: Return $V = QB$ and $R$.
Algorithm 2: LanczosNet
1: Input: signal $X$, Lanczos outputs $V$ and $R$, scale index sets $\mathcal{S}$ and $\mathcal{I}$
2: Initialization: $Y_0 = X$
3: For $\ell = 1, 2, \ldots, \ell_c$:
4:   $Z = Y_{\ell-1}$, $\mathcal{Z} = \emptyset$
5:   For $j = 1, 2, \ldots, \max(\mathcal{S})$:
6:     $Z = SZ$
7:     If $j \in \mathcal{S}$:
8:       $\mathcal{Z} = \mathcal{Z} \cup Z$
9:   For $i \in \mathcal{I}$:
10:    $\mathcal{Z} = \mathcal{Z} \cup V\hat{R}(i)V^\top Y_{\ell-1}$
11:   $Y_\ell = \mathrm{concat}(\mathcal{Z})\, W_\ell$
12:   If $\ell < \ell_c$:
13:     $Y_\ell = \mathrm{Dropout}(\sigma(Y_\ell))$
14: Return $Y_{\ell_c}$.
3.1 LANCZOS ALGORITHM
Given the aforementioned affinity matrix $S$ (when faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm) and node features $x \in \mathbb{R}^{N \times 1}$, the $N$-step Lanczos algorithm computes an orthogonal matrix $Q$ and a symmetric tridiagonal matrix $T$ such that $Q^\top S Q = T$. We denote $Q = [q_1, \cdots, q_N]$, where the column vector $q_i$ is the $i$-th Lanczos vector. $T$ is illustrated below,
$$T = \begin{bmatrix} \gamma_1 & \beta_1 & & \\ \beta_1 & \ddots & \ddots & \\ & \ddots & \ddots & \beta_{N-1} \\ & & \beta_{N-1} & \gamma_N \end{bmatrix}. \quad (2)$$
One can verify that $Q$ forms an orthonormal basis of the Krylov subspace $\mathcal{K}_N(S, x)$ and that the first $K$ columns of $Q$ form an orthonormal basis of $\mathcal{K}_K(S, x)$. The Lanczos algorithm is shown in detail in Alg. 1. Intuitively, if we investigate the $j$-th column of the system $SQ = QT$ and rearrange terms, we obtain $\beta_j q_{j+1} = Sq_j - \beta_{j-1} q_{j-1} - \gamma_j q_j$, which explains lines 4 to 6 of the pseudocode, i.e., the algorithm solves the system in an iterative manner. Note that the most expensive operation in the algorithm is the matrix-vector multiplication in line 4. After obtaining the tridiagonal matrix $T$, we can compute the Ritz values and Ritz vectors, which approximate the eigenvalues and eigenvectors of $S$, by diagonalizing $T$. We only add this step in LanczosNet, as we found that back-propagating through the eigendecomposition in AdaLanczosNet is not numerically stable.
3.2 LANCZOSNET
In this section, we first show the construction of the localized polynomial filter based on the Lanczos algorithm's output and discuss its limitations. We then explain how to construct the spectral filter using a particular low rank approximation and how to make the filter learnable. Finally, we elaborate on how to construct the multi-scale graph convolution and build a deep network.
Localized Polynomial Filter: For ease of demonstrating the concept of the Krylov subspace, we consider a pair of input and output channels $(i, j)$. We denote the input as $X_{:,i} \in \mathbb{R}^{N \times 1}$ and the output as $Y_{:,j} \in \mathbb{R}^{N \times 1}$. Executing the Lanczos algorithm for $K$ steps with the normalized $X_{:,i}$ as the starting vector, one obtains the orthonormal basis $\tilde{Q}$ of $\mathcal{K}_K(S, X_{:,i})$ and the corresponding tridiagonal matrix $\tilde{T}$. Recall that in localized polynomial filtering, given the orthonormal basis of $\mathcal{K}_K(S, X_{:,i})$, one can write the graph convolution as
$$Y_{:,j} = \tilde{Q} w_{i,j}, \quad (3)$$
where $\tilde{Q} \in \mathbb{R}^{N \times K}$ depends on $X_{:,i}$ and $w_{i,j} \in \mathbb{R}^{K \times 1}$ is the learnable parameter. This filter has the benefit that the corresponding learnable coefficients are compact, owing to the orthonormal basis. However, if one wants to stack multiple graph convolution layers, the dependency of $\tilde{Q}$ on $X_{:,i}$ implies that a separate run of the Lanczos algorithm is necessary for each graph convolution layer, which is computationally demanding.
Spectral Filter: Ideally, we would like to compute the Lanczos vectors only once during the inference of a deep graph convolutional network. Luckily, this can be achieved if we take an alternative view of the Lanczos algorithm. In particular, we can choose a random starting vector with unit norm and treat the $K$-step Lanczos output as the low rank approximation $S \approx QTQ^\top$. Note that here $Q \in \mathbb{R}^{N \times K}$ has orthonormal columns and does not depend on the node features $X_{:,i}$, and $T$ is a $K \times K$ tridiagonal matrix. Following [14], we prove the theorem below to bound the approximation error.
Theorem 1. Let $U\Lambda U^\top$ be the eigendecomposition of an $N \times N$ symmetric matrix $S$ with $\Lambda_{i,i} = \lambda_i$, $\lambda_1 \geq \cdots \geq \lambda_N$ and $U = [u_1, \ldots, u_N]$. Let $\mathcal{U}_j \equiv \mathrm{span}\{u_1, \ldots, u_j\}$. Assume the $K$-step Lanczos algorithm starts with vector $v$ and outputs the orthogonal $Q \in \mathbb{R}^{N \times K}$ and tridiagonal $T \in \mathbb{R}^{K \times K}$. For any $j$ with $1 < j < N$ and $K > j$, we have,
$$\|S - QTQ^\top\|_F^2 \leq \sum_{i=1}^{j} \lambda_i^2 \left(\frac{\sin\angle(v, \mathcal{U}_i) \prod_{k=1}^{j-1} (\lambda_k - \lambda_N)/(\lambda_k - \lambda_j)}{\cos\angle(v, u_i)\, T_{K-i}(1 + 2\gamma_i)}\right)^2 + \sum_{i=j+1}^{N} \lambda_i^2,$$
where $T_{K-i}(x)$ is the Chebyshev polynomial of degree $K - i$ and $\gamma_i = (\lambda_i - \lambda_{i+1})/(\lambda_{i+1} - \lambda_N)$. We leave the proof to the appendix.
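For concreteness, here is a self-contained NumPy sketch of Alg. 1, together with a quick check that $S^t \approx V R^t V^\top$ on a random symmetric matrix. The function name `lanczos` and the test matrix are our own illustrative choices; no re-orthogonalization is performed, matching the plain algorithm above.

```python
import numpy as np

def lanczos(S, x, K, eps=1e-8):
    """Sketch of Alg. 1: K-step Lanczos on symmetric S with start vector x.

    Returns V = QB and the Ritz values R (as a vector), so that
    S is approximated by V @ diag(R) @ V.T.
    """
    N = S.shape[0]
    Q = np.zeros((N, K + 1))          # column 0 holds q_0 = 0
    Q[:, 1] = x / np.linalg.norm(x)
    beta = np.zeros(K + 1)
    gamma = np.zeros(K + 1)
    for j in range(1, K + 1):
        z = S @ Q[:, j]
        gamma[j] = Q[:, j] @ z
        z = z - gamma[j] * Q[:, j] - beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(z)
        if beta[j] < eps:             # early termination, line 8 of Alg. 1
            K = j
            break
        if j < Q.shape[1] - 1:
            Q[:, j + 1] = z / beta[j]
    # Tridiagonal T from Eq. (2), then its eigendecomposition T = B R B^T.
    T = (np.diag(gamma[1:K + 1])
         + np.diag(beta[1:K], 1) + np.diag(beta[1:K], -1))
    R, B = np.linalg.eigh(T)
    return Q[:, 1:K + 1] @ B, R

# Quick check of the low rank power approximation S^t ~= V R^t V^T.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
S = (M + M.T) / 50.0                  # a symmetric test matrix
V, R = lanczos(S, rng.standard_normal(50), K=20)
t = 10
approx = V @ np.diag(R ** t) @ V.T    # only the Ritz values are powered
print(np.linalg.norm(np.linalg.matrix_power(S, t) - approx))
```

This makes the computational point of Section 3.2 tangible: once $V$ and $R$ are in hand, changing the scale $t$ costs only an elementwise power of the $K$ Ritz values, never a dense matrix power.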
Note that the term ( ∑N i=j+1 λ 2 i ) 1/2 is the Frobenius norm of the error between S and the best rank-j approximation Sj . We decompose the tridiagonal matrix T = BRB>, where the K ×K diagonal matrix R contains the Ritz values and B ∈ RK×K is an orthogonal matrix. We have a low rank approximation of the affinity matrix S ≈ V RV >, where V = QB. Therefore, we can rewrite the graph convolution as, Yj = [Xi, SXi, . . . , S K−1Xi]wi,j ≈ [Xi, V RV >Xi, . . . , V RK−1V >Xi]wi,j , (4) The difference between Eq. (3) and Eq. (4) is that the former uses the orthonormal basis while the latter uses the approximation of the direct basis of KK(S,X:,i). Since we explicitly operate on the approximation of spectrum, i.e., Ritz value, it is a spectral filter. Such a filtering form will have significant computational benefits while considering the long range/scale dependency due to the fact that the t-th power of S can be approximated as St ≈ V RtV >, where we only need to raise the diagonal entries of R to the power t. Learning the Spectral Filter Following the previous filter, one can naturally design learnable spectral filters. Specifically, we use K different spectral filters of which the k-th output R̂(k) = fk([R,R 1, . . . , RK−1]), where fk is a multi-layer perceptron (MLP) and R is the diagonal vector of the corresponding diagonal matrix. We then construct a diagonal matrix R̂(k) based on the vector output of fk. Therefore, we have the following filtering, Yj = [Xi, V R̂(1)V >Xi, . . . , V R̂(K − 1)V >Xi]wi,j . (5) Note that it includes the polynomial filter as a special case. When positive semi-definiteness is a concern, one can apply an activation function like ReLU to the output of the MLPs. Multi-scale Graph Convolution Using any above filter, one can construct a deep graph convolutional network which leverages multi-scale information. Taking the learnable spectral filter as an example, we can write one graph convolution layer in a compact way as below, Y = [ LS1X, . . . , LSMX,V R̂(I1)V >X, . . . , V R̂(IN )V >X ] W, (6) where weight W ∈ R(M+E)D×O, S is a set of M short scale parameters and I is a set of E long scale parameters. We consider a non-negative integer as scale parameter, e.g., S = {0, 1, . . . , 5}, I = {10, 20, . . . , 50}. Note that the convolution corresponding to short scales is similar to [8] where the number of matrix-vector multiplications is tied to the maximum scale of S. In contrast, the convolution of long scales decouples the Lanczos step K and scale parameters I, thus permitting great freedom in tuning scales as hyperparameters. One can choose K properly to balance the computation cost and the accuracy of the low rank approximation. In our experiments, short scales are typically less than 10 which have reasonable computation cost. Moreover, the short scale part could sometimes remedy cases where the low rank approximation is crude. We set the long scale no larger than 100 in our experiments. If the maximum eigenvalue of S is 1, we can even raise the power to infinity, which corresponds to the equilibrium state of diffusion process on the graph. To build a deep network, we can stack multiple such graph convolution layers where each layer has its own spectral filter weights. Nonlinear activation functions, e.g., ReLU, and/or Dropout can be added between layers. The inference algorithm of such a deep network is shown in Alg. 2. With the top layer representation, one can use softmax to perform classification or a fully connected layer to perform regression. 
The Lanczos algorithm is run beforehand once per graph to construct the network and will not be invoked during inference and learning. 3.3 ADALANCZOSNET In this section, we explain another variant which back-propagates through the Lanczos algorithm. This facilitates learning the graph kernel and/or node embeddings. Graph Kernel Assume we are given node featuresX and a graph G. We are interested in learning a graph kernel function with the hope that it can capture the intrinsic geometry of node representations. Given data points xi, xj ∈ X , we define the anisotropic graph kernel, k : X × X 7→ R as, k(xi, xj) = exp ( −‖(fθ(xi)− fθ(xj))‖ 2 ) . (7) where fθ is a MLP. This class of anisotropic kernels is very expressive and includes self-tuning kernel [15] and the Gaussian kernel with Mahalanobis distances [16]. Moreover, for different kernel functions, the resulted graph Laplacians will converge to different limiting operators asymptotically. For example, even for isotropic Gaussian kernels, the graph Laplacian can converge pointwise to the Laplace-Beltrami, Fokker-Planck operator and heat kernel under different normalizations [17, 18]. In practice, we notice that choosing = ∑ (p,q)∈E ‖(fθ(xp)− fθ(xq))‖2/|E| helps normalizing the pairwise distances, thus avoiding the gradient vanishing issue due to the exponential function. This type of learnable anisotropic diffusion is useful in two ways. First, it increases model capacity, thus potentially gaining better performance. Second, it can better adapt to the non-uniform density of the data points on the manifold or nonlinear measurements of the underlying data points on a maninfold. We can construct an adjacency matrix A such that Ai,j = k(xi, xj) if (i, j) ∈ E and Ai,j = 0 otherwise. Then we can obtain the affinity matrix S = D− 1 2AD− 1 2 . Node Embedding In some applications, we do not observe the node features X but only the graph itself G, so we may need to learn an embedding vector per node. For example, this scenario applies in the quantum chemistry tasks where a node, i.e., an atom within a molecule, has rarely observed features. We can still use the above graph kernel to construct the affinity matrix which results in the same form except f is discarded. Learning embedding X naturally amounts to learning the similarities between nodes. Tridiagonal Decomposition Although all operations in LanczosNet are differentiable, we empirically observe that backpropagation through the eigendecomposition of the tridiagonal matrix is numerically instable. The situation would be even worse if multiple eigenvalues are numerically close or one takes a large power in Eq. (6). Therefore, we instead directly leverage the approximated tridiagonal decomposition S ≈ QTQ> which is obtained by running the Lanczos algorithm K steps. Then we can rewrite the graph convolution layer with learnable spectral filter as following, Y = [ SS1X, . . . , SSMX,Qf1 ( T I1 ) Q>X, . . . , QfN ( T IN ) Q>X ] W, (8) where fi is a learnable spectral filter. Each f is constructed from a separate MLP denoting as g which takes T ∈ RK×K as input and outputs a same sized matrix. To ensure f outputs a symmetric matrix, we define f(T ) = g(T ) + g(T )>. With the above parameterization of the graph Laplacian and tridiagonal decomposition, we can back-propagate the loss through the Lanczos algorithm to either the graph kernel parameters θ or the node embedding X . The overall model is similar to the LanczosNet except that the Lanczos algorithm needs to be invoked for each inference pass. 
4 LANCZOS NETWORK AND DIFFUSION MAPS In this section, we highlight the relationship between LanczosNet and an important example of graph based manifold learning algorithms, diffusion maps [17]. Diffusion Maps In diffusion maps, the weights in the adjacency matrix define a discrete random walk over the graph, where the Markov transition matrix P = D−1A shows the transition probability in a single time step. Therefore, P ti,j sums the probability of all paths of length t that start at node i and end at node j. It is shown in [17] that P can be used to define an inner product in a Hilbert space. Specifically, we use the eigenvalues and right eigenvectors {λl, ψl}Nl=1 of P to define a diffusion mapping Φt as, Φt(i) = ( λt1ψ1(i), λ t 2ψ2(i), . . . , λ t NψN (i) ) , (9) where ψl(i) is the i-th entry of the eigenvector ψl. Since the row stochastic matrix P is similar to S, i.e., P = D−1/2SD1/2, we have ψl = D−1/2ul. The mapping Φt satisfies ∑N k=1 P t i,kP t j,k/Dk,k = 〈Φt(i),Φt(j)〉, where 〈·, ·〉 is the inner product over Euclidean space. The diffusion distance between i and j, d2DM,t(i, j) = ‖Φt(i)− Φt(j)‖ 2 = ∑N k=1 (P t i,k − P tj,k)2/Dk,k, is the weighted-l2 proximity between the probability clouds of random walkers starting at i and ending at j after t steps. Since all eigenvalues of S reside in the interval [−1, 1], for some large t, λtl in Eq. (9) is close to zero, and dDM,t can be well approximated by using only a few largest eigenvalues and their eigenvectors. Connection to Graph Convolution Apart from using diffusion maps to embed node features X at different time scales, one can use it to compute the frequency representations of X as below, X̂ = ΛtU>X, (10) where U are the eigenvectors of S and define the graph Fourier transform. The frequency representation X̂ is weighted by the powers of the eigenvalues λtl , suppressing entries with small magnitude of eigenvalues. Recall that in the convolution layer Eq. (5) of LanczosNet, we use multiple such frequency representations with different scales t and replace the eigenvalues Λ in Eq. (10) with their approximation, i.e., Ritz values. Therefore, in LanczosNet, spectral filters are actually applied to the frequency representations which are obtained by projecting the node features X onto multiple diffusion maps with different scales. 5 RELATED WORK We can roughly categorize the application of machine learning, especially deep learning, to graph structured data into supervised/semi-supervised and unsupervised scenarios. For the former, a majority of work focuses on node/graph classification and regression [19, 20, 21, 1]. For the latter, unsupervised node/graph embedding learning [22, 23] is common. Recently, generative models for graphs, such as molecule generation, has drawn some attention [24, 25]. Graph Convolution Based Models The first class of learning models on graphs stems from graph signal processing (GSP) [12, 4] which tries to generalize convolution operators from traditional signal processing to graphs. Relying on spectral graph theory [26] and graph wavelet theory [27], several definitions of frequency representations of graph signals have been proposed [4]. Among these, spectral graph theory based one is popular, where graph Fourier transform and its inverse are defined based on the eigenbasis of the graph Laplacian. Following this line, many graph convolution based deep network models emerge. [2, 28] are among the first to explore Laplacian based graph convolution within the context of deep networks. 
Meanwhile, [29] performs graph convolution directly based on the adjacency matrix to predict molecule fingerprints. [30] proposes a strategy to form same sized local neighborhoods and then apply convolution like regular CNNs. Chebyshev polynomials are exploited by [7] to construct localized polynomial filters for graph convolution and are later simplified in graph convolutional networks (GCN) [11]. Further accelerations for GCN based on importance sampling and control variate techniques have been proposed by [31, 32]. Several attention mechanisms have been introduced in [33, 34] to learn the weights over edges for GCNs. Notably, [8] proposes diffusion convolutional neural networks (DCNN) which uses diffusion operator for graph convolution. Lanczos method has been explored for graph convolution in [35] for the purpose of acceleration. Specifically, they only consider the localized polynomial filter case in our LanczosNet variant and do not explore the low rank decomposition, learnable spectral filter and graph kernel/node embedding learning as we do. Recurrent Neural Networks based Models The second class of models dates back to recursive neural networks [36] which recurrently apply neural networks to trees following a particular order. Graph neural networks (GNN) [3] generalize recursive neural networks to arbitrary graphs and exploit the synchronous schedule to propagate information on graphs. [37] later proposes the gated graph neural networks (GGNN) which improves GNN by adding gated recurrent unit and training the network with back-propagation through time. [38] learns graph embeddings via unrolling variational inference algorithms over a graph as a RNN. [39] introduces random subgraph sampling and explores different aggregation functions to scale GNN to large graphs. [40] proposes asynchronous propagation schedules based on graph partitions to improve GNN. Moreover, many applications have recently emerged for GNNs, including community detection [41], situation recognition [42], RGBD semantic segmentation [43], few-shot learning [21], probabilistic inference [44], continuous control of reinforcement learning [45, 46] and so on. Graph based Manifold Learning The non-linear dimensionality reduction methods, such as locally linear embedding (LLE) [47], ISOMAP [48], Hessian LLE [49], Laplacian eigenmaps [50], and diffusion maps [17], assume that the high-dimensional data lie on or close to a low dimensional manifold and use the local affinities in the weighted graph to learn the global features of the data. They are invaluable tools for embedding complex data in a low dimensional space and regressing functions over graphs. Spectral clustering [51, 52], semi-supervised learning [53], and out-of-sample extension [54] share the similar geometrical consideration of the associated graphs. Anisotropic graph kernels are useful in many applications. For example, [15] improves the spectral clustering results with a self-tuning diffusion kernel that takes into account the local variance at each node in the Gaussian kernel function. Similarly, [55] uses the anisotropic Gaussian kernel defined by the local Mahalanobis distances to extract independent components from nonlinear measurements of independent stochastic Itô processes. Manifold learning with anisotropic kernel is also useful for data-driven dynamical system analysis, for example, detecting intrinsically slow variable for a stochastic dynamical system [56], filtering dynamical processes [57], and long range climate forecasting [58, 59]. 
The anisotropic diffusion is able to use the local statistics of the measurements to convey the geometric information on the underlying factors rather than the specific realization or measurements at hand [60, 61]. 6 EXPERIMENTS In this section, we compare our two model variants against 9 recent graph networks, including graph convolution networks for fingerprint (GCN-FP) [29], gated graph neural networks (GGNN) [37], diffusion convolutional neural networks (DCNN) [8], Chebyshev networks (ChebyNet) [7], graph convolutional networks (GCN) [11], message passing neural networks (MPNN) [62], graph sample and aggregate (GraphSAGE) [39], graph partition neural networks (GPNN) [40], graph attention networks (GAT) [33]. We test them on two sets of tasks: (1) semi-supervised document classification on 3 citation networks [63], (2) supervised regression of molecule property on QM8 quantum chemistry dataset [64]. For fair comparison, we only tune model-related hyperparameters in all our experiments and share the others, e.g., using the same batch size. We carefully tune hyperparameters based on cross-validation and report the best performance of each competitor. Please refer to the appendix for more details on hyperparameters. We implement all methods using PyTorch [65] and release the code at https://github.com/lrjconan/LanczosNetwork. 6.1 CITATION NETWORKS Three citation networks used in this experiment are: Cora, Citeseer and Pubmed. For each network, nodes are documents and connected based on their citation links. Each node is associated with a bag-of-words feature vector. We use the same pre-processing procedure and follow the transductive setting as in [63]. In particular, given a portion of nodes and their labeled content categories, e.g., history, science, the task is to predict the category for other unlabeled nodes within the same graph. The statistics of these datasets are summarized in the appendix. All experiments are repeated 10 times with different random seeds. During each run, all methods share the same random seed. We first experiment with the public data split and observe severe overfitting for almost all algorithms. To mitigate overfitting and test the robustness of models, we then increase the difficulty of the task by reducing the portion of training examples to several levels and randomly split data. Experimental results and exact portions of training examples are shown in Table. 1. We use the reported best hyperparameters when available for public split and do cross-validation otherwise. Hyperparameters are reported in the appendix. From the table, we see that for random splits with different portion of training examples, since each run of experiment uses a separate random split, the overall variance is larger than its public counterpart. We see that GAT achieves the best performance on the public split but performs poorly on random splits with different portions of training examples. This is partly due to the fact that GAT uses multiple dropout throughout the model which helps only if there is overfitting. We can see that either LanczosNet or AdaLanczosNet achieves state-of-the-art accuracy on random difficult splits and performs closely with respect to GAT on public splits. This may be attributed to the fact that with fewer training examples, the model requires longer scale schemes to spread supervised information over the graph. Our model provides an efficient way of leveraging such long scale information. 
6.2 QUANTUM CHEMISTRY
We then benchmark all algorithms on the QM8 quantum chemistry dataset, which comes from a recent study on modeling quantum mechanical calculations of electronic spectra and excited state energy of small molecules [64]. The setup of QM8 is as follows. Atoms are treated as nodes and are connected to each other following the structure of the corresponding molecule. Each edge is labeled with a chemical bond. Note that two atoms in one molecule can have multiple edges belonging to different chemical bonds; therefore, a molecule is actually modeled as a multigraph. We also use explicit hydrogen in molecule graphs, as suggested in [62]. Since some models cannot easily leverage features on edges, we use the molecule graph itself as the only input information for all models to ensure a fair comparison. As demonstrated in our ablation studies, learning node embeddings for atoms is very helpful. Therefore, we augment all competitors and our models with this component. The task is to predict 16 different quantities of electronic spectra and energy per molecule graph, which boils down to a regression problem. There are 21,786 molecule graphs in total, with on average around 16 nodes and 21 edges each. There are 6 different chemical bonds and 70 different atoms throughout the dataset. We use the split provided by DeepChem (https://deepchem.io/), which has 17,428, 2,179 and 2,179 graphs for training, validation and testing respectively. Following [62, 66], we use mean squared error (MSE) as the loss for training and weighted mean absolute error (MAE) as the evaluation metric. We repeat all experiments 3 times with different random seeds and report the average performance and standard deviation. The same random seed is shared by all methods per run. Hyperparameters are reported in the appendix. The validation and test MAE of all methods are shown in Table 2. As the table shows, LanczosNet and AdaLanczosNet achieve better performance than all other competitors. Note that DCNN also achieves good performance with carefully chosen scale parameters, since it is somewhat similar to our model in terms of leveraging multi-scale information.
6.3 ABLATION STUDY
We also conducted a thorough ablation study of our modeling components on the validation set of QM8.
Multi-Scale Graph Convolution: We first study the effect of multi-scale graph convolution. In order to rule out the impact of other factors, we use LanczosNet, do not employ the learnable spectral filter, and use the one-hot encoding as the node embedding. The results are shown in the first row of Table 3. Using long scales for graph convolution clearly helps on this task. Combining both short and long scales further improves the results.
Lanczos Step: We then investigate the Lanczos step, since it affects the accuracy of the low rank approximation induced by the Lanczos algorithm. The results are shown in the second row of Table 3. We can see that the performance is better with a relatively small Lanczos step, such as 10 or 20, which makes sense since the average number of nodes in this dataset is around 16.
Learning Spectral Filter: We then study whether learning the spectral filter helps improve performance. The results are shown in the third row of Table 3. Adding a 3-layer MLP does help reduce the error compared to not using any learnable spectral filter. Note that the MLP consists of 128 hidden units per layer and uses ReLU as the nonlinearity.
However, using a deeper MLP does not seem to be helpful, which might be due to challenges in optimization.
Graph Kernel/Node Embedding: Finally, we study the usefulness of adding graph kernel and node embedding learning. We first fix the node embedding as the one-hot encoding and learn a 3-layer MLP, which is the function fθ in Eq. (7). Next, we learn the node embeddings directly. Intuitively, learning embeddings amounts to learning a separate function f per node, whereas our graph kernel learning enforces that f is shared across all nodes, and is thus more restrictive. As shown in the third and fourth rows of the table, learning node embeddings significantly improves the performance of both LanczosNet and AdaLanczosNet and is more effective than learning graph kernels. Also, tuning the scale parameters further boosts the performance.
7 CONCLUSION
In this paper, we propose LanczosNet, which leverages the Lanczos algorithm to construct a low rank approximation of the graph Laplacian. It not only provides an efficient way to gather multi-scale information for graph convolution but also enables learning spectral filters. Additionally, we propose a model variant, AdaLanczosNet, which facilitates graph kernel and node embedding learning. We show that our model has a close relationship with graph based manifold learning, especially diffusion maps. Experimental results demonstrate that our model outperforms a range of other graph networks on challenging graph problems. We are currently exploring customized eigendecomposition methods for tridiagonal matrices, which may further improve AdaLanczosNet. Overall, work in this direction holds promise for allowing deep learning to scale up to very large graph problems.
ACKNOWLEDGMENTS
RL thanks Roger Grosse for introducing the Lanczos algorithm to him. RL was supported by Connaught International Scholarships. RL, RU and RZ were supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
8 APPENDIX
8.1 LOW RANK APPROXIMATION
We first state the following lemma from [67] without proof and then prove our Theorem 1 following [14].
Lemma 1. Let $A \in \mathbb{R}^{N \times N}$ be symmetric and $v$ an arbitrary vector. Define the Krylov subspace $\mathcal{K}_m \equiv \mathrm{span}\{v, Av, \dots, A^{m-1}v\}$. Let $A = U \Lambda U^\top$ be the eigendecomposition of $A$ with $\Lambda_{i,i} = \lambda_i$ and $\lambda_1 \ge \dots \ge \lambda_n$. Denoting $U = [u_1, \dots, u_N]$ and $\mathcal{U}_j = \mathrm{span}\{u_1, \dots, u_j\}$, then
$$\tan\angle(u_j, \mathcal{K}_m) \le \frac{\sin\angle(v, \mathcal{U}_j)\,\prod_{k=1}^{j-1}\frac{\lambda_k - \lambda_n}{\lambda_k - \lambda_j}}{\cos\angle(v, u_j)\, T_{m-j}(1 + 2\gamma)},$$
where $T_{m-j}(x)$ is the Chebyshev polynomial of degree $m - j$ and $\gamma = (\lambda_j - \lambda_{j+1})/(\lambda_{j+1} - \lambda_N)$.
Theorem 1. Let $U \Lambda U^\top$ be the eigendecomposition of an $N \times N$ symmetric matrix $S$ with $\Lambda_{i,i} = \lambda_i$, $\lambda_1 \ge \dots \ge \lambda_N$ and $U = [u_1, \dots, u_N]$. Let $\mathcal{U}_j \equiv \mathrm{span}\{u_1, \dots, u_j\}$. Assume the K-step Lanczos algorithm starts with vector $v$ and outputs the orthogonal $Q \in \mathbb{R}^{N \times K}$ and tridiagonal $T \in \mathbb{R}^{K \times K}$.
For any $j$ with $1 < j < N$ and $K > j$, we have
$$\|S - QTQ^\top\|_F^2 \le \sum_{i=1}^{j} \lambda_i^2 \left( \frac{\sin\angle(v, \mathcal{U}_i)\,\prod_{k=1}^{j-1}\frac{\lambda_k - \lambda_N}{\lambda_k - \lambda_j}}{\cos\angle(v, u_i)\, T_{K-i}(1 + 2\gamma_i)} \right)^2 + \sum_{i=j+1}^{N} \lambda_i^2,$$
where $T_{K-i}(x)$ is the Chebyshev polynomial of degree $K - i$ and $\gamma_i = (\lambda_i - \lambda_{i+1})/(\lambda_{i+1} - \lambda_N)$.
Proof. From the Lanczos algorithm, we have $SQ = QT$. Therefore,
$$\|S - QTQ^\top\|_F^2 = \|S - SQQ^\top\|_F^2 = \|S(I - QQ^\top)\|_F^2. \tag{11}$$
Let $P_Q^\perp \equiv I - QQ^\top$, the orthogonal projection onto the orthogonal complement of the subspace $\mathrm{span}\{Q\}$. Relying on the eigendecomposition, we have
$$\|S - QTQ^\top\|_F^2 = \|U \Lambda U^\top (I - QQ^\top)\|_F^2 = \|\Lambda U^\top (I - QQ^\top)\|_F^2 = \|(I - QQ^\top) U \Lambda\|_F^2 = \left\| \left[\lambda_1 P_Q^\perp u_1, \dots, \lambda_N P_Q^\perp u_N\right] \right\|_F^2, \tag{12}$$
where we use the facts that $\|RA\|_F^2 = \|A\|_F^2$ for any orthogonal matrix $R$ and $\|A^\top\|_F^2 = \|A\|_F^2$. Note that for any $j$ we have
$$\left\| \left[\lambda_1 P_Q^\perp u_1, \dots, \lambda_N P_Q^\perp u_N\right] \right\|_F^2 = \sum_{i=1}^{N} \lambda_i^2 \|P_Q^\perp u_i\|^2 \le \sum_{i=1}^{j} \lambda_i^2 \|P_Q^\perp u_i\|^2 + \sum_{i=j+1}^{N} \lambda_i^2, \tag{13}$$
where we use the fact that for any $i$, $\|P_Q^\perp u_i\|^2 = \|u_i\|^2 - \|u_i - P_Q^\perp u_i\|^2 \le \|u_i\|^2 = 1$. Note that we have $\mathrm{span}\{Q\} = \mathrm{span}\{v, Sv, \dots, S^{K-1}v\} \equiv \mathcal{K}_K$ from the Lanczos algorithm. Therefore,
$$\|P_Q^\perp u_i\| = |\sin\angle(u_i, \mathcal{K}_K)| \le |\tan\angle(u_i, \mathcal{K}_K)|. \tag{14}$$
Applying the above lemma with $A = S$ finishes the proof.
8.2 LANCZOS ALGORITHM
In exact arithmetic, the Lanczos vectors are orthogonal to each other. However, in floating point arithmetic, it is well known that round-off error makes the Lanczos vectors lose orthogonality as the iteration proceeds. One could apply a full Gram-Schmidt (GS) process $z = z - \sum_{i=1}^{j-1} (z^\top q_i) q_i$ after line 6 of Alg. 1 to ensure orthogonality. Other partial or selective re-orthogonalization strategies could also be explored. However, since we found that the orthogonality issue does not hurt overall performance with a small iteration number, e.g., K = 20, and the full GS process is computationally expensive, we do not add such a step. Although some customized eigendecomposition methods, e.g., implicit QL [68], exist for tridiagonal matrices, we leave them for future exploration due to their complicated implementation.
8.3 EXPERIMENTS
For ChebyNet, we do not use graph coarsening in any experiments due to its demanding computational cost on large graphs. Also, for small molecule graphs, coarsening generally does not help, since it loses information compared to directly stacking another layer on the original graph.
Citation Networks: The statistics of the three citation networks are summarized in Table 4. We now report the important hyperparameters chosen via cross-validation for each method. All methods are trained with Adam with learning rate 1.0e−2 and weight decay 5.0e−4. The maximum number of epochs is set to 200. Early stopping with window size 10 is also adopted. We tune hyperparameters on Cora alone and fix them for Citeseer and Pubmed. For convolution based methods, we found 2 layers work best. In GCN-FP, we set the hidden dimension to 64 and dropout to 0.5. In GGNN, we set the hidden dimension to 64, the number of propagation steps to 2 and the aggregation function to summation. In DCNN, we set the hidden dimension to 64, dropout to 0.5 and use diffusion scales {1, 2, 5}. In ChebyNet, we set the polynomial order to 5, the hidden dimension to 64 and dropout to 0.5. In GCN, we set the hidden dimension to 64 and dropout to 0.5. In MPNN, we use a GRU as the update function and set the hidden dimension to 64 and dropout to 0.5. No edge embedding is used as there is just one edge type. In GraphSAGE, we set the number of sampled neighbors to 500, the hidden dimension to 64, dropout to 0.5 and the aggregation function to average.
In GAT, we set the numbers of heads of the two layers to 8 and 1 respectively, the hidden dimension per head to 8 and dropout to 0.6. In LanczosNet, we set the short and long diffusion scales to {1, 2, 5, 7} and {10, 20, 30} respectively. The hidden dimension is 64 and dropout is 0.5. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter. In AdaLanczosNet, we set the short and long diffusion scales to {1, 2, 5} and {10, 20} respectively. The hidden dimension is 64 and dropout is 0.5. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter.
Quantum Chemistry: We now report the important hyperparameters chosen via cross-validation for each method. All methods are trained with Adam with learning rate 1.0e−4 and no weight decay. The maximum number of epochs is set to 200. Early stopping with window size 10 is also adopted. For convolution based methods, we found 7 layers work best. We augment all methods with 64-dimensional node embeddings and add edge types by either feeding a multiple-channel graph Laplacian matrix or directly adding a separate message function per edge type. For all methods, no dropout is used since it slightly hurts performance. In GCN-FP, we set the hidden dimension to 128. In GGNN, we set the hidden dimension to 128, the number of propagation steps to 15 and the aggregation function to average. In DCNN, we set the hidden dimension to 128 and use diffusion scales {3, 5, 7, 10, 20, 30}. In ChebyNet, we set the polynomial order to 5 and the hidden dimension to 128. In GCN, we set the hidden dimension to 128. In MPNN, we use a GRU as the update function, set the number of propagation steps to 7, set the hidden dimension to 128, use a 1-layer MLP with 1024 hidden units and ReLU nonlinearity as the message function and set the number of unroll steps of Set2Vec to 10. In GraphSAGE, we set the number of sampled neighbors to 40, the hidden dimension to 128 and the aggregation function to average. In GAT, we set the number of heads of all 7 layers to 8 and the hidden dimension per head to 16. In LanczosNet, we do not use short diffusion scales and set the long ones to {1, 2, 3, 5, 7, 10, 20, 30}. The hidden dimension is 128. The Lanczos step is 20. A 1-layer MLP with 128 hidden units and ReLU nonlinearity is used as the spectral filter. In AdaLanczosNet, we set the short and long diffusion scales to {1, 2, 3} and {5, 7, 10, 20, 30} respectively. The hidden dimension is 128. The Lanczos step is 20. A 3-layer MLP with 4096 hidden units and ReLU nonlinearity is used as the spectral filter.
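To make the K-step Lanczos procedure of Secs. 8.1–8.2 concrete, the following is a minimal NumPy sketch of the iteration (the function name `lanczos` and the `reorthogonalize` flag are ours, not from the released code); the optional full Gram-Schmidt pass corresponds to the re-orthogonalization step discussed in Sec. 8.2.

```python
import numpy as np

def lanczos(S, v, K, reorthogonalize=False):
    """K-step Lanczos iteration on a symmetric matrix S, started from v.

    Returns Q (N x K, approximately orthonormal columns) and the
    tridiagonal T (K x K), so that S is approximated by Q @ T @ Q.T.
    """
    N = S.shape[0]
    Q = np.zeros((N, K))
    alpha = np.zeros(K)  # diagonal of T
    beta = np.zeros(K)   # off-diagonal of T; beta[0] is unused
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(K):
        z = S @ Q[:, j]
        alpha[j] = Q[:, j] @ z
        z = z - alpha[j] * Q[:, j]
        if j > 0:
            z = z - beta[j] * Q[:, j - 1]
        if reorthogonalize:
            # Full Gram-Schmidt pass against all previous Lanczos vectors,
            # i.e., z = z - sum_i (z^T q_i) q_i (Sec. 8.2).
            z = z - Q[:, : j + 1] @ (Q[:, : j + 1].T @ z)
        if j + 1 < K:
            beta[j + 1] = np.linalg.norm(z)
            if beta[j + 1] < 1e-10:  # invariant subspace found; stop early
                Q = Q[:, : j + 1]
                alpha, beta = alpha[: j + 1], beta[: j + 1]
                break
            Q[:, j + 1] = z / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return Q, T
```

With Q and T in hand, the low rank approximation studied in Theorem 1 is simply `Q @ T @ Q.T`.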
1. What is the main contribution of the paper in the field of graph convolutional networks?
2. What are the strengths of the proposed method, particularly in its novelty and performance?
3. How does the reviewer assess the clarity and accessibility of the paper's content?
4. Are there any suggestions for improving the paper, such as moving technical details to an appendix?
Review
The authors propose a novel method for learning graph convolutional networks. The core idea is to use the Lanczos algorithm to obtain a low-rank approximation of the graph Laplacian. The authors propose two ways to include the Lanczos algorithm. First, as a preprocessing step where the algorithm is applied once on the input graph and the resulting approximation is fixed during learning. Second, by including a differentiable version of the algorithm into an end-to-end trainable model. The proposed method is novel and achieves good results on a set of experiments. The authors discuss related work in a thorough and meaningful manner. There is not much to criticize. This is a very good paper. The almost 10 pages are perhaps a bit excessive considering there was an (informal) 8 page limit. It might make sense to provide a more accessible discussion of the method and Theorem 1, and move some more detailed/technical parts in pages 4, 5, and 6 to an appendix.
ICLR
Title
Graph Classification with Geometric Scattering
Abstract
One of the most notable contributions of deep learning is the application of convolutional neural networks (ConvNets) to structured signal classification, and in particular image classification. Beyond their impressive performance in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data (incl. state of the art results in the latter case), and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution.
1 INTRODUCTION
Over the past decade, numerous examples have established that deep neural networks (i.e., cascades of linear operations and simple nonlinearities) typically outperform traditional "shallow" models in various modern machine learning applications, especially given the increasing availability of big data nowadays. Perhaps the most well known example of the advantages of deep networks is in computer vision, where the utilization of 2D convolutions enables network designs that learn cascades of convolutional filters, which have several advantages over fully connected network architectures, both computationally and conceptually. Indeed, in terms of supervised learning, convolutional neural networks (ConvNets) hold the current state of the art in image classification, and have become the standard machine learning approach towards processing big structured-signal data, including audio and video processing. See, e.g., Goodfellow et al. (2016, Chapter 9) for a detailed discussion. Beyond their performance when applied to specific tasks, pretrained ConvNet layers have been explored as image feature extractors by freezing the first few pretrained convolutional layers and then retraining only the last few layers for specific datasets or applications (e.g., Yosinski et al., 2014; Oquab et al., 2014). Such transfer learning approaches provide evidence that suitably constructed deep filter banks should be able to extract task-agnostic semantic information from structured data, and in some sense mimic the operation of human visual and auditory cortices, thus supporting the neural terminology in deep learning. An alternative approach towards such universal feature extraction was presented in Mallat (2012), where a deep filter bank, known as the scattering transform, is designed, rather than trained, based on predetermined families of disruptive patterns that should be eliminated to extract informative representations. The scattering transform is constructed as a cascade of linear wavelet transforms and nonlinear complex modulus operations that provides features with guaranteed invariance to a predetermined Lie group of operations such as rotations, translations, or scaling.
Further, it also provides Lipschitz stability to small diffeomorphisms of the input signal. Scattering features have been shown to be effective in several audio (e.g., Bruna & Mallat, 2013a; Andén & Mallat, 2014; Lostanlen & Mallat, 2015) and image (e.g., Bruna & Mallat, 2013b; Sifre & Mallat, 2014; Oyallon & Mallat, 2015) processing applications, and their advantages over learned features are especially relevant in applications with relatively low data availability, such as quantum chemistry (e.g., Hirn et al., 2017; Eickenberg et al., 2017; 2018). Following the recent interest in geometric deep learning approaches for processing graph-structured data (see, for example, Bronstein et al. (2017) and references therein), we present here a generalization of the scattering transform from Euclidean domains to graphs. Similar to the Euclidean case, our construction is based on a cascade of bandpass filters, defined in this case using graph signal processing (Shuman et al., 2013) notions, and complex moduli, which in this case take the form of absolute values (see Sec. 3). While several choices of filter banks could generally be used with the proposed cascade, we focus here on graph wavelet filters defined by lazy random walks (see Sec. 2). These wavelet filters are also closely related to diffusion geometry and related notions of geometric harmonic analysis, e.g., the diffusion maps algorithm of Coifman & Lafon (2006) and the associated diffusion wavelets of Coifman & Maggioni (2006). Therefore, we call the constructed cascade geometric scattering, following the terminology of geometric deep learning. We note that similar attempts at generalizing the scattering transform to graphs have been presented in Chen et al. (2014) as well as Zou & Lerman (2018) and Gama et al. (2018). The latter two works are most closely related to the present paper. In them, the authors focus on theoretical properties of the proposed graph scattering transforms, and show that such transforms are invariant to graph isomorphism. The geometric scattering transform that we define here also possesses the same invariance property, and we expect similar stability properties to hold for the proposed construction as well. However, in this paper we focus mainly on the practical applicability of geometric scattering transforms for graph-structured data analysis, with particular emphasis on the task of graph classification, which has received much attention recently in geometric deep learning (see Sec. 4).
In supervised graph classification problems one is given a training database of graph/label pairs $\{(G_i, y_i)\}_{i=1}^{N} \subset \mathcal{G} \times \mathcal{Y}$ sampled from a set of potential graphs $\mathcal{G}$ and potential labels $\mathcal{Y}$. The goal is to use the training data to learn a model $f : \mathcal{G} \to \mathcal{Y}$ that associates to any graph $G \in \mathcal{G}$ a label $y = f(G) \in \mathcal{Y}$. These types of databases arise in biochemistry, in which the graphs may be molecules and the labels some property of the molecule (e.g., its toxicity), as well as in various types of social network databases. Until recently, most approaches were kernel based methods, in which the model f was selected from the reproducing kernel Hilbert space generated by a kernel that measures the similarity between two graphs; one of the most successful examples of this approach is the Weisfeiler-Lehman graph kernel of Shervashidze et al. (2011). Numerous feed forward deep learning algorithms, though, have appeared over the last few years.
In many of these algorithms, task based (i.e., dependent upon the labels $\mathcal{Y}$) graph filters are learned from the training data as part of the larger network architecture. These filters act on a characteristic signal $x_G$ that is defined on the vertices of any graph G; e.g., $x_G$ may be a vector of degrees of each vertex (we remark that there are also edge based algorithms, such as Gilmer et al. (2017) and references within, but these have largely been developed for and tested on databases not considered in Sec. 4). Here, we propose an alternative to these methods in the form of a geometric scattering classifier (GSC) that leverages graph-dependent (but not label dependent) scattering transforms to map each graph G to the scattering features extracted from $x_G$. Furthermore, inspired by transfer learning approaches such as Oquab et al. (2014), we consider treatment of our scattering cascade as frozen layers on $x_G$, either followed by fully connected classification layers (see Fig. 2), or fed into other classifiers such as SVM or logistic regression. We note that while the formulation in Sec. 3 is phrased for a single signal $x_G$, it naturally extends to multiple signals by concatenating their scattering features. In Sec. 4.1 we evaluate the quality of the scattering features and the resulting classification by comparing it to numerous graph kernel and deep learning methods over 13 datasets (7 biochemistry ones and 6 social network ones) commonly studied in the related literature. In terms of classification accuracy on individual datasets, we show that the proposed approach obtains state of the art results on two datasets and performs competitively on the rest, despite only learning a classifier that comes after the geometric scattering transform. Furthermore, while other methods may excel on specific datasets, when considering average accuracy: within social network data, our proposed GSC outperforms all other methods; in biochemistry and over all datasets, it outperforms nearly all feed forward neural network approaches, and is competitive with the state of the art results of graph kernels (Kriege et al., 2016) and graph recurrent neural networks (Taheri et al., 2018). We regard this result as crucial in establishing the universality of graph features extracted by geometric scattering, as they provide an effective task-independent representation of analyzed graphs. Finally, to establish their unsupervised qualities, in Sec. 4.2 we use geometric scattering features extracted from enzyme data (Borgwardt et al., 2005a) to infer emergent patterns of enzyme commission (EC) exchange preferences in enzyme evolution, validated with established knowledge from Cuesta et al. (2015).
2 GRAPH RANDOM WALKS AND GRAPH WAVELETS
We define graph wavelets as the difference between lazy random walks that have propagated at different time scales, which mimics classical wavelet constructions found in Meyer (1993) as well as more recent constructions found in Coifman & Maggioni (2006). The underpinnings of this construction arise out of graph signal processing, and in particular the properties of the graph Laplacian. Let $G = (V, E, W)$ be a weighted graph, consisting of $n$ vertices $V = \{v_1, \dots, v_n\}$, edges $E \subseteq \{(v_\ell, v_m) : 1 \le \ell, m \le n\}$, and weights $W = \{w(v_\ell, v_m) > 0 : (v_\ell, v_m) \in E\}$. Note that unweighted graphs are considered as a special case, by setting $w(v_\ell, v_m) = 1$ for each $(v_\ell, v_m) \in E$.
Define the $n \times n$ (weighted) adjacency matrix $A_G = A$ of G by $A(v_\ell, v_m) = w(v_\ell, v_m)$ if $(v_\ell, v_m) \in E$ and zero otherwise, where we use the notation $A(v_\ell, v_m)$ to denote the $(\ell, m)$ entry of the matrix A so as to emphasize the correspondence with the vertices in the graph and to reserve sub-indices for enumerating objects. Define the (weighted) degree of vertex $v_\ell$ as $\deg(v_\ell) = \sum_m A(v_\ell, v_m)$ and the corresponding diagonal $n \times n$ degree matrix D given by $D(v_\ell, v_\ell) = \deg(v_\ell)$ and $D(v_\ell, v_m) = 0$ for $\ell \neq m$. Finally, the $n \times n$ graph Laplacian matrix $L_G = L$ on G is defined as $L = D - A$. The graph Laplacian is a symmetric, real valued positive semi-definite matrix, and thus has $n$ nonnegative eigenvalues. Furthermore, if we set $\mathbf{0} = (0, \dots, 0)^T$ to be the $n \times 1$ vector of all zeroes, and $\mathbf{1} = (1, \dots, 1)^T$ to be the analogous vector of all ones, then it is easy to see that $L\mathbf{1} = \mathbf{0}$. Therefore 0 is an eigenvalue of L, and we write the n eigenvalues of L as $0 = \lambda_0 \le \lambda_1 \le \dots \le \lambda_{n-1}$ with corresponding $n \times 1$ orthonormal eigenvectors $\mathbf{1}/\sqrt{n} = \varphi_0, \varphi_1, \dots, \varphi_{n-1}$. If the graph G is connected, then $\lambda_1 > 0$. In order to simplify the following discussion we assume that this is the case, although the discussion below can be amended to include disconnected graphs as well. Since $\varphi_0$ is constant and every other eigenvector is orthogonal to $\varphi_0$, it is natural to view the eigenvectors $\varphi_k$ as the Fourier modes of the graph G, with frequency magnitude $\sqrt{\lambda_k}$. Let $x : V \to \mathbb{R}$ be a signal defined on the vertices of the graph G, which we will consider as an $n \times 1$ vector with entries $x(v_\ell)$. It follows that the Fourier transform of x can be defined as $\hat{x}(k) = x \cdot \varphi_k$, where $x \cdot y$ is the standard dot product. This analogy is one of the foundations of graph signal processing, and indeed we could use this correspondence to define wavelet operators on the graph G, as in Hammond et al. (2011). Rather than follow this path, though, we instead take a related path similar to Coifman & Maggioni (2006); Gama et al. (2018) by defining the graph wavelet operators in terms of random walks defined on G, which avoids diagonalizing L and allows us to control the "spatial" graph support of the filters directly. Define the $n \times n$ transition matrix of a lazy random walk as $P = \frac{1}{2}\left(D^{-1}A + I\right)$. Note that the row sums of P are all one, and thus the entry $P(v_\ell, v_m)$ corresponds to the transition probability of walking from vertex $v_\ell$ to $v_m$ in one step. Powers of P run the random walk forward, so that in particular $P^t(v_\ell, v_m)$ is the transition probability of walking from $v_\ell$ to $v_m$ in exactly t steps. We will use P as a left multiplier, in which case P acts as a diffusion operator. To understand this idea more precisely, first note that a simple calculation shows that $P\mathbf{1} = \mathbf{1}$, and furthermore, if the graph G is connected, every other eigenvalue of P is contained in $[0, 1)$. Note in particular that L and P share the eigenvector $\mathbf{1}$. It follows that $P^t x$ responds most significantly to the zero frequency $\hat{x}(0)$ of x while suppressing the non-zero frequencies of x (where the frequency modes are defined in terms of the graph Laplacian L, as described above). On the spatial side, the value $P^t x(v_\ell)$ is the weighted average of $x(v_\ell)$ with all values $x(v_m)$ such that $v_m$ is within t steps of $v_\ell$ in the graph G. High frequency responses of x can be recovered in multiple different fashions, but we utilize multiscale wavelet transforms that group the non-zero frequencies of G into approximately dyadic bands.
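As a minimal illustration of the lazy random walk operator and its low-pass behavior, the following NumPy sketch builds P from a dense adjacency matrix (assuming no isolated vertices; the helper names are ours, not from the paper):

```python
import numpy as np

def lazy_random_walk(A):
    """Lazy random walk matrix P = (D^{-1} A + I) / 2 for a (weighted)
    adjacency matrix A; assumes every vertex has positive degree."""
    deg = A.sum(axis=1)
    return 0.5 * (A / deg[:, None] + np.eye(A.shape[0]))

def diffuse(P, x, t):
    """Apply P^t to a vertex signal x; this averages x over t-step
    neighborhoods and hence acts as a low-pass filter."""
    for _ in range(t):
        x = P @ x
    return x
```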
As shown in Mallat (2012, Lemma 2.12), wavelet transforms are provably stable operators in the Euclidean domain, and the proof of Zou & Lerman (2018, Theorem 5.1) indicates that similar results on graphs may be possible. Furthermore, the multiscale nature of wavelet transforms allows the resulting geometric scattering transform (Sec. 3) to traverse the entire graph G in one layer, which is valuable for obtaining global descriptions of G. Following Coifman & Maggioni (2006), define the $n \times n$ diffusion wavelet matrix at the scale $2^j$ as
$$\Psi_j = P^{2^{j-1}} - P^{2^j} = P^{2^{j-1}}\left(I - P^{2^{j-1}}\right). \tag{1}$$
Since $P^t \mathbf{1} = \mathbf{1}$ for every t, we see that $\Psi_j \mathbf{1} = \mathbf{0}$ for each $j \ge 1$. Thus each $\Psi_j x$ partially recovers $\hat{x}(k)$ for $k \ge 1$. The value $\Psi_j x(v_\ell)$ aggregates the signal information $x(v_m)$ from the vertices $v_m$ that are within $2^j$ steps of $v_\ell$, but does not average the information like the operator $P^{2^j}$. Instead, it responds to sharp transitions or oscillations of the signal x within the neighborhood of $v_\ell$ of radius $2^j$ (in terms of the graph path distance). Generally, the smaller j, the higher the frequencies $\Psi_j x$ recovers in x. These high frequency wavelet coefficients up to the scale $2^J$ are denoted by:
$$\Psi^{(J)} x(v_\ell) = \left[\Psi_j x(v_\ell) : 1 \le j \le J\right], \quad \ell = 1, \dots, n. \tag{2}$$
Since $2^J$ controls the maximum scale of the wavelet, in the experiments of Sec. 4 we select J such that $2^J \sim \mathrm{diam}(G)$. Figure 1 plots the diffusion wavelets at different scales on two different graphs.
3 GEOMETRIC SCATTERING ON GRAPHS
A geometric wavelet scattering transform follows a similar construction to the (Euclidean) wavelet scattering transform of Mallat (2012), but leverages a graph wavelet transform. In this paper we utilize the wavelet transform defined in (2) of the previous section, but remark that in principle any graph wavelet transform could be used (see, e.g., Zou & Lerman, 2018). In Sec. 3.1 we define the graph scattering transform, in Sec. 3.2 we discuss its relation to other recently proposed graph scattering constructions (Gama et al., 2018; Zou & Lerman, 2018), and in Sec. 3.3 we describe several of its desirable properties as compared to other geometric deep learning algorithms on graphs.
3.1 GEOMETRIC SCATTERING DEFINITIONS
Machine learning algorithms that compare and classify graphs must be invariant to graph isomorphism, i.e., re-indexations of the vertices and corresponding edges. A common way to obtain invariant graph features is via summation operators, which act on a signal $x = x_G$ that can be defined on any graph G, e.g., $x(v_\ell) = \deg(v_\ell)$ for each vertex $v_\ell$ in G. The geometric scattering transform, which is described in the remainder of this section, follows such an approach. The simplest such summation operator computes the sum of the responses of the signal x. As described in Verma & Zhang (2018), this invariant can be complemented by higher order summary statistics of x, the collection of which forms statistical moments, and which are also referred to as "capsules" in that work. For example, the unnormalized qth moments of x yield the following "zero" order geometric scattering moments:
$$Sx(q) = \sum_{\ell=1}^{n} x(v_\ell)^q, \quad 1 \le q \le Q. \tag{3}$$
We can also replace (3) with normalized (i.e., standardized) moments of x, in which case we store its mean (q = 1), variance (q = 2), skew (q = 3), kurtosis (q = 4), and so on. In the numerical experiments described in Sec. 4 we take Q = 2, 3, 4 depending upon the database, where Q is chosen via cross validation to optimize classification performance.
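A direct NumPy sketch of Eqs. (1)–(3) follows, reusing the `lazy_random_walk` helper sketched above; powers of P are applied to the signal itself rather than forming matrix powers explicitly (again, the function names are ours, not from the paper):

```python
import numpy as np

def wavelet_coefficients(P, x, J):
    """Diffusion wavelet coefficients [Psi_j x : 1 <= j <= J] of Eq. (2),
    where Psi_j = P^{2^{j-1}} - P^{2^j} as in Eq. (1)."""
    coeffs = []
    Px = P @ x                          # P^{2^0} x
    for j in range(1, J + 1):
        Px_next = Px
        for _ in range(2 ** (j - 1)):   # 2^{j-1} more steps yields P^{2^j} x
            Px_next = P @ Px_next
        coeffs.append(Px - Px_next)     # (P^{2^{j-1}} - P^{2^j}) x
        Px = Px_next
    return np.stack(coeffs)             # shape (J, n)

def zero_order_moments(x, Q):
    """Unnormalized 'zero' order moments Sx(q) of Eq. (3), q = 1..Q."""
    return np.array([np.sum(x ** q) for q in range(1, Q + 1)])
```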
Higher order moments are not considered as they become increasingly unstable, and we report results for both normalized and unnormalized moments. In what follows we discuss the unnormalized moments, since their presentation is simpler and we use them in conjunction with fully connected layers (FCL) for classification purposes, but the same principles also apply to normalized moments (e.g., used with SVM and logistic regression in our classification results). The invariants $Sx(q)$ do not capture the full variability of x, and hence of the graph G upon which the signal x is defined. We thus complement these moments with summary statistics derived from the wavelet coefficients of x, which in turn lead naturally to the graph ConvNet structure of the geometric scattering transform. Observe, analogously to the Euclidean setting, that in computing $Sx(1)$, which is the summation of $x(v_\ell)$ over V, we have captured the zero frequency of x, since $\sum_{\ell=1}^{n} x(v_\ell) = x \cdot \mathbf{1} = \sqrt{n}\,\hat{x}(0)$. Higher order moments of x can incorporate the full range of frequencies in x, e.g., $Sx(2) = \sum_{\ell=1}^{n} x(v_\ell)^2 = \sum_{k=0}^{n-1} \hat{x}(k)^2$, but they are mixed into one invariant coefficient. We can separate and recapture the high frequencies of x by computing its wavelet coefficients $\Psi^{(J)} x$, which were defined in (2). However, $\Psi^{(J)} x$ is not invariant to permutations of the vertex indices; in fact, it is covariant (or equivariant). Before summing the individual wavelet coefficient vectors $\Psi_j x$, though, we must first apply a pointwise nonlinearity. Indeed, define the $n \times 1$ vector $d(v_\ell) = \deg(v_\ell)$, and note that $\Psi_j x \cdot d = 0$, since one can show that d is a left eigenvector of P with eigenvalue 1. If G is a regular graph then $d = c\mathbf{1}$, from which it follows that $\Psi_j x \cdot \mathbf{1} = 0$. For more general graphs $d(v_\ell) \ge 0$ for $v_\ell \in V$, which implies that for many graphs $\mathbf{1} \cdot d$ will be the dominating coefficient in an expansion of $\mathbf{1}$ in an orthogonal basis containing d; it follows that in these cases $|\Psi_j x \cdot \mathbf{1}| \ll 1$. We thus apply the absolute value nonlinearity, to obtain nonlinear covariant coefficients $|\Psi^{(J)} x| = \{|\Psi_j x| : 1 \le j \le J\}$. We use the absolute value because it is covariant to vertex permutations, non-expansive, and, when combined with traditional wavelet transforms on Euclidean domains, yields a provably stable scattering transform for q = 1. Furthermore, initial theoretical results in Zou & Lerman (2018); Gama et al. (2018) indicate that similar graph based scattering transforms possess certain types of stability properties as well. As in (3), we extract invariant coefficients from $|\Psi_j x|$ by computing its moments, which define the first order geometric scattering moments:
$$Sx(j, q) = \sum_{\ell=1}^{n} |\Psi_j x(v_\ell)|^q, \quad 1 \le j \le J, \ 1 \le q \le Q. \tag{4}$$
These first order scattering moments aggregate complementary multiscale geometric descriptions of G into a collection of invariant multiscale statistics. These invariants give a finer partition of the frequency responses of x. For example, whereas $Sx(2)$ mixed all frequencies of x, we see that $Sx(j, 2)$ only mixes the frequencies of x captured by the graph wavelet $\Psi_j$. First order geometric scattering moments can be augmented with second order geometric scattering moments by iterating the graph wavelet and absolute value transforms, which leads naturally to the structure of a graph ConvNet. These moments are defined as:
$$Sx(j, j', q) = \sum_{i=1}^{n} \left|\Psi_{j'}|\Psi_j x|(v_i)\right|^q, \quad 1 \le j < j' \le J, \ 1 \le q \le Q, \tag{5}$$
which consists of reapplying the wavelet transform operator $\Psi^{(J)}$ to each $|\Psi_j x|$ and computing the summary statistics of the magnitudes of the resulting coefficients.
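Continuing the sketch above, the first and second order moments of Eqs. (4)–(5) can be computed by cascading `wavelet_coefficients` with the absolute value (a minimal illustration under the same assumptions, not the released implementation):

```python
def scattering_moments(P, x, J, Q):
    """First and second order geometric scattering moments, Eqs. (4)-(5)."""
    first_layer = np.abs(wavelet_coefficients(P, x, J))    # |Psi_j x|
    S1 = [np.sum(first_layer[j - 1] ** q)                  # Eq. (4)
          for j in range(1, J + 1) for q in range(1, Q + 1)]
    S2 = []
    for j in range(1, J + 1):
        # Reapply the wavelet transform to |Psi_j x| and take magnitudes.
        second_layer = np.abs(wavelet_coefficients(P, first_layer[j - 1], J))
        for jp in range(j + 1, J + 1):                     # only j < j'
            S2 += [np.sum(second_layer[jp - 1] ** q)       # Eq. (5)
                   for q in range(1, Q + 1)]
    return np.array(S1 + S2)
```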
The intermediate covariant coefficients $|\Psi_{j'}|\Psi_j x||$ and resulting invariant statistics $Sx(j, j', q)$ couple two scales $2^j$ and $2^{j'}$ within the graph G, thus creating features that bind patterns of smaller subgraphs within G with patterns of larger subgraphs (e.g., circles of friends of individual people with larger community structures in social network graphs). The transform can be iterated additional times, leading to third order features and beyond, and thus has the general structure of a graph ConvNet. The collection of graph scattering moments $Sx = \{Sx(q), Sx(j, q), Sx(j, j', q)\}$ (illustrated in Fig. 2(a)) provides a rich set of multiscale invariants of the graph G. These can be used in supervised settings as input to graph classification or regression models, or in unsupervised settings to embed graphs into a Euclidean feature space for further exploration, as demonstrated in Sec. 4.
3.2 STABILITY AND CAPACITY OF GEOMETRIC SCATTERING
In order to assess the utility of scattering features for representing graphs, two properties have to be considered: stability and capacity. First, the stability property essentially aims to provide an upper bound on distances between similar graphs that only differ by types of deformations that can be treated as noise. This property has been the focus of both Zou & Lerman (2018) and Gama et al. (2018), and in particular the latter shows that a diffusion scattering transform yields features that are stable to graph structure deformations whose size can be computed via the diffusion framework (Coifman & Maggioni, 2006) that forms the basis for their construction. While there are some technical differences between the geometric scattering here and the diffusion scattering in Gama et al. (2018), these constructions are sufficiently similar that we can expect both of them to have analogous stability properties. Therefore, we mainly focus here on the complementary property of capacity, i.e., the ability of the scattering transform to provide a rich feature space for representing graph data without eliminating informative variance in the data. We note that even in the classical Euclidean case, while the stability of scattering transforms to deformations can be established analytically (Mallat, 2012), their capacity is typically examined via empirical evidence when applied to machine learning tasks (e.g., Bruna & Mallat, 2011; Sifre & Mallat, 2012; Andén & Mallat, 2014). Similarly, in the graph processing setting, we examine the capacity of our proposed geometric scattering features via their discriminative power in graph data analysis tasks. In Sec. 4.1, we describe extensive numerical experiments on graph classification problems in which our scattering coefficients are utilized in conjunction with several classifiers, namely, fully connected layers (FCL, illustrated in Fig. 2(b)), support vector machine (SVM), and logistic regression. We note that SVM classification over scattering features leads to state of the art results on social network data, as well as outperforming all feed-forward neural network methods in general. Furthermore, for biochemistry data (where graphs represent molecule structures), FCL classification over scattering features outperforms all other feed-forward neural networks, even though we only train the fully connected layers. Finally, to assess the scattering feature space for data representation and exploration, in Sec. 4.2 we examine its qualities when analyzing biochemistry data, with emphasis on enzyme graphs.
We show that geometric scattering enables graph embedding in a relatively low dimensional Euclidean space, while preserving insightful properties of the data. Beyond establishing the capacity of our specific construction, these results also indicate the viability of graph scattering transforms in general as universal feature extractors on graph data, and complement the stability results established in Zou & Lerman (2018) and Gama et al. (2018).
3.3 GEOMETRIC SCATTERING COMPARED TO OTHER FEED FORWARD GRAPH CONVNETS
We give a brief comparison of geometric scattering with other graph ConvNets, with particular interest in isolating the key principles for building accurate graph ConvNet classifiers. We begin by remarking that, like several other successful graph neural networks, the graph scattering transform is covariant or equivariant to vertex permutations (i.e., commutes with them) until the final features are extracted. This idea has been discussed in depth in various articles, including Kondor et al. (2018b), so we limit the discussion to observing that the geometric scattering transform thus propagates nearly all of the information in x through the multiple wavelet and absolute value layers, since only the absolute value operation removes information on x. As in Verma & Zhang (2018), we aggregate covariant responses via multiple summary statistics (i.e., moments), which are referred to there as a capsule. In the scattering context, at least, this idea is in fact not new and has been previously used in the Euclidean setting for the regression of quantum mechanical energies in Eickenberg et al. (2018; 2017) and texture synthesis in Bruna & Mallat (2018). We also point out that, unlike many deep learning classifiers (graph included), a graph scattering transform extracts invariant statistics at each layer/order. These intermediate layer statistics, while necessarily losing some information in x (and hence G), provide important coarse geometric invariants that eliminate needless complexity in subsequent classification or regression. Furthermore, such layer by layer statistics have proven useful in characterizing signals of other types (e.g., texture synthesis in Gatys et al., 2015). A graph wavelet transform $\Psi^{(J)} x$ decomposes the geometry of G through the lens of x, along different scales. Graph ConvNet algorithms also obtain multiscale representations of G, but several works, including Atwood & Towsley (2016) and Zhang et al. (2018), propagate information via a random walk. While random walk operators like $P^t$ act at different scales on the graph G, per the analysis in Sec. 2 we see that $P^t$ for any t will be dominated by the low frequency responses of x. While subsequent nonlinearities may be able to recover this high frequency information, the resulting transform will most likely be unstable due to the suppression and then attempted recovery of the high frequency content of x. Alternatively, features derived from $P^t x$ may lose the high frequency responses of x, which are useful in distinguishing similar graphs. The graph wavelet coefficients $\Psi^{(J)} x$, on the other hand, respond most strongly within bands of nearly non-overlapping frequencies, each with a center frequency $k_j$ that depends on $\Psi_j$. Finally, graph labels are often complex functions of both local and global subgraph structure within G. While graph ConvNets are adept at learning local structure within G, as detailed in Verma & Zhang (2018), they require many layers to obtain features that aggregate macroscopic patterns in the graph.
This is due in large part to the use of fixed size filters, which often only incorporate information from the neighbors of any individual vertex. The training of such networks is difficult due to the limited size of many graph classification databases (see Table 4 in Appendix D). Geometric scattering transforms have two advantages in this regard: (a) the wavelet filters are designed, rather than trained; and (b) they are multiscale, thus incorporating macroscopic graph patterns in every layer/order.
4 APPLICATION & RESULTS
4.1 GRAPH CLASSIFICATION
To evaluate the proposed geometric scattering features, we test their effectiveness for graph classification on thirteen datasets commonly used for this task. Out of these, seven datasets contain biochemistry graphs that describe molecular structures of chemical compounds, as described in the following works that introduced them: NCI1 and NCI109, Wale et al. (2008); MUTAG, Debnath et al. (1991); PTC, Toivonen et al. (2003); PROTEINS and ENZYMES, Borgwardt et al. (2005b); and D&D, Dobson & Doig (2003). In these cases, each graph has several associated vertex features x that represent chemical properties of atoms in the molecule, and the classification aims to characterize compound properties (e.g., protein types). The other six datasets, which are introduced in Yanardag & Vishwanathan (2015), contain social network data extracted from scientific collaborations (COLLAB), movie collaborations (IMDB-B & IMDB-M), and Reddit discussion threads (REDDIT-B, REDDIT-5K, REDDIT-12K). In these cases there are no inherent graph signals in the data, and therefore we compute general node characteristics (e.g., degree, eccentricity, and clustering coefficient) over them, as is considered standard practice in the relevant literature (see, for example, Verma & Zhang, 2018). A detailed description of each of these datasets appears in the respective references, and they are briefly summarized in Appendix D for completeness. In all cases, we iterate over all graphs in the database and for each one we associate graph-wide features by (1) computing the scattering features of each of the available graph signals (provided or computed), and (2) concatenating the features of all such signals. Then, the full scattering feature vectors of these graphs are passed to a classifier, which is trained from input labels, in order to infer the class of each graph. We consider three classifiers here: a neural network with two/three fully connected hidden layers (FCL), SVM with RBF kernel, and logistic regression. We note that the scattering features (computed as described in Sec. 3) are based on either normalized or unnormalized moments over the entire graph. Here we used unnormalized moments for FCL, and normalized ones for the other classifiers, but the difference is subtle and similar results can be achieved for the other combinations. Finally, we also note that all technical design choices for configuring our geometric scattering or the classifiers were made as part of the cross validation described in Appendix E. We evaluate the classification results of our three geometric scattering classification (GSC) settings using ten-fold cross validation (as explained in Appendix E) and compare them to 14 prominent methods for graph classification.
Out of these, six are graph kernel methods, namely: Weisfeiler-Lehman graph kernels (WL, Shervashidze et al., 2011), propagation kernel (PK, Neumann et al., 2012), Graphlet kernels (Shervashidze et al., 2009), random walks (RW, Gärtner et al., 2003), deep graph kernels (DGK, Yanardag & Vishwanathan, 2015), and Weisfeiler-Lehman optimal assignment kernels (WL-OA, Kriege et al., 2016). Seven other methods are recent geometric feed forward deep learning algorithms, namely: deep graph convolutional neural network (DGCNN, Zhang et al., 2018), Graph2vec (Narayanan et al., 2017), 2D convolutional neural networks (2DCNN, Tixier et al., 2017), covariant compositional networks (CCN, Kondor et al., 2018a), Patchy-san (PSCN, Niepert et al., 2016, with k = 10), diffusion convolutional neural networks (DCNN, Atwood & Towsley, 2016), and graph capsule convolutional neural networks (GCAPS-CNN, Verma & Zhang, 2018). Finally, one method is the recently introduced recurrent neural network autoencoder for graphs (S2S-N2N-PP, Taheri et al., 2018). Following the standard format of reported classification performances for these methods (per their respective references, see also Appendix A), our results are reported in the form of average accuracy ± standard deviation (in percentages) over the ten cross-validation folds. We remark that many of these methods are not reported for all datasets, and hence we mark N/A when appropriate.¹ For brevity, the comparison is reported here in Fig. 3 in summarized form, as explained below, and in full in Appendix A. Since the scattering transform is independent of training labels, it provides universal graph features that might not be specifically optimal for each individual dataset, but that overall provide stable classification results. Further, careful examination of the results of previous methods (feed forward algorithms in particular) shows that while some may excel in specific cases, none of them achieves the best results on all reported datasets. Therefore, to compare the overall classification quality of our GSC methods with related methods, we consider average accuracy aggregated over all datasets, and within each field (i.e., biochemistry and social networks), in the following way. First, out of the thirteen datasets, classification results on four datasets (NCI109, ENZYMES, IMDB-M, REDDIT-12K) are reported significantly less frequently than the others, and therefore we discard them and use the remaining nine for the aggregation. Next, to address reported values versus N/A ones, we set an inclusion criterion of 75% reported datasets for each method. This translates into at most one N/A in each individual field, and at most two N/A overall. For each method that qualifies for this inclusion criterion, we compute its average accuracy over reported values (ignoring N/A ones) within each field and over all datasets; this results in up to three reported values for each method. The aggregated results of our GSC and 13 of the compared methods appear in Fig. 3(a). These results show that GSC (with SVM) outperforms all other methods on social network data, and in fact, as shown in Appendix B, it achieves state of the art results on two datasets of this type.
¹Accuracy for these methods was reported for less than 3/4 of the considered social graph datasets, but with biochemistry data they reach 7/9 of all considered datasets.
Additionally, the aggregated results show that our GSC approach (with FCL or SVM) outperforms all other feed forward methods both on biochemistry data and overall in terms of universal average accuracy.² The CCN method is omitted from these aggregated results, as its results in Kondor et al. (2018a) are only reported on four biochemistry datasets. For completeness, a detailed comparison of GSC with this method, which appears in Fig. 3(b), shows that our method outperforms it on two datasets while CCN outperforms GSC on the other two.
4.2 SCATTERING FEATURE SPACE FOR DATA EXPLORATION
Geometric scattering essentially provides a task independent representation of graphs in a Euclidean feature space. Therefore, it is not limited to supervised learning applications, and can also be utilized for exploratory graph-data analysis, as we demonstrate in this section. We focus our discussion on biochemistry data, and in particular on the ENZYMES dataset. Here, geometric scattering features can be considered as providing "signature" vectors for individual enzymes, which can be used to explore interactions between the six top level enzyme classes, labelled by their Enzyme Commission (EC) numbers (Borgwardt et al., 2005a). In order to emphasize the properties of scattering-based feature extraction, rather than downstream processing, we mostly limit our analysis of the scattering feature space to linear operations such as principal component analysis (PCA). We start by considering the viability of scattering-based embedding for dimensionality reduction of graph data. To this end, we applied PCA to our scattering coefficients (computed from unnormalized moments), while choosing the number of principal components to capture 90% of the explained variance. In the ENZYMES case, this yields a 16 dimensional subspace of the full scattering feature space. While the Euclidean notion of dimensionality is not naturally available in the original dataset, we note that graphs in it have, on average, 124.2 edges, 29.8 vertices, and 3 features per vertex, and therefore the effective embedding of the data into $\mathbb{R}^{16}$ indeed provides a significant dimensionality reduction. Next, to verify that the resulting PCA subspace still captures sufficient discriminative information with respect to classes in the data, we compare SVM classification on the resulting low dimensional vectors to that on the full feature space; indeed, projection onto the PCA subspace results in only a small drop in accuracy, from 56.85 ± 4.97 (full) to 49.83 ± 5.40 (PCA). Finally, we also consider the dimensionality of each individual class (with PCA and > 90% explained variance) in the scattering feature space, as we expect scattering to reduce the variability in each class w.r.t. the full feature space. In the ENZYMES case, individual classes have PCA dimensionality ranging between 6 and 10, which is indeed significantly lower than the 16 dimensions of the entire PCA space. Appendix C summarizes these findings, and repeats the described procedure for two additional biochemistry datasets (from Wale et al., 2008) to verify that these findings are not unique to the specific ENZYMES dataset, but rather indicate a more general trend for geometric scattering feature spaces. To further explore the scattering feature space, we now use it to infer relations between EC classes.
²It should be noted, though, that if NCI109 and ENZYMES were included, GCAPS-CNN would outperform GSC. However, many other methods would then not be comparable.
First, for each enzyme e, with scattering feature vector $v_e$ (i.e., with $Sx$ for all vertex features x), we compute its distance from class EC-j, with PCA subspace $C_j$, as the projection distance: $\mathrm{dist}(e, \text{EC-}j) = \|v_e - \mathrm{proj}_{C_j} v_e\|$. Then, for each enzyme class EC-i, we compute the mean distance of enzymes in it from the subspace of each EC-j class as $D(i, j) = \mathrm{mean}\{\mathrm{dist}(e, \text{EC-}j) : e \in \text{EC-}i\}$. Appendix C summarizes these distances, as well as the proportion of points from each class that have their true EC as their nearest (or second nearest) subspace in the scattering feature space. In general, 48% of enzymes select their true EC as the nearest subspace (with an additional 19% selecting it as second nearest), but these proportions vary between individual EC classes. Finally, we use these scattering-based distances to infer EC exchange preferences during enzyme evolution, which are presented in Fig. 4 and validated with respect to established preferences observed and reported in Cuesta et al. (2015). We note that the latter result is observed independently of the ENZYMES dataset. In particular, the portion of enzymes considered from each EC differs between these data sources, since Borgwardt et al. (2005b) took special care to ensure that each EC class in ENZYMES has exactly 100 enzymes in it. However, we notice that the portion of enzymes (in each EC) that choose the wrong EC as their nearest subspace, which can be considered as EC "incoherence" in the scattering feature space, correlates well with the proportion of evolutionary exchanges generally observed for each EC in Cuesta et al. (2015), and therefore we use these as EC weights in Fig. 4(c). Our results in Fig. 4 demonstrate that scattering features are sufficiently rich to capture relations between enzyme classes, and indicate that geometric scattering has the capacity to uncover descriptive and exploratory insights in graph data analysis, beyond the supervised graph classification of Sec. 4.1.
5 CONCLUSION
We presented the geometric scattering transform as a deep filter bank for feature extraction on graphs. This transform generalizes the scattering transform and augments the theoretical foundations of geometric deep learning. Further, our evaluation results on graph classification and data exploration show the potential of the produced scattering features to serve as universal representations of graphs. Indeed, classification with these features, using relatively simple classifier models, reaches high accuracy on most commonly used graph classification datasets, and outperforms both traditional and recent deep learning feed forward methods in terms of average classification accuracy over multiple datasets. We note that this might be partially due to the scarcity of labeled big data in this field, compared to more traditional ones (e.g., image or audio classification). However, this trend also correlates with empirical results for the classic scattering transform, which excels in cases with low data availability. Finally, the geometric scattering features provide a new way of computing and considering global graph representations, independent of specific learning tasks. Therefore, they raise the possibility of embedding entire graphs in Euclidean space and computing meaningful distances between graphs, which can be used for both supervised and unsupervised learning, as well as exploratory analysis of graph-structured data.
APPENDIX A FULL COMPARISON TABLE
All results come from the respective papers that introduced the methods, with the exception of: (1) social network results of WL, from Tixier et al. (2017); (2) biochemistry and social results of DCNN, from Verma & Zhang (2018); (3) biochemistry (except for D&D) and social results of GK and DCNN, using a different training/test split, from Yanardag & Vishwanathan (2015); (4) D&D results of GK, from Niepert et al. (2016); and (5) for Graphlets, biochemistry results from Kriege et al. (2016) and social results from Tixier et al. (2017).
APPENDIX B STATE OF THE ART RESULTS ON REDDIT DATASETS
APPENDIX C DETAILED TABLES FOR SCATTERING FEATURE SPACE ANALYSIS FROM SECTION 4.2
APPENDIX D DETAILED DATASET DESCRIPTIONS
The details of the datasets used in this work are as follows (see the main text in Sec. 4.1 for references):
NCI1 contains 4,110 chemical compounds as graphs, with 37 node features. Each compound is labeled according to its activity against non-small cell lung cancer and ovarian cancer cell lines, and these labels serve as the classification goal on this data.
NCI109 is similar to NCI1, but with 4,127 chemical compounds and 38 node features.
MUTAG consists of 188 mutagenic aromatic and heteroaromatic nitro compounds (as graphs) with 7 node features. The classification here is binary (i.e., two classes), based on whether or not a compound has a mutagenic effect on bacterium.
PTC is a dataset of 344 chemical compounds (as graphs) with nineteen node features that are divided into two classes depending on whether they are carcinogenic in rats.
PROTEINS contains 1,113 proteins (as graphs) with three node features, where the goal of the classification is to predict whether the protein is an enzyme or not.
D&D contains 1,178 protein structures (as graphs) that, similar to the previous dataset, are classified as enzymes or non-enzymes.
ENZYMES is a dataset of 600 protein structures (as graphs) with three node features. These proteins are divided into six classes of enzymes (labelled by enzyme commission numbers) for classification.
COLLAB is a scientific collaboration dataset that contains 5K graphs. The classification goal here is to predict whether the graph belongs to a subfield of Physics.
IMDB-B is a movie collaboration dataset that contains 1K graphs. The graphs are generated from two genres, Action and Romance, and the classification goal is to predict the correct genre for each graph.
IMDB-M is similar to IMDB-B, but with 1.5K graphs and 3 genres: Comedy, Romance, and Sci-Fi.
REDDIT-B is a dataset with 2K graphs, where each graph corresponds to an online discussion thread. The classification goal is to predict whether the graph belongs to a Q&A-based community or a discussion-based community.
REDDIT-5K consists of 5K threads (as graphs) from five different subreddits. The classification goal is to predict the corresponding subreddit for each thread.
REDDIT-12K is similar to REDDIT-5K, but with 11,929 graphs from 12 different subreddits.
Table 4 summarizes the size of the available graph data (i.e., number of graphs, and both max & mean number of vertices within graphs) in these datasets, as previously reported in the literature.
Graph signals for social network data: None of the social network datasets has ready-to-use node features. Therefore, in the case of COLLAB, IMDB-B, and IMDB-M, we use the eccentricity, degree, and clustering coefficient of each vertex as characteristic graph signals.
In the case of REDDIT-B, REDDIT-5K, and REDDIT-12K, on the other hand, we only use the degree and clustering coefficient, due to the presence of disconnected graphs in these datasets.

APPENDIX E TECHNICAL DETAILS

The computation of the scattering features described in Section 3 is based on several design choices, akin to typical architecture choices in neural networks. Most importantly, it requires a choice of
1. which statistical moments to use (normalized or unnormalized),
2. the number of wavelet scales to use (given by J), and
3. the number of moments to use (denoted by Q).
The configuration used for each dataset in this work is summarized in Table 5, together with the specific settings used in the downstream classification layers, as described below.

Once the scattering coefficients are generated through the above process, they are fed either into a standard classifier (SVM or logistic regression), or into two or three fully connected layers (see Table 5 for specifics) followed by a softmax layer that computes the class probabilities. In the latter case, the cross-entropy loss is minimized during training, and ReLU is used as the activation function between fully connected layers. In addition, we use mini-batch training with batch size 64 and the ADAM optimizer. Two learning rates, 0.002 and 0.02, are tested during training, and the optimal number of training epochs is decided through cross validation. Finally, L2 regularization is used to avoid overfitting.

Cross validation procedure: Classification evaluation was done with a standard ten-fold cross validation procedure. First, the entire dataset is randomly split into ten subsets. Then, in each iteration (or “fold”), nine of them are used for training and validation, and the remaining one is used to test classification accuracy. In total, after ten iterations, each of the subsets has been used once for testing, resulting in ten reported classification accuracies for the examined dataset. Finally, the mean and standard deviation of these ten accuracies are computed and reported.

It should be noted that when using fully connected layers, each iteration also performs automatic tuning of the trained classifier, as follows (see the sketch at the end of this appendix). First, nine inner iterations are performed, each time using eight subsets (i.e., folds) for training and the remaining one as a validation set, which is used to determine the optimal epoch for network training. Then, the classifier is retrained with all nine subsets. After these nine iterations, each of the training/validation subsets has been used once for validation, and we obtain nine classification models, which in turn produce nine predictions (i.e., class probabilities) for each data point in the test subset of the main cross validation. To obtain the final result of this cross validation iteration, we sum up all these predictions and select the class with the highest total probability as our final classification result. These predictions are then compared to the true labels in the test subset to obtain the classification accuracy for this fold.

Software & hardware environment: Geometric scattering and related classification code were implemented in Python with TensorFlow. All experiments were performed in an HPC environment on an intel16-k80 cluster, with each job requesting one node with four processors and two Nvidia Tesla K80 GPUs.
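As a concrete illustration of the nested cross-validation procedure above, the following simplified sketch replaces the fully connected network with a stand-in scikit-learn classifier; the names X, y, and the logistic-regression stand-in are illustrative assumptions, not the paper's TensorFlow implementation.

# Simplified sketch of the nested ten-fold procedure described above.
# X: (n_graphs, d) array of scattering features; y: (n_graphs,) labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

def nested_cv_accuracy(X, y, seed=0):
    classes = np.unique(y)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    fold_accuracies = []
    for train_idx, test_idx in outer.split(X, y):
        inner = StratifiedKFold(n_splits=9, shuffle=True, random_state=seed)
        prob_sum = np.zeros((len(test_idx), len(classes)))
        for fit_idx, val_idx in inner.split(X[train_idx], y[train_idx]):
            # With the neural classifier, val_idx would select the optimal
            # training epoch before retraining on all nine training folds;
            # the deterministic stand-in just retrains on the full training
            # split, so fit_idx/val_idx only mark where that tuning happens.
            clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            prob_sum += clf.predict_proba(X[test_idx])
        # predict_proba columns follow sorted class labels (clf.classes_),
        # which match np.unique(y) under stratified splitting.
        preds = classes[prob_sum.argmax(axis=1)]
        fold_accuracies.append(np.mean(preds == y[test_idx]))
    return np.mean(fold_accuracies), np.std(fold_accuracies)

With the actual network, the nine inner models differ through their selected epochs and initializations, so the summed class probabilities form a genuine ensemble; the deterministic stand-in collapses the nine models into one but preserves the prediction-summing aggregation.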
1. What is the focus of the paper in terms of advancing geometric deep learning?
2. How does the proposed approach differ from previous works in this area, specifically those mentioned in the review?
3. Is the theoretical novelty of the paper sufficient to justify its contribution?
4. How significant is the improvement shown in the paper compared to other published methods?
5. Are the claims made in the paper supported by the experimental results provided?
Review
The authors propose an advance in geometric deep learning based on a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. The paper is well written, easy to understand also for a not-so-technical audience, but nevertheless precise in all the mathematical details. The introduction and references are satisfactory, and the experimental section is also sufficiently convincing. However, there are two big issues undermining the overall structure of the manuscript:
a) the theoretical novelty w.r.t. (Zou & Lerman, 2018) and (Gama et al., 2018) is partial and rather technical, so the originality of the present manuscript is limited;
b) the improvement w.r.t. other published methods is rather small, so the quite complex theoretical construction is only partially justified by the performance gain.