The vulnerability of deep neural networks to adversarial examples has become a significant concern for deploying these models in sensitive domains. Devising a definitive defense against such attacks has proven to be challenging, and methods that rely on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism. In this paper, we consider the adversarial detection problem under the robust optimization framework. We partition the input space into subspaces and train adversarially robust subspace detectors using asymmetrical adversarial training (AAT). The integration of the classifier and the detectors yields a detection mechanism that provides a performance guarantee against the adversary it considers. We demonstrate that AAT promotes the learning of class-conditional distributions, which in turn gives rise to generative detection/classification approaches that are both robust and more interpretable. We provide comprehensive evaluations of these methods and demonstrate their competitive performance and compelling properties on adversarial detection and robust classification problems. Deep neural networks have become the staple of modern machine learning pipelines, achieving state-of-the-art performance on extremely difficult tasks in applications such as computer vision, speech recognition, machine translation, robotics, and biomedical image analysis. Despite their outstanding performance, these networks have been shown to be vulnerable to various types of adversarial attacks, including evasion attacks (also known as inference or perturbation attacks) and poisoning attacks. These vulnerabilities hinder the deployment of deep neural networks in sensitive domains including, but not limited to, health care, finance, autonomous driving, and defense-related applications, and have become a major security concern. Because of these vulnerabilities, there has been a recent surge of work on designing defense mechanisms against adversarial attacks, which has in turn motivated the design of stronger attacks that defeat the proposed defenses. Moreover, the proposed defenses have been shown to be limited, often ineffective, and easy to overcome. Alternatively, a large body of work has focused on the detection of adversarial examples. While training robust classifiers focuses on maintaining performance in the presence of adversarial examples, adversarial detection only aims to detect these examples. The majority of current detection mechanisms focus on non-adaptive threats, for which the attacks are not specifically tuned or tailored to bypass the detection mechanism and the attacker is oblivious to it. In fact, Carlini & Wagner (2017a) and follow-up work showed that many previously proposed detection methods are significantly less effective than their claimed performance under adaptive attacks. Current solutions are mostly heuristic approaches that cannot provide performance guarantees against the adversary they consider. In this paper, we are interested in detection mechanisms for adversarial examples that can withstand adaptive attacks.
Unlike previous approaches that assume adversarial and natural samples come from different distributions, and thus rely on a single classifier to distinguish between them, we instead partition the input space into subspaces based on the classification system's output and perform adversarial/natural sample classification within these subspaces. Importantly, this partitioning allows us to drop the adversarial constraint and employ a novel asymmetrical adversarial training (AAT) objective to train robust binary classifiers in the subspaces. Figure 1 illustrates our idea of space partitioning and robust detector training. Our qualitative results show that AAT encourages detectors to learn class-conditional distributions, which further motivates generative detection/classification solutions that are both robust and interpretable. Our specific contributions are:
• We develop adversarial example detection techniques that provide performance guarantees against norm-constrained adversaries. Empirically, our best models improve the previous state-of-the-art mean L2 distortion from 3.68 to 4.47 on the MNIST dataset, and from 1.1 to 1.5 on the CIFAR10 dataset.
• We study powerful and versatile generative classification models derived from our detection framework and demonstrate their competitive performance relative to discriminative robust classifiers. While defense mechanisms based on ordinary adversarial training are vulnerable to unrecognizable inputs (e.g., rubbish examples), inputs that cause confident predictions from our models have human-understandable semantic meaning.
• We demonstrate that AAT not only induces robustness, as ordinary adversarial training methods do, but also promotes the learning of class-conditional distributions. Intuitively, the learning mechanism is similar to that of GANs, but the objective does not learn a fixed generator. On 1D and 2D benchmark datasets we show that this flexibility allows us to precisely control the data generation process so that the detector can be pushed to a good approximation of the underlying density function. (In the case of GANs, at the global optimum the discriminator converges to a degenerate uniform solution.) Our image generation results on CIFAR10 and ImageNet rival those of state-of-the-art GANs.
Adversarial attacks. Since the pioneering work on adversarial examples, a large body of work has focused on designing algorithms that achieve successful attacks on neural networks. More recently, iterative projected gradient descent (PGD), initially proposed by Kurakin et al. (2016b), has been empirically identified as the most effective approach for performing norm-ball constrained attacks, and the attack reasonably approximates the optimal attack. Adversarial detection techniques. The majority of methods developed for detecting adversarial attacks are based on the following core idea: given a trained K-class classifier f: R^d → {1...K} and its natural training samples D, generate a set of adversarially attacked samples D' and devise a mechanism to discriminate D from D'. For instance, one line of work uses this exact idea and learns a binary classifier to distinguish the natural and adversarially perturbed sets. Another approach appends a new "attacked" class to the classifier f and re-trains a secured network that classifies natural images x ∈ D into the K classes and all attacked images x ∈ D' into the (K + 1)-th class.
In contrast to methods that aim at detecting adversarial examples directly from the image content, other work trained a binary classifier that receives as input the intermediate-layer features extracted from the classifier network f, and distinguished D from D' based on such features. More importantly, this line of work considered the case of an adaptive/dynamic adversary and proposed to harden the detector against such attacks using an adversarial training approach similar to that of Goodfellow et al. (2014b). Unfortunately, the mentioned detection methods are significantly less effective under an adaptive adversary equipped with a strong attack. Let D = {(x_i, y_i)} denote the natural dataset and let f: R^d → {1...K} be the classifier that is used to do classification on D. With the labels and the predicted labels, the dataset respectively forms the partitions {D_k} and {D^f_k}, where D_k = {x_i : y_i = k} and D^f_k = {x_i : f(x_i) = k}. Let {h_k : R^d → {0, 1}}_{k=1}^K be a set of binary classifiers (detectors), in which h_k is trained to discriminate natural samples classified as k from adversarial samples that fool the network f(·) into being classified as k. Also, let D' be a set of L_p norm-bounded adversarial examples crafted from D, i.e., samples x + δ with δ ∈ S = {δ : ||δ||_p ≤ ε} that cause misclassification by f. Consider the following procedure to determine whether a sample x in D ∪ D' is an adversarial example: first obtain the estimated class label k := f(x), then use the k-th detector to predict: if h_k(x) = 1 then x is a natural sample, otherwise it is an adversarial sample. The detection accuracy of this procedure decomposes over the subsets of samples assigned to each class, so minimizing the algorithm's classification error is equivalent to minimizing the classification error of the individual detectors. Employing empirical risk minimization, detector k, parameterized by θ_k, is trained by minimizing E_{x ∈ D^f_k}[L(h_{θ_k}(x), 1)] + E_{x ∈ D'_k}[L(h_{θ_k}(x), 0)], where D'_k denotes the adversarial examples classified as k and L is a loss function that measures the distance between h_k's output and the supplied label (e.g., the binary cross-entropy loss). In the case of adaptive attacks, when the adversary aims to fool both the classifier and the detectors, the accuracy of a naively trained detector can be significantly reduced. In order to be robust to adaptive attacks, inspired by the idea of robust optimization, we incorporate the attack into the training objective: min_{θ_k} E_{x ∈ D^f_k}[L(h_{θ_k}(x), 1)] + E_{x ∈ D^f_{\k}}[max_{δ ∈ S, f(x+δ)=k} L(h_{θ_k}(x + δ), 0)], where D^f_{\k} = {x : f(x) ≠ k, y ≠ k, x ∈ D}, and we assume that the perturbation budget is large enough such that ∀x ∈ D^f_{\k}, ∃δ ∈ S s.t. f(x + δ) = k. Now, by dropping the f(x + δ) = k constraint we can derive an upper bound for the adversarial loss term: max_{δ ∈ S, f(x+δ)=k} L(h_{θ_k}(x + δ), 0) ≤ max_{δ ∈ S} L(h_{θ_k}(x + δ), 0). The detector can instead be trained by minimizing this upper bound using the following unconstrained objective: min_{θ_k} E_{x ∈ D^f_k}[L(h_{θ_k}(x), 1)] + E_{x ∈ D^f_{\k}}[max_{δ ∈ S} L(h_{θ_k}(x + δ), 0)]. Further, we use the fact that when D is used as the training set, f can overfit on D such that D_{\k} = {x_i : y_i ≠ k} and D_k are respectively good approximations of D^f_{\k} and D^f_k. This leads to our proposed asymmetrical adversarial training (AAT) objective (objective 5): min_{θ_k} E_{x ∈ D_k}[L(h_{θ_k}(x), 1)] + E_{x ∈ D_{\k}}[max_{δ ∈ S} L(h_{θ_k}(x + δ), 0)]. In a nutshell, each detector is trained using in-class natural samples and detector-adversarial examples crafted from out-of-class samples. We use the iterative PGD attack to solve the inner maximization. Because of the integrated adversary, objective 5 is no longer a straightforward discriminative objective. Our investigations (Appendix A) showed that the objective encourages detectors to learn conditional data distributions. Similar to the GAN objective, the AAT objective presents a minimax problem, where the adversary tries to generate perturbed samples that look like target-class data, and the detector is trained by discriminating between target-class data and perturbed data.
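For concreteness, the following is a minimal PyTorch-style sketch of one AAT training step for detector h_k (objective 5). The function and variable names are illustrative rather than taken from the authors' released code; the inner maximization here uses signed-gradient PGD steps under an L∞ budget, whereas the paper reports using Adam or normalized steepest descent as the inner optimizer.

```python
import torch
import torch.nn.functional as F

def pgd_inner_max(detector, x, eps, step_size, steps):
    """Approximately solve max_{||delta||_inf <= eps} L(h_k(x + delta), 0)
    on out-of-class samples x (label 0 = negative/adversarial)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = detector(x + delta)  # raw logit z(h_k(.)), shape (B, 1)
        # Maximizing the BCE loss with label 0 pushes the detector logit up,
        # i.e., makes the perturbed sample look like class-k natural data.
        loss = F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Keep pixels in a valid range (assuming inputs scaled to [0, 1]).
    return (x + delta).clamp(0.0, 1.0).detach()

def aat_step(detector, optimizer, in_class_x, out_class_x,
             eps=0.3, step_size=0.01, steps=40):
    """One AAT update: in-class samples are positives, PGD-perturbed
    out-of-class samples are negatives (objective 5)."""
    x_adv = pgd_inner_max(detector, out_class_x, eps, step_size, steps)
    logits_pos = detector(in_class_x)
    logits_neg = detector(x_adv)
    loss = (F.binary_cross_entropy_with_logits(logits_pos, torch.ones_like(logits_pos))
            + F.binary_cross_entropy_with_logits(logits_neg, torch.zeros_like(logits_neg)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```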
The key difference from GANs is that instead of learning a mapping from a latent-space distribution to the data distribution (i.e., a generator), AAT relies on gradient descent (i.e., the PGD attack) to generate data samples. This is crucial, as it allows us to perform fine-grained control over the generation process (especially by constraining the perturbation limit), so that the discriminator (detector) can retain density information (see Appendix A) and not converge to a degenerate uniform solution as in the case of GANs. Unfortunately, the detector does not define an explicit density function. Under the energy-based learning framework, we can, however, obtain the joint probability of the input and a class category using the Gibbs distribution: p(x, k) = exp(−E_{θ_k}(x)) / Z_Θ, where Z_Θ is an unknown normalizing constant and E_{θ_k}(x) = −z(h_k(x)) (see Appendix A for justification). We can then apply the Bayes classification rule to obtain a generative classifier: H(x) = arg max_k p(x, k) = arg max_k z(h_k(x)). In addition, we can use p(x, k) to reject low-probability inputs. We implement the reject option by thresholding the k̂-th detector's logit output, where k̂ is the predicted class. In the context of adversarial example detection, rejected samples are considered adversarial examples. We first test the robustness of individual detectors. We show that, once we train a detector with an adequately configured PGD attack, its performance cannot be significantly reduced by an adversary with much stronger configurations (stronger in terms of steps and step-size). Although the PGD attack can reasonably solve the inner maximization in objective 5, it is not clear whether the optimization landscape of the asymmetrical objective is the same as that of its symmetrical counterparts. For instance, we found that the step-size used by MadryLab to train their CIFAR10 robust classifier would not induce robustness in our detectors (see Appendix D.2.2). We also face a unique challenge when training with objective 5: the numbers of positive and negative samples are highly imbalanced. Our solution is to use re-sampling to balance the positive and negative classes. Furthermore, we use adversarial finetuning on CIFAR10 and ImageNet to speed up the training of our detectors. With the robustness tests, we show that robust optimization also introduces robustness within this new training paradigm. We use AUC (area under the ROC curve) to measure detection performance. The metric can be interpreted as the probability that the detector assigns a higher score to a random positive sample than to a random negative example. While the true-positive and false-positive rates are the commonly used metrics for measuring detection performance, they require a detection threshold to be specified. AUC, however, is an aggregated measurement of detection performance across a range of thresholds, and we found it to be a more stable and reliable metric. For the k-th detector h_k, its AUC is computed on the set {(x, 1) : x ∈ D_k} ∪ {(x + δ*, 0) : x ∈ D_{\k}}, where δ* is the perturbation found by the inner maximization (refer to loss 4). Having validated the robustness of individual detectors, we evaluate the overall performance of our integrated detection system. Recalling our detection rule, we first obtain the estimated class label k := f(x), then use the k-th detector's logit output z(h_k(x)) to predict: if z(h_k(x)) ≥ T_k, then x is a natural sample; otherwise it is an adversarially perturbed sample.
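A minimal sketch of the integrated-detection rule and the generative classifier just described, assuming `classifier` returns K logits and `detectors` is the list of K trained binary detectors whose raw (pre-sigmoid) logit plays the role of z(h_k(x)); names are illustrative.

```python
import torch

def detect(classifier, detectors, x, thresholds):
    """Integrated detection: accept x as natural iff the detector of the
    predicted class scores it at or above its threshold T_k."""
    k_hat = classifier(x).argmax(dim=1)                                 # (B,)
    det_logits = torch.stack([d(x).squeeze(1) for d in detectors], dim=1)  # (B, K)
    score = det_logits.gather(1, k_hat.unsqueeze(1)).squeeze(1)         # z(h_khat(x))
    is_natural = score >= thresholds[k_hat]                             # thresholds: (K,)
    return k_hat, is_natural

def generative_classify(detectors, x, reject_threshold=None):
    """Generative classifier: H(x) = argmax_k z(h_k(x)); optionally reject
    inputs whose best (unnormalized) log-probability falls below a threshold."""
    det_logits = torch.stack([d(x).squeeze(1) for d in detectors], dim=1)  # (B, K)
    best, k_hat = det_logits.max(dim=1)
    rejected = None if reject_threshold is None else best < reject_threshold
    return k_hat, rejected
```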
For the sake of this evaluation, we use a universal threshold for all the detectors: ∀k ∈ {1...K}, T_k = T, and report detection performance at a range of universal thresholds. In practice, however, the optimal value of each detector's detection threshold T_k should be determined by optimizing a utility function. We use D to denote the test set that contains natural samples, and D' to denote the corresponding perturbed test set. For a given threshold T, we compute the true positive rate (TPR) on D and the false positive rate (FPR) on D'. These two metrics are respectively defined as the fraction of natural samples x ∈ D with z(h_{f(x)}(x)) ≥ T, and the fraction of perturbed samples x ∈ D' with f(x) ≠ y and z(h_{f(x)}(x)) ≥ T. In the FPR definition we use the f(x) ≠ y condition to ensure that only true adversarial examples are counted as false positives. This constraint is necessary, as we found that for the norm-ball constraints we considered in the experiments, not all perturbed samples are adversarial examples that cause misclassification by f. In order to craft the perturbed dataset D', we consider three attacking scenarios. Classifier attack. This attack corresponds to the scenario where the adversary is oblivious to the detection mechanism. For a given natural sample x and its label y, the perturbed sample x' is computed by minimizing the loss z(f(x'))_y − max_{i ≠ y} z(f(x'))_i, where z(f(x')) denotes the classifier's logit outputs. This objective is derived from the CW attack and is used in MadryLab (b) and MadryLab (a) to perform untargeted attacks. Detectors attack. In this scenario adversarial examples are produced by attacking only the detectors. We construct a single detection function H by using the i-th detector's logit output as its i-th logit output: z(H(x))_i := z(h_i(x)). H is then treated as a single network, and the perturbed sample x' for a given input (x, y) is computed by minimizing the loss −max_{i ≠ y} z(H(x'))_i (loss 9). Note that, according to our detection rule, a low value of a detector's logit output indicates detection of an adversarial example; thus by minimizing the negative of the logit output we make the perturbed example harder to detect. H could also be fed directly into the CW loss 8 or into a cross-entropy loss, but we found the attack based on the loss in 9 to be significantly more effective. Combined attack. With the goal of fooling both the classifier and the detectors, perturbed samples are produced by attacking the integrated detection system. We consider two loss functions for realizing the combined attack. The first is based on the combined loss function of Carlini & Wagner (2017a), which has been shown to be effective against an array of detection methods. Given a natural example x and its label y, as in the detectors-attack scenario, we first construct a single detection function H by aggregating the logit outputs of the individual detectors: z(H(x))_i := z(h_i(x)). We then use the aggregated detector's largest non-true-class logit output max_{k ≠ y} z(H(x))_k (a low value of this quantity indicates detection of an adversarial example) and the classifier logit outputs z(f(x)) to construct a surrogate classifier g whose logits combine these two quantities. A perturbed example x' is then computed by minimizing a CW-style loss on the surrogate classifier g. In practice we observe that the optimization of this loss tends to get stuck at a point where max_{i ≠ y} z(f(x))_i keeps changing sign while max_{j ≠ y} z(H(x))_j stays a large negative number (which indicates detection). To derive a more effective attack we consider a simple combination of loss 8 and loss 9 (loss 12). The objective is straightforward: if x is not yet an adversarial example on f, optimize it for that goal; otherwise optimize it for fooling the aggregated detector.
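A sketch of the three per-sample attack losses under the stated assumptions: loss 8 as the CW-style margin on the classifier logits, loss 9 raising the aggregated detector's best non-true-class logit, and loss 12 switching between them. The exact forms in the paper's attack code may differ slightly.

```python
import torch

def cw_classifier_loss(clf_logits, y):
    """Loss 8 (sketch): drive the true-class logit below the best other logit."""
    true = clf_logits.gather(1, y.unsqueeze(1)).squeeze(1)
    other = clf_logits.clone()
    other.scatter_(1, y.unsqueeze(1), float('-inf'))   # mask out the true class
    return true - other.max(dim=1).values               # minimize => misclassification

def detector_loss(det_logits, y):
    """Loss 9 (sketch): raise some non-true-class detector logit so the sample
    looks 'natural' to the detector matching its (wrong) predicted class."""
    masked = det_logits.clone()
    masked.scatter_(1, y.unsqueeze(1), float('-inf'))
    return -masked.max(dim=1).values

def combined_loss(clf_logits, det_logits, y):
    """Loss 12 (sketch): per sample, use loss 8 while f still predicts y,
    and switch to loss 9 once the sample is already misclassified."""
    still_correct = clf_logits.argmax(dim=1) == y
    return torch.where(still_correct,
                       cw_classifier_loss(clf_logits, y),
                       detector_loss(det_logits, y))
```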
We mention briefly here that we perform the same performance analysis of our generative detection method (as detailed in Section 3.2) by computing the TPR on D and the FPR on D'. We use loss 9 to perform attacks against the generative detection method, but also provide results of attacks based on the cross-entropy loss and the CW loss 8. Integrated classification. In addition to the generative classifier proposed in Section 3.2, we introduce another classification scheme that provides a reject option. The scheme is based on an integration of the naive classifier f and the detectors: for a given input x and its predicted label k̂ = f(x), the input is rejected if z(h_k̂(x)) < T and otherwise classified as k̂. We respectively use loss 12 and loss 9 to attack the integrated classifier and the generative classifier. Performance metric. In the context of robust classification, the performance of a robust classifier is measured using standard accuracy and robust accuracy, i.e., accuracies respectively computed on the natural dataset and the perturbed dataset. We provide a similar performance analysis of the above classification models. On the natural dataset, we compute the accuracy as the fraction of samples that are correctly classified (f(x) = y) and at the same time not rejected (z(h_k̂(x)) ≥ T). On the perturbed dataset we compute the error as the fraction of samples that are misclassified (f(x) ≠ y) and at the same time not rejected. Note that in this case the error is no longer the complement of the accuracy. For a classification system with a reject option, any perturbed sample that is rejected should be considered properly handled, regardless of whether it is misclassified. Thus on the perturbed dataset, the error, which is the fraction of misclassified and not-rejected samples, is a more proper notion of such a system's performance. For a standard robust classifier, its perturbed-set error is computed as the complement of its accuracy on the perturbed set. Using different p-norms and maximum perturbation constraints we trained four detection systems (each with 10 base detectors), with training and validation adversarial examples optimized using PGD attacks of different steps and step-sizes (see Table 6). At each step of the PGD attack we use the Adam optimizer to perform gradient descent, both for the L2 and L∞ constrained scenarios. Appendix B.1 provides more training details. Robustness results. The robustness test results in Table 1 confirm that the base detectors trained with objective 5 are able to withstand much stronger PGD attacks, for both the L2 and L∞ scenarios. Normalized steepest descent is another popular choice for performing the PGD attack, for which we obtained similar robustness results (Table 8). Further results, including the complete list of performances of the L∞ = 0.3 and L∞ = 0.5 trained detectors, cross-norm and cross-perturbation test results, and random restart test results, are included in Appendix D.1. Table 1: AUC scores of the first two detectors (k = 0, 1) tested with PGD attacks of different strengths (steps and step-size) using the Adam optimizer. Detection results. Figure 2a shows that the combined attack is the most effective attack against integrated detection. Generative detection (attacked using loss 9) outperforms integrated detection, especially when the detection threshold is low (the region where TPR is high). In Figure 12 we confirm that loss 9 is more effective than the CW loss and the cross-entropy loss for attacking generative detection. Notably, the red curve that overlaps the y-axis shows that integrated detection can perfectly detect adversarial examples crafted by attacking only the classifier (using objective 8).
In Table 2 we compare the performance of our generative detection method with the state-of-the-art detection method as identified by Carlini & Wagner (2017a). Our method using L∞ = 0.5 trained base detectors is able to outperform the state-of-the-art method by a large margin. Appendix C describes the procedure we used to compute the mean L2 distortion of our method. Classification results. In Figure 2b, we compare the robust classification performance of our methods and a state-of-the-art robust classifier. While the performance of the robust classifier is fixed, by using different rejection thresholds our classification methods provide the option to balance standard accuracy and robust error. The generative classifier outperforms the integrated classifier when the rejection threshold is low (i.e., when the perturbed-set error is high). We observe that a stronger attack (ε = 0.4) breaks the robust classifier, while the generative classifier still exhibits robustness, even though both systems are trained with the same L∞ = 0.3 constraint. Figure 3 shows perturbed samples produced by performing targeted attacks against the generative classifier and the robust classifier. We observe that perturbed samples produced by attacking the generative classifier have distinguishably visible features of the target class, indicating that the base detectors, from which the generative classifier is built, have successfully learned the class-conditional distributions, and that the perturbation has to change the semantics of the underlying sample for a successful attack. In contrast, perturbations introduced by attacking the robust classifier are not interpretable, even though they can cause high logit outputs for the target classes (see Figure 13 for the logit output distributions). Following this path and using a larger perturbation limit, it is straightforward to generate unrecognizable images that cause highly confident predictions from the robust classifier. Figure 3: Natural samples and corresponding perturbed samples produced by performing a targeted attack against the generative classifier and the robust classifier. Targets from the top row to the bottom row are digit classes 0 to 9. We perform the targeted attack by maximizing the logit output of the targeted class, using an L∞ = 0.4 constrained PGD attack with 100 steps and step-size 0.01. We note that both classifiers use L∞ = 0.3 as their training constraint. On CIFAR10 we train the base detectors using an L∞ = 8 constrained PGD attack with 40 steps and step-size 0.5. Note that the scale of ε and the step-size here is 0-255 (rather than 0-1 as in the case of MNIST). The robust classifier that we compare against is trained with the same L∞ = 8 constraint but with a different step-size (see Appendix D.2.2 for a discussion of step-sizes). Appendix B.2 provides the training details. Robustness results. Table 3 shows that the base detector models can withstand attacks that are significantly stronger than the training attack. In Appendix D.2.1 we present random restart test results, cross-norm and cross-perturbation test results, and robustness test results for the L2-based models. Detection results. Consistent with the MNIST results, in Figure 4a the combined attack is the most effective method against integrated detection. Similarly, generative detection outperforms integrated detection when the detection threshold is low (i.e., where TPR is high).
In this figure we use loss 9 to attack generative detection, and in Figure 14 we show that it is more effective than attacks based on the cross-entropy loss and the CW loss. In Table 4 our method outperforms the state-of-the-art adversarial detection method. Classification results. In Figure 4b, we did not observe a dramatic decrease in the robust classifier's performance when we increased the perturbation limit to ε = 12. Integrated classification can reach the standard accuracy of a regular classifier, but at the cost of a significantly increased error on the perturbed set. Figure 5 shows some perturbed samples produced by attacking the generative classifier and the robust classifier. While these two classifiers have similar errors on the perturbed set, samples produced by attacking the generative classifier have more visible features of the targets, which indicates that the adversary has to change more of the semantics in order to cause the same error. Figures 6 and 15 demonstrate that hard-to-recognize images are able to cause high logit outputs from the robust classifier. Such examples highlight a major defect of defense mechanisms based on ordinary adversarial training: they can be easily fooled by unrecognizable inputs. In contrast, samples that cause high logit outputs of the generative classifier all have clear semantic meaning. Since both classifiers are trained with the L∞ = 8 constraint, these results indicate that AAT improves robust and interpretable feature learning. (See Appendix E for the Gaussian noise attack and a discussion of the interpretability of our approach.) The visual similarity between the generated samples in Figure 6 and real samples further suggests that the detectors have successfully learned the conditional data distributions. Similarly, on ImageNet, we show that asymmetrical adversarial training induces detection robustness and supports the learning of class-conditional distributions. Our experiment is based on Restricted ImageNet, a subset of ImageNet that has its samples reorganized into customized categories. The dog category contains images of different dog breeds collected from ImageNet classes 151 through 268. We trained a dog-class detector by finetuning a pre-trained ResNet50 model. The dog category covers a range of ImageNet classes, each with its own logit output. We use the subnetwork defined by the logit output of class 151 as the detector (in principle, the logit output of any other class in the range should also work). Due to computational constraints, we only validated the robustness of an L∞ = 0.02 trained detector (trained with a PGD attack of 40 steps and step-size 0.001), and we present the results in Table 5. Figure 6: Images generated from class-conditional Gaussian noise by performing a targeted attack against the generative classifier and the robust classifier. We use a PGD attack with 60 steps and step-size 0.5 × 255 to perform an L2 ε = 30 × 255 constrained attack (following prior work). The Gaussian noise inputs from which the two sets of images are generated are the same. Samples were not hand-selected. In this paper, we studied the problem of adversarial detection under the robust optimization framework and proposed a novel adversarial detection scheme based on input-space partitioning. Our formulation leads to a new generative modeling technique which we call asymmetrical adversarial training (AAT).
AAT's capability to learn class-conditional distributions further gives rise to generative detection/classification methods that show competitive performance and improved interpretability. In particular, our generative classifier is more resistant to "rubbish examples", a significant threat to even the most successful defense mechanisms. High computational cost (see Appendix F for more discussion) is a major drawback of our methods, and in the future we will explore the idea of sharing computation between detectors.
A A DEEPER LOOK AT ASYMMETRICAL ADVERSARIAL TRAINING
Under the energy-based learning framework, the AAT objective can be understood as learning an energy function that has low energy outputs on target-class data points and high energy outputs everywhere else. The energy function in this case can be defined using the logit output of the target detector: E_{θ_k}(x) = −z(h_k(x)). Using the Gibbs distribution we can obtain a density function, which should be interpreted as the joint distribution of the data point and the corresponding class category: p(x, k) = exp(−E_{θ_k}(x)) / Z_Θ, where Z_Θ is a normalizing constant known as the partition function, computed in our case as Z_Θ = Σ_k ∫ exp(−E_{θ_k}(x)) dx. (We note that a similar formulation has been used in prior work.) When x lives in a high-dimensional space, Z_Θ is intractable, but for generative classification, knowing the unnormalized probability suffices. While ordinary discriminative training only learns a good discriminative boundary, AAT is able to learn the underlying density function. In Figure 7 we provide a schematic illustration of this phenomenon (results on real 1D data are in Figure 8). In Figure 9 we observe similar results on 2D benchmark datasets: the detectors are able to cover all modes of the distributions. We note that unlike GANs, where at the global optimum the discriminator converges to a uniform solution and thus cannot retain density information, by properly constraining the adversary (especially the perturbation limit) AAT is able to stably push the detector toward a good approximation of the underlying density function. Being able to do reliable (implicit) density estimation justifies our energy-based generative classification formulation (we leave the theoretical analysis of the objective's convergence properties to future work). This compelling property of AAT generalizes to the case of high-dimensional data. In the following, we present CIFAR10 and ImageNet image generation results for our models and for state-of-the-art GAN models (Figure 10 vs. Figure 11 for CIFAR10, and Figure 21 vs. Figure 23 for ImageNet). While images generated by our models are not necessarily more "realistic", they clearly capture the structures of the target objects. In particular, the results in Figure 21 show that our model only models the main structures of the target objects, not irrelevant elements such as backgrounds and other objects. We note that apart from being able to produce state-of-the-art image generation results, our approach at the same time provides classification/detection performance guarantees against norm-constrained adversarial examples, which other generative models cannot. Figure 7 (caption excerpt): The resulting energy function after training. The blue data distribution has two modes, but the energy function is not able to capture this structure due to its minute effect on the discriminative objective.
(c) By solving the inner maximization in objective 5, the red points are pushed toward the low-energy region, but (crucially) they do not collapse to the lowest-energy point because of the perturbation limit. The gap between the two sets of blue points is now filled with red points; minimizing the loss on these perturbed points causes the energy function to be pulled up in this region. (d) The energy function after training is able to capture the internal structure of the blue data. The positive-class data (blue points) are sampled from a mixture of Gaussians (mean 0.4 with std 0.01, and mean 0.6 with std 0.005, each with 250 samples). Both the blue and the red data have 500 samples. The estimated density function is computed using the Gibbs distribution and the network logit outputs. PGD attack: 20 steps, step-size 0.05, and perturbation limit ε = 0.3. Figure 9: 2D datasets (top row; blue points are class-1 data and red points are class-0 data, each with 1000 data points) and sigmoid outputs of AAT-trained models (bottom row). The architecture of the MLP model for solving these tasks is 2-500-500-500-500-500-1. PGD attack: 10 steps, step-size 0.05, and perturbation limit L∞ = 0.5.
B.1 MNIST TRAINING DETAILS
We use 50K samples from the original training set for training and the remaining 10K samples for validation, and report test performance based on the epoch-saved checkpoint that gives the best validation performance. All base detectors are trained using a network consisting of two max-pooled convolutional layers with 32 and 64 filters, respectively, and a fully connected layer of size 1024, the same architecture used in prior work. At each iteration we sample a batch of 320 samples, from which in-class samples are used as positive samples and out-of-class samples are used as the source of the adversarial examples that serve as negative samples. To balance positive and negative examples in each batch, we resample the out-of-class set to have the same number of samples as the in-class set. All base detectors are trained for 100 epochs. Table 6: MNIST dataset PGD attack steps and step-sizes for base detector training and validation.
B.2 CIFAR10 TRAINING DETAILS
We train our CIFAR10 base detectors using the ResNet50 model. To speed up training, we take advantage of a naturally trained classifier: the subnetwork of f that defines the output logit z(f(·))_k is essentially a "detector" that outputs high values for samples of class k and low values for others. Our detector is then trained by finetuning this subnetwork using objective 5. Our pretrained classifier has a test accuracy of 95.01% (fetched from the CIFAR10 adversarial challenge (MadryLab, a)). At each iteration of training we sample a batch of 300 samples, from which in-class samples are used as positive samples, while an equal number of out-of-class samples are used as sources for adversarial examples. Adversarial examples for training the L2 and L∞ models are both optimized using the PGD attack with normalized steepest descent (MadryLab, b). We report results based on the best performance on the CIFAR10 test set (and thus do not claim generalization performance of the proposed method).
C MEAN L2 DISTORTION COMPUTATION
We first find the detection threshold T with which the detection system has 0.95 TPR. We construct a new loss function by adding a weighted term that measures the perturbation size to objective 9: L(x') = −max_{i ≠ y} z(H(x'))_i + c · ||x' − x||_2 (loss 15). We then use an unconstrained PGD attack to optimize L(x'). We use binary search to find the optimal c, where in each binary search attempt, if x' is a false positive (arg max_i z(H(x'))_i ≠ y and max_{i ≠ y} z(H(x'))_i > T) we consider the current c effective and continue with a larger c.
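A sketch of the distortion-measurement procedure just described, assuming hypothetical helpers `attack_with_weight` (unconstrained PGD on loss 15) and `is_false_positive`; it binary-searches c and keeps the smallest L2 distortion among successful attempts.

```python
import numpy as np

def min_l2_distortion(x, y, attack_with_weight, is_false_positive,
                      c_lo=1e-3, c_hi=100.0, bsearch_steps=10):
    """Binary search over c in L(x') = L_det(x') + c * ||x' - x||_2 (loss 15)."""
    best = np.inf
    for _ in range(bsearch_steps):
        c = np.sqrt(c_lo * c_hi)                  # geometric midpoint (c_lo > 0)
        x_adv = attack_with_weight(x, y, c)       # unconstrained PGD on loss 15
        if is_false_positive(x_adv, y):           # still fools classifier + detectors
            best = min(best, np.linalg.norm((x_adv - x).ravel(), 2))
            c_lo = c                              # c was effective: try a larger c
        else:
            c_hi = c                              # attack failed: relax the penalty
    return best
```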
The configurations for performing the binary search and the PGD attack are detailed in Table 7. The upper bound on c is established such that, with this upper bound, no samples except those that are inherently misclassified by the generative classifier can be perturbed into a false positive. With these settings, our MNIST generative detector with L∞ = 0.3 base detectors reached 0.9962 FPR, the generative detector with L∞ = 0.5 base detectors reached 0.9936 FPR, and the CIFAR10 generative detector reached 0.9995 FPR. We note that it is very difficult to find the optimal c in loss 15 using binary search; hence, performance results based on mean L2 distortion are not precise, and we encourage future work to measure detection performance based on norm-constrained attacks (as in Figure 2a). Table 8: AUC scores of the first two base detectors under different strengths of PGD attacks using normalized steepest descent. The gradient descent rules for the L2 and L∞ constrained attacks are respectively x_{n+1} = x_n − γ · ∇f(x_n)/||∇f(x_n)||_2 and x_{n+1} = x_n − γ · sign(∇f(x_n)). Table 9: AUC scores of the first two base detectors under cross-norm and cross-perturbation attacks. L∞-based attacks use 200 steps and step-size 0.01, and L2-based attacks use 200 steps and step-size 0.1. Table 10: AUC scores of the first MNIST base detector under fixed-start and multiple-random-restart attacks. These two tests use the same attack configuration: the L∞ = 0.5 trained base detector is attacked using an L∞ = 0.5 constrained PGD attack with 200 steps and step-size 0.01, and the L2 = 5.0 trained base detector is attacked using an L2 = 5.0 constrained PGD attack with 200 steps and step-size 0.1. Table 11: AUC scores of all L∞ = 0.3 trained base detectors, tested with L∞ = 0.3 constrained PGD attacks of 200 steps and step-size 0.01. Table 12: AUC scores of all L∞ = 0.5 trained base detectors, tested with L∞ = 0.5 constrained PGD attacks of 200 steps and step-size 0.01. Table 13: AUC scores of the first CIFAR10 base detector under fixed-start and multiple-random-restart attacks. The L∞ = 2.0 base detector is attacked using a PGD attack with 10 steps and step-size 0.5, and the L∞ = 8.0 base detector is attacked using a PGD attack with 40 steps and step-size 0.5. Table 16: AUC scores of L∞ = 2.0 trained base detectors under an L∞ = 2.0 constrained PGD attack with 10 steps and step-size 0.5. Table 17: AUC scores of L∞ = 8.0 trained base detectors under an L∞ = 8.0 constrained PGD attack with 40 steps and step-size 0.5. We found that training with adversarial examples optimized with a sufficiently small step-size is essential for detection robustness. In Table 18 we test two L∞ = 2.0 base detectors trained with step-sizes 0.5 and 1.0, respectively. The step-size 1.0 model is not robust when tested with a much smaller step-size. We observe that when training the step-size 1.0 model, the training set adv AUC reached 1.0 in fewer than one hundred iterations, but the test set natural AUC plummeted to around 0.95 and could not recover thereafter. (Please refer to Figure 16 for the definitions of adv AUC and nat AUC.) This suggests that naturally occurring data samples and adversarial examples produced using a large step-size live in two quite different data spaces: training a classifier to separate these two kinds of data is easy, but the performance will not generalize to real attacks. Although the CIFAR10 robust classifier of MadryLab was trained using step-size 2, we found this step-size did not work in our case.
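For reference, the two update rules quoted in the Table 8 caption correspond to the following steps; a small sketch, with projection back into the norm ball and clipping to the valid input range left to helpers.

```python
import torch

def pgd_step_linf(x, grad, step_size):
    """x_{n+1} = x_n - gamma * sign(grad f(x_n))  (descent on the attack loss)."""
    return x - step_size * grad.sign()

def pgd_step_l2(x, grad, step_size, eps=1e-12):
    """x_{n+1} = x_n - gamma * grad f(x_n) / ||grad f(x_n)||_2 (per-sample norm)."""
    flat = grad.flatten(start_dim=1)
    norm = flat.norm(dim=1).clamp_min(eps).view(-1, *([1] * (grad.dim() - 1)))
    return x - step_size * grad / norm

def project_linf(x, x0, eps):
    """Project back into the L_inf ball of radius eps around the clean input x0."""
    return x0 + (x - x0).clamp(-eps, eps)
```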
To study the effects of the perturbation limit on asymmetrical adversarial training, we compare one L∞ = 2.0 trained and one L∞ = 8.0 trained base detector. In Figure 16 we show the training and testing history of these two models. The ε = 2.0 model's history shows that through adversarial finetuning the model reaches robustness in just a few thousand iterations, and the performance on natural samples is preserved (test natural AUC begins at 0.9971 and ends at 0.9981). Adversarial finetuning of the ε = 8.0 model did not converge after an extended 20K iterations of training. The gap between train adv AUC and test adv AUC for the ε = 8.0 model is more pronounced, and we observed a decrease of the test natural AUC from 0.9971 to 0.9909. The comparison shows that training with a larger perturbation limit is more time- and resource-consuming and can lead to a performance decrease on natural samples. The benefit is that the model learns more interpretable features. In Figure 17, perturbations generated by attacking the naturally trained classifier (corresponding to a perturbation limit of 0) do not have clear semantics. In contrast, perturbed samples of the L∞ = 8 model are completely recognizable. Figure 18: ImageNet 224×224×3 random samples generated from class-conditional Gaussian noise by attacking robust classifier and detector models trained with different constraints. Note that the large-perturbation models are still under training and have not yet reached robustness. Please refer to prior work for details about how the class-conditional Gaussian is estimated.
E GAUSSIAN NOISE ATTACK AND INTERPRETABILITY
In this section we use a Gaussian noise attack experiment to motivate a discussion about the interpretability of our generative classification model and of the discriminative robust classification model. Our approach is more interpretable in the sense that it provides a probabilistic view of the decision-making process of the classification problem, and this probabilistic interpretation is further supported by the experimental results. We first discuss how these two approaches determine the posterior class probabilities. For the discriminative classifier, the posterior probabilities are computed from the logit outputs of the classifier using the softmax function: p(k|x) = exp(z(f(x))_k) / Σ_{j=1}^K exp(z(f(x))_j). For our generative classifier, the posterior probabilities are computed in two steps: first, we train our base detectors, which amounts to solving the inference problem of determining the joint probability p(x, k) (see Appendix A); second, we use the Bayes rule to compute the posterior probability p(k|x) = p(x, k) / Σ_{j=1}^K p(x, j). Coincidentally, the formulas for computing the posterior probabilities take the same form. But in our approach, the exponential of the logit output of a detector (i.e., exp(z(h_k(x)))) has a clear probabilistic interpretation: it is the unnormalized joint probability of the input and the corresponding class category. We use the Gaussian noise attack to demonstrate that this probabilistic interpretation is consistent with our visual perception. We start from a Gaussian noise image and gradually perturb it to cause higher and higher logit outputs. This is implemented by a targeted PGD attack against the logit outputs of the two classification models. The resulting images in Figure 24 show that, for our model, the logit-increase direction is the semantic-change direction, while for the discriminative robust model the perturbed images computed by increasing logit outputs are not as clearly interpretable.
In particular, the perturbed images that cause high logit outputs of the robust classifiers are not recognizable.
F COMPUTATIONAL COST
In this section we provide an analysis of the computational cost of our generative classification approach. In terms of memory requirements, if we assume that the softmax classifier (i.e., the discriminative robust classifier) and the detectors use the same architecture (i.e., differ only in the final layer), then the detector-based generative classifier is approximately K times more expensive than the K-class softmax classifier. This also means that the computational graph of the generative classifier is K times larger than that of the softmax classifier. Indeed, in the CIFAR10 task, on our Quadro M6000 24GB GPU (TensorFlow 1.13.1), the inference speed of the generative classifier is roughly ten times slower than that of the softmax classifier. We next benchmark the training speed of these two types of classifiers. The generative classifier has K logit outputs, each defined by the logit output of a detector. The same holds for the softmax classifier, except that its K outputs share the parameters of the convolutional part. Now consider ordinary adversarial training of the softmax classifier and asymmetrical adversarial training of the generative classifier. To train the softmax classifier, we use batches of N samples. For the generative classifier, we train each detector with batches of 2 × M samples (M positive samples and M negative samples). At each iteration, we therefore need to compute N and M × K adversarial examples for these two classifiers, respectively. We now test the speed of the following two scenarios: 1) compute the gradient w.r.t. N samples on a single computational graph, and 2) compute the gradient w.r.t. M × K samples on K computational graphs, with each graph working on M samples. We assume in scenario 2 that all the computational graphs are loaded onto GPUs and that their computations are performed in parallel. In our CIFAR10 experiment, we used batches consisting of 30 positive samples and 30 negative samples to train each ResNet50-based detector. In prior work, the softmax classifier was trained with batches of 128 samples. In this case, K = 10, M = 30, and N = 128. On our GPU, scenario 1 took 683 ms ± 6.76 ms per loop, while scenario 2 took 1.85 s ± 42.7 ms per loop. In this case, we can expect asymmetrical adversarial training to be about 2.7 times slower than ordinary adversarial training, not counting parameter gradient computation. (If we choose to use a larger batch size, the computational cost will increase accordingly.)
A new generative modeling technique based on asymmetrical adversarial training, and its applications to adversarial example detection and robust classification
Exploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant, and hopper. To make this approach feasible, we develop several pruning techniques, including learning to predict a program's success based on its syntactic properties. We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in the published literature, as well as novel strategies that are competitive with them and generalize well. Figure 1: Our RL agent is augmented with a curiosity module, obtained by meta-learning over a complex space of programs, which computes a pseudo-reward r̂_t at every time step. When an agent is learning to behave online, via reinforcement learning (RL), it is critical that it both explores its domain and exploits its rewards effectively. In very simple problems, it is possible to solve the problem optimally, using techniques of Bayesian decision theory. However, these techniques do not scale well at all and are not effectively applicable to the problems addressable by modern deep RL, with large state and action spaces and sparse rewards. This difficulty has left researchers with the task of designing good exploration strategies for RL systems in complex environments. One way to think of this problem is in terms of curiosity or intrinsic motivation: constructing reward signals that augment or even replace the extrinsic reward from the domain, and that induce the RL agent to explore its domain in a way that results in effective longer-term learning and behavior. The primary difficulty with this approach is that researchers are hand-designing these strategies: it is difficult for humans to systematically consider the space of strategies or to tailor strategies for the distribution of environments an agent might be expected to face. We take inspiration from the curious behavior observed in young humans and other animals and hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life, in order to expose it to experiences that enable it to learn to obtain high rewards over the course of its lifetime. We propose to formulate the problem of generating curious behavior as one of meta-learning: an outer loop, operating at "evolutionary" scale, will search over a space of algorithms for generating curious behavior by dynamically adapting the agent's reward signal, and the inner loop will perform standard reinforcement learning using the adapted reward signal. This process is illustrated in Figure 1; note that the aggregate agent, outlined in gray, has the standard interface of an RL agent.
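A minimal sketch of the interface in Figure 1, with illustrative class and method names: the curiosity module sees the transition history and supplies a proxy reward to the inner RL algorithm, while only the outer meta-search observes the true lifetime return.

```python
from typing import Any, Callable, List, Tuple

Transition = Tuple[Any, Any, float, Any]   # (s_t, a_t, r_t, s_{t+1})

class CuriosityWrappedAgent:
    """Composite agent: the inner RL algorithm is trained on proxy rewards
    r_hat_t = C(e_{1:t-1}); the outer search scores the composite system
    by the true lifetime return sum_t r_t."""

    def __init__(self, rl_agent: Any, curiosity: Callable[[List[Transition]], float]):
        self.rl_agent = rl_agent          # e.g. DQN/PPO with fixed hyperparameters
        self.curiosity = curiosity        # candidate program from the DSL
        self.history: List[Transition] = []
        self.lifetime_return = 0.0        # outer objective: sum of true rewards

    def step(self, env: Any) -> None:
        # `env` is assumed to expose state()/step(); API is hypothetical.
        s = env.state()
        a = self.rl_agent.act(s)
        s_next, r, done = env.step(a)
        self.history.append((s, a, r, s_next))
        r_hat = self.curiosity(self.history)           # proxy reward; true r is hidden
        self.rl_agent.observe(s, a, r_hat, s_next, done)
        self.lifetime_return += r                      # only the meta-learner sees this
```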
The inner RL algorithm continually adapts to its input stream of states and rewards, attempting to learn a policy that optimizes the discounted sum of proxy rewards Σ_{k≥0} γ^k r̂_{t+k}. The outer "evolutionary" search attempts to find a program for the curiosity module, so as to optimize the agent's lifetime return Σ_{t=0}^T r_t, or another global objective like the mean performance on the last few trials. Although it is, in principle, possible to discover a complete, integrated algorithm for the entire curious learning agent in the gray box, that is a much more complex search problem that is currently computationally infeasible. We are relying on the assumption that the foundational methods for reinforcement learning, including those based on temporal differencing and policy gradients, are fundamentally sound and can serve as the behavior-learning basis for our agents. It is important to note, though, that the internal RL algorithm in our architecture must be able to tolerate a nonstationary reward signal, which may necessitate minor algorithmic changes or, at least, different hyperparameter values. In this meta-learning setting, our objective is to find a curiosity module that works well given a distribution of environments from which we can sample at meta-learning time. If the environment distribution is relatively low-variance (the tasks are all quite similar) then it might suffice to search over a relatively simple space of curiosity strategies (most trivially, the ε in an ε-greedy exploration strategy). Meta-RL has been widely explored recently, in some cases with a focus on reducing the amount of experience needed by initializing the RL algorithm well and, in others, with a focus on efficient exploration. The environment distributions in these cases have still been relatively low-diversity, mostly limited to variations of the same task, such as exploring different mazes or navigating terrains of different slopes. We would like to discover curiosity mechanisms that can generalize across a much broader distribution of environments, even those with different state and action spaces: from image-based games to joint-based robotic control tasks. To do that, we perform meta-learning in a rich, combinatorial, open-ended space of programs. This paper makes three novel contributions. We focus on a regime of meta-reinforcement-learning in which the possible environments the agent might face are dramatically disparate and in which the agent's lifetime is very long. This is a substantially different setting than has been addressed in previous work on meta-RL, and it requires substantially different techniques for representation and search. We represent meta-learned curiosity strategies in a rich, combinatorial space of programs rather than in a fixed-dimensional numeric parameter space. The programs are represented in a domain-specific language (DSL) which includes sophisticated building blocks: neural networks complete with gradient-descent mechanisms, learned objective functions, ensembles, buffers, and other regressors. This language is rich enough to represent many previously reported hand-designed exploration algorithms. We believe that by performing meta-RL in such a rich space of mechanisms, we will be able to discover highly general, fundamental curiosity-based exploration methods. This generality means that a relatively computationally expensive meta-learning process can be amortized over the lifetimes of many agents in a wide variety of environments.
We make the search over programs feasible with relatively modest amounts of computation. It is a daunting search problem to find a good solution in a combinatorial space of programs, where evaluating a single potential solution requires running an RL algorithm for up to millions of time steps. We address this problem in multiple ways. By including environments of substantially different difficulty and character, we can evaluate candidate programs first on relatively simple and short-horizon domains: if they do not perform well in those domains, they are pruned early, which saves a significant amount of computation time. In addition, we predict the performance of an algorithm from its structure and operations, thus trying the most promising algorithms early in our search. Finally, we also monitor the learning curves of agents and stop unpromising programs before they reach all T environment steps. We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in the published literature, as well as novel strategies that are competitive with them and generalize well. Let us assume we have an agent equipped with an RL algorithm (such as DQN or PPO, with all hyperparameters specified), A, which receives states and rewards from and outputs actions to an environment E, generating a stream of experienced transitions e(A; E)_t = (s_t, a_t, r_t, s_{t+1}). The agent continually learns a policy π(t): s_t → a_t, which will change in time as described by algorithm A, so π(t) = A(e_{1:t−1}) and thus a_t ∼ A(e_{1:t−1})(s_t). Although this need not be the case, we can think of A as an algorithm that tries to maximize the discounted reward Σ_i γ^i r_{t+i}, γ < 1, and that, at any time-step t, always takes the greedy action that maximizes its estimated expected discounted reward. To add exploration to this policy, we include a curiosity module C that has access to the stream of state transitions e_t experienced by the agent and that, at every time-step t, outputs a proxy reward r̂_t. We connect this module so that the original RL agent receives these modified rewards, thus observing e(A, C; E)_t = (s_t, a_t, r̂_t = C(e_{1:t−1}), s_{t+1}), without having access to the original r_t. Now, even though the inner RL algorithm acts in a purely exploitative manner with respect to r̂, it may efficiently explore in the outer environment. Our overall goal is to design a curiosity module C that induces the agent to maximize Σ_{t=0}^T r_t, for some number of total time-steps T or some other global goal, like final episode performance. In an episodic problem, T will span many episodes. More formally, given a single environment E, RL algorithm A, and curiosity module C, we can see the triplet (environment, curiosity module, agent) as a dynamical system that induces state transitions for the environment and learning updates for the curiosity module and the agent. Our objective is to find C that maximizes the expected original reward obtained by the composite system in the environment. Note that the expectation is over two different distributions at different time scales: there is an "outer" expectation over environments E, and an "inner" expectation over the rewards received by the composite system in that environment, so our final objective is max_C E_E [ E [ Σ_{t=0}^T r_t ] ]. In science and computing, mathematical language has been very successful in describing varied phenomena and powerful algorithms with short descriptions.
As Valiant points out: "the power [of mathematics and algorithms] comes from the implied generality, that knowledge of one equation alone will allow one to make accurate predictions about a host of situations not even conceived when the equation was first written down". Therefore, in order to obtain curiosity modules that can generalize over a very broad range of tasks and that are sophisticated enough to provide exploration guidance over very long horizons, we describe them in terms of general programs in a domain-specific language. Algorithms in this language map a history of (s_{t+1}, a_t, r_t) triples into a proxy reward r̂_t. Inspired by human-designed systems that compute and use intrinsic rewards, and to simplify the search, we decompose the curiosity module into two components: the first, I, outputs an intrinsic reward value i_t based on the current experienced transition (s_t, a_t, s_{t+1}) (and past transitions (s_{1:t−1}, a_{1:t−1}) indirectly through its memory); the second, χ, takes the current time-step t, the actual reward r_t, and the intrinsic reward i_t (and, if it chooses to store them, their histories) and combines them to yield the proxy reward r̂_t. To ease generalization across different timescales, in practice, before feeding t into χ we normalize it by the total length of the agent's lifetime, T. We draw both programs from the same basic class. Fundamentally, they consist of a directed acyclic graph (DAG) of modules with polymorphically typed inputs and outputs. There are four classes of modules:
• Input modules (shown in blue), drawn from the set {s_t, a_t, s_{t+1}} for the I module and from the set {i_t, r_t} for the χ module. They have no inputs, and their outputs have the type corresponding to the types of states and actions in whatever domain they are applied to, or the real numbers for rewards.
Figure 2: Example diagrams of published algorithms covered by our language (larger figures in the appendix). The green box represents the output of the intrinsic curiosity function, the pink box is the loss to be minimized. Pink arcs represent paths and networks along which gradients flow back from the minimizer to update parameters.
• Buffer and parameter modules (shown in gray) of two kinds: FIFO queues that provide as output a finite list of the k most recent inputs, and neural network weights initialized at random at the start of the program, which may (pink border) or may not get updated via back-propagation depending on the computation graph.
• Functional modules (shown in white), which compute output values given input from their parent modules.
• Update modules (shown in pink), which are functional modules (such as k-Nearest-Neighbor) that either add variables to buffers or add real-valued outputs to a global loss that will provide error signals for gradient descent.
A single node in the DAG is designated as the output node (shown in green): the output of this node is considered to be the output of the entire program, but it need not be a leaf node of the DAG. On each call to a program (corresponding to one time-step of the system), the current input values and parameter values are propagated through the functional modules, and the output node's output is saved, to be yielded as the output of the whole program. Before the call terminates, the FIFO buffers are updated and the adjustable parameters are updated via gradient descent using the Adam optimizer. Most operations are differentiable and thus able to propagate gradients backwards.
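For concreteness, a stripped-down sketch of how such a typed DAG of modules might be encoded; the four type tags mirror the types introduced in the next paragraph (R, S, A, F), but the classes and fields are illustrative rather than the authors' actual data structures.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Polymorphic value types: reals, state space, action space, 32-d feature space.
R, S, A, F = "R", "S", "A", "F"

@dataclass
class Module:
    name: str
    in_types: List[str]
    out_type: str
    kind: str                      # "input" | "buffer" | "functional" | "update"
    fn: Optional[Callable] = None  # instantiated per environment (CNN vs. MLP, etc.)
    parents: List["Module"] = field(default_factory=list)

@dataclass
class CuriosityProgram:
    modules: List[Module]
    output: Module                 # designated output node (intrinsic reward i_t, type R)

    def check_types(self) -> bool:
        """A program is well-formed iff every edge connects matching types."""
        return all(p.out_type == t
                   for m in self.modules
                   for p, t in zip(m.parents, m.in_types))
```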
Some operations are not differentiable such as buffers (to avoid backpropagating through time) and "Detach" whose purpose is stopping the gradient from flowing back. In practice, we have multiple copies of the same agent running at the same time, with both a shared policy and shared curiosity module. Thus, we execute multiple reward predictions on a batch and then update on a batch. A crucial, and possibly somewhat counter-intuitive, aspect of these programs is their use of neural network weight updates via gradient descent as a form of memory. In the parameter update step, all adjustable parameters are decremented by the gradient of the sum of the outputs of the loss modules, with respect to the parameters. This type of update allows the program to, for example, learn to make some types of predictions, online, and use the quality of those predictions in a state to modulate the proxy reward for visiting that state (as is done, for example, in random network distillation (RND) ). Programs representing several published designs for curiosity modules that perform internal gradient descent, including inverse features , RND , and ensemble predictive variance , are shown in figure 2 (and bigger versions can be found in appendix A.3). We can also represent algorithms similar to novelty search and EX 2 , which include buffers and nearest neighbor regression modules. Details on the data types and module library are given in appendix A. Key to our program search are polymorphic data types: the inputs and outputs to each module are typed, but the instantiation of some types, and thus of some operations, depends on the environment. We have the four types: reals R, state space of the given environment S, action space of the given environment A and feature space F, used for intermediate computations and always set to R 32 in our current implementation. For example, a neural network module going from S to F will be instantiated as a convolutional neural network if S is an image and as a fully connected neural network of the appropriate dimension if S is a vector. Similarly, if we are measuring an error in action space A we use mean-squared error for continuous action spaces and negative log-likelihood for discrete action spaces. This facility means that the same curiosity program can be applied, independent of whether states are represented as images or vectors, or whether the actions are discrete or continuous, or the dimensionality of either. This type of abstraction enables our meta-learning approach to discover curiosity modules that generalize radically, applying not just to new tasks, but to tasks with substantially different input and output spaces than the tasks they were trained on. To clarify the semantics of these programs, we walk through the operation of the RND program in figure 2. Its only input is s t+1, which might be an image or an input vector, which is processed by two NNs with parameters Θ 1 and Θ 2, respectively. The structure of the NNs (and, hence, the dimensions of the Θ i) depends on the type of s t+1: if s t+1 is an image, then they are CNNs, otherwise a fully connected networks. Each NN outputs a 32-dimensional vector; the L 2 distance between these vectors is the output of the program on this iteration, and is also the input to a loss module. So, given an input s t+1, the output intrinsic reward is large if the two NNs generate different outputs and small otherwise. 
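Written directly in PyTorch rather than in the DSL, the RND program just walked through might look like the sketch below. Here `make_state_encoder` stands in for the polymorphic S -> F module (a CNN for image states, an MLP for vector states); network sizes, the learning rate, and the method names are our choices, not details from the paper.

```python
# Sketch of the RND program: two S -> F networks, L2 distance as both the
# intrinsic reward and the loss; only the predictor (Theta_2) is ever trained.
import torch
import torch.nn as nn

class RNDIntrinsicReward(nn.Module):
    def __init__(self, make_state_encoder, lr=1e-4):
        super().__init__()
        self.target = make_state_encoder()      # Theta_1: random, never trained
        self.predictor = make_state_encoder()   # Theta_2: trained to imitate target
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    def forward(self, s_next):
        with torch.no_grad():
            phi_target = self.target(s_next)
        phi_pred = self.predictor(s_next)
        # L2 distance between the two 32-dim embeddings is both the program's
        # output (the intrinsic reward) and the quantity fed to the loss module.
        return ((phi_pred - phi_target) ** 2).sum(dim=-1)

    def update(self, s_next):
        loss = self.forward(s_next).mean()
        self.opt.zero_grad()
        loss.backward()                         # gradients reach Theta_2 only
        self.opt.step()
        return loss.item()
```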
After each forward pass, the weights in Θ 2 are updated to minimize the loss while Θ 1 remains constant, which causes the trainable NN to mimic the output of the randomly initialized NN. As the program's ability to predict the output of the randomized NN on an input improves, the intrinsic reward for visiting that state decreases, driving the agent to visit new states. To limit the search space and prioritize short, meaningful programs we limit the total number of modules of the computation graph to 7. Our language is expressive enough to describe many (but far from all) curiosity mechanisms in the existing literature, as well as many other potential alternatives, but the expressiveness leads to a very large search space. Additionally, removing or adding a single operation can drastically change the behavior of a program, making the objective function nonsmooth and, therefore, the space hard to search. In the next section we explore strategies for speeding up the search over tens of thousands of programs. We wish to find curiosity programs that work effectively in a wide range of environments, from simple to complex. However, evaluating tens of thousands of programs in the most expensive environments would consume decades of GPU computation. Therefore, we have designed multiple strategies for quickly discarding less promising programs and focusing more computation on a few promising programs. In doing so, we take inspiration from efforts in the AutoML community . We divide these pruning efforts into three categories: simple tests that are independent of running the program in any environment, "filtering" by ruling out some programs based on poor performance in simple environments, and "meta-meta-RL" learning to predict which programs will perform well based on syntactic features. Many programs are obviously bad curiosity programs. We have developed two heuristics to immediately prune these programs without an expensive evaluation. • Checking that programs are not duplicates. Since our language is highly expressive, there are many non-obvious ways of getting equivalent programs. To find duplicates, we designed a randomized test where we identically seed two programs, feed them both identical fake environment data for tens of steps and check whether their outputs are identical. This test may, with low probability, prune a program that is not an exact duplicate, but since there is a very near neighbor under consideration, it is not very harmful to do so. • Checking that the loss functions cannot be minimized independently of the input data. Many programs optimize some loss depending on neural network regressors. If we treat inputs as uncontrollable variables and networks as having the ability to become any possible function, then for every variable, we can determine whether neural networks can be optimized to minimize it, independently of the input data. For example, if our loss function is |N N θ (s)| 2 the neural network can learn to make it 0 by disregarding s and optimizing the weights θ to 0. We discard any program that has this property. Our ultimate goal is to find algorithms that perform well on many different environments, both simple and complex. We make two key observations. First, there may be only tens of reasonable programs that perform well on all environments but hundreds of thousands of programs that perform poorly. Second, there are some environments that are solvable in a few hundred steps while others require tens of millions. 
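The duplicate-pruning test described above can be sketched as follows: seed two candidate programs identically, feed both the same randomly generated transitions for a few dozen steps, and flag them as duplicates if their outputs agree. `Program`, its `reset_parameters` method, and the fake-transition format are assumptions carried over from the earlier sketch, not the authors' code.

```python
# Sketch of the randomized duplicate test: identical seeds + identical fake data
# should give identical output streams for equivalent programs.
import numpy as np
import torch

def probably_duplicates(prog_a, prog_b, n_steps=50, state_dim=8, seed=0, tol=1e-6):
    outputs = []
    for prog in (prog_a, prog_b):
        torch.manual_seed(seed)
        np.random.seed(seed)
        prog.reset_parameters()                  # assumed: re-init NN weights
        rng = np.random.RandomState(seed)        # identical fake environment data
        outs = []
        for _ in range(n_steps):
            fake = {
                "s_t": torch.tensor(rng.randn(state_dim), dtype=torch.float32),
                "a_t": torch.tensor(rng.randint(4)),
                "s_next": torch.tensor(rng.randn(state_dim), dtype=torch.float32),
            }
            outs.append(float(prog(fake)))       # assumes a scalar program output
        outputs.append(outs)
    return np.allclose(outputs[0], outputs[1], atol=tol)
```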
Therefore, a key idea in our search is to try many programs in cheap environments and only a few promising candidates in the most expensive environments. This was inspired by the effective use of sequential halving in hyper-parameter optimization . By pruning programs aggressively, we may be losing multiple programs that perform well on complex environments. However, by definition, these programs will tend to be less general and robust than those that succeed in all environments. Moreover, we seek generalization not only for its own sake, but also to ease the search since, even if we only cared about the most expensive environment, performing the complete search only in this environment would be impractical. Perhaps surprisingly, we find that we can predict program performance directly from program structure. Our search process bootstraps an initial training set of (program structure, program performance) pairs, then uses this training set to select the most promising next programs to evaluate. We encode each program's structure with features representing how many times each operation is used, thus having as many features as number of operations in our vocabulary. We use a k-nearestneighbor regressor, with k = 10. We then try the most promising programs and update the regressor with their . Finally, we add an -greedy exploration policy to make sure we explore all the search space. Even though the correlation between predictions and actual values is only moderately high (0.54 on a holdout test), this is enough to discover most of the top programs searching only half of the program space, which is our ultimate goal. Results are shown in appendix C. We can also prune algorithms during the training process of the RL agent. In particular, at any point during the meta-search, we use the top K current best programs as benchmarks for all T timesteps. Then, during the training of a new candidate program we compare its current performance at time t with the performance at time t of the top K programs and stop the run if its performance is significantly lower. If the program is not pruned and reaches the final time-step T with one of the top K performances, it becomes part of the benchmark for the future programs. Our RL agent uses PPO based on the implementation by in PyTorch . Our code, which can be found at https://bit. ly/meta-learning-curiosity-algs, is meant to take in any OpenAI gym environment with a specification of the desired exploration horizon T. We evaluate each curiosity algorithm for multiple trials, using a seed dependent on the trial but independent of the algorithm, which leads to the PPO weights and curiosity data-structures being initialized identically on the same trials for all algorithms. As is common in PPO, we run multiple rollouts (5, except for MuJoCo which only has 1), with independent experiences but shared policy and curiosity modules. Curiosity predictions and updates are batched across these rollouts, but not across time. PPO policy updates are batched both across rollouts and multiple timesteps. We start by searching for a good intrinsic curiosity program I in a purely exploratory environment, designed by , which is an image-based grid world where agents navigate in an image of a 2D room either by moving forward in the pixel grid or rotating left or right. We optimize the total number of distinct pixels visited across the agent's lifetime. 
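The structure-based performance predictor of Section 3.3 can be sketched as below: featurize each program by how often it uses each operation, fit a 10-nearest-neighbor regressor on the programs evaluated so far, and pick the next candidate epsilon-greedily. `program.operations` (the list of operation names a program uses) is an assumed attribute, and scikit-learn's regressor is a stand-in for whatever implementation the authors used.

```python
# Sketch of "meta-meta-RL": predict program performance from program structure.
import random
from collections import Counter
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def featurize(program, vocabulary):
    """One feature per DSL operation: how many times the program uses it."""
    counts = Counter(program.operations)         # assumed attribute
    return np.array([counts[op] for op in vocabulary], dtype=float)

def choose_next_program(candidates, evaluated, vocabulary, eps=0.1, k=10):
    """`evaluated` is a list of (program, mean_performance) pairs already run."""
    if random.random() < eps or len(evaluated) < k:
        return random.choice(candidates)         # epsilon-greedy exploration
    X = np.stack([featurize(p, vocabulary) for p, _ in evaluated])
    y = np.array([perf for _, perf in evaluated])
    knn = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    preds = knn.predict(np.stack([featurize(p, vocabulary) for p in candidates]))
    return candidates[int(np.argmax(preds))]     # most promising program next
```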
This allows us to evaluate intrinsic reward programs in a fast and simple environment, without worrying about combining it with external reward. To bias towards simple, interpretable algorithms and keep the search space manageable, we search for programs with at most 7 operations. We first discard duplicate and invalid programs, as described in section 3.1, ing in about 52,000 programs. We then randomly split the programs across 4 machines, each with 8 Nvidia Tesla K80 GPUs for 10 hours. Each machine tries to find the highest-scoring 625 programs in its section of the search space and prunes programs whose partial learning curve is statistically significantly lower than the current top 625 programs. To do so, after every episode of every trial, we check whether the mean performance of the current program is below the mean performance (at that point during the trial) of the top 625 programs minus two standard deviations of their performance minus one standard deviation of our estimate of the mean of the current program. In this way we account for both inter-program variability among the top 625 programs and intra-program variability among multiple trials of the same program. We use a 10-nearest-neighbor regressor to predict program performance and choose the next program to evaluate with an -greedy strategy, choosing the best predicted program 90% of the time and a random program 10% of the time. By doing this, we try the most promising programs early in our search. This is important for two reasons: first, we only try 26,000 programs, half of the whole search space, which we estimated from earlier (shown in figure 8 in the appendix) would be enough to get 88% of the top 1% of programs. Second, the earlier we run our best programs, the higher the bar for later programs, thus allowing us to prune them earlier, further saving computation time. Searching through this space took a total of 13 GPU days. As shown in figure 9 in the appendix, we find that most programs perform relatively poorly, with a long tail of programs that are statistically significantly better, comprising roughly 0.5% of the whole program space. The highest scoring program (a few other programs have lower average performance but are statistically equivalent) is surprisingly simple and meaningful, comprised of only 5 operations, even though the limit was 7. This program, which we will call Top, is shown in figure 3; it uses a single neural network (a CNN or MLP depending on the type of state) to predict the action from s t+1 and then compares its predictions based on s t with its predictions based on s t+1, generating high intrinsic reward when the difference is large. The action prediction loss module either computes a softmax followed by NLL loss or appends zeros to the action to match dimensions and applies MSE loss, depending on the type of the action space. Note that this is not the same as rewarding taking a different action in the previous time-step. To the best of our knowledge, the algorithm represented by this program has not been proposed before, although its simplicity makes us think it may have. The network predicting the action is learning to imitate the policy learned by the internal RL agent, because the curiosity module does not have direct access to the RL agent's internal state. We show correlation between program performance in gridworld and performance in harder environments (lunar lander on the left, acrobot on the right), using the top 2,000 programs in gridworld. 
Performance is evaluated using mean reward across all learning episodes, averaged over trials (two trials for acrobot / lunar lander and five for gridworld). We can see that almost all intrinsic curiosity programs that had statistically significant performance for grid world also do well on the other two environments. In green, we show the performance of three published works; in increasing gridworld performance: disagreement, inverse features, and random distillation. Many of the highest-scoring programs are small variations on Top, including versions that predict the action from s_t instead of s_{t+1}. Of the top 16 programs, 13 are variants of Top and 3 are variants of an interesting program that is more difficult to understand because it does a combination of random network distillation and state-transition prediction, with some weight sharing, shown in figure 11 in the appendix. Our reward combiner was developed in lunar lander (the simplest environment with meaningful extrinsic reward) based on the best program among a preliminary set of 16,000 programs (which resembled Random Network Distillation; its computation graph is shown in appendix E). Among a set of 2,478 candidates (with 5 or fewer operations) the best reward combiner was r̂_t = ((1 + i_t − t/T)·i_t + (t/T)·r_t) / (1 + i_t). Notice that for 0 < i_t ≪ 1 (usually the case) this is approximately r̂_t = i_t^2 + (1 − t/T)·i_t + (t/T)·r_t, which is a down-scaled version of intrinsic reward plus a linear interpolation that ranges from all intrinsic reward at t = 0 to all extrinsic reward at t = T. In future work, we hope to co-adapt the search for intrinsic reward programs and combiners as well as find multiple reward combiners. Given the fixed reward combiner and the list of 2,000 selected programs found in the image-based grid world, we evaluate the programs on both lunar lander and acrobot, in their discrete action space versions. Notice that both environments have much longer horizons than the image-based grid world (37,500 and 50,000 vs 2,500) and they have vector-based inputs, not image-based. The results in figure 4 show good correlation between performance on grid world and on each of the new environments. Especially interesting is that, for both environments, when intrinsic reward in grid world is above 370 (the start of the statistically significant performances), performance on the other two environments is also good in more than 90% of cases. Finally, we evaluate the 16 best programs on grid world (most of which also did well on lunar lander and acrobot) on two MuJoCo environments: hopper and ant. These environments have more than an order of magnitude longer exploration horizons than acrobot and lunar lander, exploring for 500K time-steps, as well as continuous action spaces instead of discrete ones. Table 1: Meta-learned algorithms perform significantly better than constant rewards and statistically equivalently to published algorithms found by human researchers (see 2). The table shows the confidence interval (one standard deviation) for the mean performance (across trials, across algorithms) for each algorithm category on Ant and Hopper. Performance is defined as mean episode reward for all episodes. We then compare the best 16 programs on grid world to four weak baselines (constant 0, -1, 1 intrinsic reward and Gaussian noise reward) and the three published algorithms expressible in our language (shown in figure 2).
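For concreteness, the sketch below gives one possible PyTorch reading of the two meta-learned components being evaluated: the Top intrinsic-reward program (a single network trained to predict a_t from s_{t+1}, with the intrinsic reward taken as the disagreement between its predictions from s_t and from s_{t+1}) and the selected reward combiner. Network sizes, the squared-difference comparison, and the optimizer settings are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopIntrinsicReward(nn.Module):
    """One reading of the 'Top' program (vector-state, discrete-action case)."""
    def __init__(self, state_dim, n_actions, lr=1e-4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
        self.opt = torch.optim.Adam(self.net.parameters(), lr=lr)

    def intrinsic_reward(self, s_t, s_next):
        # High reward when the action predicted from s_t differs from the one
        # predicted from s_{t+1}.
        return ((self.net(s_next) - self.net(s_t)) ** 2).sum(dim=-1)

    def update(self, s_next, a_t):
        loss = F.cross_entropy(self.net(s_next), a_t)   # NLL action-prediction loss
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def combine_rewards(i_t, r_t, t, T):
    """Selected combiner: all intrinsic reward at t = 0, all extrinsic at t = T."""
    frac = t / T
    return ((1.0 + i_t - frac) * i_t + frac * r_t) / (1.0 + i_t)
```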
We run two trials for each algorithm and pool all in each category to get a confidence interval for the mean of that category. All trials used the reward combiner found on lunar lander. For both environments we find that the performance of our top programs is statistically equivalent to published work and significantly better than the weak baselines, confirming that we meta-learned good curiosity programs. Note that we meta-trained our intrinsic curiosity programs only on one environment (GridWorld) and showed they generalized well to other, very different, environments. Adding more more metatraining tasks would be as simple as standardising the performance within each task (to make comparable) and then select the programs with best mean performance. We chose to only meta-train on a single, simple, task because it (surprisingly!) already gave great ; highlighting the broad generalization of meta-learning program representations. In some regards our work is similar to neural architecture search (NAS) (; ; ;) or hyperparameter optimization for deep networks , which aim at finding the best neural network architecture and hyper-parameters for a particular task. However, in contrast to most (but not all, see) NAS work, we want to generalize to many environments instead of just one. Moreover, we search over programs, which include non-neural operations and data structures, rather than just neural-network architectures, and decide what loss functions to use for training. Our work also resembles work in the AutoML community that searches in a space of programs, for example in the case of SAT solving or auto-sklearn . Although we take inspiration from ideas in that community , our algorithms also specify their own optimization objectives (vs being specified by the user) which need to work well in syncrony with an expensive deep RL algorithm. There has been work on meta-learning with genetic programming , searching over mathematical operations within neural networks , searching over programs to solve games (; ; and to optimize neural network weights , and neural networks that learn programs . In contrast, our work uses neural networks as basic operations within larger algorithms. Finally, modular metalearning trains the weights of small neural modules and transfers to new tasks by searching for a good composition of modules using a relatively simple composition scheme; as such, it can be seen as a (restricted) dual of our approach. There has been much interesting work in designing intrinsic curiosity algorithms. We take inspiration from many of them to design our domain-specific language. In particular, we rely on the idea of using neural network training as an implicit memory, which scales well to millions of time-steps, as well as buffers and nearest-neighbour regressors. As we showed in figure 2 we can represent several prominent curiosity algorithms. We can also generate meaningful algorithms similar to novelty search and EX 2 ; which include buffers and nearest neighbours. However, there are many exploration algorithm classes that we do not cover, such as those focusing on generating goals (; ;), learning progress (; ;), generating diverse skills , stochastic neural networks , count-based exploration or object-based curiosity measures . Finally, part of our motivation stems from Taïga et al. showing that some bonus-based curiosity algorithms have trouble generalising to new environments. 
Related work on parametric-based meta-RL and efforts to increase its generalization can be found in appendix B. More relevant to our work, there have been research efforts on meta-learning exploration policies.; learn an LSTM that explores an environment for one episode, retains its hidden state and is spawned in a second episode in the same environment; by training the network to maximize the reward in the second episode alone it learns to explore efficiently in the first episode. improves their exploration and that of by considering the importance of sampling in RL policies. combine gradient-based meta-learning with a learned latent exploration space in which they add structured noise for meaningful exploration. Closer to our formulation, parametrize an intrinsic reward function which influences policy-gradient updates in a differentiable manner, allowing them to backpropagate through a single step of the policy-gradient update to optimize the intrinsic reward function for a single task. In contrast to all three of these methods, we search over algorithms, which will allows us to generalize more broadly and to consider the effect of exploration on up to 10 5 − 10 6 time-steps instead of the 10 2 − 10 3 of previous work.; have a setting similar to ours where they modify reward functions over the entire agent's lifetime, but instead of searching over intrinsic curiosity algorithms they tune the parameters of a hand-designed reward function. Probably closest to our work, in evolved policy gradients meta-learn a neural network that computes a loss function based on interactions of the agent with an environment. The weights of this network are optimized via evolution strategies to efficiently optimize new policies from scratch to satisfy new goals. They show that they can generalize more broadly than MAML and RL 2 by meta-training a robot to go to different positions to the east of the start location and then meta-test making the robot quickly learn to go to a location to the west. In contrast, we showed that by meta-learning programs, we can generalize between radically different environments, not just goal variations of a single environment. For all methods transferring parametric representations, it is not clear how one would adapt the learned neural networks to an unseen environment with different action dimensionality or a different observation space. In contrast, algorithms leverage polymorphic data types that adapt the neural networks to the environment they are running in. In this work we show that programs are a powerful, succinct, representation for algorithms for generating curious exploration, and these programs can be meta-learned efficiently via active search. Results from this work are two-fold. First, by construction, algorithms ing from this search will have broad generalization and will thus be a useful default for RL settings, where reliability is key. Second, the algorithm search code will be open-sourced to facilitate further research on exploration algorithms based on new ideas or building blocks, which can be added to the search. In addition, we note that the approach of meta-learning programs instead of network weights may have further applications beyond finding curiosity algorithms, such as meta-learning optimization algorithms or even meta-learning meta-learning algorithms. We have the following types. Note that S and A get defined differently for every environment. • R: real numbers such as r t or the dot-product between two vectors. 
• R +: numbers guaranteed to be positive, such as the distance between two vectors. The only difference to our program search between R and R + is in pruning programs that can optimize objectives without looking at the data. For R + we check whether they can optimize down to 0, for R we check whether they can optimize to arbitrarily negative values. • state space S: the environment state, such as a matrix of pixels or a vector with robot joint values. The particular form of this type is adapted to each environment. • action space A: either a 1-hot description of the action or the action itself. The particular form of this type is adapted to each environment. • feature-space F = R 32: a space mostly useful to work with neural network embeddings. For simplicity, we only have a single feature space. • List [X]: for each type we may also have a list of elements of that type. All operations that take a particular type as input can also be applied to lists of elements of that type by mapping the function to every element in the list. Lists also support extra operations such as average or variance. Note that X stands for the option of being F or A. |a|+|c|. RunningNorm keeps track of the variance of the input and normalizes by that variance. A.3 TWO OTHER PUBLISHED ALGORITHMS COVERED BY OUR DSL Most work on meta-RL has focused on learning transferable feature representations or parameter values for quickly adapting to new tasks (; ;) or improving performance on a single task . However, the range of variability between tasks is typically limited to variations of the same goal (such as moving at different speeds or to different locations) or generalizing to different environment variations (such as different mazes or different terrain slopes). There have been some attempts to broaden the spectrum of generalization, showing transfer between Atari games thanks to modularity or proper pretraining . However, as noted by , Atari games are too different to get big gains with current feature-transfer methods; they instead suggest using different levels of the game Sonic to benchmark generalization. recently proposed a benchmark of many tasks. automatically generate different terrains for a bipedal walker and transfer policies between terrains, showing that this is more effective than learning a policy on hard terrains from scratch; similar to our suggestion in section 3.2. In contrast to these methods, we aim at generalization between completely different environments, even between environments that do not share the same state and action spaces. Figure 8: Predicting algorithm performance allows us to find the best programs faster. We investigate the number of the top 1% of programs found vs. the number of programs evaluated, and observe that the optimized search (in blue) finds 88% of the best programs after only evaluating 50% of the programs (highlighted in green). The naive search order would have only found 50% of the best programs at that point. Figure 9: In black, mean performance across 5 trials for all 26,000 programs evaluated (out of their finished trials). In green mean plus one standard deviation for the mean estimate and in red one minus one standard deviation for the mean estimate. 
On the right, you can see program means form roughly a gaussian distribution of very big noise (thus probably not significant) with a very small (between 0.5% and 1% of programs) long tail of programs with statistically significant performance (their red dots are much higher than almost all green dots), composed of algorithms leading to good exploration.
Figure 10: Top variant in preliminary search on grid world; variant on random network distillation using an ensemble of trained networks instead of a single one.
Figure 11: Good algorithm found by our search (3 of the top 16 programs on grid world are variants of this program). On its left part it does random network distillation but does not use that error as a reward. Instead it does an extra prediction based on the state transition on the right and compares both predictions. Notice that, to make both predictions, the same F → F network was used to map from the query to the target, thus sharing the weights between both predictions.
Meta-learning curiosity algorithms by searching through a rich space of programs yields novel mechanisms that generalize across very different reinforcement-learning domains.
501
scitldr
Many machine learning algorithms represent input data with vector embeddings or discrete codes. When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the the inputs’ learned representations. While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces. We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives. We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization. Figure 1: Representations arising from a communication game. In this game, an observation (b) is presented to a learned speaker model (c), which encodes it as a discrete character sequence (d) to be consumed by a listener model for some downstream task. The space of inputs has known compositional structure (a). We want to measure the extent to which this structure is reflected (perhaps imperfectly) in the structure of the learned codes. The success of modern representation learning techniques has been accompanied by an interest in understanding the structure of learned representations. One feature shared by many humandesigned representation systems is compositionality: the capacity to represent complex concepts (from objects to procedures to beliefs) by combining simple parts BID18. While many machine learning approaches make use of human-designed compositional analyses for representation and prediction BID44 BID16, it is also natural to ask whether (and how) compositionality arises in learning problems where compositional structure has not been built in from the start. Consider the example in Figure 1, which shows a hypothetical character-based encoding scheme learned for a simple communication task (similar to the one studied by). Is this encoding scheme compositional? That is, to what extent can we analyze the agents' messages as being built from smaller pieces (e.g. pieces xx meaning blue and bb meaning triangle)?A large body of work, from early experiments on language evolution to recent deep learning models BID24 ), aims to answer questions like this one. But existing solutions rely on manual (and often subjective) analysis of model outputs BID32, or at best automated procedures tailored to the specifics of individual problem domains BID7. They are difficult to compare and difficult to apply systematically. We are left with a need for a standard, formal, automatable and quantitative technique for evaluating claims about compositional structure in learned representations. The present work aims at first steps toward meeting that need. We focus on an oracle setting where the compositional structure of model inputs is known, and where the only question is whether this structure is reflected in model outputs. This oracle evaluation paradigm covers most of the existing representation learning problems in which compositionality has been studied. 
The first contribution of this paper is a simple formal framework for measuring how well a collection of representations (discrete-or continuous-valued) reflects an oracle compositional analysis of model inputs. We propose an evaluation metric called TRE, which provides graded judgments of compositionality for a given set of (input, representation) pairs. The core of our proposal is to treat a set of primitive meaning representations as hidden, and optimize over them to find an explicitly compositional model that approximates the true model as well as possible. For example, if the compositional structure that describes an object is a simple conjunction of attributes, we can search for a collection of "attribute vectors" that sum together to produce the observed object representations; if it is a sparse combination of (attribute, value) pairs we can additionally search for "value vectors" and parameters of a binding operation; and so on for more complex compositions. Having developed a tool for assessing the compositionality of representations, the second contribution of this paper is a survey of applications. We present experiments and analyses aimed at answering four questions about the relationship between compositionality and learning:• How does compositionality of representations evolve in relation to other measurable model properties over the course of the learning process? (Section 4) • How well does compositionality of representations track human judgments about the compositionality of model inputs? (Section 5) • How does compositionality constrain distances between representations, and how does TRE relate to other methods that analyze representations based on similarity? (Section 6) • Are compositional representations necessary for generalization to out-of-distribution inputs?(Section 7)We conclude with a discussion of possible applications and generalizations of TRE-based analysis. Arguments about whether distributed (and other non-symbolic) representations could model compositional phenomena were a staple of 1980s-era connectionist-classicist debates. BID42 provides an overview of this discussion and its relation to learnability, as well as a concrete implementation of a compositional encoding scheme with distributed representations. Since then, numerous other approaches for compositional representation learning have been proposed, with BID30 BID43 and without BID15 BID22 ) the scaffolding of explicit composition operations built into the model. The main experimental question is thus when and how compositionality arises "from scratch" in the latter class of models. In order to answer this question it is first necessary to determine whether compositional structure is present at all. Most existing proposals come from linguistics and and philosophy, and offer evaluations of compositionality targeted at analysis of formal and natural languages BID8 BID28. Techniques from this literature are specialized to the details of linguistic representations-particularly the algebraic structure of grammars BID31. It is not straightforward to apply these techniques in more general settings, particularly those featuring non-string-valued representation spaces. We are not aware of existing work that describes a procedure suitable for answering questions about compositionality in the general case. Machine learning research has responded to this absence in several ways. One class of evaluations BID32 BID11 derives judgments from ad-hoc manual analyses of representation spaces. 
These analyses provide insight into the organization of representations but are time-consuming and non-reproducible. Another class of evaluations BID6 BID0 BID4 ) exploits task-specific structure (e.g. the ability to elicit pairs of representations known to feature particular relationships) to give evidence of compositionality. Our work aims to provide a standard and scalable alternative to these model-and task-specific evaluations. Other authors refrain from measuring compositionality directly, and instead base analysis on measurement of related phenomena, for which more standardized evaluations exist. Examples include correlation between representation similarity and similarity of oracle compositional analyses BID7 and generalization to structurally novel inputs BID26. Our approach makes it possible to examine the circumstances under which these surrogate measures in fact track stricter notions of compositionality; similarity is discussed in Sec. 6 and generalization in Sec. 7.A long line of work in natural language processing BID13 BID2 BID12 BID19 focuses on learning composition functions to produce distributed representations of phrases and sentences-that is, for purposes of modeling rather than evaluation. We use one experiment from this literature to validate our own approach (Section 5). On the whole, we view work on compositional representation learning in NLP as complementary to the framework presented here: our approach is agnostic to the particular choice of composition function, and the aforementioned references provide well-motivated choices suitable for evaluating data from language and other sources. Indeed, one view of the present work is simply as a demonstration that we can take existing NLP techniques for compositional representation learning, fit them to representations produced by other models (even in non-linguistic settings), and view the ing training loss as a measure of the compositionality of the representation system in question. Consider again the communication task depicted in Figure 1. Here, a speaker model observes a target object described by a feature vector. The speaker sends a message to a listener model, which uses the message to complete a downstream task-for example, identifying the referent from a collection of distractors based on the content of the message BID17 ). Messages produced by the speaker model serve as representations of input objects; we want to know if these representations are compositional. Crucially, we may already know something about the structure of the inputs themselves. In this example, inputs can be identified via composition of categorical shape and color attributes. How might we determine whether this oracle analysis of input structure is reflected in the structure of representations? This section proposes an automated procedure for answering the question. Representations A representation learning problem is defined by a dataset X of observations x (Figure 1b); a space Θ of representations θ (Figure 1d); and a model f: X → Θ (Figure 1c). We assume that the representations produced by f are used in a larger system to accomplish some concrete task, the details of which are not important for our analysis. Derivations The technique we propose additionally assumes we have prior knowledge about the compositional structure of inputs. 
In particular, we assume that inputs can be labeled with treestructured derivations d (Figure 1a), defined by a finite set D 0 of primitives and a binary bracketing operation ·, ·, such that if DISPLAYFORM0 Compositionality In intuitive terms, the representations computed by f are compositional if each f (x) is determined by the structure of D(x). Most discussions of compositionality, following BID31, make this precise by defining a composition operation θ a * θ b → θ in the space of representations. Then the model f is compositional if it is a homomorphism from inputs to representations: we require that for any x with DISPLAYFORM1 In the linguistic contexts for which this definition was originally proposed, it is straightforward to apply. Inputs x are natural language strings. Their associated derivations D(x) are syntax trees, and composition of derivations is syntactic composition. Representations θ are logical representations of meaning (for an overview see BID47 . To argue that a particular fragment of language is compositional, it is sufficient to exhibit a lexicon D 0 mapping words to their associated meaning representations, and a grammar for composing meanings where licensed by derivations. Algorithms for learning grammars and lexicons from data are a mainstay of semantic parsing approaches to language understanding problems like question answering and instruction following BID49 BID9 BID1 .But for questions of compositionality involving more general representation spaces and more general analyses, the above definition presents two difficulties: In the absence of a clearly-defined syntax of the kind available in natural language, how do we identify lexicon entries: the primitive parts from which representations are constructed? What do we do with languages like the one in Figure 1d, which seem to exhibit some kind of regular structure, but for which the homomorphism condition given in Equation 1 cannot be made to hold exactly?Consider again the example in Figure 1. The oracle derivations tell us to identify primitive representations for dark, blue, green, square, and triangle. The derivations then suggest a process for composing these primitives (e.g. via string concatenation) to produce full representations. The speaker model is compositional (in the sense of Equation 1) as long as there is some assignment of representations to primitives such that for each model input, composing primitive representations according to the oracle derivation reproduces the speaker's prediction. In Figure 1 there is no assignment of strings to primitives that reproduces model predictions exactly. But predictions can be reproduced approximately-by taking xx to mean blue, aa to mean square, etc. The quality of the approximation serves as a measure of the compositionality of the true predictor: predictors that are mostly compositional but for a few exceptions, or compositional but for the addition of some noise, will be well-approximated on average, while arbitrary mappings from inputs to representations will not. This suggests that we should measure compositionality by searching for representations that allow an explicitly compositional model to approximate the true f as closely as possible. 
We define our evaluation procedure as follows:Tree Reconstruction Error (TRE)First choose: DISPLAYFORM2, a compositional approximation to f with parameters η, as: DISPLAYFORM3 f η has one parameter vector η i for every d i in D 0; these vectors are members of the representation space Θ.Given a dataset X of inputs x i with derivations d i = D(x i), compute: DISPLAYFORM4 Then we can define datum-and dataset-level evaluation metrics: DISPLAYFORM5 DISPLAYFORM6 TRE and compositionality How well does the evaluation metric TRE(X) capture the intuition behind Equation 1? The definition above uses parameters η i to witness the constructability of representations from parts, in this case by explicitly optimizing over those parts rather than taking them to be given by f. Each term in Equation 2 is analogous to an instance of Equation 1, measuring how wellf η * (x i), the best compositional prediction, matches the true model prediction f (x i). In the case of models that are homomorphisms in the sense of Equation 1, TRE reduces to the familiar case: DISPLAYFORM7 Learnable composition operators The definition of TRE leaves the choice of δ and * up to the evaluator. Indeed, if the exact form of the composition function is not known a priori, it is natural to define * with free parameters (as in e.g. BID2, treat these as another learned part off, and optimize them jointly with the η i . However, some care must be taken when choosing * (especially when learning it) to avoid trivial solutions:Remark 2. Suppose D is injective; that is, every x ∈ X is assigned a unique derivation. Then there is always some * that achieves TRE(X) = 0: DISPLAYFORM8 as in the preceding definition, and setf = f.In other words, some pre-commitment to a restricted composition function is essentially inevitable: if we allow the evaluation procedure to select an arbitrary composition function, the will be trivial. This paper features experiments with * in both a fixed functional form and a learned parametric one. Implementation details For models with continuous Θ and differentiable δ and *, TRE(X) is also differentiable. Equation 2 can be solved using gradient descent. We use this strategy in Sections 4 and 5. For discrete Θ, it may be possible to find a continuous relaxation with respect to which δ(θ, ·) and * are differentiable, and gradient descent again employed. We use this strategy in Section 7 (discussed further there). An implementation of an SGD-based TRE solver is provided in the accompanying software release. For other problems, task-specific optimizers (e.g. machine translation alignment models; BID4 or general-purpose discrete optimization toolkits can be applied to Equation 2.The remainder of the paper highlights ways of using TRE to answer questions about compositionality that arise in machine learning problems of various kinds. We begin by studying the relationship between compositionality and learning dynamics, focusing on the information bottleneck theory of representation learning proposed by BID45 . This framework proposes that learning in deep models consists of an error minimization phase followed by a compression phase, and that compression is characterized by a decrease in the mutual information between inputs and their computed representations. We investigate the hypothesis that the compression phase finds a compositional representation of the input distribution, isolating decision-relevant attributes and discarding irrelevant information. FIG1). 
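As a concrete illustration of the SGD-based solver mentioned in the implementation details above, the sketch below computes TRE for vector-valued representations with additive composition and a differentiable distance. The derivation format (nested tuples over primitive names), the optimizer, and the step count are our assumptions.

```python
# Minimal sketch of computing TRE by gradient descent, assuming vector-valued
# representations, addition as the composition function, and differentiable delta.
import torch

def tre(reps, derivations, primitives, delta, dim, steps=1000, lr=0.1):
    """reps: list of f(x_i) tensors; derivations: list of D(x_i) (nested tuples)."""
    eta = {p: torch.randn(dim, requires_grad=True) for p in primitives}
    opt = torch.optim.Adam(list(eta.values()), lr=lr)

    def compose(d):
        if isinstance(d, tuple):                  # <d1, d2>  ->  f(d1) * f(d2)
            return compose(d[0]) + compose(d[1])  # '*' taken to be vector addition
        return eta[d]                             # primitive -> its learned vector

    for _ in range(steps):                        # solve Equation 2 for eta*
        loss = sum(delta(theta, compose(d)) for theta, d in zip(reps, derivations))
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():                         # datum-level errors, then average
        errs = [delta(theta, compose(d)).item() for theta, d in zip(reps, derivations)]
    return sum(errs) / len(errs)                  # dataset-level TRE(X)

# Example distance: cosine distance, as used in the experiments below.
cosine_delta = lambda a, b: 1 - torch.nn.functional.cosine_similarity(a, b, dim=0)
```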
We predict classifiers in a meta-learning framework BID39 BID37: for each sub-task, the learner is presented with two images corresponding to some compositional visual concept (e.g. "digit 8 on a black " or "green with heavy stroke") and must determine whether a held-out image is an example of the same visual concept. Given example images x 1 and x 2, a test image x *, and label y *, the model computes: DISPLAYFORM0 We use θ as the representation of a classifier for analysis. The model is trained to minimize the logistic loss between logitsŷ and ground-truth labels y *. More details are given in Appendix A.Compositional structure Visual concepts used in this task are all single attributes or conjunctions of attributes; i.e. their associated derivations are of the form attr or attr 1, attr 2. Attributes include color, digit color, digit identity and stroke type. The composition function * is addition and the distance δ(θ, θ) is cosine similarity 1 − θ θ /(θ θ).Evaluation The training dataset consists of 9000 image triplets, evenly balanced between positive and negative classes, with a validation set of 500 examples. At convergence, the model achieves validation accuracy of 75.2% on average over ten training runs. (Perfect accuracy is not possible because the true classifier is not fully determined by two training examples). We explore the relationship between the information bottleneck and compositionality by comparing TRE(X) to the mutual information I(θ; x) between representations and inputs over the course of training. Both quantities are computed on the validation set, calculating TRE(X) as described in Section 3 and I(θ; X) as described in BID41. (For discussion of limitations of this approach to computing mutual information between inputs and representations, see .) FIG2 shows the relationship between TRE(X) and I(θ; X). Recall that small TRE is indicative of a high degree of compositionality. It can be seen that both mutual information and reconstruction error are initially low (because representations initially encode little about distinctions between inputs). Both increase over the course of training, and decrease together after mutual information reaches a maximum FIG2. This pattern holds if we plot values from multiple training runs at the same time FIG2 ), or if we consider only the postulated compression phase FIG2 ). These are consistent with the hypothesis that compression in the information bottleneck framework is associated with the discovery of compositional representations. Next we investigate a more conventional representation learning task. High-dimensional embeddings of words and phrases are useful for many natural language processing applications BID46, and many techniques exist to learn them from unlabeled text BID14 BID29. The question we wish to explore is not whether phrase vectors are compositional in aggregate, but rather how compositional individual phrase representations are. Our hypothesis is that bigrams whose representations have low TRE are those whose meaning is essentially compositional, and well-explained by the constituent words, while bigrams with large reconstruction error will correspond to non-compositional multi-word expressions BID33 ).This task is already well-studied in the natural language processing literature BID36, and the analysis we present differs only in the use of TRE to search for atomic representations rather than taking them to be given by pre-trained word representations. 
Our goal is to validate our approach in a language processing context, and show how existing work on compositionality (and representations of natural language in particular) fit into the more general framework proposed in the current paper. We train embeddings for words and bigrams using the CBOW objective of BID29 using the implementation provided in FastText BID5 with 100-dimensional vectors and a context size of 5. Vectors are estimated from a 250M-word subset of the Gigaword dataset BID34. More details are provided in Appendix A.Compositional structure We want to know how close phrase embeddings are to the composition of their constituent word embeddings. We define derivations for words and phrases in the natural way: single words w have primitive derivations d = w; bigrams w 1 w 2 have derivations of the form w 1, w 2. The composition function is again vector addition and distance is cosine distance. (Future work might explore learned composition functions as in e.g. BID21, for future work.) We compare bigram-level judgments of compositionality computed by TRE with a dataset of human judgments about noun-noun compounds BID35. In this dataset, humans rate bigrams as compositional on a scale from 0 to 5, with highly conventionalized phrases like gravy train assigned low scores and graduate student assigned high ones. Results We reproduce the of BID36 within the tree reconstruction error framework: for a given x, TRE(x) is anticorrelated with human judgments of compositionality (ρ = −0.34, p < 0.01). Collocations rated "most compositional" by our approach (i.e. with lowest TRE) are: application form, polo shirt, research project; words rated "least compositional" are fine line, lip service, and nest egg. The next section aims at providing a formal, rather than experimental, characterization of the relationship between TRE and another perspective on the analysis of representations with help from oracle derivations. BID7 introduce a notion of topographic similarity, arguing that a learned representation captures relevant domain structure if distances between learned representations are correlated with distances between their associated derivations. This can be viewed as providing a weak form of evidence for compositionality-if the distance function rewards pairs of representations that share overlapping substructure (as might be the case with e.g. string edit distance), edit distance will be expected to correlate with some notion of derivational similarity.In this section we aim to clarify the relationship between the two evaluations. To do this we first need to equip the space of derivations described in Section 3 with a distance function. As the derivations considered in this paper are all tree-structured, it is natural to use a simple tree edit distance BID3 for this purpose. We claim the following: Proposition 1. Letf =f η * be an approximation to f estimated as in Equation 2, with all TRE(x) ≤ for some. Let ∆ be the tree edit distance (defined formally in Appendix B, Definition 2), and let δ be any distance on Θ satisfying the following properties: DISPLAYFORM0, where 0 is the identity element for *. DISPLAYFORM1 (This condition is satisfied by any translation-invariant metric.)Then ∆ is an approximate upper bound on δ: for any DISPLAYFORM2 In other words, representations cannot be much farther apart than the derivations that produce them. 
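A hedged sketch of the bigram analysis: score each noun-noun compound by the cosine distance between its learned phrase vector and the sum of its two word vectors. Note that this simpler variant takes the pre-trained word vectors themselves as the primitives, whereas the procedure described above instead optimizes the primitive vectors; the dictionary-based interface is ours.

```python
# Score bigram compositionality as distance between the phrase vector and the
# additive composition of word vectors (low score ~ compositional phrase).
import numpy as np

def bigram_compositionality_scores(phrase_vecs, word_vecs):
    """phrase_vecs: {('gravy','train'): vec, ...}; word_vecs: {'gravy': vec, ...}"""
    def cos_dist(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return {bigram: cos_dist(v, word_vecs[bigram[0]] + word_vecs[bigram[1]])
            for bigram, v in phrase_vecs.items()}

# Low scores correspond to compositional compounds ("application form"),
# high scores to idiomatic ones ("nest egg").
```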
Proof is provided in Appendix B.We emphasize that small TRE is not a sufficient condition for topographic similarity as defined by BID7: very different derivations might be associated with the same representation (e.g. when representing arithmetic expressions by their ). But this does demonstrate that compositionality imposes some constraints on the inferences that can be drawn from similarity judgments between representations. In our final set of experiments, we investigate the relationship between compositionality and generalization. Here we focus on communication games like the one depicted in Figure 1 and in more detail in FIG3. As in the previous section, existing work argues for a relationship between compositionality and generalization, claiming that agents need compositional communication protocols to generalize to unseen referents BID26 BID11. Here we are able to evaluate this claim empirically by training a large number of agents from random initial conditions, measuring the compositional structure of the language that emerges, and seeing how this relates to their performance on both familiar and novel objects. A speaker model observes a pair of target objects, and sends a description of the objects (as a discrete code) to a listener model. The listener attempts to reconstruct the targets, receiving fractional reward for partially-correct predictions. Our experiment focuses on a reference game BID20. Two policies are trained: a speaker and a listener. The speaker observes pair of target objects represented with a feature vector. The speaker then sends a message (coded as a discrete character sequence) to the listener model. The listener observes this message and attempts to reconstruct the target objects by predicting a sequence of attribute sets. If all objects are predicted correctly, both the speaker and the listener receive a reward of 1 (partial credit is awarded for partly-correct objects; FIG3).Because the communication protocol is discrete, policies are jointly trained using a policy gradient objective BID48. The speaker and listener are implemented with RNNs; details are provided in Appendix A.Compositional structure Every target referent consists of two objects; each object has two attributes. The derivation associated with each communicative task thus has the tree structure attr 1a, attr 1b, attr 2a, attr 2b. We hold out a subset of these object pairs at training time to evaluate generalization: in each training run, 1/3 of possible reference candidates are never presented to the agent at training time. Where the previous examples involved a representation space of real embeddings, here representations are fixed-length discrete codes. Moreover, the derivations themselves have a more complicated semantics than in Sections 4 and 5: order matters, and a commutative operation like addition cannot capture the distinction between green, square, blue, triangle and green, triangle, blue, square. We thus need a different class of composition and distance operations. We represent each agent message as a sequence of one-hot vectors, and take the error function δ to be the 1 distance between vectors. The composition function has the form: DISPLAYFORM0 with free composition parameters η * = {A, B} in Equation 2. These matrices can redistribute the tokens in θ and θ across different positions of the input string, but cannot affect the choice of the tokens themselves; this makes it possible to model non-commutative aspects of string production. 
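Since the exact composition formula is not reproduced above, the following is only one plausible reading of the discrete-message setup: messages as (length x vocabulary) one-hot arrays, δ as the L1 distance, and a learned composition in which matrices A and B mix string positions but not token identities. All names and shapes here are our assumptions.

```python
import torch

def encode_message(tokens, length, vocab_size):
    """Message as a (length x vocab) one-hot array; delta is L1 distance."""
    theta = torch.zeros(length, vocab_size)
    theta[torch.arange(len(tokens)), torch.tensor(tokens)] = 1.0
    return theta

def l1_distance(theta_a, theta_b):
    return (theta_a - theta_b).abs().sum()

class PositionMixingCompose(torch.nn.Module):
    """One plausible form: A and B redistribute tokens across string positions."""
    def __init__(self, length):
        super().__init__()
        self.A = torch.nn.Parameter(torch.randn(length, length) * 0.1)
        self.B = torch.nn.Parameter(torch.randn(length, length) * 0.1)

    def forward(self, theta_a, theta_b):          # learned jointly with the eta_i
        return self.A @ theta_a + self.B @ theta_b
```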
(b) However, compositional languages also exhibit lower absolute performance (r = 0.57, p < 1e−9). Both facts remain true even if we restrict analysis to "successful" training runs in which agents achieve a reward > 0.5 on held-out referents (r = 0.6, p < 1e−3 and r = 0.38, p < 0.05 respectively).
Figure 6: Fragment of languages resulting from two multiagent training runs. In the first section, the left column shows the target referent, while the remaining columns show the message generated by the speaker in the given training run after observing the referent. The two languages have substantially different TRE, but induce similar listener performance (Train and Test reward).
To compute TRE via gradient descent, we allow the elements of D 0 to be arbitrary vectors (intuitively assigning fractional token counts to string indices) rather than restricting them to one-hot indicators. With this change, both δ and * have subgradients and can be optimized using the same procedure as in preceding sections. Results We train 100 speaker-listener pairs with random initial parameters and measure their performance on both training and test sets. Our results suggest a more nuanced view of the relationship between compositionality and generalization than has been argued in the existing literature. TRE is significantly correlated with generalization error (measured as the difference between training and test accuracy, FIG4). However, TRE is also significantly correlated with absolute model reward (FIG4): "compositional" languages more often result from poor communication strategies than from successful ones. This is largely a consequence of the fact that many languages with low TRE correspond to trivial strategies (for example, one in which the speaker sends the same message regardless of its observation) that result in poor overall performance. Moreover, despite the correlation between TRE and generalization error, low TRE is by no means a necessary condition for good generalization. We can use our technique to automatically mine a collection of training runs for languages that achieve good generalization performance at both low and high levels of compositionality. Examples of such languages are shown in Figure 6. We have introduced a new evaluation method called TRE for generating graded judgments about compositional structure in representation learning problems where the structure of the observations is understood. TRE infers a set of primitive meaning representations that, when composed, approximate the observed representations, then measures the quality of this approximation. We have applied TRE-based analysis to four different problems in representation learning, relating compositionality to learning dynamics, linguistic compositionality, similarity and generalization. Many interesting questions regarding compositionality and representation learning remain open. The most immediate is how to generalize TRE to the setting where oracle derivations are not available; in this case Equation 2 must be solved jointly with an unsupervised grammar induction problem BID25. Beyond this, it is our hope that this line of research opens up two different kinds of new work: better understanding of existing machine learning models, by providing a new set of tools for understanding their representational capacity; and better understanding of problems, by better understanding the kinds of data distributions and loss functions that give rise to compositional or non-compositional representations of observations.
Code and data for all experiments in this paper are provided at https://github.com/jacobandreas/tre. Thanks to Daniel Fried and David Gaddy for feedback on an early draft of this paper. The author was supported by a Facebook Graduate Fellowship at the time of writing. Few-shot classification: The CNN has the following form: Conv(out=6, kernel=5) → ReLU → MaxPool(kernel=2) → Conv(out=16, kernel=5) → ReLU → MaxPool(kernel=2) → Linear(out=128) → ReLU → Linear(out=64) → ReLU. The model is trained using ADAM BID23 with a learning rate of 0.001 and a batch size of 128. Training is ended when the model stops improving on a held-out set. Word embeddings: We train FastText BID5 on the first 250 million words of the NYT section of Gigaword BID34. To acquire bigram representations, we pre-process this dataset so that each occurrence of a bigram from the BID35 dataset is treated as a single word for purposes of estimating word vectors. Communication: The encoder and decoder RNNs both use gated recurrent units BID10 with embeddings and hidden states of size 256. The size of the discrete vocabulary is set to 16 and the maximum message length to 4. Training uses a policy gradient objective with a scalar baseline set to the running average reward; this is optimized using ADAM BID23 with a learning rate of 0.001 and a batch size of 256. Each model is trained for 500 steps. Models are trained by sampling from the decoder's output distribution, but greedy decoding is used to evaluate performance and produce Figure 6. First, some definitions: Definition 1. The size of a derivation is given by: DISPLAYFORM0 Definition 2. The tree edit distance between derivations is defined by: Proof. For d ∈ D 0 this follows immediately from Condition 2 in the proposition. For composed derivations it follows from Condition 3 taking θ k = θ = 0 and induction on |d|. DISPLAYFORM1
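For concreteness, here is a PyTorch sketch of the few-shot classification CNN listed above. Layer widths and kernel sizes follow the text; the number of input channels and the use of LazyLinear to infer the flattened feature size are assumptions.

```python
import torch.nn as nn

# Sketch of the described encoder (layer sizes from the text; input channels
# and the lack of a final classification head are assumptions).
class FewShotCNN(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),   # Linear(out=128); input size inferred
            nn.Linear(128, 64), nn.ReLU(),   # Linear(out=64)
        )

    def forward(self, x):
        return self.head(self.features(x))
```

As described above, such a model would be trained with Adam at learning rate 0.001 and batch size 128, stopping when held-out performance stops improving.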
This paper proposes a simple procedure for evaluating compositional structure in learned representations, and uses the procedure to explore the role of compositionality in four learning problems.
502
scitldr
In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem. The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled constrained programming algorithm as a building block in the architecture to enforce constraints. With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (29.7% improvement in some cases in F1 scores and even larger improvement for pseudoknotted structures) and runs as efficient as the fastest algorithms in terms of inference time. Ribonucleic acid (RNA) is a molecule playing essential roles in numerous cellular processes and regulating expression of genes . It consists of an ordered sequence of nucleotides, with each nucleotide containing one of four bases: Adenine (A), Guanine (G), Cytosine (C) and Uracile (U). This sequence of bases can be represented as x:= (x 1, . . ., x L) where x i ∈ {A, G, C, U}, which is known as the primary structure of RNA. The bases can bond with one another to form a set of base-pairs, which defines the secondary structure. A secondary structure can be represented by a binary matrix A * where A * ij = 1 if the i, j-th bases are paired (Fig 1). Discovering the secondary structure of RNA is important for understanding functions of RNA since the structure essentially affects the interaction and reaction between RNA and other cellular components. Although secondary structure can be determined by experimental assays (e.g. X-ray diffraction), it is slow, expensive and technically challenging. Therefore, computational prediction of RNA secondary structure becomes an important task in RNA research and is useful in many applications such as drug design . (ii) Pseudo-knot (i) Nested Structure Research on computational prediction of RNA secondary structure from knowledge of primary structure has been carried out for decades. Most existing methods assume the secondary structure is a of energy minimization, i.e., A * = arg min A E x (A). The energy function is either estimated by physics-based thermodynamic experiments or learned from data . These approaches are faced with a common problem that the search space of all valid secondary structures is exponentially-large with respect to the length L of the sequence. To make the minimization tractable, it is often assumed the base-pairing has a nested structure (Fig 2 left), and the energy function factorizes pairwisely. With this assumption, dynamic programming (DP) based algorithms can iteratively find the optimal structure for subsequences and thus consider an enormous number of structures in time O(L 3). Although DP-based algorithms have dominated RNA structure prediction, it is notable that they restrict the search space to nested structures, which excludes some valid yet biologically important RNA secondary structures that contain'pseudoknots', i.e., elements with at least two non-nested base-pairs (Fig 2 right). Pseudoknots make up roughly 1.4% of base-pairs , and are overrepresented in functionally important regions . Furthermore, pseudoknots are present in around 40% of the RNAs. They also assist folding into 3D structures and thus should not be ignored. To predict RNA structures with pseudoknots, energy-based methods need to run more computationally intensive algorithms to decode the structures. 
In summary, in the presence of more complex structured output (i.e., pseudoknots), it is challenging for energy-based approaches to simultaneously take into account the complex constraints while being efficient. In this paper, we adopt a different viewpoint by assuming that the secondary structure is the output of a feed-forward function, i.e., A * = F θ (x), and propose to learn θ from data in an end-to-end fashion. It avoids the second minimization step needed in energy function based approach, and does not require the output structure to be nested. Furthermore, the feed-forward model can be fitted by directly optimizing the loss that one is interested in. Despite the above advantages of using a feed-forward model, the architecture design is challenging. To be more concrete, in the RNA case, F θ is difficult to design for the following reasons: (i) RNA secondary structure needs to obey certain hard constraints (see details in Section 3), which means certain kinds of pairings cannot occur at all . Ideally, the output of F θ needs to satisfy these constraints. (ii) The number of RNA data points is limited, so we cannot expect that a naive fully connected network can learn the predictive information and constraints directly from data. Thus, inductive biases need to be encoded into the network architecture. (iii) One may take a two-step approach, where a post-processing step can be carried out to enforce the constraints when F θ predicts an invalid structure. However, in this design, the deep network trained in the first stage is unaware of the post-processing stage, making less effective use of the potential prior knowledge encoded in the constraints. In this paper, we present an end-to-end deep learning solution which integrates the two stages. The first part of the architecture is a transformer-based deep model called Deep Score Network which represents sequence information useful for structure prediction. The second part is a multilayer network called Post-Processing Network which gradually enforces the constraints and restrict the output space. It is designed based on an unrolled algorithm for solving a constrained optimization. These two networks are coupled together and learned jointly in an end-to-end fashion. Therefore, we call our model E2Efold. By using an unrolled algorithm as the inductive bias to design Post-Processing Network, the output space of E2Efold is constrained (see Fig 3 for an illustration), which makes it easier to learn a good model in the case of limited data and also reduces the overfitting issue. Yet, the constraints encoded in E2Efold are flexible enough such that pseudoknots are included in the output space. In summary, E2Efold strikes a nice balance between model biases for learning and expressiveness for valid RNA structures. We conduct extensive experiments to compare E2Efold with state-of-the-art (SOTA) methods on several RNA benchmark datasets, showing superior performance of E2Efold including: • being able to predict valid RNA secondary structures including pseudoknots; • running as efficient as the fastest algorithm in terms of inference time; • producing structures that are visually close to the true structure; • better than previous SOTA in terms of F1 score, precision and recall. 
Although in this paper we focus on RNA secondary structure prediction, which presents an important and concrete problem where E2Efold leads to significant improvements, our method is generic and can be applied to other problems where constraints need to be enforced or prior knowledge is provided. We imagine that our design idea of learning unrolled algorithm to enforce constraints can also be transferred to problems such as protein folding and natural language understanding problems (e.g., building correspondence structure between different parts in a document). Classical RNA folding methods identify candidate structures for an RNA sequence energy minimization through DP and rely on thousands of experimentally-measured thermodynamic parameters. A few widely used methods such as RNAstructure, Vienna RNAfold and UNAFold achieved linear run time O(L) by applying beam search, but it can not handle pseudoknots in RNA structures. The prediction of lowest free energy structures with pseudoknots is NP-complete (Lyngsø &), so pseudoknots are not considered in most algorithms. Heuristic algorithms such as HotKnots and Probknots have been made to predict structures with pseudoknots, but the predictive accuracy and efficiency still need to be improved. Learning-based RNA folding methods such as ContraFold and ContextFold have been proposed for energy parameters estimation due to the increasing availability of known RNA structures, ing in higher prediction accuracies, but these methods still rely on the above DP-based algorithms for energy minimization. A recent deep learning model, CDPfold, applied convolutional neural networks to predict base-pairings, but it adopts the dot-bracket representation for RNA secondary structure, which can not represent pseudoknotted structures. Moreover, it requires a DP-based post-processing step whose computational complexity is prohibitive for sequences longer than a few hundreds. Learning with differentiable algorithms is a useful idea that inspires a series of works (; ; ; ;), which shared similar idea of using differentiable unrolled algorithms as a building block in neural architectures. Some models are also applied to structured prediction problems (; ;), but they did not consider the challenging RNA secondary structure problem or discuss how to properly incorporating constraints into the architecture. OptNet integrates constraints by differentiating KKT conditions, but it has cubic complexity in the number of variables and constraints, which is prohibitive for the RNA case. Dependency parsing in NLP is a different but related problem to RNA folding. It predicts the dependency between the words in a sentence. Similar to nested/non-nested structures, the corresponding terms in NLP are projective/non-projective parsing, where most works focus on the former and DP-based inference algorithms are commonly used . Deep learning models are proposed to proposed to score the dependency between words, which has a similar flavor to the Deep Score Network in our work. In the RNA secondary structure prediction problem, the input is the ordered sequence of bases x = (x 1, . . ., x L) and the output is the RNA secondary structure represented by a matrix A * ∈ {0, 1} L×L. Hard constraints on the forming of an RNA secondary structure dictate that certain kinds of pairings cannot occur at all . Formally, these constraints are: (i) Only three types of nucleotides combinations, B:= {AU, U A}∪ {GC, CG} ∪ {GU, U G}, can form base-pairs. (ii) No sharp loops are allowed. 
∀|i − j| < 4, A ij = 0. (iii) There is no overlap of pairs, i.e., it is a matching. ∀i, The space of all valid secondary structures contains all symmetric matrices A ∈ {0, 1} L×L that satisfy the above three constraints. This space is much smaller than the space of all binary matrices {0, 1} L×L. Therefore, if we could incorporate these constraints in our deep model, the reduced output space could help us train a better predictive model with less training data. We do this by using an unrolled algorithm as the inductive bias to design deep architecture. In the literature on feed-forward networks for structured prediction, most models are designed using traditional deep learning architectures. However, for RNA secondary structure prediction, directly using these architectures does not work well due to the limited amount of RNA data points and the hard constraints on forming an RNA secondary structure. These challenges motivate the design of our E2Efold deep model, which combines a Deep Score Network with a Post-Processing Network based on an unrolled algorithm for solving a constrained optimization problem. The first part of E2Efold is a Deep Score Network U θ (x) whose output is an L × L symmetric matrix. Each entry of this matrix, i.e., U θ (x) ij, indicates the score of nucleotides x i and x j being paired. The x input to the network here is the L × 4 dimensional one-hot embedding. The specific architecture of U θ is shown in Fig 4. It mainly consists of by their exact and relative positions: where {ψ j} is a set of n feature maps such as sin(·), poly(·), sigmoid(·), etc, and MLP(·) denotes multi-layer perceptions. Such position embedding idea has been used in natural language modeling such as BERT , but we adapted for RNA sequence representation; • a stack of Transformer Encoders which encode the sequence information and the global dependency between nucleotides; • a 2D Convolution layers for outputting the pairwise scores. With the representation power of neural networks, the hope is that we can learn an informative U θ such that higher scoring entries in U θ (x) correspond well to actual paired bases in RNA structure. Once the score matrix U θ (x) is computed, a naive approach to use it is to choose an offset term s ∈ R (e.g., s = 0) and let A ij = 1 if U θ (x) ij > s. However, such entry-wise independent predictions of A ij may in a matrix A that violates the constraints for a valid RNA secondary structure. Therefore, a post-processing step is needed to make sure the predicted A is valid. This step could be carried out separately after U θ is learned. But such decoupling of base-pair scoring and post-processing for constraints may lead to sub-optimal , where the errors in these two stages can not be considered together and tuned together. Instead, we will introduce a Post-Processing Network which can be trained end-to-end together with U θ to enforce the constraints. The second part of E2Efold is a Post-Processing Network PP φ which is an unrolled and parameterized algorithm for solving a constrained optimization problem. We first present how we formulate the post-processing step as a constrained optimization problem and the algorithm for solving it. After that, we show how we use the algorithm as a template to design deep architecture PP φ. Formulation of constrained optimization. Given the scores predicted by U θ (x), we define the total score 1 2 i,j (U θ (x) ij − s)A ij as the objective to maximize, where s is an offset term. 
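To make the Deep Score Network described above more concrete, the sketch below wires together a sequence embedding, a small position embedding, a Transformer encoder stack, pairwise concatenation, and 1x1 2D convolutions that output an L x L score matrix. Layer widths, the position feature maps, and the value of d are illustrative assumptions rather than the exact E2Efold configuration.

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    # Skeleton of a score network in the spirit of the description above:
    # sequence encoder -> pairwise concatenation -> 2D convs -> L x L scores.
    def __init__(self, d=10, n_layers=2):
        super().__init__()
        self.seq_embed = nn.Linear(4, 2 * d)        # one-hot bases -> 2d features
        self.pos_embed = nn.Sequential(nn.Linear(1, 5 * d), nn.ReLU(), nn.Linear(5 * d, d))
        enc = nn.TransformerEncoderLayer(d_model=3 * d, nhead=2,
                                         dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=n_layers)
        self.conv = nn.Sequential(
            nn.Conv2d(6 * d, d, kernel_size=1), nn.ReLU(),
            nn.Conv2d(d, 1, kernel_size=1),
        )

    def forward(self, x):                           # x: [batch, L, 4] one-hot sequence
        L = x.size(1)
        pos = torch.arange(L, dtype=torch.float32, device=x.device).view(1, L, 1) / L
        h = torch.cat([self.seq_embed(x),
                       self.pos_embed(pos).expand(x.size(0), -1, -1)], dim=-1)
        h = self.encoder(h)                         # [batch, L, 3d]
        # pairwise concatenation: Y[i, j] = [h_i, h_j]
        hi = h.unsqueeze(2).expand(-1, -1, L, -1)
        hj = h.unsqueeze(1).expand(-1, L, -1, -1)
        pair = torch.cat([hi, hj], dim=-1).permute(0, 3, 1, 2).contiguous()
        U = self.conv(pair).squeeze(1)              # [batch, L, L] raw pairing scores
        return 0.5 * (U + U.transpose(1, 2))        # symmetrize
```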
Clearly, without structure constraints, the optimal solution is to take A ij = 1 when U θ (x) ij > s. Intuitively, the objective measures the covariation between the entries in the scoring matrix and the A matrix. With constraints, the exact maximization becomes intractable. To make it tractable, we consider a convex relaxation of this discrete optimization to a continuous one by allowing A ij ∈. Consequently, the solution space that we consider to optimize over is A(x):= A ∈ L×L | A is symmetric and satisfies constraints (i)-(iii) in Section 3. To further simplify the search space, we define a nonlinear transformation, where • denotes element-wise multiplication. Matrix M is defined as M (x) ij:= 1 if x i x j ∈ B and also |i − j| ≥ 4, and M (x) ij:= 0 otherwise. From this definition we can see that M (x) encodes both constraint (i) and (ii). With transformation T, the ing matrix is non-negative, symmetric, and satisfies constraint (i) and (ii). Hence, by defining A:= T (Â), the solution space is simplified as Finally, we introduce a 1 penalty term  1:= i,j | ij | to make A sparse and formulate the post-processing step as: (·, · denotes matrix inner product, i.e., sum of entry-wise multiplication) The advantages of this formulation are that the variables ij are free variables in R and there are only L inequality constraints A1 ≤ 1. This system of linear inequalities can be replaced by a set of nonlinear equalities relu(A1 − 1) = 0 so that the constrained problem can be easily transformed into an unconstrained problem by introducing a Lagrange multiplier λ ∈ R Algorithm for solving it. We use proximal gradient (derived in Appendix B) for maximization and gradient descent for minimization. In each iteration, and λ are updated alternatively by: where soft threshold: gradient step: where α, β are step sizes and γ α, γ β are decaying coefficients. When it converges at T, an approximate solution Round A T = T ( T) is obtained. With this algorithm operated on the learned U θ (x), even if this step is disconnected to the training phase of U θ (x), the final prediction works much better than many other existing methods (as reported in Section 6). Next, we introduce how to couple this post-processing step with the training of U θ (x) to further improve the performance. We design a Post-Processing Network, denoted by PP φ, based on the above algorithm. After it is defined, we can connect it with the deep score network U θ and train them jointly in an end-to-end fashion, so that the training phase of U θ (x) is aware of the post-processing step. Algorithm 1: Post-Processing Network PP φ (U, M) Algorithm 2: The specific computation graph of PP φ is given in Algorithm 1, whose main component is a recurrent cell which we call PPcell φ. The computation graph is almost the same as the iterative update from Eq. 3 to Eq. 6, except for several modifications: • (learnable hyperparameters) The hyperparameters including step sizes α, β, decaying rate γ α, γ β, sparsity coefficient ρ and the offset term s are treated as learnable parameters in φ, so that there is no need to tune the hyperparameters by hand but automatically learn them from data instead. • (fixed # iterations) Instead of running the iterative updates until convergence, PPcell φ is applied recursively for T iterations where T is a manually fixed number. This is why in Fig 3 the output space of E2Efold is slightly larger than the true solution space. • (smoothed sign function) Resulted from the gradient of relu(·), the update step in Eq. 
4 contains a sign(·) function. However, to push gradient through PP φ, we require a differentiable update step. Therefore, we use a smoothed sign function defined as softsign(c):= 1/(1 + exp(−kc)), where k is a temperature. • (clipÂ) An additional step, ← min(Â, 1), is included to make the output A t at each iteration stay in the range L×L. This is useful for computing the loss over intermediate , for which we will explain more in Section 5. With these modifications, the Post-Processing Network PP φ is a tuning-free and differentiable unrolled algorithm with meaningful intermediate outputs. Combining it with the deep score network, the final deep model is 5 END-TO-END TRAINING ALGORITHM Given a dataset D containing examples of input-output pairs (x, A *), the training procedure of E2Efold is similar to standard gradient-based supervised learning. However, for RNA secondary structure prediction problems, commonly used metrics for evaluating predictive performances are F1 score, precision and recall, which are non-differentiable. Since F1 = 2TP/(2TP + FP + FN), we define a loss function to mimic the negative of F1 score as: Assuming that ij A * ij = 0, this loss is well-defined and differentiable on L×L. Precision and recall losses can be defined in a similar way, but we optimize F1 score in this paper. It is notable that this F1 loss takes advantages over other differentiable losses including 2 and cross-entropy losses, because there are much more negative samples (i.e. A ij = 0) than positive samples (i.e. A ij = 1). A hand-tuned weight is needed to balance them while using 2 or crossentropy losses, but F1 loss handles this issue automatically, which can be useful for a number of problems (; . L×L in each iteration. This allows us to add auxiliary losses to regularize the intermediate , guiding it to learn parameters which can generate a smooth solution trajectory. More specifically, we use an objective that depends on the entire trajectory of optimization: where) and γ ≤ 1 is a discounting factor. Empirically, we find it very useful to pre-train U θ using logistic regression loss. Also, it is helpful to add this additional loss to Eq. 9 as a regularization. We compare E2Efold with the SOTA and also the most commonly used methods in the RNA secondary structure prediction field on two benchmark datasets. It is revealed from the experimental that E2Efold achieves 29.7% improvement in terms of F1 score on RNAstralign dataset and it infers the RNA secondary structure as fast as the most efficient algorithm (LinearFold) among existing ones. An ablation study is also conducted to show the necessity of pushing gradient through the post-processing step. Experiments On RNAStralign. We divide RNAStralign dataset into training, testing and validation sets by stratified sampling (see details in Table 7 and Fig 6), so that each set contains all RNA types. We compare the performance of E2Efold to six methods including CDPfold, LinearFold, Mfold, RNAstructure (ProbKnot), RNAfold and CONTRAfold. Both E2Efold and CDPfold are learned from the same training/validation sets. For other methods, we directly use the provided packages or web-servers to generate predicted structures. We evaluate the F1 score, Precision and Recall for each sequence in the test set. Averaged values are reported in Table 2. As suggested by , for a base pair (i, j), the following predictions are also considered as correct:, so we also reported the metrics when one-position shift is allowed. 
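The constraint mask and the training objective described above can be written down compactly. The sketch below builds M(x) from constraints (i) and (ii), applies a transformation in the spirit of T (the exact formula is elided in the text, so the squared-and-symmetrized form here is an assumption), and implements a differentiable −F1 surrogate derived directly from F1 = 2TP/(2TP + FP + FN) with soft counts.

```python
import torch

ALLOWED = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pairing_mask(seq):
    # M(x): M[i, j] = 1 only if bases (x_i, x_j) can pair (constraint (i)) and
    # |i - j| >= 4 so that no sharp loop is formed (constraint (ii)).
    L = len(seq)
    M = torch.zeros(L, L)
    for i in range(L):
        for j in range(L):
            if (seq[i], seq[j]) in ALLOWED and abs(i - j) >= 4:
                M[i, j] = 1.0
    return M

def transform(A_hat, M):
    # One possible realization of the transformation T: squaring keeps entries
    # non-negative, symmetrizing keeps A symmetric, and the element-wise product
    # with M enforces constraints (i)-(ii). (An assumption about the elided formula.)
    S = A_hat * A_hat
    return 0.5 * (S + S.transpose(0, 1)) * M

def soft_f1_loss(A, A_star, eps=1e-8):
    # Differentiable surrogate for -F1 on soft matrices A in [0, 1]^{L x L}.
    # With soft counts TP = <A, A*>, FP = <A, 1 - A*>, FN = <1 - A, A*>,
    # 2TP + FP + FN = sum(A) + sum(A*), so F1 = 2 <A, A*> / (sum(A) + sum(A*)).
    tp = (A * A_star).sum()
    return -2.0 * tp / (A.sum() + A_star.sum() + eps)

seq = "GGGAAAUCCC"
M = pairing_mask(seq)
A_hat = torch.randn(len(seq), len(seq), requires_grad=True)
loss = soft_f1_loss(transform(A_hat, M), (M > 0).float())  # toy target
loss.backward()
```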
As shown in Table 2, traditional methods can achieve a F1 score ranging from 0.433 to 0.624, which is consistent with the performance reported with their original papers. The two learning-based methods, CONTRAfold and CDPfold, can outperform classical methods with reasonable margin on some criteria. E2Efold, on the other hand, significantly outperforms all previous methods across all criteria, with at least 20% improvement. Notice that, for almost all the other methods, the recall is usually higher than precision, while for E2Efold, the precision is higher than recall. That can be the of incorporating constraints during neural network training. Fig 5 shows the distributions of F1 scores for each method. It suggests that E2Efold has consistently good performance. To estimate the performance of E2Efold on long sequences, we also compute the F1 scores weighted by the length of sequences, such that the are more dominated by longer sequences. Detailed are given in Appendix D.3. Test On ArchiveII Without Re-training. To mimic the real world scenario where the users want to predict newly discovered RNA's structures which may have a distribution different from the training dataset, we directly test the model learned from RNAStralign training set on the ArchiveII dataset, without re-training the model. To make the comparison fair, we exclude sequences that are overlapped with the RNAStralign dataset. We then test the model on sequences in ArchiveII that have overlapping RNA types (5SrRNA, 16SrRNA, etc) with the RNAStralign dataset. Results are shown in Table 3. It is understandable that the performances of classical methods which are not learning- based are consistent with that on RNAStralign. The performance of E2Efold, though is not as good as that on RNAStralign, is still better than all the other methods across different evaluation criteria. In addition, since the original ArchiveII dataset contains domain sequences (subsequences), we remove the domains and report the in Appendix D.4, which are similar to in Table 3. Inference Time Comparison. We record the running time of all algorithms for predicting RNA secondary structures on the RNAStralign test set, which is summarized in Table 4. LinearFold is the most efficient among baselines because it uses beam pruning heuristic to accelerate DP. CDPfold, which achieves higher F1 score than other baselines, however, is extremely slow due to its DP post-processing step. Since we use a gradient-based algorithm which is simple to design the PostProcessing Network, E2Efold is fast. On GPU, E2Efold has similar inference time as LinearFold. Pseudoknot Prediction. Even though E2Efold does not exclude pseudoknots, it is not sure whether it actually generates pseudoknotted structures. Therefore, we pick all sequences containing pseudoknots and compute the averaged F1 score only on this set. Besides, we count the number of pseudoknotted sequences that are predicted as pseudoknotted and report this count as true positive (TP). Similarly we report TN, FP and FN in Table 5 along with the F1 score. Most tools exclude pseudoknots while RNAstructure is the most famous one that can predict pseudoknots, so we choose it for comparison. RNAstructure CONTRAfold true structure RNAstructure CONTRAfold E2Efold true structure true structure E2Efold true structure E2Efold Visualization. We visualize predicted structures of three RNA sequences in the main text. More examples are provided in appendix (Fig 8 to 14). 
In these figures, purple lines indicate edges of pseudoknotted elements. Although CDPfold has higher F1 score than other baselines, its predictions are visually far from the ground-truth. Instead, RNAstructure and CONTRAfold produce comparatively more reasonable visualizations among all baselines, so we compare with them. These two methods can capture a rough sketch of the structure, but not good enough. For most cases, E2Efold produces structures most similar to the ground-truths. Moreover, it works surprisingly well for some RNA sequences that are long and very difficult to predict. Ablation Study. To exam whether integrating the two stages by pushing gradient through the post-process is necessary for performance of E2Efold, we conduct an ablation study (Table 6). We test the performance when the post-processing step is disconnected with the training of Deep Score Network U θ. We apply the post-processing step (i.e., for solving augmented Lagrangian) after U θ is learned (thus the notation "U θ + PP" in Table 6). Although "U θ + PP" performs decently well, with constraints incorporated into training, E2Efold still has significant advantages over it. Discussion. To better estimate the performance of E2Efold on different RNA types, we include the per-family F1 scores in Appendix D.5. E2Efold performs significantly better than other methods in 16S rRNA, tRNA, 5S RNA, tmRNA, and telomerase. These are from a single model. In the future, we can view it as multi-task learning and further improve the performance by learning multiple models for different RNA families and learning an additional classifier to predict which model to use for the input sequence. We propose a novel DL model, E2Efold, for RNA secondary structure prediction, which incorporates hard constraints in its architecture design. Comprehensive experiments are conducted to show the superior performance of E2Efold, no matter on quantitative criteria, running time, or visualization. Further studies need to be conducted to deal with the RNA types with less samples. Finally, we believe the idea of unrolling constrained programming and pushing gradient through post-processing can be generic and useful for other constrained structured prediction problems. Here we explain the difference between our approach and other works on unrolling optimization problems. First, our view of incorporating constraints to reduce output space and to reduce sample complexity is novel. Previous works (; ;) did not discuss these aspects. The most related work which also integrates constraints is OptNet , but its very expensive and can not scale to the RNA problem. Therefore, our proposed approach is a simple and effective one. Second, compared to , our approach has a different purpose of using the algorithm. Their goal is to learn a better algorithm, so they commonly make their architecture more flexible than the original algorithm for the room of improvement. However, we aim at enforcing constraints. To ensure that constraints are nicely incorporated, we keep the original structure of the algorithm and only make the hyperparameters learnable. Finally, although all works consider end-to-end training, none of them can directly optimize the F1 score. We proposed a differentiable loss function to mimic the F1 score/precision/recall, which is effective and also very useful when negative samples are much fewer than positive samples (or the inverse). The maximization step in Eq. 
2 can be written as the following minimization: Consider the quadratic approximation of −f (Â) centered at t: and rewrite the optimization in Eq. 10 as Next, we define proximal mapping as a function depending on α as follows: Since we always use •Â instead of in our problem, we can take the absolute value |prox α (Ȧ t+1)| = relu(|Ȧ t+1 | − αρ) without loss of generality. Therefore, the proximal gradient step isȦ A t+1 ← relu(|Ȧ t+1 | − αρ) (correspond to Eq. 5). More specifically, in the main text, we write ∂f ∂Ât The last equation holds since t will remain symmetric in our algorithm if the initial 0 is symmetric. Moreover, in the main text, α is replaced by α · γ t α. We used Pytorch to implement the whole package of E2Efold. Deep Score Network. In the deep score network, we used a hyper-parameter, d, which was set as 10 in the final model, to control the model capacity. In the transformer encoder layers, we set the number of heads as 2, the dimension of the feed-forward network as 2048, the dropout rate as 0.1. As for the position encoding, we used 58 base functions to form the position feature map, which goes through a 3-layer fully-connected neural network (the number of hidden neurons is 5 * d) to generate the final position embedding, whose dimension is L by d. In the final output layer, the pairwise concatenation is carried out in the following way: Let X ∈ R L×3d be the input to the final output layers in Figure 4 (which is the concatenation of the sequence embedding and position embedding). The pairwise concatenation in a tensor Y ∈ R L×L×6d defined as where, and X(j, :) ∈ R 3d. In the 2D convolution layers, the the channel of the feature map gradually change from 6 * d to d, and finally to 1. We set the kernel size as 1 to translate the feature map into the final score matrix. Each 2D convolution layer is followed by a batch normalization layer. We used ReLU as the activation function within the whole score network. Post-Processing Network. In the PP network, we initialized w as 1, s as log, α as 0.01, β as 0.1, γ α as 0.99, γ β as 0.99, and ρ as 1. We set T as 20. Training details. During training, we first pre-trained a deep score network and then fine-tuned the score network and the PP network together. To pre-train the score network, we used binary crossentropy loss and Adam optimizer. Since, in the contact map, most entries are 0, we used weighted loss and set the positive sample weight as 300. The batch size was set to fully use the GPU memory, which was 20 for the Titan Xp card. We pre-train the score network for 100 epochs. As for the fine-tuning, we used binary cross-entropy loss for the score network and F1 loss for the PP network and summed up these two losses as the final loss. The user can also choose to only use the F1 loss or use another coefficient to weight the loss estimated on the score network U θ. Due to the limitation of the GPU memory, we set the batch size as 8. However, we updated the model's parameters every 30 steps to stabilize the training process. We fine-tuned the whole model for 20 epochs. Also, since the data for different RNA families are imbalanced, we up-sampled the data in the small RNA families based on their size. For the training of the score network U θ in the ablation study, it is exactly the same as the training of the above mentioned process. Except that during the fine-tune process, there is the unrolled number of iterations is set to be 0. D.1 DATASET STATISTICS Figure 6: The RNAStralign length distribution. 
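The derivation above yields two simple update primitives. The sketch below shows the soft-threshold proximal step, the smoothed sign function, and a multiplier update for the row-sum constraints; step-size schedules, decay factors, and the full gradient of the augmented Lagrangian are omitted, and the multiplier update is an assumption about the elided Eq. 6.

```python
import torch

def soft_threshold(A_dot, alpha, rho):
    # Proximal step for the l1 penalty (Eq. 5): the magnitude is shrunk by
    # alpha * rho and clipped at zero; the sign is preserved (it becomes
    # irrelevant after the squaring inside the transformation T).
    return torch.sign(A_dot) * torch.relu(A_dot.abs() - alpha * rho)

def softsign(c, k=10.0):
    # Smoothed sign from the text, used in place of sign(.) inside PP_phi so
    # that gradients can flow through the unrolled iterations.
    return 1.0 / (1.0 + torch.exp(-k * c))

def multiplier_step(lmbd, A, beta):
    # Sketch of a Lagrange-multiplier update for the row-sum constraints
    # A @ 1 <= 1 (an assumption about the elided Eq. 6; decay factors omitted).
    return lmbd + beta * torch.relu(A.sum(dim=1) - 1.0)
```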
To compare the differences among these data distributions, we can test the following hypotheses: (a) P(RNAStr train) = P(RNAStr test), and (b) P(RNAStr train) = P(ArchiveII). The approach that we adopted is the permutation test on the unbiased empirical Maximum Mean Discrepancy (MMD) estimator, where each of the two sample sets contains M i.i.d. samples from its respective distribution, and k(·, ·) is a string kernel. Since we conduct stratified sampling to split the training and testing dataset, when we perform the permutation test, we use stratified re-sampling as well (for both Hypotheses (a) and (b)). The result of the permutation test (permuted 1000 times) is reported in Figure 7. The results show that (a) Hypothesis P(RNAStr train) = P(RNAStr test) can be accepted with significance level 0.1, and (b) Hypothesis P(RNAStr train) = P(ArchiveII) is rejected since the p-value is 0. Therefore, the data distribution in ArchiveII is very different from the RNAStralign training set, and good performance on ArchiveII shows significant generalization power of E2Efold. For long sequences, E2Efold still performs better than other methods. We compute F1 scores weighted by the length of sequences (Table 8), such that the results are more dominated by longer sequences. The third row reports how much the F1 score drops after reweighting. Since domain sequences (subsequences) in ArchiveII are explicitly labeled, we filter them out and recompute the F1 scores (Table 9). The results do not change much before or after filtering out subsequences. To balance the performance among different families, during the training phase we conducted weighted sampling of the data based on their family size. With weighted sampling, the overall F1 score (S) is 0.83, which is the same as when we did equal-weighted sampling. The per-family results are shown in Table 10. (Figure panel labels: RNAstructure, CDPfold, LinearFold, Mfold, CONTRAfold, RNAfold, true structure.)
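The hypothesis test above can be reproduced with a short script: an unbiased MMD^2 estimate computed from kernel blocks, compared against a permutation null. The RBF kernel on fixed-length feature vectors is an illustrative stand-in for the string kernel used in the paper, and the stratified re-sampling step is omitted for brevity.

```python
import numpy as np

def mmd2_unbiased(K_xx, K_yy, K_xy):
    # Unbiased MMD^2 estimate from precomputed kernel blocks (diagonals excluded).
    m, n = K_xx.shape[0], K_yy.shape[0]
    sum_xx = (K_xx.sum() - np.trace(K_xx)) / (m * (m - 1))
    sum_yy = (K_yy.sum() - np.trace(K_yy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2.0 * K_xy.mean()

def permutation_test(X, Y, kernel, n_perm=1000, seed=0):
    # Two-sample test of H0: P(X) = P(Y); returns the p-value of the observed MMD^2.
    rng = np.random.default_rng(seed)
    Z = np.concatenate([X, Y]); m = len(X)
    K = kernel(Z, Z)
    def stat(idx):
        a, b = idx[:m], idx[m:]
        return mmd2_unbiased(K[np.ix_(a, a)], K[np.ix_(b, b)], K[np.ix_(a, b)])
    observed = stat(np.arange(len(Z)))
    null = [stat(rng.permutation(len(Z))) for _ in range(n_perm)]
    return float(np.mean([s >= observed for s in null]))

def rbf(A, B, gamma=0.1):
    # Illustrative kernel on fixed-length feature vectors (the paper uses a
    # string kernel on RNA sequences; this stand-in is an assumption).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
```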
A DL model for RNA secondary structure prediction, which uses an unrolled algorithm in the architecture to enforce constraints.
503
scitldr
Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns. Instead, many experimental on synaptic plasticity can be summarized as three-factor learning rules involving eligibility traces of the local neural activity and a third factor. We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three factor learning rules when derived for biophysical spiking neuron models. When tested on the TIMIT speech recognition benchmark, it is competitive with BPTT both for training artificial LSTM networks and spiking RNNs. Further analysis suggests that the diversity of learning signals and the consideration of slow internal neural dynamics are decisive to the learning efficiency of e-prop. The brain seems to be able to solve tasks such as counting, memorizing and reasoning which require efficient temporal processing capabilities. It is natural to model this with recurrent neural networks (RNNs), but their canonical training algorithm called backpropagation through time (BPTT) does not appear to be compatible with learning mechanisms observed in the brain. There, long-term changes of synaptic efficacies depend on the local neural activity. It was found that the precise timing of the electric pulses (i.e. spikes) emitted by the pre-and post-synaptic neurons matters, and these spike-timing dependent plasticity (STDP) changes can be conditioned or modulated by a third factor that is often thought to be a neuromodulator (see for reviews). Looking closely at the relative timing, the third factor affects the plasticity even if it arrives with a delay. This suggests the existence of local mechanisms that retain traces of the recent neural activity during this temporal gap and they are often referred to as eligibility traces. To verify whether three factor learning rules can implement functional learning algorithms, researchers have simulated how interesting learnt behaviours can emerge from them. The third factor is often considered as a global signal emitted when a reward is received or predicted, and this alone can solve learning tasks of moderate difficulty, even in RNNs. Yet in feed-forward networks, it was already shown that plausible learning algorithms inspired by backpropagation and ing in neuron-specific learning signals largely outperform the rules based on a global third factor. This suggests that backpropagation provides important details that are not captured by all three factor learning rules. Here we aim at a learning algorithm for RNNs that is general and efficient like BPTT but remains plausible. A major plausibility issue of BPTT is that it requires to propagate errors backwards in time or to store the entire state space trajectory raising questions on how and where this is performed in the brain. We suggest instead a rigorous re-analysis of gradient descent in RNNs that leads to a gradient computation relying on a diversity of learning signals (i.e. neuron-specific third factors) and a few eligibility traces per synapse. We refer to this algorithm as eligibility propagation (e-prop). When derived with spiking neurons, e-prop fits under the three factor learning rule framework and is qualitatively compatible with experimental data. 
To test its learning efficiency, we applied e-prop to artificial Long Short-Term Memory (LSTM) networks, and Long short-term memory Spiking Neural Networks (LSNNs) (spiking RNNs combining short and long realistic time constants). We found that it is competitive with BPTT on the TIMIT speech recognition benchmark, and it can solve nontrivial temporal credit assignment problems with long delays. We are not aware of any comparable achievements with previous three factor learning rules. Real-time recurrent learning (RTRL) computes the same loss gradients as BPTT in an online fashion but requires many more operations. Even though the method is online, one may wonder where it can be implemented in the brain if it requires machinery bigger than the network itself. Recent works have suggested that eligibility traces can be used to approximate RTRL. This was shown to be feasible if the neurons do not have recurrent connections, if the recurrent connections are ignored during learning, or if the network dynamics are approximated with a trained estimator. However these algorithms were derived for specific neuron models without long-short term memory, making it harder to tackle challenging RNN benchmark tasks (no machine learning benchmarks were considered in these works). Other mathematical methods have suggested approximations to RTRL which are compatible with complex neuron models. Yet those methods lead to gradient estimates with high variance or require heavier computations when the network becomes large. This issue was solved in e-prop, as the computational and memory costs are the same (up to a constant factor) as for running any computation with the RNN. This reduction of the computational load arises from an essential difference between e-prop and RTRL: e-prop computes the same loss gradients but only propagates forward in time the terms that can be computed locally. This provides a new interpretation of eligibility traces that is mathematically grounded and generalizes to a broad class of RNNs. Our empirical results show that such traces are sufficient to approach the performance of BPTT despite a simplification of the non-local learning signal, but we believe that more complex strategies for computing learning signals can be combined with e-prop to yield even more powerful online algorithms. A separate paper presents one such example to enable one-shot learning in recurrent spiking neural networks. The mathematical basis for e-prop: E-prop applies to a general class of recurrent network models that includes LSTMs and LSNNs, where each neuron j has a hidden state h j ∈ R d where d is typically 1 or 2 (e.g. the memory cell content of an LSTM unit or the membrane potential for a spiking neuron), and an observable state z t j ∈ R (e.g. the LSTM outputs or the spikes). The performance of a network on a specific task is usually expressed using a loss function E, and learning by gradient descent amounts to changing the network weights W such that E is minimized. Much like BPTT, e-prop computes the gradient dE/dW ji with respect to the weights from i to j where the neurons i and j are potentially connected. Here, this gradient depends on learning signals L t j specific to the neuron j and eligibility traces e t ji such that dE/dW ji = Σ t L t j e t ji (see proof in the full paper). The eligibility traces e t ji are computed forward in time through eligibility vectors that only involve quantities local to neuron j and synapse i → j (the partial derivatives ∂h t j /∂h t−1 j and ∂h t j /∂W ji). This allows the loss-independent eligibility traces to be defined for any RNN model. For the equation to hold, the ideal learning signal is required to be L t j = Σ k B jk (y t k − y *,t k), where y t k are the network outputs, y *,t k are the output targets, and B jk are the output (feedback) weights.
E-prop can be implemented online by accumulating the products L j e t ji or applying them directly at each time step and does not require to backpropagate through time or to store the past neural activity. This solves a major plausibility issue raised by BPTT. To also avoid the implausible weight sharing between the feedback and feedforward pathways, one can replace the feedback weights in L t j by fixed random values as done in leading to the two variants: symmetric and random e-prop. E-prop for spiking neurons and data on synaptic plasticity To link the theory to data on STDP and three factor learning rules, we applied e-prop to a recurrent network of spiking neurons. We use leaky-integrate and fire (LIF) neurons for which the dynamics of the membrane voltage is modelled as a leaky integrator and a spike is emitted when the voltage crosses a firing threshold from below. When simulated in discrete time with a time step of one millisecond, a recurrent network of LIF neurons fits into the general class of RNNs described above such that the hidden state h t j is the membrane voltage and the spikes are modelled by a binary observable state z t j P t0, 1u. Non-differentiability of the binary output of spiking neurons is solved as in, by using a pseudo-derivative in-place of. Remarkably, the eligibility trace e t ji that emerges for a LIF neuron is a product of the pre-and post-synaptic activity. It is non-zero only if pre-synaptic spikes have preceded a depolarization of the post-synaptic neuron in a time window of about 20 ms which is reminiscent of STDP. Moreover it was verified in that the replacement of e t ji in equation by a form of voltage dependent STDP used to fit data accurately, does not strongly impair the performance of e-prop on a pattern generation task. Forward and stable propagation of RNN gradients with eligibility traces To enhance the working memory capabilities of the spiking network model, we model slower neural dynamics by introducing a model of firing rate adaptation: after each spike the threshold increases by a constant amount and reduces back to its resting value after hundreds of milliseconds or few seconds. This type of adaptive LIF (ALIF) neuron also includes its current firing threshold in its hidden state h recurrent network of ALIF and LIF neurons connected in an all-to-all fashion is termed LSNN. For an ALIF neuron j each eligibility vector t ji include a slow component t ji,a that decays with the time constants of adaptation, i.e. much slower than for non-adaptive LIF neurons. We tested the ability of e-prop and LSNNs to learn temporal dependencies on a task that was used to study working memory in rodents and that requires to memorize over a delay of hundreds of milliseconds. It requires the rodent to run along a linear track in a virtual environment, where it encounters a number of visual cues to its left and its right, see Figure 1A. After arrival at a T-junction, it has to decide whether it had observed more cues to the left or on the right side and turn towards that direction. This task requires to establish a connection between errors in the decision and the processing of cues that happened a long time ago. We found that LSNNs with 100 neurons can be trained by e-prop to solve this task (Figure 1B), but a similar network of LIF neurons without adaptation cannot solve the task (Figure 1C). 
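A minimal NumPy sketch of this three-factor update for LIF neurons follows: the eligibility vector reduces to a low-pass filter of presynaptic spikes, the eligibility trace gates it with a pseudo-derivative of the postsynaptic membrane potential, and weight changes accumulate the product of learning signal and eligibility trace. The pseudo-derivative constants, spike statistics, and the placeholder learning signal are assumptions; in e-prop proper the learning signal is the output error broadcast through feedback weights.

```python
import numpy as np

# e-prop sketch for leaky integrate-and-fire neurons (decay alpha, threshold v_th).
alpha, v_th, gamma_pd, lr = 0.9, 1.0, 0.3, 1e-3
n_pre, n_post, T = 20, 10, 200
rng = np.random.default_rng(0)

W = 0.1 * rng.standard_normal((n_post, n_pre))
v = np.zeros(n_post)                       # membrane potentials
eps = np.zeros((n_post, n_pre))            # eligibility vectors (one per synapse)
dW = np.zeros_like(W)

for t in range(T):
    z_pre = (rng.random(n_pre) < 0.05).astype(float)      # presynaptic spikes
    v = alpha * v + W @ z_pre                              # leaky membrane update
    z_post = (v > v_th).astype(float)
    v -= v_th * z_post                                     # reset by subtraction
    psi = gamma_pd * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))  # pseudo-derivative
    eps = alpha * eps + z_pre[None, :]                     # eligibility vector recursion
    e = psi[:, None] * eps                                 # eligibility trace
    L = 0.01 * rng.standard_normal(n_post)                 # placeholder learning signal
                                                           # (broadcast output errors in e-prop)
    dW += L[:, None] * e                                   # three-factor accumulation

W -= lr * dW
```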
The key feature arising with adaptive neurons is the slow component t ji,a of the eligibility vector associated with the threshold adaptation and sharing its slow dynamics (see Figure 1B bottom). As the learning signal L t j is only non-zero during the decision period at the last time steps of the episode, the eligibility traces must hold the information about the relevant cues for hundreds of time steps during the delay (see Figure 1A). In this way e-prop alleviates the need to propagate signals backwards in time. Approaching the performance of BPTT We compare E-prop and BPTT on two benchmarks for RNNs based on the TIMIT dataset: phoneme classification of each audio frame in a spoken sentence, and transcription of the entire sequence of phonemes spoken in a sentence. The LSTM and BPTT baselines were obtained by reproducing the experiments from solving framewise-phoneme classification with 400 LSTM units and solving sentence transcription with three layers of 500 LSTM units. For the LSNN we used 800 spiking neurons in the first task and three layers of 800 spiking neurons in the second. Remarkably, the error rate obtained with e-prop is only a few percents larger than the BPTT baseline in all cases, even if the feedback weights are replaced by random ones (Figure 1D). In contrast, the loss performance of e-prop when using uniform feedback matrices was significantly worse: the error rate jumped from 34.6 to 52% for frame-wise classification and from 24.7 to 60% for the speech transcription. Beyond supervised learning task it is also shown in that e-prop can be applied on reinforcement learning tasks. Discussion E-prop is a novel learning algorithm that qualitatively fits experimental data on synaptic plasticity and maintains a performance that is competitive with BPTT. Our analysis shows that e-prop can take advantage of neuron models with enhanced memory capabilities to solve non-trivial temporal credit assignment problems, and the diversity of the learning signals is decisive for the learning efficiency of three factor learning rules. Interestingly, it was found recently that dopaminergic neurons encode more diverse information than a global reward prediction error, and performance monitoring neurons providing potential learning signals are found prominently in cortices.
We present eligibility propagation, an alternative to BPTT that is compatible with experimental data on synaptic plasticity and competes with BPTT on machine learning benchmarks.
504
scitldr
Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present of this approach on synthetic environments and six Atari games. The ing finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability. Deep reinforcement learning (RL) and imitation learning (IL) have demonstrated impressive performance across a wide range of applications. Unfortunately, the learned policies are difficult to understand and explain, which limits the degree that they can be trusted and used in high-stakes applications. Such explanations are particularly problematic for policies represented as recurrent neural networks (RNNs) BID16 BID14, which are increasingly used to achieve state-of-the-art performance BID15 BID21. This is because RNN policies use internal memory to encode features of the observation history, which are critical to their decision making, but extremely difficult to interpret. In this paper, we take a step towards comprehending and explaining RNN policies by learning more compact memory representations. Explaining RNN memory is challenging due to the typical use of high-dimensional continuous memory vectors that are updated through complex gating networks (e.g. LSTMs, GRUs BID10 BID5). We hypothesize that, in many cases, the continuous memory is capturing and updating one or more discrete concepts. If exposed, such concepts could significantly aid explainability. This motivates attempting to quantize the memory and observation representation used by an RNN to more directly capture those concepts. In this case, understanding the memory use can be approached by manipulating and analyzing the quantized system. Of course, not all RNN policies will have compact quantized representations, but many powerful forms of memory usage can be captured in this way. Our main contribution is to introduce an approach for transforming an RNN policy with continuous memory and continuous observations to a finite-state representation known as a Moore Machine. To accomplish this we introduce the idea of Quantized Bottleneck Network (QBN) insertion. QBNs are simply auto-encoders, where the latent representation is quantized. Given a trained RNN, we train QBNs to encode the memory states and observation vectors that are encountered during the RNN operation. We then insert the QBNs into the trained RNN policy in place of the "wires" that propagated the memory and observation vectors. The combination of the RNN and QBN in a policy represented as a Moore Machine Network (MMN) with quantized memory and observations that is nearly equivalent to the original RNN. The MMN can be used directly or fine-tuned to improve on inaccuracies introduced by QBN insertion. While training quantized networks is often considered to be quite challenging, we show that a simple approach works well in the case of QBNs. 
In particular, we demonstrate that "straight through" gradient estimators as in BID1 BID6 are quite effective. We present experiments in synthetic domains designed to exercise different types of memory use as well as benchmark grammar learning problems. Our approach is able to accurately extract the ground-truth MMNs, providing insight into the RNN memory use. We also did experiments on 6 Atari games using RNNs that achieve state-of-the-art performance. We show that in most cases it is possible to extract near-equivalent MMNs and that the MMNs can be surprisingly small. Further, the extracted MMNs give insights into the memory usage that are not obvious based on just observing the RNN policy in action. For example, we identify games where the RNNs do not use memory in a meaningful way, indicating the RNN is implementing purely reactive control. In contrast, in other games, the RNN does not use observations in a meaningful way, which indicates that the RNN is implementing an open-loop controller. There have been efforts made in the past to understand the internals of Recurrent Networks BID12 BID0 BID22 BID17 BID11 BID18. However, to the best of our knowledge there is no prior work on learning finite-memory representations of continuous RNN policies. Our work, however, is related to a large body of work on learning finite-state representations of recurrent neural networks. Below we summarize the branches of that work and the relationship to our own. There has been a significant history of work on extracting Finite State Machines (FSMs) from recurrent networks trained to recognize languages BID28 BID23 BID3. Typical approaches include discretizing the continuous memory space via gridding or clustering followed by minimization. A more recent approach is to use classic query-based learning algorithms to extract FSMs by asking membership and equivalence queries BID26. However, none of these approaches directly apply to learning policies, which require extending to Moore Machines. In addition, all of these approaches produce an FSM approximation that is separated from the RNN and thus serve as only a proxy of the RNN behavior. Rather, our approach directly inserts discrete elements into the RNN that preserves its behavior, but allows for a finite state characterization. This insertion approach has the advantage of allowing fine-tuning and visualization using standard learning frameworks. The work most similar to ours also focused on learning FSMs BID28. However, the approach is based on directly learning recurrent networks with finite memory, which are qualitatively similar to the memory representation of our MMNs. That work, however, focused on learning from scratch rather than aiming to describe the behavior of a continuous RNN. Our work extends that approach to learn MMNs and more importantly introduces the method of QBN insertion as a way of learning via guidance from a continuous RNN.This transforms any pre-trained recurrent policy into a finite representation. We note that there has been prior work on learning fully binary networks, where the activation functions and/or weights are binary (e.g. BID1 BID6 BID8). The goal of that line of work is typically to learn more time and space efficient networks. Rather, we focus on learning only discrete representations of memory and observations, while allowing the rest of the network to use arbitrary activations and weights. This is due to our alternative goal of supporting interpretability rather than efficiency. 
Recurrent neural networks (RNNs) are commonly used in reinforcement learning to represent policies that require or can benefit from internal memory. At each time step, an RNN is given an observation o t (e.g. image) and must output an action a t to be taken in the environment. During execution an RNN maintains a continuous-valued hidden state h t, which is updated on each transition and influences the action choice. In particular, given the current observation o t and current state h t, an RNN performs the following operations: 1) Extract a set of observation features f t from o t, for example, using a CNN when observations are images, 2) Outputting an action a t = π(h t) according to policy π, which is often a linear softmax function of h t, 3) transition to a new state h t+1 = δ(f t, h t) where δ is the transition function, which is often implemented via different types of gating networks such as LSTMs or GRUs. The continuous and high dimensional nature of h t and f t can make interpreting the role of memory difficult. This motivates our goal of extracting compact quantized representations of h t and f t. Such representations have the potential to allow investigating a finite system that captures the key features of the memory and observations. For this purpose we introduce Moore Machines and their deep network counterparts. Moore Machines. A classical Moore Machine (MM) is a standard finite state machine where all states are labeled by output values, which in our case will correspond to actions. In particular, a Moore Machine is described by a finite set of (hidden) statesĤ, an initial hidden stateĥ 0, a finite set of observationsÔ, a finite set of actions A, a transition functionδ, and a policyπ that maps hidden states to actions. The transition functionδ:Ĥ ×Ô →Ĥ returns the next hidden statê h t+1 =δ(ĥ t,ô t) given the current stateĥ t and observationô t. By convention we will use h t andĥ t to denote continuous and discrete states respectively and similarly for other quantities and functions. Moore Machine Networks. A Moore Machine Network (MMN) is a Moore Machine where the transition functionδ and policyπ are represented via deep networks. In addition, since the raw observations given to an MMN are often continuous, or from an effectively unbounded set (e.g. images), an MMN will also provide a mappingĝ from the continuous observations to a finite discrete observation spaceÔ. Hereĝ will also be represented via a deep network. In this work, we consider quantized state and observation representations where eachĥ ∈Ĥ is a discrete vector and each discrete observation inÔ is a discrete vector that describes the raw observation. We will denote the quantization level as k and the dimensions ofĥ andf by B h and B f respectively. Based on the above discussion, an MMN can be viewed as a traditional RNN, where: 1) The memory is restricted to be composed of k-level activation units, and 2) The environmental observations are intermediately transformed to a k-level representationf before being fed to the recurrent module. Given an approach for incorporating quantized units into the backpropagation process, it is straightforward, in concept, to learn MMNs from scratch via standard RNN learning algorithms. However, we have found that learning MMNs from scratch can be quite difficult for non-trivial problems, even when an RNN can be learned with relative ease. For example, we have not been able to train highperforming MMNs from scratch for Atari games. 
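Executing a Moore Machine policy as defined above takes only a lookup table and a quantized-observation mapping; a minimal sketch (with plain Python dicts and callables standing in for the mapping ĝ, transition function δ̂, and policy π̂) is given below.

```python
def run_moore_machine(env_observations, g, delta, pi, h0):
    """Execute a Moore Machine policy over a stream of raw observations.

    g:     maps a raw observation to a discrete observation symbol
    delta: dict (state, obs_symbol) -> next state
    pi:    dict state -> action
    h0:    initial hidden state
    (Illustrative sketch of the definitions in the text.)
    """
    h = h0
    actions = []
    for o in env_observations:
        actions.append(pi[h])     # output action labeling the current state
        h = delta[(h, g(o))]      # discrete state update: h_{t+1} = delta(h_t, o_t)
    return actions
```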
Below we introduce a new approach for learning MMNs that is able to leverage the ability to learn RNNs. Given a trained RNN, our key idea is to first learn quantized bottleneck networks (QBNs) for embedding the continuous observation features and hidden state into a k-level quantized representation. We will then insert the QBNs into the original recurrent net in such a way that its behavior is minimally changed, with the option of fine-tuning after insertion. The resulting network can be viewed as consuming quantized features and maintaining quantized state, which is effectively an MMN. Below we describe the steps in further detail, which are illustrated in FIG0. A QBN is simply an autoencoder where the latent representation between the encoder and decoder (i.e. the bottleneck) is constrained to be composed of k-level activation units. While traditional autoencoders are generally used for the purpose of dimensionality reduction in continuous space BID9, QBNs are motivated by the goal of discretizing a continuous space. Conceptually, this can be done by quantizing the activations of units in the encoding layer. We represent a QBN via a continuous multilayer encoder E, which maps inputs x to a latent encoding E(x), and a corresponding multilayer decoder D. To quantize the encoding, the QBN output is given by b(x) = D(quantize(E(x))). In our case, we use 3-level quantization in the form of +1, 0 and −1 using the quantize function, which assumes the outputs of E(x) are in the range [−1, 1]. One choice for the output nodes of E(x) would be the tanh activation. However, since the gradient of tanh is close to 1 near 0, it can be difficult to produce quantization level 0 during learning. Thus, as suggested in prior work, to support 3-valued quantization we use the following activation function, which is flatter in the region around zero input: φ(x) = 1.5 tanh(x) + 0.5 tanh(−3x). Of course, introducing the quantize function in the QBN results in b(x) being non-differentiable, making it apparently incompatible with backpropagation, since the gradients between the decoder and encoder will almost always be zero. While there are a variety of ways to deal with this issue, we have found that the straight-through estimator, as suggested and used in prior work BID8 BID1 BID6, is quite effective. In particular, the standard straight-through estimator of the gradient simply treats the quantize function as the identity function during back-propagation. Overall, the inclusion of the quantize function in the QBN effectively allows us to view the last layer of E as producing a k-level encoding. We train a QBN as an autoencoder using the standard L2 reconstruction error ||x − b(x)||^2 for a given input x. Given a recurrent policy, we can run the policy in the target environment in order to produce an arbitrarily large set of training sequences of triples (o_t, f_t, h_t), giving the observation, corresponding observation feature, and hidden state at time t. Let F and H be the sets of all observed features and states respectively. The first step of our approach is to train two QBNs, b_f and b_h, on F and H respectively. If the QBNs are able to achieve low reconstruction error then we can view the latent "bottlenecks" of the QBNs as high-quality k-level encodings of the original hidden states and features. We now view b_f and b_h as "wires" that propagate the input to the output, with some noise due to imperfect reconstruction. We insert these wires into the original RNN in the natural way (stage 3 in FIG0).
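Before describing the insertion in detail, here is a compact PyTorch-style sketch of a ternary QBN with the flattened tanh activation and a straight-through gradient estimator. The layer sizes and class names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

def flat_tanh(x):
    # Activation from the text: flatter around zero than tanh, range within (-1, 1),
    # which makes the 0 quantization level easier to reach during training.
    return 1.5 * torch.tanh(x) + 0.5 * torch.tanh(-3.0 * x)

class TernaryQuantize(torch.autograd.Function):
    """3-level quantization {-1, 0, +1} with a straight-through gradient."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)          # inputs assumed in [-1, 1] -> {-1, 0, +1}

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output             # straight-through: treat quantize as identity

class QBN(nn.Module):
    """Quantized Bottleneck Network: an autoencoder with a ternary bottleneck.

    The hidden width (4x the bottleneck) mirrors the QBN architecture used later
    for the MCE experiments; other sizes are equally valid.
    """
    def __init__(self, input_dim, bottleneck_dim):
        super().__init__()
        hidden = 4 * bottleneck_dim
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, bottleneck_dim))
        self.decoder = nn.Sequential(nn.Linear(bottleneck_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, input_dim))

    def encode(self, x):
        return TernaryQuantize.apply(flat_tanh(self.encoder(x)))

    def forward(self, x):
        return self.decoder(self.encode(x))   # trained with the L2 reconstruction loss
```

In use, two such networks (b_f for observation features and b_h for hidden states) would be trained on data collected from rollouts of the trained RNN, then inserted as the "wires" described above.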
The b f QBN is inserted between the RNN units that compute the features f and the nodes those units are connected to. The b h QBN is inserted between the output of the recurrent network block and the input to the recurrent block. If b f and b h always produced perfect reconstructions, then the of inserting them as described would not change the behavior of the RNN. Yet, the RNN can now be viewed as an MMN since the bottlenecks of b f and b h provide a quantized representation of the features f t and states h t.Fine Tuning. In practice, the QBNs will not achieve perfect reconstruction and thus, the ing MMN may not behave identically to the original RNN. Empirically, we have found that the performance of the ing MMN is often very close to the RNN directly after insertion. However, when there is non-trivial performance degradation, we can fine-tune the MMN by training on the original rollout data of the RNN. Importantly, since our primary goal is to learn a representation of the original RNN, during fine-tuning our objective is to have the MMN match the softmax distribution over actions produced by the RNN. We found that training in this way was significantly more stable than training the MMN to simply output the same action as the RNN. After obtaining the MMN, one could use visualization and other analysis tools to investigate the memory and it's feature bits in order to gain a semantic understanding of their roles. Solving the full interpretation problem in a primitive way is beyond the scope of this work. Extraction. Another way to gain insight is to use the MMN to produce an equivalent Moore Machine over atomic state and observation spaces, where each state and observation is a discrete symbol. This machine can be analyzed to understand the role of different machine states and how they are related. In order to create the Moore Machine we run the learned MMN to produce a dataset of <ĥ t−1,f t,ĥ t, a t >, giving the consecutive pairs of quantized states, the quantized features that led to the transition, and the action selected after the transition. The state-space of the Moore Machine will correspond to the p distinct quantized states in the data and the observation-space of the machine will be the q unique quantized feature vectors in the data. The transition function of the machineδ is constructed from the data by producing a p × q transaction table that captures the transitions seen in the data. Minimization. In general, the number of states p in the ing Moore Machine will be larger than necessary in the sense that there is a much smaller, but equivalent, minimal machine. Thus, we apply standard Moore Machine minimization techniques to arrive at the minimal 2 equivalent Moore Machine BID19. This often dramatically reduces the number of distinct states and observations. Our experiments address the following questions: 1) Is it possible to extract MMNs from RNNs without significant loss in Performance? 2) What is the general magnitude of the number of states and observations in the minimal machines, especially for complex domains such as Atari? 3) Do the learned MMNs help with interpretability of the recurrent policies? In this section, we begin addressing these questions by considering two domains where ground truth Moore Machines are known. The first is a parameterized synthetic environment, Mode Counter, which can capture multiple types of memory use. Second, we consider benchmark grammar learning problems. 
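A compact sketch of the extraction and minimization steps just described is given below. It assumes rollouts of the MMN have already produced the quantized tuples (ĥ_{t−1}, f̂_t, ĥ_t, a_t); all names are illustrative, and the minimization is a standard Moore-style partition refinement rather than the paper's exact implementation.

```python
def extract_moore_machine(transitions):
    """Build a tabular Moore Machine from MMN rollout tuples (h_prev, f, h_next, action)."""
    states, observations = {}, {}          # quantized vector (as tuple) -> integer id
    delta, policy = {}, {}                 # (state, obs) -> next state ; state -> action

    def index(table, key):
        if key not in table:
            table[key] = len(table)
        return table[key]

    for h_prev, f, h_next, action in transitions:
        s = index(states, tuple(h_prev))
        o = index(observations, tuple(f))
        s_next = index(states, tuple(h_next))
        delta[(s, o)] = s_next
        policy[s_next] = action            # Moore output: the action taken after entering s_next
    return states, observations, delta, policy

def minimize_moore_machine(states, observations, delta, policy):
    """Merge equivalent states by iterative partition refinement.

    Transitions not observed in the data are treated as self-loops here,
    purely as a simplifying assumption.
    """
    block_of = {s: policy.get(s) for s in states}      # initial split by output action
    while True:
        signature = {
            s: (block_of[s],
                tuple(block_of[delta.get((s, o), s)] for o in observations))
            for s in states
        }
        new_ids, new_block_of = {}, {}
        for s in states:
            new_block_of[s] = new_ids.setdefault(signature[s], len(new_ids))
        # Refinement only splits blocks, so an unchanged block count means a fixpoint.
        if len(set(new_block_of.values())) == len(set(block_of.values())):
            return new_block_of                        # state -> minimal-machine state id
        block_of = new_block_of
```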
The class of Mode Counter Environments (MCEs) allows us to vary the amount of memory required by a policy (including no memory) and the required type of memory usage. In particular, MCEs can require varying amounts of memory for remembering past events and implementing internal counters. An MCE is a restricted type of Partially Observable Markov Decision Process, which transitions between one of M modes over time according to a transition distribution, which can depend on the current mode and amount of time spent in the current mode. There are M actions, one for each mode, and the agent receives a reward of +1 at the end of the episode if it takes the correct action associated with the active mode at each time step. The agent does not observe the mode directly, but rather must infer the mode via a combination of observations and memory use. Different parameterizations place different requirements on how (and if) memory needs to be used to infer the mode and achieve optimal performance. Below we give an intuitive description of the MCEs 3 in our experiments. We conduct experiments in three MCE instances, which use memory and observations in fundamentally different ways. This tests our ability to use our approach for determining the type of memory use. 1) Amnesia. This MCE is designed so that the optimal policy does not need memory to track past information and can select optimal actions based on just the current observation. 2) Blind. Here we consider the opposite extreme, where the MCE observations provide no information about optimal actions. Rather, memory must be used to implement counters that keep track of a deterministic mode sequence for determining the optimal actions. 3) Tracker. This MCE is designed so that the optimal policy must both use observations and memory in order to select optimal actions. Intuitively the memory must implement counters that keep track of key time steps where the observations provide information about the mode. In all above instances, we used M = 4 modes. For each MCE instance we use the following recurrent architecture: the input feeds into 1 feed-forward layer with 4 Relu6 nodes BID13 ) (f t), followed by a 1-layer GRU with 8 hidden units (h t), followed by a fully connected softmax layer giving a distribution over the M actions (one per mode). Since we know the optimal policy in the MCEs we use imitation learning for training. For all of the MCEs in our experiments, the trained RNNs achieve 100% accuracy on the imitation dataset and appeared to produce optimal policies. MMN Training. The observation QBN b f and hidden-state QBN b h have the same architecture, except that the number of quantized bottleneck units B f and B h are varied in our experiments. The encoders consist of 1 feed-forward layer of tanh nodes, where the number of nodes is 4 times the size of the bottleneck. This layer feeds into 1 feedforward layer of quantized bottleneck nodes (see Section 4). The decoder for both b f and b h has a symmetric architecture to the encoder. Training of b f and b h in the MCE environments was extremely fast compared to the RNN training, since QBNs do not need to learn temporal dependencies. We trained QBNs with bottleneck sizes of B f ∈ {4, 8} and B h ∈ {4, 8}. For each combination of B f and B h we embedded the QBNs into the RNN to give a discrete MMN and then measured performance of the MMN before and after fine tuning. TAB0 gives the average test score over 50 test episodes. Score of 1 indicates the agent performed optimally for all episodes. 
In most of the cases no fine tuning was required (marked as '-') since the agent achieved perfect performance immediately after bottleneck insertion due to low reconstruction error. In all other cases, except for Tracker (B h = 4, B f = 4) fine-tuning ed in perfect MMN performance. The exception yielded 98% accuracy. Interestingly in that case, we see that if we only insert one of the bottlenecks at a time, we yield perfect performance, which indicates that the combined error accumulation of the two bottlenecks is responsible for the reduced performance. Moore Machine Extraction. TAB0 also gives the number of states and observations of the MMs extracted from the MMNs both before and after minimization. Recall that the number of states and obsevations before minimization is the number of distinct combinations of values observed for the bottleneck nodes during long executions of the MMN. We see that there are typically significantly more states and observations before minimization than after. This indicates that the MMN learning does not necessarily learn minimal discrete state and observation representations, though the representations accurately describe the RNN. After minimization (Section 4.3), however, in all but one case we get exact minimal machines for each MCE domain. The ground truth minimal machines that are found are shown in the Appendix (Figure 3). This shows that the MMNs learned via QBN insertions were equivalent to the true minimal machines and hence indeed optimal in most cases. The exception matches the case where the MMN did not achieve perfect accuracy. Examining these machines allows one to understand the memory use. For example, the machine for Blind has just a single observation symbol and hence its transitions cannot depend on the input observations. In contrast, the machine for Amnesia shows that each distinct observation symbol leads to the same state (and hence action choice) regardless of the source state. Thus, the policies action is completely determined by the current observation., 2017) ). Here we evaluate our approach over the 7 Tomita Grammars 4, where each grammar defines the set of binary strings that should be accepted or rejected. Since, our focus is on policy learning problems, we treat the grammars as environments with two actions'accept' and'reject'. Each episode corresponds to a random string that is either part of the particular grammar or not. The agent receives a reward of 1 if the correct action accept/reject is chosen on the last symbol of a string. RNN Training. The RNN for each grammar is comprised of a one-layer GRU with 10 hidden units, followed by a fully connected softmax layer with 2 nodes (accept/reject). Since we know the optimal policy, we again use imitation learning to train each RNN using the Adam optimizer and learning rate of 0.001. The training dataset is comprised of an equal number of accept/reject strings with lengths uniformly sampled in the range. TAB1 presents the test for the trained RNNs giving the accuracy over a test set of 100 strings drawn from the same distribution as used for training. Other than grammar #6 5, the RNNs were 100% accurate. MMN Training. Since the raw observations for this problem are from a finite alphabet, we don't need to employ a bottleneck encoder to discretize the observation features. Thus the only bottleneck learned here is b h for the hidden memory state. We use the same architecture for b h as used for the MCE experiments and conduct experiments with B h ∈ {8, 16}. 
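For concreteness, the accept/reject episode interface described above for the grammar environments can be sketched as follows; the function names and the policy interface are illustrative assumptions, and only the action chosen on the last symbol determines the reward.

```python
def grammar_episode(string, in_grammar, policy_step):
    """Run one accept/reject episode over a binary string (illustrative sketch).

    string:      the binary string for this episode, e.g. "10110"
    in_grammar:  bool, whether the string belongs to the target Tomita grammar
    policy_step: callable (hidden_state, symbol) -> (hidden_state, action),
                 where action is 'accept' or 'reject'
    """
    h, action = None, None
    for symbol in string:
        h, action = policy_step(h, symbol)
    correct = 'accept' if in_grammar else 'reject'
    return 1.0 if action == correct else 0.0
```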
These bottlenecks were then inserted in the RNNs to give MMNs. The performance of the MMNs before and after fine-tuning is shown in TAB1. In almost all cases, the MMN is able to maintain the performance of the RNN without fine-tuning. Fine-tuning provides only minor improvements in the other cases, which already achieve high accuracy. Moore Machine Extraction. Our results for MM extraction and minimization are in TAB1. In each case, we see a considerable reduction in the MM's state-space after minimization while accurately maintaining the MMN's performance. Again, this shows that the MMN learning does not directly result in minimal machines, yet the extracted machines are equivalent to the minimal machines and hence are exact solutions. In all cases, except for grammar 6, the minimized machines are identical to the minimal machines that are known for these grammars BID24. In this section, we consider applying our technique to RNNs learned for six Atari games using the OpenAI gym BID2. Unlike the above experiments, where we knew the ground truth MMs, for Atari we did not have any preconception of what the MMs might look like and how large they might be. The fact that the input observations for Atari (i.e. images) are much more complex than the inputs of the previous experiments makes it completely unclear whether we can expect similar types of results. There have been other recent efforts towards understanding Atari agents BID27 BID7. However, we are not aware of any other work which aims to extract finite state representations for Atari policies. RNN Training. All the Atari agents have the same recurrent architecture. The input observation is an image frame, preprocessed by gray-scaling, 2x down-sampling, cropping to an 80 × 80 square and normalizing the values to a fixed range. The network has 4 convolutional layers (kernel size 3, strides 2, padding 1, and 32, 32, 16, 4 filters respectively). We used Relu as the intermediate activation and Relu6 over the last convolutional layer. This is followed by a GRU layer with 32 hidden units and a fully connected layer with n+1 units, where n is the dimension of the Atari action space. We applied a softmax to the first n neurons to obtain the policy and used the last neuron to predict the value function. We used the A3C RL algorithm BID16 (learning rate 10^−4, discount factor 0.99) and computed the loss on the policy using Generalized Advantage Estimation (λ = 1.0) BID20. We report the trained RNN performance on our six games in the second column of TAB2. MMN Training. We used the same general architecture for the QBN b_f as used for the MCE experiments, but adjusted the encoder input and decoder output sizes to match the dimension of the continuous observation features f_t. For b_h, the encoder has 3 feed-forward layers with (8 × B_h), (4 × B_h) and B_h nodes. The decoder is symmetric to the encoder. For the Atari domains, the training data for b_f and b_h was generated using noisy rollouts. In particular, each training episode was generated by executing the learned RNN for a random number of steps and then executing an ε-greedy (with ε = 0.3) version of the RNN policy. This is intended to increase the diversity of the training data and we found that it helps to more robustly learn the QBNs. We trained bottlenecks for B_h ∈ {64, 128} and B_f ∈ {100, 400}, noting that these values are significantly larger than for our earlier experiments due to the complexity of Atari.
Note that while there are an enormous number of potential discrete states for these values of B h the actual number of states observed and hence the number of MMN states can be substantially smaller. Each bottleneck was trained to the point of saturation of training loss and then inserted into the RNN to give an MMN for each Atari game. MMN Performance. TAB2 gives the performance of the trained MMNs before and after finetuning for different combinations of B h and B f. We see that for 4 games, Pong, Freeway, Bowling, and Boxing, the MMNs after fine tuning either achieve identical scores to the RNN or very close (in the case of boxing). This demonstrates the ability to learn a discrete representation of the input and memory for these complex games with no impact on performance. We see that for Boxing and Pong fine tuning was required to match the RNN performance. In the case of Freeway and Bowling, fine-tuning was not required. In the remaining two games, Breakout and Space Invaders, we see that the MMNs learned after fine tuning achieve lower scores than the original RNNs, though the scores are still relatively good. On further investigation, we found that this drop in performance was due to poor reconstruction on some rare parts of the game. For example, in Breakout, after the first board is cleared, the policy needs to press the fire-button to continue, but the learned MMN does not do this and instead times out, which in less score. This motivates the investigation into more intelligent approaches for training QBNs to capture critical information in such rare, but critically important, states. MM Minimization. We see from TAB2 that before minimization, the MMs often have relatively large numbers of discrete states and observations. This is unsurprising given that we are using relatively large values of B h and B f. However, we see that after minimizing the MMNs the number of states and observations reduces by orders of magnitude, sometimes to just a single state and/or single observation. The number of states and observations in many cases are small enough to write out and analyze by hand, making them amenable to careful analysis. However, this analysis is likely to be non-trivial for moderately complex policies due to the need to understand the "meaning" of the observations and in turn of the states. Understanding Memory Use. We were surprised to observe in Atari the same three types of memory use considered for the MCE domains above. First, we see that the MM for Pong has just three states (one per action) and 10 discrete observation symbols (see Figure 2a). Most importantly we see that each observation transitions to the same state (and hence action) regardless of the current state. So we can view this MM as defining a set of rule that maps individual observations to actions with no memory necessary. In this sense, the Pong policy is analogous to the Amnesia MCE [5.1].In contrast, we see that in both Bowling and Freeway there is only one observation symbol in the minimal MM. This means that the MM actually ignores the input image when selecting actions. Rather the policies are open-loop controllers whose action just depends on the time-step rather than the observations. Thus, these policies are analogous to the Blind MCE [5.1]. Freeway has a particularly trivial policy that always takes the Up action at each time step. 
While this policy behavior could have been determined by looking at the action sequence of the rollouts, it is encouraging that our MM extraction approach also discovered this. As shown in Figure 2b, Bowling has a more interesting open-loop policy structure where it has an initial sequence of actions and then a loop is entered where the action sequence is repeated. It is not immediately obvious that this policy has such an open-loop structure by just watching the policy. Thus, we can see that the MM extraction approach we use here can provide significant additional insight. Breakout, Space Invaders and Boxing use both memory and observations based on our analysis of the MM transition structures. We have not yet attempted a full semantic analysis of the discrete observations and states for any of the Atari policies. This will require additional visualization and interaction tools and is an important direction of future work that is enabled by our approach. Motivated by the goal of better understanding memory use in RNN policies, we introduced an approach for extracting finite state Moore Machines from those policies. The key idea, bottleneck insertion, is to train Quantized Bottleneck Networks to produce binary encodings of continuous RNN memory and input features, and then insert those bottlenecks into the RNN. This yields a Moore Machine Network (MMN). From the MMN we then extract a discrete Moore machine that can then be transformed into an equivalent minimal machine for analysis and usage. Our results on two environments where the ground truth machines are known show that our approach is able to accurately extract the ground truth. We also show experiments in six Atari games, where we have no prior insight into the ground truth machines. We show that, in most cases, the learned MMNs maintain similar performance to the original RNN policies. Further, the extracted machines provide insight into the memory usage of the policies. First, we see that the number of required memory states and observations is surprisingly small. Second, we can identify cases where the policy did not use memory in a significant way (e.g. Pong) and policies that relied only on memory and ignored the observations (e.g. Bowling and Freeway). To our knowledge, this is the first work where this type of insight was reported for policies in such complex domains. A key direction for future work is to develop tools and visualizations for attaching meaning to the discrete observations and in turn states, which will allow for an additional level of insight into the policies. It is also worth considering the use of tools for analyzing finite-state machine structure to gain further insight and analyze formal properties of the policies.

An MCE is parameterized by the mode number M, a mode transition function P, a mode life span mapping ∆(m) that assigns a positive integer to each mode, and a count set C containing zero or more natural numbers. At time t the MCE hidden state is a tuple (m_t, c_t), where m_t ∈ {1, 2, . . ., M} is the current mode and c_t is the count of time-steps that the system has been consecutively in mode m_t. The mode only changes when the lifespan is reached, i.e. c_t = ∆(m_t) − 1, upon which the next mode m_{t+1} is generated according to the transition distribution P(m_{t+1} | m_t). The transition distribution also specifies the distribution over initial modes.
The agent does not directly observe the state, but rather, the agent only receives a continuous-valued observations o t ∈ at each step, based on the current state (m t, c t). If c t ∈ C then o t is drawn uniformly at random from [m t /M, (m t + 1)/M )] and otherwise o t is drawn uniformly at random from. Thus, observations determine the mode when the mode count is in C and otherwise the observations are uninformative. Note that the agent does not observe the counter. This means that to keep track of the mode for optimal performance the agent must remember the current mode and use memory to keep track of how long the mode has been active, in order to determine when it needs to "pay attention" to the current observation. We conduct experiments with the following three MCE instances 7: 1) Amnesia. This MCE uses ∆(m) = 1 for all modes, C = {0}, and uniformly samples random initial mode and transition distributions. Thus, an optimal policy will not use memory to track information from the past, since the current observation alone determines the current mode. This tests our ability to use MMN extraction to determine that a policy is purely reactive, i.e. not using memory. 2) Blind. Here we use deterministic initial mode and transition distributions, mode life spans that can be larger than 1, and C = {}. Thus, the observations provide no information about the mode and optimal performance can only be achieved by using memory to keep track of the deterministic mode sequence. This allows us to test whether the extraction of an MMN could infer that the recurrent policy is ignoring observations and only using memory. 3) Tracker. This MCE is identical to Amnesia, except that the ∆(m) values can be larger than 1. This requires an optimal policy to pay attention to observations when c t = 0 and use memory to keep track of the current mode and mode count. This is the most general instance of the environment and can in difficult problems when the number of modes and their life-spans grow. In all above instances, we used M = 4. We use'A' and'R' to denote accept and reject states, respectively. Other than Grammar 6, all machines are 100% accurate.
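To make the MCE dynamics above concrete, here is a minimal sketch of one environment step. It assumes 0-indexed modes and observations in [0, 1) so that the informative interval [mode/M, (mode+1)/M) is always valid; the exact ranges and indexing in the source are elided, so treat these choices as assumptions.

```python
import numpy as np

def mce_step(mode, count, M, delta_span, C, transition):
    """One step of a Mode Counter Environment (illustrative reconstruction).

    mode, count:   hidden state (current mode in {0..M-1}, steps spent in it)
    delta_span:    dict mode -> life span Delta(mode)
    C:             set of counts at which the observation reveals the mode
    transition:    M x M array of mode transition probabilities
    """
    # Observation: informative only when the counter is in C.
    if count in C:
        obs = np.random.uniform(mode / M, (mode + 1) / M)
    else:
        obs = np.random.uniform(0.0, 1.0)

    # The mode changes only once its life span is exhausted.
    if count == delta_span[mode] - 1:
        mode = int(np.random.choice(M, p=transition[mode]))
        count = 0
    else:
        count += 1
    return obs, mode, count   # the optimal action is the (hidden) mode itself
```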
Extracting a finite state machine from a recurrent neural network via quantization for the purpose of interpretability with experiments on Atari.
505
scitldr
Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. On-policy methods bring many benefits, such as the ability to evaluate each resulting policy. However, they usually discard all the information about the policies which existed before. In this work, we propose an adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to on-policy algorithms. To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies. The method uses trust region optimisation, while avoiding some of the common problems of algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as a trainable covariance matrix instead of a fixed one. In many cases, the method not only improves the results compared to the state-of-the-art trust region on-policy learning algorithms such as ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG.

The past few years have been marked by active development of reinforcement learning methods. Although the mathematical foundations of reinforcement learning have been known long before BID23, starting from 2013, novel deep learning techniques made it possible to solve vision-based discrete control tasks such as Atari 2600 games BID15 as well as continuous control problems BID12. Many of the leading state-of-the-art reinforcement learning methods share the actor-critic architecture BID5. Actor-critic methods separate the actor, providing a policy, and the critic, providing an approximation for the expected discounted cumulative reward or some derived quantities such as advantage functions BID2. However, despite improvements, state-of-the-art reinforcement learning still suffers from poor sample efficiency and extensive parameterisation. For most real-world applications, in contrast to simulations, there is a need to learn in real time and over a limited training period, while minimising any risk that would cause damage to the actor or the environment. Reinforcement learning algorithms can be divided into two groups: on-policy and off-policy learning. On-policy approaches (e.g., SARSA BID18, ACKTR BID28) evaluate the target policy by assuming that future actions will be chosen according to it, hence the exploration strategy must be incorporated as a part of the policy. Off-policy methods (e.g., Q-learning BID27, DDPG BID12) separate the exploration strategy, which modifies the policy to explore different states, from the target policy. The off-policy methods commonly use the concept of replay buffers to memorise the outcomes of the previous policies and therefore exploit the information accumulated through the previous iterations BID13. BID15 combined this experience replay mechanism with Deep Q-Networks (DQN), demonstrating end-to-end learning on Atari 2600 games. One limitation of DQN is that it can only operate on discrete action spaces. BID12 proposed an extension of DQN to handle continuous action spaces based on the Deep Deterministic Policy Gradient (DDPG). There, exponential smoothing of the target actor and critic weights has been introduced to ensure stability of the rewards and critic predictions over the subsequent iterations. In order to reduce the variance of policy gradients, BID20 proposed a Generalised Advantage Function.
The authors of A3C combined this advantage function learning with a parallelisation of exploration using differently trained actors in their Asynchronous Advantage Actor Critic model (A3C); however, BID26 demonstrated that such parallelisation may also have a negative impact on sample efficiency. Although some work has been performed on improving exploratory strategies for reinforcement learning BID8, it still does not solve the fundamental restriction of the inability to evaluate the actual policy, nor does it remove the necessity of providing an exploration strategy as a separate part of the method. In contrast to those, state-of-the-art on-policy methods have many attractive properties: they are able to evaluate exactly the resulting policy with no need to provide a separate exploration strategy. However, they suffer from poor sample efficiency, to a larger extent than off-policy reinforcement learning. The TRPO method BID19 introduced trust region policy optimisation to explicitly control the speed of policy evolution of Gaussian policies over time, expressed in the form of Kullback-Leibler divergence, during the training process. Nevertheless, the original TRPO method suffered from poor sample efficiency in comparison to off-policy methods such as DDPG. One way to solve this issue is by replacing the first-order gradient descent methods, standard for deep learning, with second-order natural gradient methods. BID28 used a Kronecker-factored Approximate Curvature (K-FAC) optimiser BID14 in their ACKTR method. The PPO method proposes a number of modifications to the TRPO scheme, including changing the objective function formulation and clipping the gradients. BID26 proposed another approach in their ACER algorithm: in this method, the target network is still maintained in the off-policy way, similar to DDPG BID12, while the trust region constraint is built upon the difference between the current and the target network. Related to our approach, recently a group of methods has appeared in an attempt to get the benefits of both groups of methods. BID7 propose the interpolated policy gradient, which uses a weighted sum of both the stochastic BID24 and the deterministic policy gradient BID22. BID17 propose an off-policy trust region method, Trust-PCL, which exploits off-policy data within the trust region optimisation framework, while maintaining stability of optimisation by using relative entropy regularisation. While it is common practice to use replay buffers for off-policy reinforcement learning, their existing concept is not used in combination with on-policy scenarios, which results in discarding all policies but the last. Furthermore, many on-policy methods, such as TRPO BID19, rely on the stochastic policy gradient BID24, which is restricted by stationarity assumptions, in contrast to those based on the deterministic policy gradient BID22, like DDPG BID12. In this article, we describe a novel reinforcement learning algorithm, allowing the joint use of replay buffers with trust region optimisation and leading to sample efficiency improvement. The contributions of the paper are given as follows: 1. a reinforcement learning method enabling the replay buffer concept along with on-policy data; 2. theoretical insights into the replay buffer usage within the on-policy setting are discussed; 3.
we show that, unlike state-of-the-art methods such as ACKTR BID28, PPO and TRPO BID19, a single non-adaptive set of hyperparameters such as the trust region radius is sufficient for achieving better performance on a number of reinforcement learning tasks. As we are committed to making sure the experiments in our paper are repeatable and to further ensure their acceptance by the community, we will release our source code shortly after the publication. Consider an agent interacting with the environment by responding to the states s_t, t ≥ 0, from the state space S, which are assumed to also be the observations, with actions a_t from the action space A chosen by the policy distribution π_θ(·|s_t), where θ are the parameters of the policy. The initial state distribution is ρ_0: S → R. Every time the agent produces an action, the environment gives back a reward r(s_t, a_t) ∈ R, which serves as feedback on how good the action choice was, and switches to the next state s_{t+1} according to the transitional probability P(s_{t+1}|s_t, a_t). Altogether, it can be formalised as an infinite horizon γ-discounted Markov Decision Process (S, A, P, r, ρ_0, γ), γ ∈ (0, 1) BID28 BID19. The expected discounted return BID3 is defined as per BID19: ρ(π) = E_{s_0, a_0, s_1, ...}[ Σ_{t=0}^{∞} γ^t r(s_t, a_t) ]. The advantage function A^π BID2, the value function V^π and the Q-function Q^π are defined as per BID19: A^π(s_t, a_t) = Q^π(s_t, a_t) − V^π(s_t), V^π(s_t) = E_{a_t, s_{t+1}, ...}[ Σ_{l=0}^{∞} γ^l r(s_{t+l}, a_{t+l}) ], Q^π(s_t, a_t) = E_{s_{t+1}, a_{t+1}, ...}[ Σ_{l=0}^{∞} γ^l r(s_{t+l}, a_{t+l}) ]. In all the above definitions s_0 ∼ ρ_0(s_0), a_t ∼ π(a_t|s_t), s_{t+1} ∼ P(s_{t+1}|s_t, a_t), and the policy π = π_θ is defined by its parameters θ. A straightforward approach for learning a policy is to perform unconstrained maximisation of ρ(π_θ) with respect to the policy parameters θ. However, for the state-of-the-art iterative gradient-based optimisation methods, this approach would lead to unpredictable and uncontrolled changes in the policy, which would impede efficient exploration. Furthermore, in practice the exact values of ρ(π_θ) are unknown, and the quality of its estimates depends on approximators which tend to be correct only in the vicinity of parameters of observed policies. The authors of TRPO BID19 mention that in practice the algorithm's convergence rate and the complexity of maximum KL divergence computations make it impractical to apply this method directly. Therefore, they proposed to replace the unconstrained optimisation with a similar constrained optimisation problem, the Trust Region Policy Optimisation (TRPO) problem: maximise ρ(π_θ) over θ subject to D_KL(π_θ_old, π_θ) ≤ δ, where D_KL is the KL divergence between the old and the new policy π_θ_old and π_θ respectively, and δ is the trust region radius. Despite this improvement, it needs some further enhancements to solve this problem efficiently, as we will elaborate in the next section. Many of the state-of-the-art trust region based methods, including TRPO BID19 and ACKTR BID28, use second-order natural gradient based actor-critic optimisation BID1 BID10. The motivation behind it is to eliminate the issue that the gradient descent loss, calculated as the Euclidean norm, is dependent on parametrisation. For this purpose, the Fisher information matrix is used, which, as it follows from BID10, normalises per-parameter changes in the objective function.
In the context of actor-critic optimisation it can be written as BID28 BID10, where p(τ) is the trajectory distribution p(s 0) T t=0 π(a t |s t)p(s t+1 |s t, a t): DISPLAYFORM0 However, the computation of the Fisher matrix is intractable in practice due to the large number of parameters involved; therefore, there is a need to resort to approximations, such as the Kroneckerfactored approximate curvature (K-FAC) method BID14, which has been first proposed for ACKTR in BID28. In the proposed method, as it is detailed in Algorithm 1, this optimisation method is used for optimising the policy. While the original trust regions optimisation method can only use the samples from the very last policy, discarding the potentially useful information from the previous ones, we make use of samples over several consecutive policies. The rest of the section contains definition of the proposed replay buffer concept adaptation, and then formulation and discussion of the proposed algorithm.3.1 USAGE OF REPLAY BUFFERS BID15 suggested to use replay buffers for DQN to improve stability of learning, which then has been extended to other off-policy methods such as DDPG BID12. The concept has not been applied to on-policy methods like TRPO BID19 or ACKTR BID28, which do not use of previous data generated by other policies. Although based on trust regions optimisation, ACER BID26 uses replay buffers for its off-policy part. In this paper, we propose a different concept of the replay buffers, which combines the on-policy data with data from several previous policies, to avoid the restrictions of policy distribution stationarity for stochastic policy gradient BID24. Such replay buffers are used for storing simulations from several policies at the same time, which are then utilised in the method, built upon generalised value and advantage functions, accommodating data from these policies. The following definitions are necessary for the formalisation of the proposed algorithm and theorems. We define a generalised Q-function for multiple policies {π 1, . . ., π n, . . ., π N} as DISPLAYFORM0 DISPLAYFORM1 |s n t, a n t ), a n t ∼ π n (a n t |s n t). We also define the generalised value function and the generalised advantage function as DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 P (s → x, k, π), as in BID24, is the probability of transition from the state s to the state x in k steps using policy π. Theorem 1. For the set of policies {π 1, . . ., π N} the following equality will be true for the gradient: DISPLAYFORM5 where θ are the joint parameters of all policies {π n} and b πn (s) is a bias function for the policy. The proof of Theorem 1 is given in Appendix B. Applying a particular case of the bias function b πn (s) = −V π (s) and using the likelihood ratio transformation, one can get DISPLAYFORM6 The proposed approach is summarised in Algorithm 1. The replay buffer R p contains data collected from several subsequent policies. The size of this buffer is RBP CAPACITY., increase i by the total number of timesteps in all new paths. {Stage 2} Put recorded paths into the policy paths replay buffer R p ← P. {Stage 3} For every path in R p compute the targets for the value function regression using equation FORMULA0. Ψ = Update the value function estimator parameters {Stage 4} For every path in R p, estimate the advantage function using Equation.{Stage 5} Update parameters of the policy θ for N ITER PL UPDATE iterations using the gradient from Equation FORMULA1 and a barrier function defined in Equation. 
end while

During Stage 1, the data are collected for every path until the termination state is received, but at least TIMESTEPS PER BATCH steps in total for all paths. The policy actions are assumed to be sampled from a Gaussian distribution, with the mean values predicted by the policy estimator along with the covariance matrix diagonal. The covariance matrix output was inspired by the EPG paper BID4, although the idea is different. At Stage 2, the obtained data for every policy are saved in the policy replay buffer R_p. At Stage 3, the regression of the value function is trained using the Adam optimiser BID11 with step size VF STEP SIZE for N ITER VF UPDATE iterations. For this regression, the sum-of-squares loss function is used. The value function target values are computed for every state s_t for every policy in the replay buffer using the actual sampled policy values, where t_max is the maximum policy step index: V̂(s_t) = Σ_{t'=t}^{t_max} γ^{t'−t} r(s_{t'}, a_{t'}). During Stage 4, we perform the advantage function estimation. BID20 proposed the Generalised Advantage Estimator for the advantage function A^π(s_t, a_t) as follows: Â^(k)(s_t, a_t) = −Ṽ^π(s_t) + r(s_t, a_t) + γ r(s_{t+1}, a_{t+1}) + ... + γ^{k−1} r(s_{t+k−1}, a_{t+k−1}) + γ^k Ṽ^π(s_{t+k}), combined as Â^{GAE}(s_t, a_t) = (1 − λ)(Â^(1) + λ Â^(2) + λ^2 Â^(3) + ...). Here k > 0 is a cut-off value, defined by the length of the sequence of occurred states and actions within the MDP, λ ∈ [0, 1] is an estimator parameter, and Ṽ^π(s_t) is the approximation for the value function V^π(s_t), with the approximation targets defined above. As proved in BID20, after rearrangement this results in the generalised advantage function estimator Â^{GAE}(s_t, a_t) = Σ_{l=0}^{k−1} (γλ)^l δ^V_{t+l}, where δ^V_t = r(s_t, a_t) + γ Ṽ^π(s_{t+1}) − Ṽ^π(s_t). For the proposed advantage function (see Equation 11), the estimator could be defined similarly to BID20, with a separate value function approximation for each policy in the buffer. However, this would mean the estimation of multiple value functions, which diminishes the replay buffer idea. To avoid it, we modify this estimator for the proposed advantage function so that it uses the single generalised value function shared across all policies in the buffer. Theorem 2 (proved in Appendix C) characterises the difference between the two estimators: it shows that the difference is dependent on the difference between the conventional and the generalised value functions; given the continuous value function approximator, it reveals that the closer the policies are, within a few trust region radii, the smaller the bias will be. During Stage 5, the policy function is approximated, using the K-FAC optimiser BID14 with the constant step size PL STEP SIZE. As one can see from the description, and differently from ACKTR, we do not use any adaptation of the trust region radius and/or optimisation algorithm parameters. Also, the output parameters include the diagonal of the (diagonal) policy covariance matrix. The elements of the covariance matrix, for the purpose of efficient optimisation, are restricted to universal minimum and maximum values MIN COV EL and MAX COV EL. As an extension of BID20 and following Theorem 1 with the substitution of the likelihood ratio, the policy gradient estimation is defined by substituting the advantage estimator Ã^{π_n} into the likelihood-ratio form of the gradient. To practically implement this gradient, we substitute the parameters θ_π, derived from the latest policy for the replay buffer, instead of the joint θ parameters, assuming that the parameters would not deviate far from each other due to the trust region restrictions; it is still possible to calculate the estimation of Ã^{π_n}(s^n_t, a^n_t) for each policy using the estimator above, as these policies are observed.
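The sketch below illustrates the bookkeeping behind Stages 2–4: a buffer holding whole batches of paths from the last few consecutive policies, Monte-Carlo value targets, and a GAE-style advantage computed per path with the shared value approximator. The class and function names, and the zero terminal bootstrap at path cut-offs, are simplifying assumptions rather than the paper's exact implementation.

```python
import numpy as np
from collections import deque

class PolicyReplayBuffer:
    """Stores rollouts from the last `capacity` consecutive policies.

    Unlike a step-level off-policy buffer, entries are whole batches of paths,
    one batch per policy iteration, so targets and advantages can be computed
    per path with the policy that generated it.
    """
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest policy's paths drop out first

    def add_policy_paths(self, paths):
        self.buffer.append(paths)              # `paths`: list of trajectory dicts

    def all_paths(self):
        return [path for policy_paths in self.buffer for path in policy_paths]

def discounted_value_targets(rewards, gamma):
    """Stage 3 targets: V_hat(s_t) = sum_{t' >= t} gamma^(t'-t) * r_t'."""
    targets = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        targets[t] = running
    return targets

def gae_advantages(rewards, values, gamma, lam):
    """Stage 4: Generalised Advantage Estimation over one path.

    `values` are V_tilde(s_0..s_T) from the shared value approximator; the
    bootstrap beyond the last recorded state is taken as 0 for simplicity.
    """
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        next_value = values[t + 1] if t + 1 < T else 0.0
        delta = rewards[t] + gamma * next_value - values[t]   # TD residual
        last = delta + gamma * lam * last
        adv[t] = last
    return adv
```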
For the constrained optimisation we add a linear barrier function to the objective ρ(θ), with a barrier parameter α > 0 and θ_old denoting the parameters of the policy on the previous iteration. Besides removing the necessity of heuristic estimation of the optimisation parameters, it also conforms with the theoretical propositions shown in prior work and, while our approach is proposed independently, pursues similar ideas of using an actual constrained optimisation method instead of changing the gradient step size parameters as per BID19. The networks' architectures correspond to the OpenAI Baselines ACKTR implementation, which has been implemented by the ACKTR authors BID28. The only departure from the proposed architecture is the diagonal covariance matrix outputs, which are present, in addition to the mean output, in the policy network. In order to provide the experimental evidence for the method, we have compared it with the on-policy ACKTR BID28, PPO and TRPO BID19 methods, as well as with the off-policy DDPG BID12 method on the MuJoCo BID25 robotic simulations. The technical implementation is described in Appendix A. (Figure captions from this section: MuJoCo BID25, comparison with TRPO BID19, ACKTR BID28 and PPO; MuJoCo BID25, comparison between the proposed algorithm and DDPG BID12.) In contrast to those methods, the method shows that the adaptive values for the trust region radius can be advantageously replaced by a fixed value in combination with the trainable policy distribution covariance matrix, thus reducing the number of necessary hyperparameters. The results for ACKTR for the tasks HumanoidStandup, Striker and Thrower are not included, as the baseline ACKTR implementation diverged at the first iterations with the predefined parameterisation. PPO results are obtained from the baselines implementation PPO1. Figure 2 compares results for different replay buffer sizes; the size of the replay buffers reflects the number of policies in it and not actions (i.e. buffer size 3 means data from three successive policies in the replay buffer). We see that in most of the cases, the use of replay buffers shows a performance improvement against those with replay buffer size 1 (i.e., no replay buffer, with only the current policy used for the policy gradient); substantial improvements can be seen for the HumanoidStandup task. Figure 3 shows the performance comparison with the DDPG method BID12. In all the tasks except HalfCheetah and Humanoid, the proposed method outperforms DDPG. For HalfCheetah, the versions with a replay buffer marginally overcome the one without. It is also remarkable that the method demonstrates stable performance on the tasks HumanoidStandup, Pusher, Striker and Thrower, on which DDPG failed (and these tasks were not included in the DDPG article). The paper combines replay buffers and on-policy data for reinforcement learning. Experimental results on various tasks from the MuJoCo suite BID25 show significant improvements compared to the state of the art. Moreover, we proposed a replacement of the heuristically calculated trust region parameters with a single fixed hyperparameter, which also reduces the computational expenses, and a trainable diagonal covariance matrix. The proposed approach opens the door to using a combination of replay buffers and trust regions for reinforcement learning problems. While it is formulated for continuous tasks, it is possible to reuse the same ideas for discrete reinforcement learning tasks, such as Atari games.
The parameters of Algorithm 1, used in the experiment, are given in Table 1; the parameters were initially set, where possible, to the ones taken from the state-of-the-art trust region approach implementation BID28, and then some of them have been changed based on the experimental evidence. As the underlying numerical optimisation algorithms are out of the scope of the paper, the parameters of K-FAC optimiser from have been used for the experiments; for the Adam algorithm BID11, the default parameters from Tensorflow BID0 implementation (β 1 = 0.9, β 2 = 0.999, = 1 · 10 −8) have been used. The method has been implemented in Python 3 using Tensorflow BID0 as an extension of the OpenAI baselines package. The neural network for the control experiments consists of two fully connected layers, containing 64 neurons each, following the OpenAI ACKTR network implementation Proof. Extending the derivation from Sutton et al. FORMULA1, one can see that: DISPLAYFORM0 Then, Proof. The difference between the two k-th estimators is given as DISPLAYFORM1 By substituting this into the GAE estimator difference one can obtain ∆à πn (s t, a t) = (1 − λ)(γ∆V 1 + λγ 2 ∆V 2 + λ 2 γ 3 ∆V 3 +... DISPLAYFORM2
We investigate the theoretical and practical evidence of on-policy reinforcement learning improvement by reusing the data from several consecutive policies.
506
scitldr
Convolutional Neural Networks (CNN) have been successful in processing data signals that are uniformly sampled in the spatial domain (e.g., images). However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss. Moreover, signals can exist in different topological structures as, for example, points, lines, surfaces and volumes. It has been challenging to analyze signals with mixed topologies (for example, point cloud with surface mesh). To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms (NUFT) to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error. The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum. Our representation has four distinct advantages: the process causes no spatial sampling error during initial sampling, the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad-hoc fashion, and the representation allows weighted meshes where each element has a different weight (i.e., texture) indicating local properties. We achieve good on-par with state-of-the-art for 3D shape retrieval task, and new state-of-the-art for point cloud to surface reconstruction task. We present a unifying and novel geometry representation for utilizing Convolutional Neural Networks (CNNs) on geometries represented on weighted simplex meshes (including textured point clouds, line meshes, polygonal meshes, and tetrahedral meshes) which preserve maximal shape information based on the Fourier transformation. Most methods that leverage CNNs for shape learning preprocess these shapes into uniform-grid based 2D images (rendered multiview images) or 3D images (binary voxel or Signed Distance Function (SDF)). However, rendered 2D images do not preserve the 3D topologies of the original shapes due to occlusions and the loss of the third spatial dimension. Binary voxels and SDF representations under low resolution suffer big aliasing errors and under high resolution become memory inefficient. Loss of information in the input bottlenecks the effectiveness of the downstream learning process. Moreover, it is not clear how a weighted mesh where each element is weighted by a different scalar or vector (i.e., texture) can be represented by binary voxels and SDF. Mesh and graph based CNNs perform learning on the manifold physical space or graph spectrum, but generality across topologies remains challenging. In contrast to methods that operate on uniform sampling based representations such as voxel-based and view-based models, which suffer significant representational errors, we use analytical integration to precisely sample in the spectral domain to avoid sample aliasing errors. Unlike graph spectrum based methods, our method naturally generalize across input data structures of varied topologies. Using our representation, CNNs can be directly applied in the corresponding physical domain obtainable by inverse Fast Fourier Transform (FFT) due to the equivalence of the spectral and physical domains. 
This allows for the use of powerful uniform Cartesian grid based CNN backbone architectures (such as DLA BID40, ResNet ) for the learning task on arbitrary geometrical signals. Although the signal is defined on a simplex mesh, it is treated as a signal in the Euclidean space instead of on a graph, differentiating our framework from graph-based spectral learning techniques which have significant difficulties generalizing across topologies and unable to utilize state-of-the-art Cartesian CNNs. We evaluate the effectiveness of our shape representation for deep learning tasks with three experiments: a controlled MNIST toy example, the 3D shape retrieval task, and a more challenging 3D point cloud to surface reconstruction task. In a series of evaluations on different tasks, we show the unique advantages of this representation, and good potential for its application in a wider range of shape learning problems. We achieve state-of-the-art performance among non-pre-trained models for the shape retrieval task, and beat state-of-the-art models for the surface reconstruction task. The key contributions of our work are as follows:• We develop mathematical formulations for performing Fourier Transforms of signals defined on a simplex mesh, which generalizes and extends to all geometries in all dimensions. (Sec.3)• We analytically show that our approach computes the frequency domain representation precisely, leading to much lower overall representational errors. (Sec. 3)• We empirically show that our representation preserves maximal shape information compared to commonly used binary voxel and SDF representations. (Sec. 4.1)• We show that deep learning models using CNNs in conjunction with our shape representation achieves state-of-the-art performance across a range of shape-learning tasks including shape retrieval (Sec. 4.2) and point to surface reconstruction (Sec. 4.3) DISPLAYFORM0 Index of the n-th element among a total of N elements Ω j n Domain of n-th element of order j x Cartesian space coordinate vector. DISPLAYFORM1 Imaginary number unit Shape learning involves the learning of a mapping from input geometrical signals to desired output quantities. The representation of geometrical signals is key to the learning process, since on the one hand the representation determines the learning architectures, and, on the other hand, the richness of information preserved by the representation acts as a bottleneck to the downstream learning process. While data representation has not been an open issue for 2D image learning, it is far from being agreed upon in the existing literature for 3D shape learning. The varied shape representations used in 3D machine learning are generally classified as multiview images BID28 BID26 BID2, volumetric voxels BID38 BID14 BID37 BID0, point clouds BID19 BID24 BID36, polygonal meshes BID3 BID34 BID15 BID12, shape primitives BID42 BID39, and hybrid representations (Dai & Nießner, 2018).Our proposed representation is closest to volumetric voxel representation, since the inverse Fourier Transform of the spectral signal in physical domain is a uniform grid implicit representation of the shape. However, binary voxel representation suffers from significant aliasing errors during the uniform sampling step in the Cartesian space BID16. Using boolean values for de facto floating point numbers during CNN training is a waste of information processing power. 
Also, the primitive-in-cell test for binarization requires arbitrary grouping in cases such as having multiple points or planes in the same cell BID32. Signed Distance Function (SDF) or Truncated Signed Distance Function (TSDF) ) provides localization for the shape boundary, but is still constrained to linear surface localization due to the linear interpolation process for recovering surfaces from grids. Our proposed representation under Fourier basis can find nonlinear surface boundaries, achieving subgrid-scale accuracy (See FIG1). Cartesian CNNs are the most ubiquitous and mature type of learning architecture in Computer Vision. It has been thoroughly studied in a range of problems, including image recognition BID5 BID27 ), object detection BID21 BID21, and image segmentation BID11 ). In the spirit of 2D image-based deep learning, Cartesian CNNs have been widely used in shape learning models that adopted multiview shape representation BID28 BID26 BID2 BID29 BID17 BID33. Also, due to its straightforward and analogous extension to 3D by swapping 2D convolutional kernels with 3D counterparts, Cartesian CNNs have also been widely adopted in shape learning models using volumetric representations BID38 BID14 BID37 BID0. However, the dense nature of the operations makes it inefficient for sparse 3D shape signals. To this end, improvements to Cartesian CNNs have been made using space partitioning tree structures, such as Quadtree in 2D and Octree in 3D BID2 BID31. These Cartesian CNNs can leverage backbone CNN architectures being developed in related computer vision problems and thus achieve good performance. Since the physical domain representation in this study is based on Cartesian uniform grids, we directly use Cartesian CNNs. Graph CNNs utilize input graph structure for performing graph convolutions. They have been developed to engage with general graph structured data BID1; ). BID39 used spectral CNNs with the eigenfunctions of the graph Laplacian as a basis. However, the generality of this approach across topologies and geometris is still challenging since consistency in eigenfunction basis is implied. Specially Designed Neural Networks have been used to perform learning on unconventional data structures. For example, Qi et al. (2017a) designed a Neural Network architecture for points that achieves invariances using global pooling, with follow-up work (b) using CNNsinspired hiearchical structures for more efficient learning. BID13 performed convolution directly on the shape manifold and Cohen et al. FORMULA2 designed CNNs for the spherical domain and used it for 3D shapes by projecting the shapes onto the bounding sphere. The original work on analytical expressions for Fourier transforms of 2D polygonal shape functions is given by BID6. Improved and simpler calculation methods have been suggested in. A 3D formulation is proposed by BID41. Theoretical analyses have been performed for the Fourier analysis of simplex domains BID30 BID7 ) and BID23 designed approximation methods for Fast Fourier Transform of polynomial functions defined on simplices. describe shape with elliptic Fourier descriptors for level set-based segmentation and tracking. There has also been a substantial literature on fast non-uniform Fourier transform methods for discretely sampled signal . However we are the first to provide a simple general expression for a j-simplex mesh, an algorithm to perform the transformation, and illustrations of their applicability to deep learning problems. 
Almost all discrete geometric signals can be abstracted into weighted simplicial complexes. A simplicial complex is a set composed of points, line segments, triangles, and their d-dimensional counterparts. We call a simplicial complex consisting solely of j-simplices as a homogeneous simplicial j-complex, or a j-simplex mesh. Most popular geometric representations that the research community is familiar with are simplex meshes. For example, the point cloud is a 0-simplex mesh, the triangular mesh is a 2-simplex mesh, and the tetrahedral mesh is a 3-simplex mesh. A j-simplex mesh consists of a set of individual elements, each being a j-simplex. If signal is non-uniformly distributed over the simplex, we can define a piecewise constant j-simplex function over the j-simplex mesh. We call this a weighted simplex mesh. Each element has a distinct signal density. J-simplex function For the n-th j-simplex with domain Ω j n, we define a density function f j n (x). For example, for some Computer Vision and Graphics applications, a three-component density value can be defined on each element of a triangular mesh for its RGB color content. For scientific applications, signal density can be viewed as mass or charge or other physical quantity. DISPLAYFORM0 The piecewise-constant j-simplex function consisting of N simplices is therefore defined as the superposition of the element-wise simplex function. Using the linearity of the integral in the Fourier transform, we can decompose the Fourier transform of the density function on the j-simplex mesh to be a weighted sum of the Fourier transform on individual j-simplices. DISPLAYFORM1 We present a general formula for performing the Fourier transform of signal over a single j-simplex. We provide detailed derivation and proof for j = 0, 1, 2, 3 in the supplemental material. DISPLAYFORM0 We define γ j n to be the content distortion factor, which is the ratio of content between the simplex over the domain Ω j n and the unit orthogonal j-simplex. Content is the j-dimensional analogy of the 3-dimensional volume. The unit orthogonal j-simplex is defined as a j-simplex with one vertex at the Cartesian origin and all edges adjacent to the origin vertex to be pairwise orthogonal and to have unit length. Therefore from Equation the final general expression for computing the Fourier transform of a signal defined on a weighted simplex mesh is: DISPLAYFORM1 For computing the simplex content, we use the Cayley-Menger Determinant for a general expression: DISPLAYFORM2 For the matrixB j n, each entry d 2 mn represents the squared distance between nodes m and n. The matrix is of size (j + 2) × (j + 2) and is symmetrical. Since the unit orthogonal simplex has content of C j I = 1/j!, the content distortion factor γ j n can be calculated by: DISPLAYFORM3 Auxiliary Node Method: Equation provides a mean of computing the Fourier transform of a simplex with uniform signal density. However, how do we compute the Fourier transform of polytopes (i.e., polygons in 2D, polyhedra in 3D) with uniform signal density efficiently? Here, we introduce the auxiliary node method (AuxNode) that utilizes signed content for efficient computing. 
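For concreteness, the content and content-distortion-factor computation just described can be sketched as follows; the function names and the NumPy implementation are ours (not the authors' code), and the sign and scaling constants follow the standard Cayley-Menger identity rather than the garbled equation references above.

```python
import numpy as np
from math import factorial

def simplex_content(vertices):
    """Unsigned content (j-dimensional volume) of a j-simplex from its
    vertex coordinates, via the Cayley-Menger determinant."""
    vertices = np.asarray(vertices, dtype=float)
    j = len(vertices) - 1                                # simplex order
    diff = vertices[:, None, :] - vertices[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)                      # squared pairwise distances
    B = np.ones((j + 2, j + 2))                          # bordered Cayley-Menger matrix
    B[0, 0] = 0.0
    B[1:, 1:] = d2
    c2 = (-1) ** (j + 1) / (2 ** j * factorial(j) ** 2) * np.linalg.det(B)
    return np.sqrt(max(c2, 0.0))

def content_distortion_factor(vertices):
    """gamma_j = simplex content / content of the unit orthogonal j-simplex,
    the latter being C_I = 1/j!."""
    j = len(vertices) - 1
    return simplex_content(vertices) * factorial(j)

# Example: a triangle (2-simplex) embedded in 3D.
tri = [[0, 0, 0], [1, 0, 0], [0, 1, 1]]
print(simplex_content(tri))              # ~0.707, the triangle area
print(content_distortion_factor(tri))    # ~1.414
```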
We show that for a solid j-polytope represented by a watertight (j − 1)-simplex mesh, we can compute the Fourier transform of the entire polytope by traversing each of the elements in its boundary (j − 1)-simplex mesh exactly once BID41 ).The auxiliary node method performs Fourier transform over the the signed content bounded by an auxilliary node (a convenient choice being the origin of the Cartesian coordinate system) and each (j − 1)-simplex on the boundary mesh. This forms an auxiliary j-simplex: DISPLAYFORM4 where N is the number of (j − 1)-simplices in the boundary mesh. However due to the overlapping of these auxiliary j-simplices, we need a means of computing the sign of the transform for the overlapping regions to cancel out. Equation provides a general expression for computing the unsigned transform for a single j-simplex. It is trivial to show that since the ordering of the nodes does not affect the determinant in Equation FORMULA6, it gives the unsigned content value. Therefore, to compute the Fourier transform of uniform signals in j-polytopes represented by its watertight (j − 1)-simplex mesh using the auxiliary node method, we modify Equation: s n γ j n is the signed content distortion factor for the n th auxiliary j-simplex where s n ∈ {−1, 1}. For practical purposes, assume that the auxiliary j-simplex is in R d where d = j. We can compute the signed content distortion factor using the determinant of the Jacobian matrix for parameterizing the auxiliary simplex to a unit orthogonal simplex: DISPLAYFORM5 DISPLAYFORM6 (a) Experiment setup 4 8 12 16 4 8 12 16 4 8 12 16 4 8 12 16 20 24 28 20 24 28 20 24 28Since this method requires the boundary simplices to be oriented, the right-hand rule can be used to infer the correct orientation of the boundary element. For 2D polygons, it requires that the watertight boundary line mesh be oriented in a counter-clockwise fashion. For 3D polytopes, it requires that the face normals of boundary triangles ing from the right-hand rule be consistently outward-facing. Algorithmic implementation: Several efficiencies can be exploited to achieve fast runtime and high robustness. First, since the general expression Equation involves division and is vulnerable to division-by-zero errors (that is not a singularity since it can be eliminated by taking the limit), add a minor random noise to vertex coordinates as well as to the k = 0 frequency mode for robustness. Second, to avoid repeated computation, the value should be cached in memory and reused, but caching all σ and e −iσ values for all nodes and frequencies is infeasible for large mesh and/or high resolution output, thus the Breadth-First-Search (BFS) algorithm should be used to traverse the vertices for efficient memory management. In this section, we will discuss the experiment setup, and we defer the details of our model architecture and training process to the supplementary material since it is not the focus of this paper. We use the MNIST experiment as a first example to show that shape information in the input significantly affects the efficacy of the downstream learning process. Since the scope of this research is on efficiently learning from nonuniform mesh-based representations, we compare our method with the state of the art in a slightly different scenario by treating MNIST characters as polygons. 
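As a small 2D illustration of the auxiliary-node idea (area only, i.e., the zero-frequency mode, not the full Fourier transform), the sketch below joins each oriented boundary edge of a polygon with the origin to form auxiliary 2-simplices and sums their signed contents; contributions from overlapping regions cancel and the polygon area is recovered. In the NUFT setting each auxiliary simplex would instead contribute its Fourier transform weighted by the signed content distortion factor s_n * gamma_n; the names used here are illustrative.

```python
import numpy as np

def polygon_area_via_auxiliary_node(boundary_edges):
    """Sum of signed contents of auxiliary triangles (origin, a, b), one per
    oriented boundary edge (a, b); equals the polygon area for a
    counter-clockwise watertight boundary."""
    total = 0.0
    for a, b in boundary_edges:
        # Signed content = 0.5 * det([a; b]), the Jacobian determinant of the
        # map from the unit orthogonal 2-simplex to the triangle (0, a, b).
        total += 0.5 * (a[0] * b[1] - a[1] * b[0])
    return total

# Unit square with counter-clockwise boundary orientation.
verts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
edges = np.stack([verts, np.roll(verts, -1, axis=0)], axis=1)
print(polygon_area_via_auxiliary_node(edges))   # 1.0
```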
We choose this experiment as a first toy example since it is easy to control the learning architecture and input resolution to highlight the effects of shape representation on deep learning performance. We pre-process the original MNIST raw pixel images into polygons, which are represented by watertight line meshes on their boundaries. The polygonized digits are converted into (n × n) binary pixel images and into distance functions by uniformly sampling the polygon at the (n × n) sample locations. For NUFT, we first compute the lowest (n × n) Fourier modes for the polygonal shape function and then use an inverse Fourier transform to acquire the physical domain image. We also compare the with the raw pixel image downsampled to different resolutions, which serves as an oracle for information ceiling. Then we perform the standard MNIST classification experiment on these representations with varying resolution n and with the same network architecture. The experiment are presented in FIG2. It is evident that binary pixel representation suffers the most information loss, especially at low resolutions, which leads to rapidly declining performance. Using the distance function representation preserves more information, but underperforms our NUFT representation. Due to its efficient information compression in the Spectral domain, NUFT even outperforms the downsampled raw pixel image at low resolutions. Shape retrieval is a classic task in 3D shape learning. SHREC17 BID24 which is based on the ShapeNet55 Core dataset serves as a compelling benchmark for 3D shape retrieval performance. We compare the retrieval performance of our model utilizing the NUFT-surface (NUFT-S) and NUFT-volume (NUFT-V) at various resolutions against state-of-the-art shape retrieval algorithms to illustrate its potential in a range of 3D learning problems. We performed the experiments on the normalized dataset in SHREC17. Our model utilizes 3D DLA BID40 as a backbone architecture. Results: Results from the experiment are tabulated in TAB1. For the shape retrieval task, most state-of-the-art methods are based on multi-view representation that utilize a 2D CNN pretrained on additional 2D datasets such as ImageNet. We have achieved on par with, though not better than, state-of-the-art pretrained 2D models. We outperform other models in this benchmark that have not been pre-trained on additional data. We also compared NUFT-volume and NUFT-surface representations in FIG3. Interestingly NUFT-volume and NUFT-surface representations lead to similar performances under the same resolution. Table 3: Quantitative comparison of surface reconstruction methods. The metrics above are distances, hence lower value represents better performance. Best is highlighted in bold. We achieve better than the current stateof-the-art method by a sizable margin, and our are robust to noise in the input. We further illustrate the advantages of our representation with a unique yet important task in computational geometry that has been challenging to address with conventional deep learning techniques: surface reconstruction from point cloud. The task is challenging for deep learning in two aspects: First, it requires input and output signals of different topologies and structures (i.e., input being a point cloud and output being a surface). Second, it requires precise localization of the signal in 3D space. 
Using our NUFT-point representation as input and NUFT-surface representation as output, we can frame the task as a 3D image-to-image translation problem, which is easy to address using analogous 2D techniques in 3D. We use the U-Net BID22 architecture and train it with a single L 2 loss between output and ground truth NUFT-surface representation. Experiment Setup: We train and test our model using shapes from three categories of ShapeNet55 Core dataset (car, sofa, bottle). We trained our model individually for these three categories. As a pre-processing step we removed faces not visible from the exterior and simplified the mesh for faster conversion to NUFT representation. For the input point cloud, we performed uniform point sampling of 3000 points from each mesh and converted the points into the NUFT-point representation (128 3). Then, we converted the triangular mesh into NUFT-surface representation (128 3). At test time, we post-process the output NUFT-surface implicit function by using the marching cubes algorithm to extract 0.5 contours. Since the extracted mesh has thickness, we further shrink the mesh by moving vertices to positions with higher density values while preserving the rigidity of each face. Last but not least, we qualitatively and quantitatively compare the performance by showing the reconstructed mesh against from the traditional Poisson Surface Reconstruction (PSR) method BID4 at various tree depths (5 and 8) and the Deep Marching Cubes (DMC) algorithm BID9. For quantitative comparison, we follow the literature BID25 and use Chamfer distance, Accuracy and Completeness as the metrics for comparison. For comparison with BID9, we also test the model with noisy inputs (Gaussian of sigma 0.15 voxel-length under 32 resolution), computed distance metrics after normalizing the models to the range of. Table 3 for quantitative comparisons with competing algorithms on the same task, and Figures 5 and 6 for visual comparisons. GT stands for Ground Truth. We achieve new state-of-the-art in the point to surface reconstruction task, due to the good localization properties of the NUFT representations and its flexibility across geometry topologies. We present a general representation for multidimensional signals defined on simplicial complexes that is versatile across geometrical deep learning tasks and maximizes the preservation of shape information. We develop a set of mathematical formulations and algorithmic tools to perform the transformations efficiently. Last but not least, we illustrate the effectiveness of the NUFT representation with a well-controlled example (MNIST polygon), a classic 3D task (shape retrieval) and a difficult and mostly unexplored task by deep learning (point to surface reconstruction), achieving new state-of-the-art performance in the last task. In , we offer an alternative representation for performing CNN based learning on geometrical signals that shows great potential in various 3D tasks, especially tasks involving mixed-topology signals. Without loss of generality assume that the j-simplex is defined in R d space where d ≥ j, since it is not possible to define a j-simplex in a space with dimensions lower than j. For most cases below (except j = 0) we will parameterize the original simplex domain to a unit orthogonal simplex in R j (as shown in Figure 7). Denote the original coordinate system in R d as x and the new coordinate system in the parametric R j space as p. 
Choose the following parameterization scheme: DISPLAYFORM0 By performing the Fourier transform integral in the parametric space and restoring the by the content distortion factor γ j n we can get equivalent as the Fourier transform on the original simplex domain. Content is the generalization of volumes in arbitrary dimensions (i.e. unity for points, length for lines, area for triangles, volume for tetrahedron). Content distortion factor γ j n is the ratio between the content of the original simplex and the content of the unit orthogonal simplex in parametric space. The content is signed if switching any pair of nodes in the simplex changes the sign of the content, and it is unsigned otherwise. See subsection 3.2 for means of computing the content and the content distortion factor. Points have spatial position x but no size (length, area, volume), hence it can be mathematically modelled as a delta function. The delta function (or Dirac delta function) has point mass as a function equal to zero everywhere except for zero and its integral over entire real space is one, i.e.: DISPLAYFORM0 For unit point mass at location x n: DISPLAYFORM1 Indeed, for 0-simplex, we have recovered the definition of the Discrete Fourier Transform (DFT). For a line with vertices at location x 1, x 2 ∈ R d, by parameterizing it onto a unit line, we get: DISPLAYFORM0 = iγ Figure 7: Schematic of example for 2-simplex. Original j-simplex is parameterized to a unit orthogonal j-simplex in R j space for performing the integration. Parameterization incurs a content distortion factor γ j n which is the ratio between the original simplex and the unit-orthogonal simplex in parametric space. DISPLAYFORM1 For a triangle with vertices x 1, x 2, x 3 ∈ R d, parameterization onto a unit orthogonal triangle gives: DISPLAYFORM0 = γ 2 n e −iω·x3 1 0 dp e −ipω·(x1−x3) DISPLAYFORM1 A.4 TETRAHEDRON: 3-SIMPLEX For a tetrahedron with vertices x 1, x 2, x 3, x 4 ∈ R d, parameterization onto a unit orthogonal tetrahedron gives: DISPLAYFORM2 Besides evaluating and comparing the shape representation schemes in the context of machine learning problems, we evaluate the different representation schemes in terms of its geometrical shape information. To evaluate geometrical shape information, we first convert the original polytopal shapes into the corresponding representations and then reconstruct the shape from these representations using interpolation based upsampling followed by contouring methods. Finally, we use the mesh Intersection over Union (mesh-IoU) metric between the original and constructed mesh to quantify geometrical shape information. Mesh boolean and volume computation for 2D polygons and 3D triangular mesh can be efficiently performed with standard computational geometry methods. In three dimensions contouring and mesh extraction can be performed using marching cubes algorithm for mesh reconstruction. For binarized representation, we perform bilinear upsampling (which does not affect the final ) followed by 0.5-contouring. For our NUFT representation, we perform spectral domain upsampling (which corresponds to zero-padding of higher modes in spectral domain), followed by 0. Training Details: We train the model with batch size of 64, learning rate of 0.01 and learning rate step size of 10. We use the SGD optimizer with momentum of 0.9, weight decay of 1 × 10 −4 for 15 epochs. Model Architecture: We use DLA34 architecture with all 2D convolutions modified to be 3D convolutions. It consists of levels, with filter numers. 
Training Details: We train the model with a batch size of 64, a learning rate of 1 × 10^-3, and a learning-rate step size of 30. We use the Adam optimizer with momentum of 0.9 and weight decay of 1 × 10^-4 for 40 epochs. Model Architecture: We use a modified 3D version of the U-Net architecture consisting of 4 down-convolutions and 4 up-convolutions with skip layers; the up-convolutions use double the number of filters of the corresponding down-convolutions. Training Details: We train the model using the Adam optimizer with a learning rate of 3 × 10^-4 for 200 epochs. We use the NUFT-point representation as input and a single L2 loss between the output and the ground-truth NUFT-surface representation to train the network. We train and evaluate the model at 128^3 resolution.
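For intuition on how the NUFT-point input grid can be built from a sampled point cloud, below is a hedged 2D sketch of the 0-simplex case (where the transform reduces to a discrete Fourier sum over point locations), followed by an inverse FFT to obtain the physical-domain grid. The 2π frequency convention, the unit-square domain, and uniform per-point weights are assumptions made for illustration; the actual experiments work in 3D at 128^3 resolution.

```python
import numpy as np

def nuft_points_2d(points, weights, n_modes):
    """NUFT of a weighted 2D point cloud (0-simplex mesh):
    F(k) = sum_n w_n * exp(-2*pi*i * k . x_n), evaluated on an
    n_modes x n_modes grid of integer frequency modes; points are assumed
    to lie in the unit square [0, 1)^2."""
    k = np.fft.fftfreq(n_modes, d=1.0 / n_modes)          # integer modes
    kx, ky = np.meshgrid(k, k, indexing="ij")
    phase = -2j * np.pi * (kx[..., None] * points[:, 0]
                           + ky[..., None] * points[:, 1])
    return np.sum(weights * np.exp(phase), axis=-1)

rng = np.random.default_rng(0)
pts = rng.random((3000, 2))                  # toy point cloud
w = np.ones(len(pts)) / len(pts)
spectrum = nuft_points_2d(pts, w, n_modes=32)
image = np.real(np.fft.ifft2(spectrum))      # 32 x 32 physical-domain grid
```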
We use non-Euclidean Fourier transformation of shapes defined by a simplicial complex for deep learning, achieving significantly better results than point-based sampling techniques used in the current 3D learning literature.
Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based casual discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint. Discovering and understanding causal mechanisms underlying natural phenomena are important to many disciplines of sciences. An effective approach is to conduct controlled randomized experiments, which however is expensive or even impossible in certain fields such as social sciences and bioinformatics . Causal discovery methods that infer causal relationships from passively observable data are hence attractive and have been an important research topic in the past decades (; ;). A major class of such causal discovery methods are score-based, which assign a score S(G), typically computed with the observed data, to each directed graph G and then search over the space of all Directed Acyclic Graphs (DAGs) for the best scoring: min G S(G), subject to G ∈ DAGs. While there have been well-defined score functions such as the Bayesian Information Criterion (BIC) or Minimum Description Length (MDL) score and the Bayesian Gaussian equivalent (BGe) score for linear-Gaussian models, Problem is generally NP-hard to solve , largely due to the combinatorial nature of its acyclicity constraint with the number of DAGs increasing superexponentially in the number of graph nodes. To tackle this problem, most existing approaches rely on local heuristics to enforce the acyclicity. For example, Greedy Equivalence Search (GES) enforces acyclicity one edge at a time, explicitly checking for the acyclicity constraint when an edge is added. GES is known to find the global minimizer with infinite samples under suitable assumptions , but this is not guaranteed in the finite sample regime. There are also hybrid methods that use constraint-based approaches to reduce the search space before applying score-based methods, e.g., the max-min hill climbing method . However, this methodology lacks a principled way of choosing a problem-specific combination of score functions and search strategies. introduced a smooth characterization for the acyclicity constraint, and Problem can be formulated as a continuous optimization problem w.r.t. the weighted graph adjacency matrix by picking a proper loss function, e.g., the least squares loss. 
Subsequent works and have also adopted the evidence lower bound and the negative log-likelihood as loss functions, respectively, and used Neural Networks (NNs) to model the causal relationships. Note that the loss functions in these methods must be carefully chosen in order to apply continuous optimization methods. Unfortunately, many effective score functions, e.g., the generalized score function proposed by and the independence based score function given by, either cannot be represented in closed forms or have very complicated equivalent loss functions, and thus cannot be easily combined with this approach. We propose to use Reinforcement Learning (RL) to search for the DAG with the best score according to a predefined score function, as outlined in Figure 1. The insight is that an RL agent with stochastic policy can determine automatically where to search given the uncertainty information of the learned policy, which gets updated promptly by the stream of reward signals. To apply RL to causal discovery, we use an encoder-decoder NN model to generate directed graphs from the observed data, which are then used to compute rewards consisting of the predefined score function as well as two penalty terms to enforce acyclicity. We resort to policy gradient and stochastic optimization methods to train the weights of the NNs, and our output is the graph that achieves the best reward, among all graphs generated in the training process. Experiments on both synthetic and real datasets show that our approach has a much improved search ability without sacrificing any flexibility in choosing score functions. In particular, the proposed approach using BIC as score function outperforms GES with the same score function on linear non-Gaussian acyclic model (LiNGAM) and linear-Gaussian datasets, and also outperforms recent gradient based methods when the causal relationships are nonlinear. Constraint-based causal discovery methods first use conditional independence tests to find causal skeleton and then determine the orientations of the edges up to the Markov equivalence class, which usually contains DAGs that can be structurally diverse and may still have many unoriented edges. Examples include; that use kernel-based conditional independence criteria and the well-known PC algorithm . This class of methods involve a multiple testing problem where the tests are usually conducted independently. The testing may have conflicts and handling them is not easy, though there are certain works, e.g., , attempting to tackle this problem. These methods are also not robust as small errors in building the graph skeleton can in large errors in the inferred Markov equivalence class. Another class of causal discovery methods are based on properly defined functional causal models. Unlike constraint-based methods that assume faithfulness and identify only the Markov equivalence class, these methods are able to distinguish between different DAGs in the same equivalence class, thanks to the additional assumptions on data distribution and/or functional classes. Examples include LiNGAM (; 2011), the nonlinear additive noise model (;, and the post-nonlinear causal model (Zhang and Hyvärinen, 2009).; , other recent NN based approaches to causal discovery include that proposes causal generative NNs to functional causal modeling with a prior knowledge of initial skeleton of the causal graph and that learns causal generative models in an adversarial way but does not guarantee acyclicity. 
Recent advances in sequence-to-sequence learning have motivated the use of NNs for optimization in various domains (; ;). A particular example is the traveling salesman problem (TSP) which was revisited in the work of pointer networks . Authors proposed a recurrent NN with nonparametric softmaxes trained in a supervised manner to predict the sequence of visited cities. further proposed to use the RL paradigm to tackle the combinatorial problems due to their relatively simple reward mechanisms. It was shown that an RL agent can have a better generalization even when the optimal solutions are used as labeled data in the previous supervised approach. There are many other successful RL applications in recent years, e.g., AlphaGo , where the goal is to learn a policy for a given task. As an exception, applied RL to neural architecture search. While we use a similar idea as the RL paradigm can naturally include the search task, our work is different in the actor and reward designs: our actor is an encoder-decoder model that generates graph adjacency matrices (cf. Section 4) and the reward is tailored for causal discovery by incorporating both a score function and the acyclicity constraint (cf. Section 5.1). We assume the following model for data generating procedure, as in;. Each variable x i is associated with a node i in a d-node DAG G, and the observed value of x i is obtained as a function of its parents in the graph plus an independent additive noise n i, i.e., where x pa(i) denotes the set of variables x j so that there is an edge from x j to x i in the graph, and the noises n i are assumed to be jointly independent. We also assume causal minimality, which in this case reduces to that each function f i is not a constant in any of its arguments. Without further assumption on the forms of functions and/or noises, the above model can be identified only up to Markov equivalence class under the usual Markov and faithful assumptions (; ; in our experiments we will consider synthetic datasets that are generated from fully identifiable models so that it is practically meaningful to evaluate the estimated graph w.r.t. the true DAG. If all the functions f i are linear and the noises n i are Gaussian distributed, the above model yields the class of standard linear-Gaussian model that has been studied in ; ; ; . When the functions are linear but the noises are non-Gaussian, one can obtain the LiNGAM described in Shimizu et al. (2006; 2011) and the true DAG can be uniquely identified under favorable conditions. In this paper, we consider that all the variables x i are scalars; extending to more complex cases is straightforward with a properly defined score function. The observed data X, consisting of a number of vectors, are then sampled independently according to the above model on an unknown DAG, with fixed functions f i and fixed distributions for n i. The objective of causal discovery is to use the observed data X, which gives the empirical version of the joint distribution of x, to infer the underlying causal DAG G. Given a dataset X = {x k} m k=1 where x k denotes the k-th observed sample, we want to infer the causal graph that best describes the data generating procedure. We would like to use NNs to infer the causal graph from the observed data; specifically, we aim to design an NN based graph generator whose input is the observed data and the output is a graph adjacency matrix. 
A naive choice would be using feed-forward NNs to output d 2 scalars and then reshape them to an adjacency matrix in R d×d. However, this structure failed to produce promising , possibly because the feed-forward NNs could not provide sufficient interactions amongst variables to capture the causal relations. Motivated by recent advances in neural combinatorial optimization, particularly the pointer networks , we draw n random samples (with replacement) {x l} from X and reshape them as s: n is the vector concatenating all the i-th entries of the vectors in {x l}. In an analogy to the TSP problem, this represents a sequence of d cities lying in an n-dim space. We are concerned with generating a binary adjacency matrix A ∈ {0, 1} d×d so that the corresponding graph is acyclic and achieves the best score. In this work we consider encoder-decoder models for graph generation: Encoder We use the attention based encoder in the Transformer structure proposed by. We believe that the self-attention scheme, together with structural DAG constraint, is capable of finding the causal relations amongst variables. Other attention based models such as graph attention network (Veličković et al., 2018) may also be used, which will be considered in a future work. Denote the output of the encoder by enc i with dimension d e, corresponding to each inputx i. Decoder Our decoder generates the graph adjacency matrix in an element-wise manner, by building relationships between two encoder outputs enc i and enc j. We consider the single layer decoder where are trainable parameters and d h is the hidden dimension associated with the decoder. To generate a binary adjacency matrix A, we pass each entry g ij into a logistic sigmoid function σ(·) and then sample according to a Bernoulli distribution with probability σ(g ij), which indicates the probability of existing an edge from x i to x j. To avoid self-loop, we simply mask the (i, i)-th entry in the adjacency matrix. Other possible decoder choices include the neural tensor network model from and the bilinear model to build the pairwise relationships between encoder outputs. Another choice is the Transformer structure which generates an adjacency matrix in a row-wise manner. Empirically, we find that the single layer decoder performs the best, possibly because it contains less parameters and is easier to train to find better DAGs, while the self-attention based encoder has provided sufficient interactions amongst the variables for causal discovery. In Appendix A, we provide more details on the above decoders and their empirical with linear-Gaussian data models. In this section, we propose to use RL as the search strategy for finding the DAG with the best score, outlined in Figure 1. As one will see, the proposed method improves the search ability over traditional score-based methods and also allows for flexible score functions under the acyclicity constraint. Score Function In this work, we consider existing score functions to construct the reward that will be maximized by an RL agent. Often score-based methods assume a parametric model for causal relationships (e.g., linear-Gaussian equations or multinomial distribution), which introduces a set of parameters θ. Among all score functions that can be directly included here, we focus on the BIC score that is not only consistent but also locally consistent for its decomposability . The BIC score for a given directed graph G is whereθ is the maximum likelihood estimator and d θ denotes the dimensionality of the parameter θ. 
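Returning to the single layer decoder described earlier in this section, a minimal PyTorch sketch is given below: each ordered pair of encoder outputs is scored by u^T tanh(W1 enc_i + W2 enc_j), the scores are mapped to edge probabilities with a sigmoid, the diagonal is masked to forbid self-loops, and a binary adjacency matrix is sampled from the resulting Bernoulli distributions. Class and argument names are ours, and the original implementation is in TensorFlow.

```python
import torch
import torch.nn as nn

class SingleLayerDecoder(nn.Module):
    def __init__(self, d_enc, d_hidden):
        super().__init__()
        self.W1 = nn.Linear(d_enc, d_hidden, bias=False)
        self.W2 = nn.Linear(d_enc, d_hidden, bias=False)
        self.u = nn.Linear(d_hidden, 1, bias=False)

    def forward(self, enc):                       # enc: (batch, d, d_enc)
        # g_ij = u^T tanh(W1 enc_i + W2 enc_j), computed for all pairs (i, j).
        h = torch.tanh(self.W1(enc).unsqueeze(2) + self.W2(enc).unsqueeze(1))
        logits = self.u(h).squeeze(-1)            # (batch, d, d)
        d = enc.shape[1]
        mask = 1.0 - torch.eye(d, device=enc.device)
        probs = torch.sigmoid(logits) * mask      # zero probability of self-loops
        dist = torch.distributions.Bernoulli(probs=probs)
        A = dist.sample()                         # binary adjacency matrices
        log_prob = (dist.log_prob(A) * mask).sum(dim=(1, 2))
        return A, log_prob
```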
We assume i.i.d. Gaussian additive noises throughout this paper. If we apply linear models to each causal relationship and letx k i be the corresponding estimate for x k i, the i-th entry in the k-th observed sample, then we have the BIC score being (up to some additive constant) where 2 denotes the residual sum of squares for the i-th variable. The first term in Eq. is equivalent to the log-likelihood objective used by GraN-DAG and the second term adds penalty on the number of edges in the graph G. Further assuming that the noise variances are equal (despite the fact that they may be different), we have We notice that i RSS i is the least squares loss used in NOTEARS . Besides assuming linear models, other regression methods can also be used to estimate x k i; in Section 6, we will use quadratic regression and like Gaussian Process Regression (GPR) to model causal relationships with the observed data for our experiments. Acyclicity A remaining issue is the acyclicity constraint. Other than GES that explicitly checks for acyclicity each time an edge is added, we add penalty terms w.r.t. acyclicity to the score function to enforce acyclicity in an implicit way and allow the generated graph to change more than one edges at each iteration. In this work, we use a recent from: a directed graph G with binary adjacency matrix A is acyclic if and only if where e A is the matrix exponential of A. We find that h(A), which is non-negative, can be small for certain cyclic graphs and its minimum over all non-DAGs is not easy to compute. Consequently, we would require a very large penalty weight to obtain exact DAGs if only h(A) is used. As such, we add another penalty term, the indicator function w.r.t. acyclicity, to induce exact DAGs. Here we remark that other functions, e.g., the total length all cyclic paths in the graph, which compute some'distance' from a directed graph to DAGs and need not be smooth, may also be used. Reward Our reward incorporates both the score function and the acyclicity constraint: where I(·) denotes the indicator function and λ 1, λ 2 ≥ 0 are two penalty parameters. It is not hard to see that the larger λ 1 and λ 2 are, the more likely a generated graph with a high reward is acyclic. We then aim to maximize the reward over all possible directed graphs, or equivalently, we have An interesting question is whether this new formulation is equivalent to the original problem with hard acyclicity constraint. Fortunately, the following proposition guarantees that Problems and are equivalent with properly chosen λ 1 and λ 2, which can be verified by showing that a minimizer of one problem is also a solution to the other. A proof is provided in Appendix B for completeness. Proposition 1. Let h min > 0 be the minimum of h(A) over all directed cyclic graphs, i.e., h min = min G / ∈DAGs h(A). Let S * denote the optimal score achieved by some DAG in Problem. Assume that S L ∈ R is a lower bound of the score function over all possible directed graphs, i.e., S L ≤ min G S(G), and S U ∈ R is an upper bound on the optimal score with S * ≤ S U. Then Problems and are equivalent if For practical use, we need to find respective quantities in order to choose proper penalty parameters. An upper bound S U can be easily found by drawing some random DAGs or using the from other methods like NOTEARS. A lower bound S L depends on the particular score function. 
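Putting the BIC score, the acyclicity measure h(A), and the indicator penalty together, the reward can be computed as in the following sketch. We assume linear regressions for the RSS_i terms and the BIC form with non-equal noise variances, with additive constants dropped; the intercept handling and the numerical tolerance used for the acyclicity indicator are our own choices.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(A):
    """h(A) = trace(e^A) - d, zero iff the graph with binary adjacency
    matrix A (A[i, j] = 1 for an edge x_i -> x_j) is acyclic."""
    return np.trace(expm(A)) - A.shape[0]

def bic_score(X, A):
    """BIC-style score: each variable regressed linearly on its parents,
    RSS_i entering a log-likelihood term plus a log(m) penalty per edge."""
    m, d = X.shape
    score = 0.0
    for i in range(d):
        parents = np.flatnonzero(A[:, i])
        if len(parents) == 0:
            rss = np.sum((X[:, i] - X[:, i].mean()) ** 2)
        else:
            P = np.column_stack([X[:, parents], np.ones(m)])
            beta, *_ = np.linalg.lstsq(P, X[:, i], rcond=None)
            rss = np.sum((X[:, i] - P @ beta) ** 2)
        score += m * np.log(rss / m + 1e-12) + len(parents) * np.log(m)
    return score

def reward(X, A, lambda1, lambda2):
    h = acyclicity(A)
    not_dag = float(h > 1e-8)        # tolerance for the acyclicity indicator
    return -(bic_score(X, A) + lambda1 * not_dag + lambda2 * h)
```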
With BIC score, we can fit each variable x i against all the rest variables, and use only the RSS i terms but ignore the additive penalty on the number of edges. With the independence based score function proposed by, we may simply set S L = 0. The minimum term h min, as previously mentioned, may not be easy to find. Fortunately, with λ 1 = S U − S L, Proposition 1 guarantees the equivalence of Problems and for any λ 2 ≥ 0. However, simply setting λ 2 = 0 could only get good performance with very small graphs (see a discussion in Appendix C). We therefore pick a relatively small value for λ 2, which helps to generate directed graphs that become closer to DAGs. Empirically, we find that if the penalty weights are set too large, the score function would have little effect on the reward, which limits the exploration of the RL agent and usually in DAGs with high scores. Similar to Lagrangian methods, we will start with small penalty weights and gradually increase them so that the condition in Proposition 1 is satisfied. Meanwhile, we notice that different score functions may have different ranges while the acyclicity penalty terms are independent of the Algorithm 1 The proposed RL approach to score-based causal discovery Require: score parameters: S L, S U, and S 0; penalty parameters: λ 1, ∆ 1, λ 2, ∆ 2, and Λ 2; iteration number for parameter update: t 0. 1: for t = 1, 2,... do Run actor-critic algorithm, with score adjustment by if the maximum reward corresponds to a DAG with score S min then 5: update recorded rewards according to new λ 1 and λ 2 9: end if 10: end for particular range of the score function. Therefore, we also adjust the predefined scores to a certain range by using S 0 (S −S L)/(S U −S L) for some S 0 > 0 and the optimal score will lie in [0, S 0]. 1 Our algorithm is summarized in Algorithm 1, where ∆ 1 and ∆ 2 are the updating parameters associated with λ 1 and λ 2, respectively, and t 0 denotes the updating frequency. The weight λ 2 is updated in a similar manner to the updating rule on the Lagrangian multiplier used by NOTEARS and we set Λ 2 as an upper bound on λ 2, as previously mentioned. In all our experiments that use BIC as score function, S L is obtained from a complete directed graph and S U is from an empty graph. Since S U with the empty graph can be very high for large graphs, we also update it by keeping track of the lowest score achieved by DAGs generated during training. Other parameter choices in this work are S 0 = 5, t 0 = 1000, λ 1 = 0, ∆ 1 = 1, λ 2 = 10 − d/3, ∆ 2 = 10 and Λ 2 = 0.01. We comment that these parameter choices may be further tuned for specific applications, and we can select the graph that is acyclic and achieves the best score, among all the final outputs (cf. Section 5.3) of the RL approach with different parameter choices. We believe that the exploitation and exploration scheme in the RL paradigm provides an appropriate way to guide the search. Let π(· | s) and ψ denote the policy and NN parameters for graph generation, respectively. Our training objective is the expected reward defined as During training, the input s is constructed by randomly drawing samples from the observed dataset X, as described in Section 4. We resort to policy gradient methods and stochastic methods to optimize the parameters ψ. The gradient ∇ ψ J(ψ | s) can be obtained by the well-known REINFORCE algorithm . 
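Concretely, one policy update can be assembled as in the sketch below: a batched REINFORCE estimate with a learned baseline (the critic discussed in the next paragraph) and an entropy bonus to encourage exploration. The entropy weight and the use of a single shared optimizer are illustrative choices rather than the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def actor_critic_loss(log_probs, rewards, baselines, entropy, ent_weight=1e-3):
    """log_probs, baselines, entropy: (B,) tensors from the actor/critic for a
    batch of B generated graphs; rewards: (B,) tensor of non-differentiable
    rewards computed as in the previous section."""
    advantage = rewards - baselines.detach()       # baseline reduces variance
    actor_loss = -(advantage * log_probs).mean() - ent_weight * entropy.mean()
    critic_loss = F.mse_loss(baselines, rewards)   # critic regresses the reward
    return actor_loss + critic_loss

# One step, with an optimizer over both actor and critic parameters:
# loss = actor_critic_loss(log_probs, rewards, baselines, entropy)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```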
We draw B samples s 1, s 2,..., s B as a batch to estimate the gradient which is then used to train the NNs through stochastic optimization methods like Adam . Using a parametric baseline to estimate the reward can also help training . For the present work, our critic is a simple 2-layer feed-forward NN with ReLU units, with the input being the encoder outputs {enc i}. The critic is trained with Adam on a mean squared error between its predictions and the true rewards. An entropy regularization term is also added to encourage exploration of the RL agent. Although policy gradient methods only guarantee local convergence , here we remark that the inferred graphs from the actor-critic algorithm are all DAGs in our experiments. Training an RL agent typically requires many iterations. In the present work, we find that computing the rewards for generated graphs is much more time-consuming than training NNs. Therefore, we record the computed rewards corresponding to different graph structures. Moreover, the BIC score can be decomposed according to single causal relationships and we also record the corresponding RSS i to avoid repeated computations. Since we are concerned with finding a DAG with the best score rather than a policy, we record all the graphs generated during the training process and output the one with the best reward. In practice, the graph may contain spurious edges and further processing is needed. To this end, we can prune the estimated edges in a greedy way, according to either the regression performance or the score function. For an inferred causal relationship, we remove a parental variable and calculate the performance of the ing graph, with all other causal relationships unchanged. If the performance does not degrade or degrade within a predefined tolerance, we accept pruning and continue this process with the pruned causal relationship. For linear models, pruning can be simply done by thresholding the estimated coefficients. Related to the above pruning process is to add to the reward an increased penalty weight on the number of edges of a given graph. However, this weight is not easy to choose, as a large weight may incur missing edges. In this work, we stick to the penalty weight log n that is included in the BIC score and then apply pruning to the inferred graph in order to reduce false discoveries. We report empirical on synthetic and real datasets to compare our approach against both traditional and recent gradient based approaches, including GES (with BIC score) , the PC algorithm (with Fisher-z test and p-value 0.01) , ICA-LiNGAM , the Causal Additive Model (CAM) based algorithm proposed by Bühlmann et al., NOTEARS , DAG-GNN , and GraN-DAG , among others. All these algorithms have available implementations and we give a brief description on these algorithms and their implementations in Appendix D. Default hyper-parameters of these implementations are used unless otherwise stated. For pruning, we use the same thresholding method for ICA-LiNGAM, NOTEARS, and DAG-GNN. Since the authors of CAM and GraN-DAG proposed to apply significance testing of covariates based on generalized additive models and then declare significance if the reported p-values are lower than or equal to 0.001, we stick to the same pruning method for CAM and GraN-DAG. The proposed RL based approach is implemented based on an existing Tensorflow implementation of neural combinatorial optimizer (see Appendix D for more details). 
The decoder is modified as described in Section 4 and the RL algorithm related hyper-parameters are left unchanged. We pick B = 64 as batch size at each iteration and d h = 16 as the hidden dimension with the single layer decoder. Our approach is combined with the BIC scores under Gaussianity assumption given in Eqs. and, and are denoted as RL-BIC and RL-BIC2, respectively. We evaluate the estimated graphs using three metrics: False Discovery Rate (FDR), True Positive Rate (TPR), and Structural Hamming Distance (SHD) which is the smallest number of edge additions, deletions, and reversals to convert the estimated graph into the true DAG. The SHD takes into account both false positives and negatives and a lower SHD indicates a better estimate of the causal graph. Since GES and PC may output unoriented edges, we follow to treat GES and PC favorably by regarding undirected edges as true positives as long as the true graph has a directed edge in place of the undirected edge. Given number of variables d, we generate a d × d upper triangular matrix as the graph binary adjacency matrix, in which the upper entries are sampled independently from Bern(0.5). We assign edge weights independently from Unif ([−2, −0.5] ∪ [0.5, 2]) to obtain a weight matrix W ∈ R d×d, and then sample x = W T x + n ∈ R d from both Gaussian and non-Gaussian noise models. The non-Gaussian noise is the same as the one used for ICA-LiNGAM , which generates samples from a Gaussian distribution and passes them through a power nonlinearity to make them non-Gaussian. We pick unit variances for all noises in both models and generate m = 5, 000 samples as our datasets. A random permutation of variables is then performed. This data generating procedure is similar to that used by NOTEARS and DAG-GNN and the true causal graphs in both cases are known to be identifiable (Peters and Bühlmann, 2013;). We first consider graphs with d = 12 nodes. We use n = 64 for constructing the input sample and set the maximum number of iterations to 20, 000. Figure 2 shows the learning process of the proposed method RL-BIC2 on a linear-Gaussian dataset. We use a threshold 0.3, same as NOTEARS and DAG-GNN with this data model, to prune the estimated edges. In this example, RL-BIC2 generates 683, 784 different graphs during training, much lower than the total number (around 5.22 × 10 26) of DAGs. The pruned DAG turns out to be exactly the same as the underlying causal graph. We report the empirical on LiNGAM and linear-Gaussian data models in Table 1. Both PC and GES perform poorly, possibly because we consider relatively dense graphs for our data generating procedure. CAM does not perform well either, as it assumes nonlinear causal relationships. ICA-LiNGAM recovers all the true causal graphs for LiNGAM data but performs poorly on linearGaussian data. This is not surprising because ICA-LiNGAM works for non-Gaussian noise and does not provide guarantee for linear-Gaussian datasets. Both NOTEARS and DAG-GNN have good causal discovery whereas GraN-DAG performs much worse. We believe that it is because GraN-DAG uses 2-layer feed-forward NNs to model the causal relationships, which may not be able to learn a good linear relationship in this experiment. Modifying the feed-forward NNs to linear functions reduces to NOTEARS with negative log-likelihood as loss function, which yields similar performance on these datasets (see Appendix E.1 for detailed ). 
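For concreteness, the data-generating procedure described above can be sketched as follows. The exponent of the power nonlinearity for the non-Gaussian noise and the rescaling to unit variance are assumptions made for illustration; x = W^T x + n is solved in closed form as x = (I − W^T)^{-1} n.

```python
import numpy as np

def generate_linear_sem(d, m, edge_prob=0.5, gaussian=True, seed=0):
    """Upper-triangular binary DAG with edges ~ Bern(edge_prob), weights
    uniform on [-2, -0.5] U [0.5, 2], samples from x = W^T x + n, and a
    final random permutation of the variables."""
    rng = np.random.default_rng(seed)
    A = np.triu(rng.binomial(1, edge_prob, size=(d, d)), k=1)
    signs = rng.choice([-1.0, 1.0], size=(d, d))
    W = A * signs * rng.uniform(0.5, 2.0, size=(d, d))

    noise = rng.normal(size=(m, d))
    if not gaussian:
        noise = np.sign(noise) * np.abs(noise) ** 1.5   # power nonlinearity
        noise /= noise.std(axis=0)                      # roughly unit variances
    X = noise @ np.linalg.inv(np.eye(d) - W)            # each row solves x = (I - W^T)^{-1} n
    perm = rng.permutation(d)
    return X[:, perm], W[perm][:, perm]
```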
As to our proposed methods, we observe that RL-BIC2 recovers all the true causal graphs on both data models in this experiment while RL-BIC has a worse performance. One may wonder whether this observation is due to the same noise variances that are used in our data models; we conduct additional experiments where the noise variances are randomly sampled and RL-BIC2 still outperforms RL-BIC by a large margin (see also Appendix E.1). Nevertheless, with the same BIC score, RL-BIC performs much better than GES on both datasets, indicating that the RL approach brings in a greatly improved search ability. Finally, we test the proposed method on larger graphs with d = 30 nodes, where the upper entries are sampled independently from Bern(0.2). This edge probability choice corresponds to the fact that large graphs in practice usually has a lower average degree; see, e.g., the experiment settings of;;. To incorporate this prior information in our approach, we add to each g ij a common bias term initialized to −10.0 (see Appendix E.1 for details). Considering the much increased search space, we also choose a larger number of observed samples, n = 128, to construct the input for graph generator and increase the training iterations to 40, 000. On LiNGAM datasets, RL-BIC2 has FDR, TPR, and SHD being 0.14 ± 0.15, 0.94 ± 0.07, and 19.8 ± 23.0, respectively, comparable to NOTEARS with 0.13 ± 0.09, 0.94 ± 0.04 and 17.2 ± 13.12. We now consider nonlinear causal relationships with quadratic functions. We generate an upper triangular matrix in a similar way to the first experiment. For a causal relationship with parents T at the i-th node, we expand x pa(i) to contain both first-and second-order features. The coefficient for each term is then either 0 or sampled from Unif ([−1, −0.5] ∪ [0.5, 1]), with equal probability. If a parent variable does not appear in any feature term with a non-zero coefficient, then we remove the corresponding edge in the causal graph. The rest follows the same as in first experiment and here we use the non-Gaussian noise model with 10-node graphs and 5, 000 samples. The true causal graph is identifiable according to. For this quadratic model, there may exist very large variable values which cause computation issues for quadratic regression. We treat these samples as outliers and detailed processing is given in Appendix E.2. We use quadratic regression for a given causal relationship and calculate the BIC score (assuming equal noise variances) in Eq.. For pruning, we simply apply thresholding, with threshold as 0.3, to the estimated coefficients of both first-and second-order terms. If the coefficient of a second-order term, e.g., x i1 x i2, is non-zero after thresholding, then we have two directed edges that are from x i1 to x i and from x i2 to x i, respectively. We do not consider PC and GES in this experiment due to their poor performance in the first experiment. Our with 10-node graphs are reported in Table 2, which shows that RL-BIC2 achieves the best performance. For fair comparison, we apply the same quadratic regression based pruning method to the outputs of NOTEARS, denoted as NOTEARS-2. We see that this pruning further reduces FDR, i.e., removes spurious edges, with little effect on TPR. Since pruning does not help discover additional positive edges or increase TPR, we will not apply this pruning method to other methods as their TPRs are much lower than that of RL-BIC2. 
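The quadratic regression used above to obtain the RSS_i terms can be sketched with scikit-learn by expanding the parents into first- and second-order features; the outlier handling from Appendix E.2 is omitted here, and the function name is ours.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def quadratic_rss(X, i, parents):
    """Residual sum of squares of x_i regressed on first- and second-order
    features of its parents (indices in `parents`)."""
    y = X[:, i]
    if len(parents) == 0:
        return float(np.sum((y - y.mean()) ** 2))
    feats = PolynomialFeatures(degree=2, include_bias=False)
    Z = feats.fit_transform(X[:, parents])
    model = LinearRegression().fit(Z, y)
    return float(np.sum((y - model.predict(Z)) ** 2))
```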
Finally, with prior knowledge that the function form is quadratic, we can modify NOTEARS to apply quadratic functions to modeling the causal relationships, with an equivalent weighted adjacency matrix constructed using the coefficients of the first-and second-order terms, similar to the idea used by GraN-DAG (detailed derivations are given in Appendix E.2). The problem then becomes a nonconvex optimization problem with (d − 1)d 2 /2 parameters (which are the coefficients of both first-and second-order features), compared to the original problem with d 2 parameters. This method corresponds to NOTEARS-3 in Table 2. Despite the fact that NOTEARS-3 did not achieve a better overall performance than RL-BIC2, we comment that it discovered almost correct causal graphs (with SHD ≤ 3) on more than half of the datasets, but performed poorly on the rest datasets. We believe that it is due to the increased number of optimization parameters and the more complicated equivalent adjacency matrix which make the optimization problem harder to solve. Meanwhile, we do not exclude that NOTEARS-3 can achieve a better causal discovery performance with other optimization algorithms. Given a randomly generated causal graph, we consider another nonlinear model where each causal relationship f i is a function sampled from a Gaussian process, with RBF kernel of bandwidth one. The additive noise n i is normally distributed with variance sampled uniformly. This setting is known to be identifiable according to. We use a setup that is also considered by GraN-DAG : 10-node and 40-edge graphs with 1, 000 generated samples. The empirical are reported in Table 3. One can see that ICA-LiNGAM, NOTEARS, and DAG-GNN perform poorly on this data model. A possible reason is that they may not be able to model this type of causal relationship. More importantly, these methods operate on the weighted adjacency matrix which is not obvious here. For our method, we apply Gaussian Process Regression (GPR) with RBF kernel to model the causal relationships. Notice that even though the observed data are from a function sampled from Gaussian process, it is not guaranteed that GPR with the same kernel can achieve a good performance. Indeed, using a fixed kernel bandwidth would lead to severe overfitting that incurs many spurious edges and the graph with the highest reward is usually not a DAG. To proceed, we normalize the observed data and apply median heuristics for kernel bandwidth. Both our methods perform reasonably well, with RL-BIC outperforming all the other methods. We consider a real dataset to discover a protein signaling network based on expression levels of proteins and phospholipids . This dataset is a common benchmark in graphical models, with experimental annotations well accepted by the biological community. Both observational and interventional data are contained in this dataset. Since we are interested in using observational data to infer causal mechanisms, we only consider the observational data with m = 853 samples. The ground truth causal graph given by has 11 nodes and 17 edges. Notice that the true graph is indeed sparse and an empty graph can have an SHD as low as 17. Therefore, we report more detailed regarding the estimated graph: number of total edges, number of correct edges, and the SHD. Both PC and GES output too many unoriented edges, and we will not report their here. 
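Both the Section 6.3 experiments and the protein-signaling experiment below rely on GPR-based residuals with the median heuristic for the RBF bandwidth. A scikit-learn sketch is given here; fixing the length scale (rather than letting the library re-optimize it) and adding a white-noise kernel term are our assumptions, and the data are assumed to be normalized beforehand.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gpr_rss(X, i, parents):
    """Residual sum of squares of x_i fitted by GPR on its parents, with the
    RBF bandwidth set to the median pairwise distance among parent values."""
    y = X[:, i]
    if len(parents) == 0:
        return float(np.sum((y - y.mean()) ** 2))
    Z = X[:, parents]
    dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    bandwidth = np.median(dists[dists > 0])
    kernel = RBF(length_scale=bandwidth, length_scale_bounds="fixed") + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Z, y)
    return float(np.sum((y - gp.predict(Z)) ** 2))
```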
We apply GPR with RBF kernel to model the causal relationships, with the same data normalization and median heuristic for the kernel bandwidth as in Section 6.3. We also use CAM pruning on the graph inferred during training. The empirical results are given in Table 4. Both RL-BIC and RL-BIC2 achieve promising results compared with other methods. We have proposed to use RL to search for the DAG with the optimal score. Our reward is designed to incorporate a predefined score function and two penalty terms that enforce acyclicity. We use the actor-critic algorithm as our RL algorithm, where the actor is constructed based on recently developed encoder-decoder models. Experiments are conducted on both synthetic and real datasets to show the advantages of our method over other causal discovery methods. We have also shown the effectiveness of the proposed method on 30-node graphs, yet dealing with large graphs (more than 50 nodes) remains challenging. Nevertheless, many real applications, like the Sachs dataset, have a relatively small number of variables. Furthermore, it is possible to decompose large causal discovery problems into smaller ones, and prior knowledge or constraint-based methods are also applicable to reduce the search space. There are several future directions for the present work. In our current implementation, computing scores is much more time-consuming than training the NNs. While more computing resources certainly help accelerate training, we believe that developing an efficient and effective score function is key to the proposed approach. Many other RL algorithms may also be used; for example, the asynchronous advantage actor-critic algorithm has been shown to be effective in many applications. In addition, we observe that the total number of iterations used in our experiments is usually more than needed, so a proper early-stopping criterion would be beneficial.
Assume that G is not a solution to Problem, which indicates that there exists a directed graph G (with binary adjacency matrix A) so that Clearly, G cannot be a DAG, for otherwise we would have a DAG that achieves a lower score than the minimum S *. By our assumption, it follows that which contradicts the fact that S U is an upper bound on S *. For the other direction, let G be a solution to Problem but not a solution to Problem. This indicates that either G is not a DAG or G is a DAG but has a higher score than the minimum score, i.e., S(G) > S *. The latter case clearly contradicts the definition of the minimum score. For the former case, assume that some DAG G achieves the minimum score. Then plugging G into the negative reward, we can get the same inequality in Eq. since both penalty terms are zeros for a DAG. This then contradicts the assumption that G minimizes the negative reward. Although setting λ 2 = 0, or equivalently using only the indicator function w.r.t. acyclicity, can still make Problem equivalent to the original problem with hard acyclicity constraint, we remark that this choice usually does not in good performance of the RL approach, largely due to that the reward with only the indicator term is likely to fail to guide the RL agent to generate DAGs. To see why it is the case, consider two cyclic directed graphs, one with all the possible directed edges in place and the other with only two edges (i.e., x i → x j and x j → x i for some i = j). The latter is much'closer' to acyclicity in many senses, such as h(A) given in Eq. and number of edge deletion, addition, and reversal to make a directed graph acyclic. Assume a linear data model that has a relatively dense graph, the former will have a lower BIC score when using linear regression for fitting causal relations, yet the penalty terms of acyclicity are the same with only the indicator function. The former graph then has a higher reward, which does not help the agent to tend to generate DAGs. Also notice that the graphs in our approach are generated according to Bernoulli distributions determined by NN parameters that are randomly initialized. Without loss of generality, consider that each edge is drawn independently according to Bern(0.5). For small graphs (with less than or equal to 6 nodes or so), a few hundreds of samples of directed graphs are very likely to contain a DAG. Yet for large graphs, the probability of sampling a DAG is much lower. If no DAG is generated during training, the RL agent can hardly learn to generate DAGs. The above facts indeed motivate us to choose a small value of λ 2 so that the agent can be trained to produce graphs closer to acyclicity and finally to generate exact DAGs. A question is then what if the RL approach starts with a DAG, e.g., by initializing the probability of generating each edge to be nearly zero. This setting did not lead to good performance, either. The generated directed graphs at early iterations can be very different from the true graphs in that many true edges do not exist, and the ing score is much higher than the minimum under the DAG constraint. With small penalty weights of the acyclicity terms, the agent could be trained to produce cyclic graphs with lower scores, similar to the case with randomly initialized NN parameters. On the other hand, large penalty weights, as we have discussed in the paper, limit exploration of the RL agent and usually in DAGs whose scores are far from optimum. 
We use existing implementations of causal discovery algorithms in comparison, listed below: • ICA-LiNGAM assumes linear non-Gaussian additive model for data generating procedure and applies independent component analysis method to recover the weighted adjacency matrix, followed by thresholding on the weights before outputting the inferred graph. A Python implementation is available at the first author's website https://sites.google.com/site/sshimizu06/lingam. • GES and PC: we use the fast greedy search implementation of GES which has been reported to outperform other techniques such as max-min hill climbing . Implementations of both methods are available through the py-causal package at https://github.com/bd2kccd/py-causal, written in highly optimized Java codes. • CAM decouples the causal order search among the variables from feature or edge selection in a DAG. CAM also assumes additive noise as in our work, with an additional condition that each function is nonlinear. Codes are available through the CRAN R package repository at https://cran.r-project.org/web/packages/CAM. • NOTEARS recovers the causal graph by estimating the weighted adjacency matrix with the least squares loss and the smooth characterization for acyclicity constraint, followed by thresholding on the estimated weights. Codes are available at the first author's github repository https://github.com/xunzheng/notears. We also re-implement the augmented Lagrangian method following the same updating rule on the Lagrange multiplier and the penalty parameter in Tensorflow, so that the augmented Lagrangian at each iteration can be readily minimized without the need of obtaining closedform gradients. We use this implementation in Sections 6.2 and 6.3 when the objective function and/or the acyclicity constraint are modified. • DAG-GNN formulates causal discovery in the framework of variational autoencoder, where the encoder and decoder are two shallow graph NNs. With a modified smooth characterization on acyclicity, DAG-GNN optimizes a weighted adjacency matrix with the evidence lower bound as loss function. Python codes are available at the first author's github repository https://github.com/fishmoon1234/DAG-GNN. • GraN-DAG uses feed-foward NNs to model each causal relationship and chooses the sum of all product paths between variables x i and x j as the (i, j)-th element of an equivalent weighted adjacency matrix. GraN-DAG uses the same smooth constraint from to find a DAG that maximizes the log-likelihood of the observed samples. Codes are available at the first author's github repository https: //github.com/kurowasan/GraN-DAG. Our implementation is based on an existing Tensorflow implementation of neural combinatorial optimizer, which is available at https://github.com/MichelDeudon/ neural-combinatorial-optimization-rl-tensorflow. We add an entropy regularization term, and modify the reward and decoder as described in Section 4 and Section 5.1. E.1 EXPERIMENT 1 IN SECTION 6.1 We replace the feed-forward NNs with linear functions in GraN-DAG and obtain similar experimental as NOTEARS (FDR, TPR, SHD): 0.05 ± 0.04, 0.93 ± 0.06, 3.2 ± 2.93 and 0.05 ± 0.04, 0.95 ± 0.03, 2.40 ± 1.85 for LiNGAM and linear-Gaussian data models, respectively. We conduct additional experiments with linear models where the noise variances are uniformly sampled according to Unif ([0.5, 2] ). Results are given in Table 6. Table 6: Empirical on LiNGAM and linear-Gaussian data models with 12-node graphs and different noise variances. 
LiNGAM data model
Method        FDR            TPR            SHD
RL-BIC        0.29 ± 0.12    0.77 ± 0.15    14.4 ± 7.17
RL-BIC2       0.09 ± 0.06    0.94 ± 0.03    4.00 ± 2.61
PC            0.57 ± 0.10    0.28 ± 0.06    30.4 ± 4.13
GES           0.59 ± 0.13    0.27 ± 0.10    32.0 ± 5.18
ICA-LiNGAM    0 ± 0          1 ± 0          0 ± 0
CAM           0.70 ± 0.07    0.45 ± 0.12    41.6 ± 3.32
NOTEARS       0.08 ± 0.10    0.94 ± 0.07    3.20 ± 3.97
DAG-GNN       0.14 ± 0.07    0.91 ± 0.04    6.60 ± 1.02
GraN-DAG      0.71 ± 0.10    0.25 ± 0.09    38.7 ± 4.86

Linear-Gaussian data model
Method        FDR            TPR            SHD
RL-BIC        0.36 ± 0.07    0.68 ± 0.09    18.8 ± 3.43
RL-BIC2       0.10 ± 0.07    0.93 ± 0.04    4.60 ± 3.07
PC            0.54 ± 0.10    0.29 ± 0.05    30.0 ± 2.83
GES           0.61 ± 0.14    0.26 ± 0.11    32.2 ± 5.42
ICA-LiNGAM    0.67 ± 0.05    0.75 ± 0.06    49.0 ± 4.82
CAM           0.65 ± 0.10    0.51 ± 0.14    37.8 ± 6.31
NOTEARS       0.07 ± 0.09    0.95 ± 0.06    3.00 ± 3.58
DAG-GNN       0.12 ± 0.04    0.94 ± 0.04    5.40 ± 2.06
GraN-DAG      0.71 ± 0.12    0.21 ± 0.08    39.6 ± 5.85

Knowing a sparse true causal graph a priori is also helpful. To incorporate this information in our experiment with 30-node graphs, we add an additional bias term c̄ ∈ R to each decoder output: for the single layer decoder, we have g_ij(W_1, W_2, u) = u^T tanh(W_1 enc_i + W_2 enc_j) + c̄, where we let c̄ be trainable and other parameters have been defined in Appendix A. In our experiments, c̄ is initialized to −10; this choice aims to set a good starting point for generating graph adjacency matrices, motivated by the fact that a good starting point is usually helpful to locally convergent algorithms.
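The effect of the bias term can be illustrated numerically. The sketch below (an illustration, not the paper's code) builds the single-layer decoder with the trainable bias c̄ and shows that, with c̄ = −10, every initial edge probability stays close to sigmoid(−10) ≈ 4.5e-5, so sampled graphs start out extremely sparse and hence acyclic; the random-initialization scales of W_1, W_2, u are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def single_layer_decoder_logits(enc, W1, W2, u, c_bar):
    """g_ij = u^T tanh(W1 enc_i + W2 enc_j) + c_bar for all node pairs (i, j).
    enc: (d, de) encoder outputs, one row per variable."""
    d = enc.shape[0]
    logits = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            logits[i, j] = u @ np.tanh(W1 @ enc[i] + W2 @ enc[j]) + c_bar
    return logits

rng = np.random.default_rng(0)
d, de, dh = 12, 64, 16
enc = rng.normal(size=(d, de))
W1 = rng.normal(scale=1 / np.sqrt(de), size=(dh, de))
W2 = rng.normal(scale=1 / np.sqrt(de), size=(dh, de))
u = rng.normal(scale=1 / np.sqrt(dh), size=dh)

probs = sigmoid(single_layer_decoder_logits(enc, W1, W2, u, c_bar=-10.0))
print(probs.mean(), sigmoid(-10.0))          # both tiny, so sampled graphs are near-empty
edges = rng.random((d, d)) < probs           # Bernoulli sampling of a binary adjacency matrix
```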
We apply reinforcement learning to score-based causal discovery and achieve promising results on both synthetic and real datasets
508
scitldr
The topic modeling discovers the latent topic probability of given the text documents. To generate the more meaningful topic that better represents the given document, we proposed a universal method which can be used in the data preprocessing stage. The method consists of three steps. First, it generates the word/word-pair from every single document. Second, it applies a two way parallel TF-IDF algorithm to word/word-pair for semantic filtering. Third, it uses the k-means algorithm to merge the word pairs that have the similar semantic meaning. Experiments are carried out on the Open Movie Database (OMDb), Reuters Dataset and 20NewsGroup Dataset and use the mean Average Precision score as the evaluation metric. Comparing our with other state-of-the-art topic models, such as Latent Dirichlet allocation and traditional Restricted Boltzmann Machines. Our proposed data preprocessing can improve the generated topic accuracy by up to 12.99\%. How the number of clusters and the number of word pairs should be adjusted for different type of text document is also discussed. After millennium, most collective information are digitized to form an immense database distributed across the Internet. Among all, text-based knowledge is dominant because of its vast availability and numerous forms of existence. For example, news, articles, or even Twitter posts are various kinds of text documents. For the human, it is difficult to locate one's searching target in the sea of countless texts without a well-defined computational model to organize the information. On the other hand, in this big data era, the e-commerce industry takes huge advantages of machine learning techniques to discover customers' preference. For example, notifying a customer of the release of "Star Wars: The Last Jedi" if he/she has ever purchased the tickets for "Star Trek Beyond"; recommending a reader "A Brief History of Time" from Stephen Hawking in case there is a "Relativity: The Special and General Theory" from Albert Einstein in the shopping cart on Amazon. The content based recommendation is achieved by analyzing the theme of the items extracted from its text description. Topic modeling is a collection of algorithms that aim to discover and annotate large archives of documents with thematic information BID0. Usually, general topic modeling algorithms do not require any prior annotations or labeling of the document while the abstraction is the output of the algorithms. Topic modeling enables us to convert a collection of large documents into a set of topic vectors. Each entry in this concise representation is a probability of the latent topic distribution. By comparing the topic distributions, we can easily calculate the similarity between two different documents. BID25 Some topic modeling algorithms are highly frequently used in text-mining BID13, preference recommendation BID27 and computer vision BID28. BID0 Many of the traditional topic models focus on latent semantic analysis with unsupervised learning. Latent Semantic Indexing (LSI) BID11 applies Singular-Value Decomposition (SVD) BID6 to transform the term-document matrix to a lower dimension where semantically similar terms are merged. It can be used to report the semantic distance between two documents, however, it does not explicitly provide the topic information. 
The Probabilistic Latent Semantic Analysis (PLSA) BID9 model uses maximum likelihood estimation to extract latent topics and topic word distribution, while the Latent Dirichlet Allocation (LDA) BID1 model performs iterative sampling and characterization to search for the same information. The availability of many manually categorized online documents, such as Internet Movie Database (IMDb) movie review Inc., Wikipedia articles, makes the training and testing of topics models possible. All of the existing workds are based on the bag-of-words model, where a document is considered as a collection of words. The semantic information of words and interaction among objects are assumed to be unknown during the model construction. Such simple representation can be improved by recent research advances in natural language processing and word embedding. In this paper, we will explore the existing knowledge and build a topic model using explicit semantic analysis. The work studies the best data processing and feature extraction algorithms for topic modeling and information retrieval. We investigate how the available semantic knowledge, which can be obtained from language analysis or from existing dictionary such as WordNet, can assist in the topic modeling. Our main contributions are:• We redesign a new topic model which combines two types of text features to be the model input.• We apply the numerical statistic algorithm to determine the key elements for each document dynamically.• We apply a vector quantization method to merge and filter text unit based on the semantic meaning.• We significantly improve the accuracy of the prediction using our proposed model. The rest of the paper is structured as follows: In Section 2, we review the existing methods, from which we got the inspirations. This is followed in Section 3 by details about our topic models. Section 4 describes our experimental steps and evaluate the . Finally, Section 5 concludes this work. Many topic models have been proposed in the past decades. This includes LDA, Latent Semantic Analysis(LSA), word2vec, and Restricted Boltzmann Machine (RBM), etc. In this section, we will compare the pros and cons of these topic models. LDA was one of the most widely used topic models. LDA introduces sparse Dirichlet prior distributions over document-topic and topic-word distributions, encoding the intuition that documents cover a small number of topics and that topics often use a small number of words. BID1 LSA was another topic modeling technique which is frequently used in information retrieval. LSA learned latent topics by performing a matrix decomposition (SVD) on the term-document matrix. BID4 In practice, training the LSA model is faster than training the LDA model, but the LDA model is more accurate than the LSA model. Traditional topic models did not consider the semantic meaning of each word and cannot represent the relationship between different words. Word2vec can be used for learning high-quality word vectors from huge data sets with billions of words, and with millions of words in the vocabulary. BID14 During the training, the model generated word-context pairs by applying a sliding window to scan through a text corpus. Then the word2vec model trained word embeddings using word-context pairs by using the continuous bag of words (CBOW) model and the skip-gram model. BID15 The generated word vectors can be summed together to form a semantically meaningful combination of both words. 
Moreover, there were a lot of extensions of the word2vec model, such as paragraph2vec and LDA2vec. Paragraph2vec was an unsupervised framework that learned continuous distributed vector representations for pieces of texts. The training of the paragraph2vec model is based on the similar idea as the word2vec. One of the advantages of the generated paragraph vector was that they take into consideration the word order. BID12 LDA2vec focused on utilizing document-wide feature vectors while simultaneously learning continuous document weights loading onto topic vectors. LDA2vec embedded both words and document vectors into the same space and train both representations simultaneously. BID18 RBM was proposed to extract low-dimensional latent semantic representations from a large collection of documents BID8. The architecture of the RBMs is an undirected bipartite graphic, in which word-count vectors were modeled as Softmax input units and the output units were usually binary units. The RBM model can be generalized much better than LDA in terms of both log-probability on the document and the retrieval accuracy. A deeper structure of neural network was developed based on stacked RBMs, which was the Deep Belief Network (DBN). In BID7, the input layer was the same as RBM mentioned above, other layers are all binary units. 3.1 WORD PAIR BASED RBM MODEL Current RBM model for topic modeling uses Bag of Words approach. Each visible neuron represents the number of appearance of a dictionary word. We believe that the order of the words also exhibits rich information, which is captured by the bag of words approach. Our hypothesis is that including word pairs (with specific dependencies) helps to improve topic modeling. In this work, Stanford natural language parser BID3 BID19 is used to analyze sentences in both training and testing texts, and extract word pairs. For example, if we have the following sentence, by applying the word pair extraction, we will get 38 word pairs and one root pair as the FIG0.The strongest rain ever recorded in India shut down the financial hub of Mumbai, snapped communication lines, closed airports and forced thousands of people to sleep in their offices or walk home during the night, officials said today. The first part in each word pair segment represents the relationship between two words, such as det, nsubj, advmod, etc. And the number after each word gives its position of in the original sentence. The two words extracted in this way are not necessarily adjacent to each other, however, they are semantically related. Because each single word may have different combinations with other words, the total number of the word pairs will be much larger than the number of word in the training dataset. If we use all word pairs which are extracted from the training dataset, it will significantly increase the size of our dictionary and reduce the performance. So, we only keep the first 10000 most frequent word pairs to be our word pair dictionary. TF-IDF stands for the term frequency -inverse document frequency, and the TF-IDF weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. 
BID23 BID22 BID21 BID29 The equations of TF-IDF are as follows:

TF(t) = (number of times term t appears in a document) / (total number of terms in the document)   (1)

IDF(t) = log( total number of documents / number of documents containing term t )   (2)

TF-IDF(t) = TF(t) × IDF(t)

Term Frequency (TF), Equation 1, measures how frequently a term occurs in a document. Inverse document frequency (IDF), Equation 2, measures how important a term is. There are many words in the training dataset. If we use all words to build the training dictionary, it will contain a lot of high-frequency but useless words, like "first" and "name". Besides removing stop words and processing word tenses in the dataset, we also used the TF-IDF algorithm to filter the dataset. In addition, based on the original TF-IDF algorithm, we propose a two-step TF-IDF processing method, which is applied to keep the number of word pairs to a manageable size. First, word pairs are generated and word-level TF-IDF is performed. The result of word-level TF-IDF is used as a filter, and a word pair is kept only if the TF-IDF scores of both words are higher than the threshold (0.01). After that, we treat each word pair as a single unit, and the TF-IDF algorithm is applied to the word pairs to further filter out word pairs that are either too common or too rare. Even with the TF-IDF processing, the size of the word pair dictionary is still prohibitively large, and selecting only the most frequent ones is a brute-force approach. We further cluster semantically close word pairs to reduce the dictionary size. The semantic distance between two words is measured as the distance of their embedding vectors calculated using Google's word2vec model. The only issue is how to determine the cluster centrum. Our first approach is to use upper-level words in WordNet BID17 BID16 as the cluster centrum. Our hypothesis is that, since those upper-level words have broader meaning, a relatively small number of these words can provide good coverage of the semantic space. We first build a word-level tree based on the relations specified in WordNet. We then pick those words that are closest to the root as the cluster centrum. Words usually have multiple paths to the root. We first add those synsets that have only one path to the root into the graph, then iteratively examine the concepts that are left and add the paths that grow the tree by as little as possible. Based on which path is selected, this approach can be further divided into min-path clustering and max-path clustering: min-path clustering adds the shortest path and max-path clustering adds the longest path to the graph. The second approach is K-means clustering. By applying the K-means algorithm, we group the embedding vectors of words into K clusters. Then we use the index of each cluster to represent a group of words. Because the word embedding space is very dense, K-means clustering does not give a good Silhouette score BID20. Our original hypothesis was that the K-means clustering based approach would not be as effective as the WordNet-based approach. However, our experimental results show that this is not the case. So, we pick the K-means clustering algorithm as our final feature dictionary organization method. In our experiment, we generate the topic distribution for each document by using the RBM model. Then we retrieve the top N documents by calculating Euclidean distance. Our proposed method is evaluated on 3 datasets: OMDb, Reuters, and 20NewsGroup. For the Reuters and 20NewsGroup datasets, we download them from BID2.
The OMDb dataset is collected manually by using the OMDb APIs (Fritz). All the datasets are divided into three sub-datasets: training, validation, and testing. The split ratio is 70:10:20. For each dataset, a 5-fold cross-validation is applied. • OMDb stands for The Open Movie Database. The training dataset contains 6043 movie descriptions, the validation dataset contains 863 movie descriptions and the testing dataset contains 1727 movie descriptions. We define the class for each movie by applying K-means clustering on category information. The value of K is set to 20. • Reuters: these documents appeared on the Reuters newswire in 1987 and were manually classified by personnel from Reuters Ltd. There are 7674 documents in total which belong to 8 classes. The training dataset contains 5485 news documents, the validation dataset contains 768 news documents and the testing dataset contains 1535 news documents. • 20NewsGroup: this dataset is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. The training dataset contains 13174 news documents, the validation dataset contains 1882 news documents and the testing dataset contains 3765 news documents. We use the mean Average Precision (mAP) score to evaluate our proposed method. It is a score to evaluate the information retrieval quality. This evaluation method considers the effect of order in the information retrieval results. If the relevant results appear in the front positions, the score will tend to 1; if the relevant results appear in the back positions, the score will tend to 0. mAP 1, mAP 3, mAP 5, and mAP 10 are used to evaluate the retrieval performance. For each document, we retrieve 1, 3, 5, and 10 documents whose topic vectors have the smallest Euclidean distance to that of the query document. The documents are considered relevant if they share the same class label. Before we calculate the mAP, we need to calculate the Average Precision (AveP) for each document first. The equation of AveP is described below:

AveP = ( sum over k of P(k) × rel(k) ) / (number of relevant documents)

where P(k) is the precision at cut-off k and rel(k) is an indicator function equaling 1 if the item at rank k is a relevant document, 0 otherwise. BID26 Note that the average is over all relevant documents and the relevant documents not retrieved get a precision score of zero. The equation of the mean Average Precision (mAP) score is as follows:

mAP = ( sum over q = 1..Q of AveP(q) ) / Q

where Q indicates the total number of queries. In this experiment, the total feature size is a fixed number. Then, we compare the performance between two RBM models. One of them only considers words as the input feature, while the other combines words and word pairs as the input feature. The total feature size varies over 10500, 11000, 11500, 12000, 12500, 15000. For the word/word pair combined RBM model, the number of words is 10000, and the number of word pairs is varied to meet the total feature requirement. Both models are applied to the OMDb dataset, and the results are shown in FIG2 and TAB0; the word/word pair combined model almost always performs better than the word-only model. For the mAP 1, the mAP 5 and the mAP 10, the most significant improvement is observed at Feature Number = 11000, about 10.48%, 7.97% and 9.83% respectively. For the mAP 3, the most significant improvement is observed at Feature Number = 12000, about 9.35%. The two models are further applied on the Reuters dataset, and the results are shown in FIG3 and TAB1. Again the word/word pair combined model outperforms the word-only model almost all the time. For the mAP 1, the most significant improvement happens when Feature Number = 12500.
Under this feature size, the combined model improves the mAP score by approximately 1.05%. For the mAP 3, the mAP 5 and the mAP 10, the most significant improvement happens when Feature Number = 15000. Under this feature size, the combined model gives about 1.11%, 1.02% and 0.89% improvement. The for 20NewsGroup dataset are shown in Figure 5 and Table 3. Similar to previous two datasets, all the from word/word pair combined model are better than the word-only model. For the mAP 1 the mAP 3, the mAP 5 and the mAP 10, the most significant improvement happens when Feature Number = 11500. And the improvements are about 10.40%, 11.91%, 12.46% and 12.99%.Observing the for all three datasets, we found that they all reflect the same pattern. The word/word pair combined model consistently outperforms the word only model, given that both of them are constructed based on the same number of input features. In the second experiment, we focus on how the different K values affect the effectiveness of the generated word pairs in therms of their ability of topic modeling. First, we average the performance word/word pair combination all K value. The potential K values are 100, 300, 500, 800 and 1000. Then we compare the mAP between our model and the baseline model, which consists of word only input features. The OMDb dataset are shown in Table 4. As we can observe, all the K values give us better performance than the baseline. The most significant improvement shown in K = 100, which are about 2.41%, 2.15%, 1.46% and 4.46% for mAP 1, 3, 5, and 10 respectively. The of Reuters dataset are shown in FIG5 and Table 5. When the K value is greater than 500, all mAP for word/word pair combination model are better than the baseline. Because the mAP score for Reuters dataset in original model is already very high almost all of them higher than 0.9, it is hard to get the improvement as large as OMDb dataset. For the mAP 1, the most significant improvement happens when K = 500, which is 0.31%. For the mAP 5, the mAP 5 and the mAP 10, the most significant improvement shown in K = 800, about 0.50%, 0.38% and 0.42%.The for 20NewsGroup dataset are shown in FIG6 and Table 6. Similar to the Reuters dataset, when the K value is greater than 800, all mAP score for word/word pair combination model are better than the baseline. For the mAP 1, 3, 5, and 10, the most significant im- provements about 2.82%, 2.90%, 3.2% and 3.33% respectively, and they all happened when K = 1000.In summary, a larger K value can give us a better . As we can see from the Reuters dataset and the 20NewsGroup dataset, when K is greater than 800, the combined model outperforms the baseline model in all four mAP evaluation scores. However, the best appear when K = 800 or K = 1000. For the OMDb dataset, because the mAP score for the baseline model is not very high, the combined model gives better than the baseline model with any K value that we tested. In this experiment, we compare different word pair generation algorithms with the baseline. Similar to previous experiments, the baseline is the word-only RBM model whose input consists of the 10000 most frequent words. The "semantic" word pair generation is the method we proposed in this paper. By applying the idea from the skip-gram BID15 algorithm, we generate the word pairs from each word's adjacent neighbor, and we call it "N-gram" word pair generation. And the window size we used in here is N = 2. 
For the Non-K word pair generation, we use the same algorithm as the semantic except that no K-means clustering is applied on the generated word pairs. The first thing we observe from From the TAB2 is that both "semantic" word pair generation and "Non-K" word pair generation give us better mAP score than the baseline; however, the mAP score of the semantic generation is slightly higher than the mAP score of the Non-K generation. This is because, although both Non-K and semantic techniques extract word pairs using natural language processing, without the K-means clustering, semantically similar pairs will be considered separately. Hence there will be lots of redundancies in the input space. This will either increase the size of the input space, or, in order to control the input size, reduce the amount of information captured by the input set. The K-mean clustering performs the function of compress and feature extraction. The second thing that we observe is that, for the N-gram word pair gener-ation, its mAP score is even lower than the baseline. Beside the OMDb dataset, other two datasets show the same pattern. This is because the semantic model extracts word pairs from natural language processing, therefore those word pairs have the semantic meanings and grammatical dependencies. However, the N-gram word pair generation simply extracts words that are adjacent to each other. When introducing some meaningful word pairs, it also introduces more meaningless word pairs at the same time. These meaningless word pairs act as noises in the input. Hence, including word pairs without semantic importance does not help to improve the model accuracy. In this paper, we proposed a few techniques to processes the dataset and optimized the original RBM model. During the dataset processing part, first, we used a semantic dependency parser to extract the word pairs from each sentence of the text document. Then, by applying a two way parallel TF-IDF processing, we filtered the data in word level and word pair level. Finally, Kmeans clustering algorithm helped us merge the similar word pairs and remove the noise from the feature dictionary. We replaced the original word only RBM model by introducing word pairs. At the end, we showed that proper selection of K value and word pair generation techniques can significantly improve the topic prediction accuracy and the document retrieval performance. With our improvement, experimental have verified that, compared to original word only RBM model, our proposed word/word pair combined model can improve the mAP score up to 10.48% in OMDb dataset, up to 1.11% in Reuters dataset and up to 12.99% in the 20NewsGroup dataset.
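The conclusion above recaps the full preprocessing pipeline (dependency word pairs, two-way TF-IDF filtering, K-means merging). The sketch below is one possible realization of it, not the authors' code: it uses spaCy's dependency parser in place of the Stanford parser, scikit-learn for word-level TF-IDF and K-means, and a hypothetical embed(word) lookup standing in for the word2vec vectors; the second, pair-level TF-IDF pass is omitted for brevity, and representing a pair as the sum of its two word embeddings is an assumption.

```python
import numpy as np
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

nlp = spacy.load("en_core_web_sm")

def extract_word_pairs(text):
    """Dependency-based word pairs (relation, head, child) for one document."""
    doc = nlp(text)
    return [(tok.dep_, tok.head.text.lower(), tok.text.lower())
            for tok in doc if tok.dep_ != "ROOT"]

def word_level_tfidf(corpus):
    """Max TF-IDF score of every word across the corpus."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(corpus)
    return dict(zip(vec.get_feature_names_out(), X.max(axis=0).toarray().ravel()))

def filter_pairs(pairs, word_scores, threshold=0.01):
    # Step 1: keep a pair only if both of its words pass the word-level threshold.
    return [p for p in pairs
            if word_scores.get(p[1], 0.0) > threshold
            and word_scores.get(p[2], 0.0) > threshold]

def merge_pairs_kmeans(pairs, embed, k=500):
    # Step 3: merge semantically similar pairs into k clusters; pairs sharing
    # a cluster index are treated as a single dictionary entry.
    vecs = np.array([embed(head) + embed(child) for _, head, child in pairs])
    return KMeans(n_clusters=k, n_init=10).fit_predict(vecs)
```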
We proposed a universal method which can be used in the data preprocessing stage to generate the more meaningful topic that better represents the given document
509
scitldr
Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets. By leveraging the relationships among different tasks, multi-task learning framework can improve the performance significantly. However, most of the existing works are under the assumption that the predefined tasks are related to each other. Thus, their applications on real-world are limited, because rare real-world problems are closely related. Besides, the understanding of relationships among tasks has been ignored by most of the current methods. Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructed attention based dependency relationships among different tasks. At the same time, the dependency relationship can be used to guide what knowledge should be transferred, thus the performance of our model also be improved. To show the effectiveness of our model and the importance of considering multi-level dependency relationship, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods. Multi-task learning aims to train a single model on multiple related tasks jointly, so that useful knowledge learned from one task can be transferred to enhance the generalization performance of other tasks. Over the last few years, different types of multi-task learning mechanisms (; ; ;) have been proposed and proved better than single-task learning methods from natural language processing and computer vision to chemical study . Despite the success of multi-task learning, when applying to'discrete' data (graph/text), most of the current multi-task learning frameworks only leverage the general task dependency with the assumption that the task dependency remains the same for different data samples; and different sub-structures (node/word) in one data sample (graph/text). However, this assumption is not always true in many real-world problems. Different data samples may have different task dependency. For example, when we want to predict the chemical properties of a particular toxic molecule, despite the general task dependency, its representations learned from toxicity prediction tasks should be more significant than the other tasks. Even for the same data sample, different sub-structures may have different task dependency. Take sentence classification as an example. Words like'good' or'bad' may transfer more knowledge from sentiment analysis tasks, while words like'because' or'so' may transfer more from discourse relation identification tasks. In this work, to accurately learn the task dependency in both general level and data-specific level, we propose a novel framework,'Learning to Transfer via ModellIng mulTi-level Task dEpeNdency' (L2T-MITTEN). The general task dependency is learned as a parameterized weighted dependency graph. And the data-specific task dependency is learned with the position-wise mutual attention mechanism. The two-level task dependency can be used by our framework to improve the performance on multiple tasks. And the objective function of multi-task learning can further enhance the quality of the learned task dependency. By iteratively mutual enhancement, our framework can not only perform better on multiple tasks, but also can extract high-quality dependency structures at different levels, which can reveal some hidden knowledge of the datasets. 
Another problem is that to transfer task-specific representations between every task pair, the number of transfer functions will grow quadratically as the number of tasks increases, which is unaffordable. To solve this, we develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from. This decomposition method reduces the space complexity from quadratic to linear. We validate our multi-task learning framework extensively on different tasks, including graph classication, node classification, and text classification. Our framework outperforms all the other state-ofthe-art (SOTA) multi-task methods. Besides, we show that L2T-MITTEN can be used as an analytic tool to extract interpretable task dependency structures at different levels on real-world datasets. Our contributions in this work are threefold: • We propose a novel multi-task learning framework to learn to both general task dependency and data-specific task dependency. The learned task dependency structures can be mutually enhanced with the objective function of multi-task learning. • We develop a decomposition method to reduce the space complexity needed by transfer functions from quadratic to linear. • We conduct extensive experiments on different real-world datasets to show the effectiveness of our framework and the importance of modelling multi-level task dependency. According to a recent survey , existing multi-task learning methods can be categorized by whether they share the parameters hardly or softly. For hard parameter sharing, a bottom network will be shared among all the tasks, and each individual task will have its own task-specific output network. The parameter sharing of the bottom network reduces the parameter needed to be learned and thus can avoid over-fitting to a specific task. However, when the tasks are not relevant enough , the shared-bottom layers will suffer from optimization conflicts caused by mutually contradicted tasks. If the bottom model is not capable enough to encode all the necessary knowledge from different tasks, this method will fail to correctly capture all the tasks. points out that the gradients of some dominant task will be relatively larger than gradients of other tasks. This dominant phenomenon will be more obvious when the proportions of labeled data between tasks are uneven, in which case the model will be majorly optimized on data-rich tasks. To alleviate this problem, some recent works try to dynamically adjust the task weight during the training stage. casts the multi-task learning to a multi-objective optimization problem, and they use a gradient-based optimization method to find a Pareto optimal solution. proposes a new normalization method on gradients, which attempts to balance the influences of different tasks. proposes to apply Mixture-of-Experts on multi-task learning, which linearly combines different experts (bottoms) by learnable gates. Because different experts can capture different knowledge among tasks, this model can, to some extent, model the dependency relationship among tasks. Methods using soft parameter sharing (; ; ;) do not keep the shared bottom layers. Instead, for soft-parameter models, most of the model parameters are task-specific. focuses on reducing the annotation effort of the dependency parser tree. By combining two networks with a L2 normalization mechanism, knowledge from a different source language can be used to reduce the requirement of the amount of annotation. 
Further, in some existing works, the shallow layers of the model will be separated from other layers, and be used as the feature encoders to extract task-specific representations. For example, proposes a Cross-Stitch model which is a typical separate bottom model. Cross-Stitch model will be trained on different tasks separately to encode the task-specific representations from different bottom layers. Then, a cross-stitch unit is used a as a gate to combine those separately trained layers. introduces the tensor factorization model to allow common knowledge to be shared at each layer in the network. By the strategy proposed in their work, parameters are softly shared across the corresponding layers of the deep learning network and the parameter sharing ratio will be determined by the model itself. We also note that some recent works (; ;) can learn to capture task dependency. computes an affinity matrix among tasks based on whether the solution for one task can be sufficiently easily read out of the representation trained for another task. However, it can only capture the general task dependency. uses a sigmoid gated interaction module between two tasks to model their relation. But it will suffer from quadratic growth in space as the number of tasks increases. utilizes a shared network to share features across different tasks and uses the attention mechanism to automatically determine the importance of the shared features for the respective task. However, there is no knowledge transferred or interaction between tasks. In this section, we propose our framework L2T-MITTEN, which can end-to-end learn the task dependency in both general and data-specific level, and help to assemble multi-task representations from different task-specific encoders. To formulate our framework, we first start by briefly introducing a general setting of multi-task learning. For each task t ∈ {1, ..., T}, we have a corresponding dataset {(X k=1 with N (t) data samples, where k represent the feature vector of k-th data sample and y k is its label. We would like to train T models for these tasks, and each model has its own parameter W (t). Note that for different multi-task learning frameworks, these parameters can be shared hardly or softly. The goal of multi-task learning is to improve general performance by sharing information among related tasks. The total loss of multi-task learning is calculated as: 3.2 ARCHITECTURE OVERVIEW Figure 1: The overall architecture of our multi-task learning framework (L2T-MITTEN). For each input data, we will transfer its task-specific representations among different tasks, and assemble the transferred representations via a task-specific Interaction Unit to get the final representation for each task. As is shown in Figure 1, our framework consists of three components: Task-specific Encoders, Transfer Block, and Readout Block. The Task-specific Encoders consists of T separate feature encoders, which can be any type of feed-forward networks based on specific data. Unlike hard parameter sharing methods that tie the bottom encoders' parameters together, we keep each feature encoder separate to efficiently extract task-specific knowledge. In this way, for a given data sample X k, we can use these encoders to get task-specific representations, where E t (X k) is the representation of X k for task t. To conduct multi-task learning, one can simply use the representation of each task alone to predict labels without sharing any parameters. 
However, this model will suffer for tasks without sufficient labeled data. Therefore, we would like to transfer the knowledge among these T tasks and assemble the transferred representations, which is what we do in the Transfer Block. The Readout Block also consists of T separate readout modules depending on the specific data. The detailed architecture for different tasks can be found in Appendix B. In the Transfer Block, the first step is to transfer the task-specific representations from source to target tasks. A naive way is to use a transfer function F i→j (·) to transfer the task-specific representation from the space of task i to task j for every task pair: where W i→j ∈ R d×d, and d is the dimension of the task-specific representation. However, this will in a total number of T 2 transfer functions. Thus, to prevent the quadratic growth, we develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from. More specifically, we decompose each transfer function F i→j (·) to F T j • F Si (·). Assume that we are trying to transfer the task-specific representation E i (X k) from task i to task j, where i, j ∈ {1, 2, ..., T}. We can decompose the transfer matrix W i→j to S i and T j. In this way, we only need 2T transfer functions in total for F Si (·) and F T j (·). The space complexity is reduced from O(T 2) to O(T). Here we denote the transferred representation from task i to task j as: where S i, T j ∈ R d×d and d is a hyper-parameter. Figure 2: Position-wise mutual attention mechanism. With the transferred representations, the next step of the Transfer Block is to assemble the transferred representations with respect to the multi-level task dependency. Here, the multi-level task dependency consists of two parts: the general task dependency and the data-specific task dependency. The multi-level task dependency is modelled by the position-wise mutual attention mechanism as shown in Figure 2. To model the general task dependency, we represent it by a parameterized weighted dependency graph D ∈ R T ×T. The learnable weight of this parameterized dependency graph represents the transferable weight between any task pair. Note that the dependency graph is asymmetrical. In this way, the negative influence of irrelevant tasks can be reduced as much as possible. Further, even for the same task pair, the transferable weight (dependency) may be different for different data samples; different sub-structure (node/word) in one data sample (graph/text). Therefore, we study the data-specific task dependency in depth. To efficiently model the dataspecific task dependency, we consider the mutual attention between representations of the same data sample under source and target tasks. Given H i→j, the transferred representation from the task i to task j, and E j (X k), the original representation from the target task, we get the position-wise mutual attention by: where W Qi, W Kj ∈ R d×d are the query and key projection matrices, d is a hyper-parameter, ⊗ is the Hadamard product, and SUM is used to eliminate the last dimension (d). We use Hadamard product instead of matrix multiplication because we only want the sub-structure in a given data sample to interact with its counterpart under other tasks. Take graph data as an example, a certain node of one graph will only give attention to the same node of that graph under other tasks. 
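Since the displayed equation for the position-wise mutual attention did not survive extraction, the following is a hedged sketch reconstructed from the surrounding text: queries come from the transferred representation, keys from the target-task representation, and the Hadamard product followed by a SUM over the feature dimension yields one score per position. The absence of any scaling factor here is an assumption; the normalization over tasks happens later when the representations are assembled.

```python
import numpy as np

def positionwise_mutual_attention(H_ij, E_j, W_Q, W_K):
    """Position-wise mutual attention scores.
    H_ij: (n, d) representation transferred from task i to task j
    E_j:  (n, d) original representation under target task j
    Returns an (n,) vector: each position (e.g., each graph node) only
    interacts with its own counterpart under the other task."""
    q = H_ij @ W_Q                     # queries from the transferred representation
    k = E_j @ W_K                      # keys from the target-task representation
    return np.sum(q * k, axis=-1)      # Hadamard product, then SUM over the feature dim

n, d = 5, 8
rng = np.random.default_rng(0)
scores = positionwise_mutual_attention(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                                       rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(scores.shape)  # (n,), one attention score per position
```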
Then, for a target task j, we obtain a set of general task dependency; and a set of data-specific task dependency. To integrate them, we first scale data-specific task dependency A j by the general task dependency D j. And then, we calculate the weighted sum of the transferred representations according to the multi-level task dependency. The final assembled representationX j k (for data sample k and task j) is as follow: whereD is the normalized version of D, W V i ∈ R d×d is the value projection matrix, W Oj ∈ R d×d is the output projection matrix. Note that η here is a scalar parameter used to prevent the vanish gradient problem of two normalization operations. In this section, we evaluate the performance of our proposed L2T-MITTEN approach against several classical and SOTA approaches on two application domains: graph and text. In graph domain, we train a multitask Graph Convolutional Network (GCN) for both graph-level and node-level classification. And in text domain, we train a multitask Recurrent Neural Network (RNN) for text classification. Further, we provide visualization and analysis on the learned hidden dependency structure. Codes and datasets will be released. For graph-level classification, we use Tox21 and SIDER . Tox21: Toxicology in the 21st Century (Tox21) is a database measuring the toxicity of chemical compounds. This dataset contains qualitative toxicity assays for 8014 organic molecules on 12 different targets including nuclear receptors and stress response pathways. In our experiment, we treat each molecule as a graph and each toxicity assay as a binary graph-level classification task (for 12 tasks in total). The Side Effect Resource (SIDER) is a database of marketed drugs and adverse drug reactions. This dataset contains qualitative drug side-effects measurements for 1427 drugs on 27 side-effects. In our experiment, we treat each drug (organic molecule) as a graph and the problem of predicting whether a given drug induces a side effect as a individual graph-level classification tasks (for 27 tasks in total). For node-level classification, we use DBLP and BlogCatalog . In the dataset, authors are represented by nodes and its feature is generated by titles of their papers. Two authors are linked together if they have co-authored at least two papers in 2014-2019. We use 18 representative conferences as labels. An author is assigned to multiple labels if he/she has published papers in some conferences. The processed DBLP dataset is also published in our repository. BlogCatalog: The BlogCatalog is a collection of bloggers. In the dataset, bloggers are represented as nodes, and there is a link between two bloggers if they are friends. The interests of each blogger can be tagged according to the categories that he/she published blogs in. This dataset uses 39 categories as labels. Each blogger is assigned to multiple labels if he/she published a blog in some categories. For text classification, we use TMDb 1 dataset. TMDb: The Movie Database (TMDb) dataset is a collection of information for 4803 movies. For each movie, the dataset includes information ranging from the production company, production country, release date to plot, genre, popularity, etc. In our experiment, we select plots as the input with genres as the label. We treat the problem of predicting whether a given plot belongs to a genre as an individual text-level classification tasks (for 20 tasks in total). A summary of the five datasets is provided in Appendix A. 
We compare our L2T-MITTEN approach with both classical and SOTA approaches. The details are given as follows: Single-task method: Single-Task: Simply train a network consists of encoder block and readout block for each task separately. Classical multi-task method: Shared-Bottom : This is a widely adopted multi-task learning framework which consists of a shared-bottom network (encoder block in our case) shared by all tasks, and a separate tower network (readout block in our case) for each specific task. The input is fed into the sharedbottom network, and the tower networks are built upon the output of the shared-bottom. Each tower will then produce the task-specific outputs. Cross-Stitch : This method uses a "cross-stitch" unit to learn the combination of shared and task-specific representation. The "cross-stitch" unit is a k × k trainable matrix (k is the number of tasks) which will transfer and fuse the representation among tasks by the following equation: where x i is the output of the lower level layer for task i, α ij is the transfer weight from task j to task i, andx i is the input of the higher level layer for task i. MMoE : This method adopts the Multi-gate Mixture-of-Expert structure. This structure consists of multiple bottom networks (experts), and multiple gating networks which take the input features and output softmax gates assembling the experts with different weights. The assembled features are then passed into the task-specific tower networks. All the baseline models use the same encoder and readout block for each task. The architecture details are provided in Appendix B. We partition the datasets into 80:20 training/testing sets (i.e. each data sample can either appear in the training or testing set) and evaluate our approach under multiple settings 2: Sufficient setting: all tasks have sufficient labeled training data; Imbalanced setting: some tasks have more labeled training data than others; Deficient setting: all tasks have deficient labeled training data. Models are trained for 100 epochs using the ADAM optimizer. We report the performance of our approach and baselines on graph classification, node classification and text classification tasks in terms of AUC-ROC score in Table 1 and 2 respectively. 3 From the above , first of all, we can see that the multi-task methods outperform the single-task method in most cases which shows the effectiveness of knowledge transfer and multi-task learning. Further, we can see that our proposed L2T-MITTEN approach outperforms both classical and SOTA in most tasks. Finally, our approach shows significant improvement under deficient labeled training data setting, since our approach leverages the structure of the data sample itself to guide the transfer among tasks. Secondly, we found that in the real-world dataset, like DBLP dataset, our model can outperform other SOTA methods significantly, which demonstrate the importance of taking multi-level dependency into consideration. Note that the Single-Task can achieve the second-best . This fact indicates that in the real-world dataset, tasks may be irrelevant to each other. Our multi-level task dependency can be more effective to prevent the influence of other irrelevant tasks. Furthermore, we conduct experiments on the text classification dataset, TMDb. The Cross-Stitch model achieves the best when the label ratio for every task is 80%. However, our task can achieve the best for partially labeled setting (partially 10%) and few labeled setting (all 10%). 
This fact demonstrates that our directed task dependency graph can effectively prevent the negative knowledge be transferred among different tasks when the training label is few. For visualization and analysis of the learned multi-level task dependency structure, we will take DBLP as an example here due to its simplicity in interpreting and understanding. First, in Figure 3a, where we directly visualize the learned general task dependency matrix, we can see our approach indeed captures the task dependency structure in general, i.e. conferences from the same domain are more likely to be in the same sub-tree. Moreover, in Figure 3b we plot the authors (nodes) according to the learned data-specific task dependency matrix and we can see that there are some clusters formed by authors. Further, we visualize the mean value of the data-specific task dependency for each cluster, as shown in Figure 3c. We can see that different cluster does have different task dependency. This is desirable since when predicting if an author has published papers in some conferences, authors from different domains should have different transfer weight among conferences (tasks). As a summary, it is demonstrated that our approach can capture the task dependency at multiple levels according to specific data. We propose L2T-MITTEN, a novel multi-task learning framework that employs the positionwise mutual attention mechanism to learn the multi-level task dependency; transfers the taskspecific representations between tasks with linear space-efficiency; and uses the learned multilevel task dependency to guide the inference. We design three experimental settings where training data is sufficient, imbalanced or deficient, with multiple graph/text datasets. Experimental demonstrate the superiority of our method against both classical and SOTA baselines. We also show that our framework can be used as an analytical tool to extract the task dependency structures at different levels, which can reveal some hidden knowledge of tasks and of datasets A DATASET SUMMARY Figure 4, in the Encoder Block, we use several layers of graph convolutional layers followed by the layer normalization . In the Readout Block, for graph-level task, we use set-to-set as the global pooling operator to extract the graph-level representation which is later fed to a classifier; while for node-level task, we simply eliminate the global pooling layer and feed the node-level representation directly to the classifier. Figure 4: Graph convolutional networks architecture. Note that in node-level task, the Set2Set layer (global pooling) is eliminated. The text model uses long short-term memory (LSTM) architecture in their Encoder Block, and the dot-product attention in the Readout Block, as shown in Figure 5. The dot-product attention used to get the text-level representation is as follows: where O ∈ R n×d is the output of the LSTM, H n ∈ R 1×d is the hidden state for the last word, α ∈ R n×1 is attention weight for each word, andÔ ∈ R 1×d is the text-level representation (n is the number of words, d is the feature dimension for each word).
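The displayed equation for the dot-product attention readout is missing from the text above; the sketch below gives the standard reading that is consistent with the stated shapes (alpha as a softmax over the n words of the scores O H_n^T, and the text-level representation as the attention-weighted sum of the LSTM outputs). It is an illustration of that reading, not the authors' code.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_readout(O, H_n):
    """Dot-product attention readout.
    O:   (n, d) LSTM outputs for the n words
    H_n: (1, d) hidden state of the last word
    Returns alpha (n, 1) and the text-level representation O_hat (1, d)."""
    alpha = softmax(O @ H_n.T, axis=0)   # attention weight for each word
    O_hat = alpha.T @ O                  # weighted sum of word representations
    return alpha, O_hat

n, d = 7, 16
rng = np.random.default_rng(1)
alpha, O_hat = text_readout(rng.normal(size=(n, d)), rng.normal(size=(1, d)))
print(alpha.shape, O_hat.shape)  # (7, 1) (1, 16)
```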
We propose a novel multi-task learning framework which extracts multi-view dependency relationship automatically and use it to guide the knowledge transfer among different tasks.
510
scitldr
We design simple and quantifiable testing of global translation-invariance in deep learning models trained on the MNIST dataset. Experiments on convolutional and capsules neural networks show that both models have poor performance in dealing with global translation-invariance; however, the performance improved by using data augmentation. Although the capsule network is better on the MNIST testing dataset, the convolutional neural network generally has better performance on the translation-invariance. Convolutional neural networks (CNN) have achieved state-of-the-art performance than the human being on many computer vision tasks BID6; BID2. The deep learning community trend to believe that the success of CNN mainly due to two key features in CNN, reduced computation cost with weight sharing in convolutional layers and generalization with local invariance in subsampling layers BID7; BID8. Due to convolutional layers are'place-coded' equivariant and max-pooling layers are local invariant BID1, CNN has to learn different models for different viewpoints which need big data and expensive cost. More Generalization model should be able to train on a limited range of viewpoints and getting good performance on a much more wider range. Capsule network is robust in dealing with different viewpoints BID3 BID9; BID4. Capsules are a group of neurons which includes the pose, colour, lighting and deformation of the visual entity. Capsule network aims for'rate-coded' equivariance because it's the weights that code viewpoint-invariant knowledge, not the neural activities. Viewpoint changes in capsule network are linear effects on the pose matrices of the parts and the whole between different capsules layers. However, it still unclear whether capsule networks be able to generalize for global translation invariance. Visualize and Quantify the translation-invariance in deep learning model are essential for understanding the architectural choices and helpful for developing Generalization model that is invariant to viewpoint changes. An analysis using translation-sensitivity map for MNIST digit dataset has been used to investigate translation invariance in CNN BID5. In this paper, we introduce a simple method to test the performance of global translation-invariance in convolutional and capsule neural network models trained on the MNIST dataset. Global translational invariance (GTI) of a deep learning model trained on the MINST data which can test by using a simple testing dataset. All images in the GTI testing dataset generated by shifting the centre of mass of a Helvetica font digit from top left to bottom right one pixel each time as shown in FIG0. The TI testing images size is 28 × 28 which the same size as MNIST images. Locations of the centre of mass for each digit has 18 columns and 14 rows. In total there are 18 × 14 × 10 = 2520 testing images, which cover all the possible cases of translational translations. We train all deep learning models using MNIST training dataset which includes 60000 samples, and testing on both MNIST testing dataset with 10000 samples and GTI testing dataset with 2520 samples. Nearly all the images in the MNIST dataset are located at the centre of canvas while the GTI dataset distributes uniformly on the canvas. 
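To make the construction above concrete, here is a small sketch (not the authors' code) that generates the 18 x 14 grid of translations for a single 28 x 28 digit template by moving its centre of mass one pixel at a time; the Helvetica templates themselves and the exact placement of the grid on the canvas are assumptions.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def make_gti_images(template, n_rows=14, n_cols=18, offset=(7, 5)):
    """Translate one 28x28 digit template so that its centre of mass visits an
    n_rows x n_cols grid of pixel locations; `offset` (the top-left target
    position on the canvas) is an assumed choice."""
    cy, cx = center_of_mass(template)
    images = []
    for r in range(n_rows):
        for c in range(n_cols):
            dy, dx = (offset[0] + r) - cy, (offset[1] + c) - cx
            images.append(shift(template, (dy, dx), order=0, cval=0.0))
    return np.stack(images)                      # (n_rows * n_cols, 28, 28)

# templates: a hypothetical dict {digit: 28x28 array} of Helvetica renderings
# gti = np.concatenate([make_gti_images(templates[d]) for d in range(10)])
# gti.shape == (2520, 28, 28), matching 18 * 14 * 10 testing images
```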
Compared to the translation-sensitivity map method, which averages over numerous images of each class in the MNIST testing dataset and shifts them to all possible translations, our approach is robust against random noise caused by mislabelling in the MNIST testing dataset, and our GTI testing dataset is much smaller. Since the GTI testing dataset is identical for testing all different models, it can capture the tiny differences between those models. Another advantage of the GTI dataset is that it is natural to quantify global invariance, because the accuracy of model predictions on the GTI testing dataset reflects the ability of the model to deal with global translational invariance. The CNN model has nine layers as shown in FIG1. Layers 1, 2, 4, 5 are convolutional layers with 32, 32, 64, and 64 channels respectively. Each layer has 3 × 3 filters and a stride of 1. Layers 3 and 6 are max-pooling layers, and layers 7, 8, 9 are fully connected layers with sizes 256, 128 and 10 respectively. Dropout with a drop rate of 0.5 is applied to the first max-pooling layer and all the fully connected layers. The total number of parameters is 361,578, which is about 23 times smaller than that of the capsule network. Except for the last layer, whose activation function is softmax, all the other layers use ReLU. The optimizer is Adam with default parameters in Keras, and the objective function is the cross-entropy loss. The results of the CNN are shown in FIG2 and TAB0. The CNN model is trained on MNIST data, thus it achieves very high accuracy on the MNIST testing set. However, the model trained on the MNIST training dataset without any data augmentation has only 42.16% accuracy on the GTI testing dataset, which implies that the CNN's performance in dealing with global translational invariance is abysmal. As we can see in the left picture of FIG2, images with the digit's centre of mass around the centre of the canvas are predicted correctly, and images with the digit at the corner are assigned to an incorrect class. Those images around the centre are classified correctly because the max-pooling layers preserve local invariance in the feature maps. Since only MNIST images are used to train the model and the model cannot accurately predict those images shifted toward the corner in the GTI dataset, this strongly suggests that CNN is 'place-coded' equivariant. (FIG2 panels: predicted GTI labels, "Without Augmentation" and "Shift 20% in training data".) To improve the performance of CNN on the GTI dataset, we train on MNIST with data augmentation by shifting the images from the centre in the x and y directions. The accuracy on the GTI testing dataset increases to 98.05% by randomly moving the centre of an MNIST training image in the x and y directions by up to 30% of the width or height. That data augmentation of the training dataset improves the performance also implies 'place-coded' equivariance in CNN, because the neurons at the corners of the feature maps are only activated once the model starts to see training samples with objects at the edge. We test the GTI dataset on a CapsNet with the same architecture as in BID9. The CapsNet has 8.2M parameters, which is about 23 times more than the CNN. We train the model with the Adam optimizer, using exponential decay of the learning rate with decay parameter 0.9. We use the margin loss with the same parameters as in BID9.
We also add the reconstruction loss, but scale it down by 0.0005. Capsule networks are robust to viewpoint changes in object recognition; however, our experiments suggest that the capsule network's handling of global invariance still has room for improvement. As shown in the left panel of FIG3, the model trained on MNIST without data augmentation fails to predict the class correctly, and also generates incorrect reconstructions, when the digit is close to the edge. Data augmentation of the MNIST training dataset helps the CapsNet achieve better accuracy on the GTI dataset. The right panel of FIG3 shows an example trained on MNIST with 20% shifting: nearly all images are predicted correctly except those close to the edge. Interestingly, the reconstructed images look like handwriting even though the inputs are Helvetica-font digits. The CNN's performance on the GTI dataset is generally better than the CapsNet's, as shown in Figure 5 (GTI accuracy of the CNN and CapsNet trained with different amounts of random shifting in the MNIST training data). The CNN's accuracy is higher than the CapsNet's for every amount of shifting, even though the convolutional layers in the CapsNet use wider receptive fields. Since the max-pooling layers are removed in the CapsNet and its convolutional layers are 'place-coded' equivariant, this could explain why the CapsNet performs worse on the GTI dataset. We believe there is much room for the CapsNet to improve in handling translational invariance. We introduced a simple GTI testing dataset for deep learning models trained on MNIST. The goal is to better understand the ability of CNNs and CapsNets to deal with global translational invariance. Although the current version of the CapsNet cannot handle global translational invariance without data augmentation, we still believe the CapsNet architecture is potentially better suited than the CNN for global translational invariance, because capsules can be trained to learn all viewpoints regardless of whether the information comes from the centre or the edge of the canvas. Our testing method is simple and quantifiable, and it is easy to apply to other computer vision datasets by taking one clear, correctly labelled image from each class and applying translational shifts that cover all possible cases.
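As a usage note, quantifying GTI performance amounts to a single accuracy computation over the 2520-image grid, and a per-position breakdown (in the spirit of translation-sensitivity maps) falls out by reshaping the correctness mask back onto the placement grid. The sketch below assumes the images and labels come from the generator sketched earlier (digit-major, then row-major placement order) and that the model outputs class probabilities.

```python
import numpy as np

def gti_accuracy(model, gti_images, gti_labels, n_rows=14, n_cols=18):
    """Overall GTI accuracy plus a per-position accuracy map averaged over digits."""
    probs = model.predict(gti_images[..., None] / 255.0, verbose=0)
    correct = probs.argmax(axis=1) == gti_labels
    overall = correct.mean()
    per_position = correct.reshape(10, n_rows, n_cols).mean(axis=0)
    return overall, per_position
```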
Testing of global translational invariance in Convolutional and Capsule Networks
511
scitldr
Gaussian processes are ubiquitous in nature and engineering. A case in point is a class of neural networks in the infinite-width limit, whose priors correspond to Gaussian processes. Here we perturbatively extend this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors. The methodology developed herein allows us to track the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, reminiscent of renormalization-group flow. We further develop a perturbative prescription to perform Bayesian inference with weakly non-Gaussian priors. Gaussian processes model many phenomena in the physical world. A prime example is Brownian motion (Brown, 1828), modeled as the integral of Gaussian-distributed bumps exerted on a pointlike solute . The theory of elementary particles also becomes a Gaussian process in the free limit where interactions between particles are turned off, and manybody systems as complex as glasses come to be Gaussian in the infinite-dimensional, mean-field, limit . In the context of machine learning, pointed out that a class of neural networks give rise to Gaussian processes in the infinite-width limit, which can perform exact Bayesian inference from training to test data . They occupy a corner of theoretical playground wherein the karakuri of neural networks is scrutinized (; ; ; ;). In reality, Gaussian processes are but mere idealizations. Brownian particles have finite-size structure, elementary particles interact, and many-body systems respond nonlinearly. In order to understand rich phenomena exhibited by these real systems, Gaussian processes rather serve as starting points to be perturbed around. Indeed many edifices in theoretical physics are build upon the successful treatment of non-Gaussianity, with a notable example being renormalization-group flow (; ; ;). In the quest to elucidate behaviors of real neural networks away from the infinite-width limit, it is thus natural to wonder if the similar treatment of non-Gaussianity yields equally elegant and powerful machinery. Here we set out on this program, perturbatively treating finite-width corrections to neural networks. Prior distributions of outputs are obtained through progressively integrating out preactivation of neurons layer by layer, yielding non-Gaussian priors. Intriguingly, intermediate recursion relations and their derivation resemble renormalization-group flow . Such a recursive approach further enables us to treat finite-width corrections on Bayesian inference and their regularization effects, with arbitrary activation functions. The rest of the paper is structured as follows. In Section 2 we review and set up basic concepts. Our master recursive formulae (R1,R2,R3) are derived in Section 3, which control the flow of preactivation distributions from lower to higher layers. After an interlude with concrete examples in Section 4, we extend the Gaussian-process Bayesian inference method to non-Gaussian priors in Section 5 and use the ing scheme to study inference of neural networks at finite widths. We conclude in Section 6 with dreams. In this paper we study real finite-width neural networks in the regime where the number of neurons in hidden layers is asymptotically large whereas input and output dimensions are kept constant. Let us focus on a class of neural networks termed multilayer perceptrons, with model parameters, θ = b i, W i,j, and an activation function, σ. 
For each input, x ∈ R n0, a neural network outputs a vector, z(x; θ) = z (L) ∈ R n L, recursively defined as sequences of preactivations through i,j x j for i = 1,..., n 1, , we assume priors for biases and weights given by independent and identically distributed Gaussian distributions with zero means, E b i,j = 0, and variances Higher moments are then obtained by Wick's contractions . For instance, For those unfamiliar with Wick's contractions and connected correlation functions (a.k.a. cumulants), a pedagogical review is provided in Appendix A as our formalism heavily relies on them. In the infinite-width limit where n 1, n 2,..., n L−1 → ∞ (but finite n 0 and n L), it has been argued -with varying degrees of rigor (; ;) -that the prior distribution of outputs is governed by the Gaussian process with a kernel and all the higher moments given by Wick's contractions. In particular, there exists a recursive formula that lets us evaluate this kernel for any pair of inputs [c.f. Equation (R1)]. Importantly, once the values of the kernel are evaluated for all the pairs of..,ND, consisting of N R training inputs with target outputs and N E test inputs with unknown targets, we can perform exact Bayesian inference to yield mean outputs as predictions for N E test data [c.f. Equation (GPM)]. This should be contrasted with stochastic gradient descent (SGD) optimization , through which typically a single estimate for the optimal model parameters of the posterior, θ, is obtained and used to predict outputs for test inputs; Bayesian inference instead marginalizes over all model parameters, performing an ensemble average over the posterior distribution . We shall now study real finite-width neural networks in the regime n 1,..., n L−1 ∼ n 1. 1 At finite widths, there are corrections to Gaussian-process priors. In other words, a whole tower of 1 Note that input and output dimensions, n0 and nL, are arbitrary. To be precise, defining n1,..., nL−1 ≡ µ1n,..., µL−1n, we send n 1 while keeping C , µ1,..., µL−1, n0, and nL constants, and compute the leading 1/n corrections. In particular it is crucial to keep the number of outputs nL constant in order to consistently perform Bayesian inference within our approach. nontrivial preactivation correlation functions beyond the kernel, collectively dictate the distribution of preactivations. Our aim is to trace the flow of these distributions progressively and cumulatively all the way up to the last layer whereat Bayesian inference is executed. More specifically, we shall inductively and self-consistently show that two-point preactivation correlation functions take the form and connected four-point preactivation correlation functions is symmetric under α 1 ↔ α 2, α 3 ↔ α 4, and (α 1, α 2) ↔ (α 3, α 4). At the first layer the preactivation distribution is exactly Gaussian for any finite widths and hence Equations (KS) and (V) are trivially satisfied, with = 0, and Obtained in Section 3 are the recursive formulae that link these core kernel, self-energy, and fourpoint vertex at the -th layer to those at the (+ 1)-th layer while in Section 5 these tensors at the last layer = L are used to yield the leading 1/n correction for Bayesian inference at finite widths. Our Schwinger operator approach is orthogonal to the replica approach by and, unlike the planar diagrammatic approach by , applies to general activation functions, made possible by accumulating corrections layer by layer rather than dealing with them all at once. 
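To make the leading-order objects concrete, the sketch below evaluates the infinite-width kernel recursion for a pair of inputs by Monte Carlo and then uses a kernel assembled this way for the Gaussian-process mean prediction. The first-layer kernel C_b + C_W x·x'/n_0 and the C_W/fan-in weight scaling are the usual conventions and are assumed here rather than copied from Equations (R1) and (GPM); the finite-width corrections discussed in this section are not included.

```python
import numpy as np

def nngp_kernel_pair(x1, x2, sigma, n_layers, c_b=0.0, c_w=2.0, n_samples=200_000, seed=0):
    """Monte-Carlo evaluation of the infinite-width kernel recursion for two inputs.

    Starts from K = c_b + c_w * <x, x'> / n0 and repeatedly replaces K by
    c_b + c_w * E[sigma(u) sigma(v)] with (u, v) ~ N(0, K); conventions assumed."""
    rng = np.random.default_rng(seed)
    n0 = x1.shape[0]
    K = c_b + c_w * np.array([[x1 @ x1, x1 @ x2],
                              [x2 @ x1, x2 @ x2]]) / n0
    for _ in range(n_layers - 1):
        L = np.linalg.cholesky(K + 1e-12 * np.eye(2))
        z = rng.standard_normal((n_samples, 2)) @ L.T   # samples from N(0, K)
        s = sigma(z)
        K = c_b + c_w * (s.T @ s) / n_samples
    return K

def gp_posterior_mean(K_full, y_train, n_train):
    """Leading-order Bayesian prediction: the Gaussian-process posterior mean,
    with the n_train training points ordered first in K_full."""
    K_RR, K_ER = K_full[:n_train, :n_train], K_full[n_train:, :n_train]
    return K_ER @ np.linalg.solve(K_RR, y_train)

# Example: three-layer ReLU kernel between two unit-norm inputs.
x1, x2 = np.random.randn(2, 784)
x1, x2 = x1 / np.linalg.norm(x1), x2 / np.linalg.norm(x2)
K = nngp_kernel_pair(x1, x2, sigma=lambda z: np.maximum(z, 0.0), n_layers=3)
```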
More substantially, in contrast to these previous approaches, we here study finite-width effects on Bayesian inference and find that the renormalization-group picture naturally emerges. As auxiliary objects in recursive steps, let us introduce activation correlation functions Our basic strategy is to establish relations zigzagging between sets of preactivation correlation functions and sets of activation correlation functions, keeping track of leading finite-width corrections. Below, relations G → H are obtained by integrating out preactivations while relations H → G (+1) are obtained by integrating out biases and weights. At first glance the algebra in this paper may look horrifying but repeated applications of Wick's contractions are all there is to it. The are summarized in Section 3.2. 2 In the main text we place tildes on objects that depend only on sample indices α's in order to distinguish them from those that depend both on sample indices α's and neuron indices i's. 3 Given that the means of biases and weights are zero, G The remaining task is to relate preactivation correlations G to activation correlations H within the same layer, which will complete the zigzag relation (ZIGZAG) for these correlation functions. 4 Once the sorcery of Wick's contractions and connected correlation functions is mastered, it is simple to derive the following combinatorial hack (Appendix A.4): viewing prior preactivations as a random (n N D)-dimensional vector and defining the Gaussian integral with the kernel z, the prior average capture 1/n corrections due to selfenergy and four-point vertex, respectively, and are defined as where the sample indices are lowered by using the inverse core kernel as a metric, meaning Using the above hack, we can evaluate the activation correlations by straightforward algebra with -you guessed it -Wick's contractions. As the Gaussian integral is diagonal in the neuron index i, we just need to disentangle cases with repeated and unrepeated neuron indices. The solution for this exercise is in Appendix B: this is the most cumbersome algebra in the paper and the ability to perform it certifies the graduation from the magical school of Wick's crafts and wizardly cumulants. 4 The nontrivial parts of the inductive proof for Equations (KS) and (V) are to show (i) that the right-hand side of Equation is finite as n → ∞, (ii) that the leading contribution of Equation is the Gaussianprocess kernel, and (iii) that higher-point connected preactivation correlation functions are all suppressed by O 1 n 2, all of which are verified in obtaining the recursive equations. See Appendix B for a full proof. Denoting the Gaussian integral with the core kernel z..,ND, and plugging in of Appendix B into Equations and, we arrive at our master recursion relations, and For = 1, a special note about the ratio n n −1 is in order: even though n 0 stays constant while n 1 1, the terms proportional to that ratio are identically zero due to the complete Gaussianity (R0). The preactivation distribution in the first layer (R0) sets the initial condition for the flow from lower to higher layers dictated by these recursive equations. Once recursed up to the last layer = L, the ing distribution of outputs z = z (L) can be succinctly encoded by the probability distribution with the potential, and (D1) By now, the reader should be deriving this through Wick's contractions without solicitation. It is important to note that n L is constant and thus H 1 [z] can consistently be treated perturbatively. 
5 If nL were of order n 1, the potential H would become a large-n vector model, for which we would have to sum the infinite series of bubble diagrams . The recursive relations obtained above can be evaluated numerically [or sometimes analytically for ReLU ], which is a perfectly adequate approach: at the leading order it involves four-dimensional Gaussian integrals at most. Here, continuing the theme of wearing out Wick's contractions, we develop an alternative analytic method that works for any polynomial activations , providing another perfectly cromulent approach. For a general polynomial activation of degree p, σ(z) = p k=0 a k z k, the nontrivial term in Equation (R1) can be expanded as Each term can then be evaluated by Wick's contractions and the same goes for all the terms in Equations (R2) and (R3). Below and in Appendix C, we illustrate this procedure with simple examples. When the activation function is linear, σ(z) = z, multilayer perceptrons go under the aweinspiring name of deep linear networks . Setting C = 0 and C W = 1 for simplicity, our recursion relations reduce to K α1,α2, and S α1,α2 Solving them yields the layer-independent core kernel and zero self-energy and the linearly layer-dependent four-point vertex. It succinctly reproduces the that can be obtained through planar diagrams in this special setup . Quadratic activation is worked out in Appendix C.1. The recursion relations simplify drastically for the case of a single input, N D = 1, as worked out in detail in Appendix C.2. For instance, for rectified linear unit (ReLU) activation with C b = 0 and C W = 2, we obtain the layer-independent core kernel, zero self-energy, and the four-point vertex 2. Interestingly, as for deep linear networks, the factor (1/n) appears again. This factor has also been found by , which provides guidance for network architectural design through its minimization. We generalize this factor for monomial activations in Appendix C.2.1 Here we put our theory to the test. For concreteness, take a single black-white image of handwritten digits with 28-by-28 pixels (i.e. n 0 = 784) from the MNIST dataset without preprocessing, set depth L = 3, bias variance C b = 0, weight variance C W = C W, and widths (n 0, n 1, n 2, n 3) = (784, n, 2n, 1), and use activations σ(z) = z (linear) with C W = 1 and max(0, z) (ReLU) with C W = 2. In Figure 1, for each width-parameter n of the hidden layers we record the prior distribution of outputs over 10 6 instances of Gaussian weights and compare it with the theoretical prediction -obtained by cranking the knob from the initial condition (R0) through the recursion relations (R1-R3) to the distribution (D0-D2). The prior distribution becomes increasingly non-Gaussian as networks narrow and the deviation from the Gaussian-process prior is correctly captured by our theory. Higher-order perturbative calculations are expected to systematically improve the quality -and extend the range -of the agreement. Additional experiments are performed in Appendix C.3, which further corroborates our theory. Figure 1: Comparison between theory and experiments for prior distributions of outputs for a single input. The agreement between our theoretical predictions (smooth thick lines) and experimental data (rugged thin lines) is superb, correctly capturing the initial deviations from Gaussian processes at n = ∞ (black), all the way down to n ∼ 10 for linear activation and to n ∼ 30 for ReLU activation. 
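The Figure 1 experiment is simple to reproduce at the level of sampling: draw many instances of the Gaussian prior over weights and histogram the resulting outputs for a fixed input. The sketch below assumes the fan-in scaling W^(l) ~ N(0, C_W/n_{l-1}) with C_b = 0, and uses fewer draws than the 10^6 quoted in the text purely for speed.

```python
import numpy as np

def sample_prior_outputs(x, n, activation, c_w, n_draws=100_000, seed=0):
    """Sample the network output z^(3)(x) under the Gaussian prior over weights.

    Widths follow the text, (n0, n1, n2, n3) = (784, n, 2n, 1), with C_b = 0;
    the entrywise scaling W^(l) ~ N(0, c_w / n_{l-1}) is an assumed convention."""
    rng = np.random.default_rng(seed)
    widths = [x.shape[0], n, 2 * n, 1]
    outputs = np.empty(n_draws)
    for i in range(n_draws):
        h = x
        for l in range(1, len(widths)):
            W = rng.standard_normal((widths[l], widths[l - 1])) * np.sqrt(c_w / widths[l - 1])
            z = W @ h
            h = z if l == len(widths) - 1 else activation(z)   # output layer stays linear
        outputs[i] = z[0]
    return outputs

# e.g. the ReLU case with C_W = 2 at hidden width n = 30; histogram the returned
# samples against the theoretical prediction built from (R0)-(R3) and (D0)-(D2).
# outputs = sample_prior_outputs(x_mnist, n=30, activation=lambda t: np.maximum(t, 0.0), c_w=2.0)
```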
Let us take off from the terminal point of Section 3: we have obtained the recursive equations (R0-R3) for the Gaussian-process kernel and the leading finite-width corrections and codified them in the weakly non-Gaussian prior distributions with ≡..,n L. We shall develop a formalism to infer outputs for test inputs a lá Bayes, perturbatively extending the textbook by. For field theorists, our calculation is just a tree-level field calculation in disguise. Taking the liberty of notations, we let the number of input-data arguments dictate the summation over sample indices α inside the potential H, and denote the joint probabilities Given the training targets y R, the posterior distribution of test outputs are given by Bayes' rule: The leading Gaussian-process contributions can be segregated out through the textbook manipulation [c.f. Appendix D.1]: denoting the full Gaussian-process kernel in the last layer as and the Gaussian-process posterior mean prediction as and defining a fluctuation For any function F, its expectation over the Bayesian posterior (Bayes) then turns into where the deviation kernel (δz E) and the normalization factor In particular the mean posterior output is given by ], recalling Equation (D2) for H 1, and using Wick's contractions for one last time, the mean prediction becomes With additional manipulations in Appendix D, this expression is simplified into the actionable form that is amenable to use in practice. For illustration, there, we also show a simple preliminary experiment, which indicates the 1/n regularization effect for sufficiently large width n and small amount of training data N R. This is in line with expectations that finite widths ameliorate overfitting and that non-Gaussian priors increase the expressivity of neural functions, but additional large-scale extensive experiments would be desirable in the future. In this paper, we have developed the perturbative formalism that captures the flow of preactivation distributions from lower to higher layers. The spiritual resemblance between our recursive equations and renormalization-group flow equations in high-energy and statistical physics is highly appealing. It would be exciting to investigate the structure of fixed points away from the Gaussian asymptopia and fully realize the dream articulated by beyond their limited example of a mapping between two antiquated techniques -the audacious hypothesis that neural networks wash away microscopic irrelevancies and extract relevant features. In addition we have developed the perturbative Bayesian inference scheme universally applicable whenever prior distributions are weakly non-Gaussian, and have applied it to the specific cases of neural networks at finite widths. In light of finite-width regularization effects, it would be prudent to revisit the empirical comparison between SGD optimization and Bayesian inference at finite widths , especially for convolutional neural networks. Finally, given surging interests in SGD dynamics within the large-width regime (; ; ;), it would be natural to adapt our formalism for investigating corrections to neural tangent kernels, and even nonperturbatively aspire to capture a phase transition out of lazy-learning into feature-learning regimes. Welcome to the magical school of Wick's crafts and wizardly cumulants. Here is all you need to know in order to follow the calculations in the paper. 
In the main text, Wick's contractions are used trivially for integrating out biases and weights as straightforward applications of Appendix A.1 while they are used more nontrivially for integrating out preactivations, with concepts of cumulants reviewed in Appendix A.2 and A.3, culminating in a hack derived in Appendix A.4. For most parts, we shall forget about the neuron index i for pedagogy and put them back in at the very end. For Gaussian-distributed variables z = {z α} α=1,...,N with a kernel K α,α, moments (S1) For any odd m such moments identically vanish. For even m, Isserlis-Wick's theorem states that where the sum is over all the possible pairings of m variables, For a proof, see for example. In order to understand and use the theorem, it is instructive to look at a few examples: and Given general (not necessarily Gaussian) random variables, connected correlation functions are defined inductively through where the sum is over all the possible subdivisions of m variables into s > 1 clusters of sizes (ν 1, . . ., ν s) as (k νs). In order to understand the definition, it is again instructive to look at a few examples. Assuming that all the odd moments vanish, Rearranging them in particular yields If these examples do not suffice, here is yet another example to chew on: and hence We emphasize that these are just renderings of the definition (S6). The power of this definition will be illustrated in the next two subsections. We often encounter situations with the hierarchy where 1 is a small perturbative parameter and here again odd moments are assumed to vanish. Often comes with the hierarchical structure is the asymptotic limit → 0 where with the Gaussian kernel K α1,α2 at zero and the leading self-energy correction S α1,α2. Let us also denote the leading four-point vertex For instance this hierarchy holds for weakly-coupled field theories -from which we are importing names such as self-energy, vertex, and metric -and, in this paper, such hierarchical structure is inductively shown to hold for prior preactivations z with = 1 n −1 in the regime n 1,..., n L−1 ∼ n 1. Note that, by definition, K α1,α2 and S α1,α2 are symmetric under α 1 ↔ α 2 and V α1,α2,α3,α4 is symmetric under permutations of (α 1, α 2, α 3, α 4). With the review of connected correlation functions passed us, we can now readily see that where in the last equality Wick's theorem was used backward. So far we have reviewed the standard technology of Wick's contractions, connected correlation functions, and all that. Here is one sorcery, which lets us magically evaluate E [z α1 · · · z αm] by mindless repetitions of Wick's contractions. Throughout, we shall assume the hierarchical structure (S12), 7 which is the inductive assumption in the main text, and start from its consequence (CLUSTER). Below, let us use the inverse kernel K −1 α1,α2 as a metric to lower indices: First note that 7 More precisely, we shall only use the weaker assumption that for m ≥ 6 along with Equations (S13) and (S14). where the symmetry α 1 ↔ α 2 of S α1,α2 was used. Hence, defining we obtain, for one term in Equation (CLUSTER), The similar algebraic exercise renders the other term in Equation (CLUSTER) to be In summary, for any function In order to get the expressions used in the main text at the -th layer, we need only to put back neuron indices i by replacing α → (α, i), identify = 1 n −1, and use the inductive assumptions (KS) The operators in Equations (OS') and (OV') then become i.e., the operators in Equations (OS) and (OV) in the main text. 
In this Appendix, we provide a full inductive proof for one of the main claims in the paper, streamlined in the main text. Namely, we assume at the -th layer that Equations (KS) and (V) hold and that all the higher-point connected preactivation correlation functions are of order O 1 n 2 -which are trivially true at = 1 -and prove the same for the (+ 1)-th layer. We assume the full mastery of Appendix A or, conversely, this section can be used to test the mastery of wicked tricks. First, trivial Wick's contractions yield Studiously disentangling cases with different numbers of repetitions in neuron indices (j 1, . . ., j k), we notice that at order O 1 n, terms without repetition or with only one repetition contribute, finding where we used the inductive hierarchical assumption at the -th layer, i.e., its consequence (HACK) and denoted a single-neuron random vectorz = {z α} α=1,...,ND and the Gaussian integral with the core kernel z As special cases, we obtain expressions advertised in the main text to be contained in this Appendix: Assembling everything, In particular, completing our inductive proof. Note that B (α1,α2),(α3,α4) = O 1 n. Nowhere in our derivation had we assumed anything about the form of activation functions. The only potential exceptions to our formalism are exponentially growing activation functions -which we have never seen in practice -that would make the Gaussian integrals unintegrable. Let us take multilayer perceptrons with quadratic activation, σ(z) = z 2, and study the distributions of preactivations in the second layer as another illustration of our technology. From the master recursion relations (R1-R3) with the initial condition (R0), Wickology yields, and =0. where. These expressions are used in Appendix D.2 for the experimental study of finite-width corrections on Bayesian inference. The recursive relations simplify drastically for the case of a single input, N D = 1. Setting C b = 0 for simplicity and dropping α index, our recursive equations reduce to, and (S34) Under review as a conference paper at ICLR 2020 For monomial activations, σ(z) = z p, such as in deep linear networks and quadratic activations ,, and (S37) In particular the four-point vertex solution is given by The factor 1 n p 2 generalizes the factor 1 n for linear and ReLU activations. , this factor guides us to narrow hidden layers as we pass through nonlinear activations. ReLU activation, σ(z) = max(0, z), can also be worked out for a single input through Wick's contractions, noting that the Gaussian integral is halved, yielding, and (S41) Setting C W = 2 for simplicity, these equations can be solved, leading to, and (S44) Here is an extended version of experiments in Section 4.3. As in the main text, take a single blackwhite image of hand-written digits from the MNIST dataset as an n 0 = 784-dimensional input, without preprocessing. Set bias variance C and max(0, z) (ReLU) with C W = 2. For all three cases, we consider both depth L = 2 with widths (n 0, n 1, n 2) = (784, n, 1) and depth L = 3 with widths (n 0, n 1, n 2, n 3) = (784, n, 2n, 1). As in Figure 1, in Figure S1, for each width-parameter n of the hidden layers we record the prior distribution of outputs over 10 6 instances of Gaussian weights and compare it with the theoretical prediction. Results again corroborate our theory. Figure S1: Comparison between theory and experiments for prior distributions of outputs for a single input. 
Our theoretical predictions (smooth thick lines) and experimental data (rugged thin lines) agree, correctly capturing the initial deviations from the Gaussian processes (black, n = ∞), at least down to n = n with n ∼ 10 for linear cases, n ∼ 30 for ReLU cases and depth L = 2 quadratic case, and n ∼ 100 for depth L = 3 quadratic case. This also illustrates that nonlinear activations quickly amplify non-Gaussianity. This expression simplifies drastically through the identity which can be checked explicitly, recalling RR K RE. Incidentally, this identity can also be used to prove Equation (GP∆). Now equipped with this identity, recalling φ Finally, denoting the matrix inside the parenthesis to be are actionable, i.e., easy to program. It turns out that for deep linear networks the leading finite-width correction given above vanishes, and the first correction is likely to show up at higher order in 1/n asymptotic expansion, which is not carried out in this paper. Here we instead use the L = 2 multilayer perceptron with the quadratic activation for illustration, plugging Equations (S30,S31,S32) into Equations (NGPM') and (NGPM"). Set C Figure S2 indicate the regularization effects of finite widths, at least when the number of training samples, N R, is small, ing in peak performance at finite widths. Figure S2: Test accuracy for N E = 10000 MNIST test data as a function of the inverse width = 1/n L−1 of the hidden layer with quadratic activation. For each number N R of subsampled training data, the is averaged over 10 distinct choices of such subsamplings. For small numbers of training data, finite widths in regularization effects, improving the test accuracy.
We develop an analytical method to study Bayesian inference of finite-width neural networks and find that the renormalization-group flow picture naturally emerges.
512
scitldr
Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity. In this paper, we aim to provide a theoretical understanding on what mainly helps with the distillation. Our answer is "early stopping". Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping. This can be justified by a new concept, Anisotropic In- formation Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later. Motivated by the recent development on theoretically analyzing overparame- terized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel(NTK). AIR facilities a new understanding of distillation. With that, we further utilize distillation to refine noisy labels. We propose a self-distillation al- gorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels. We also demonstrate, both theoret- ically and empirically, that self-distillation can benefit from more than just early stopping. Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous was on convergence in 0-1 loss. The theoretical ensures the learned neural network enjoy a margin on the training data which leads to better generalization. Empirically, we achieve better testing accuracy and entirely avoid early stopping which makes the algorithm more user-friendly. Deep learning achieves state-of-the-art in many tasks in computer vision and natural language processing. Among these tasks, image classification is considered as one of the fundamental tasks since classification networks are commonly used as base networks for other problems. In order to achieve higher accuracy using a network with similar complexity as the base network, distillation has been proposed, which aims to utilize the prediction of one (teacher) network to guide the training of another (student) network. , the authors suggested to generate a soft target by a heavy-duty teacher network to guide the training of a light-weighted student network. More interestingly,; proposed to train a student network parameterized identically as the teacher network. Surprisingly, the student network significantly outperforms the teacher network. Later, it was suggested by Zagoruyko & Komodakis (2016a);; to transfer knowledge of representations, such as attention maps and gradients of the classifier, to help with the training of the student network. In this work, we focus on the distillation utilizing the network outputs;; Yang et al. (2018a);; Yang et al. (2018b). To explain the effectiveness of distillation, suggested that instead of the hard labels (i.e one-hot vectors), the soft labels generated by the pre-trained teacher network provide extra information, which is called the "Dark Knowledge". The "Dark knowledge" is the knowledge encoded by the relative probabilities of the incorrect outputs.;; Yang et al. (2018a), the authors pointed out that secondary information, i.e the semantic similarity between different classes, is part of the "Dark Knowledge", and observed that the "Dark Knowledge" can help to refine noisy labels. 
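For reference, the soft-target objective underlying this line of work is short enough to state in code. The sketch below is the usual Hinton-style formulation with a temperature T and a mixing weight alpha between the distillation term and the ordinary cross-entropy on hard labels; the particular values of T and alpha are illustrative and not taken from any of the works cited.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation: KL between temperature-softened teacher and
    student distributions, mixed with ordinary cross-entropy on hard labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```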
In this paper, we would like to answer the following question: can we theoretically explain how neural networks learn the Dark Knowledge? Answering this question will help us to understand the regularization effect of distillation. In this work, we assume that the teacher network is overparameterized, which means that it can memorize all the labels via gradient descent training Du et al. (2018b; a);;. In this case, if we train the overparameterized teacher network until convergence, the network's output coincides exactly with the ground truth hard labels. This is because the logits corresponding to the incorrect classes are all zero, and hence no "Dark knowledge" can be extracted. Thus, we claim that the core factor that enables an overparameterized network to learn "Dark knowledge" is early stopping. What's more,;; observed that "Dark knowledge" represents the discrepancy of convergence speed of different types of information during the training of the neural network. Neural network tends to fit informative information, such as simple pattern, faster than non-informative and unwanted information such as noise. Similar phenomenon was observed in the inverse scale space theory for image restoration;;;. In our paper, we call this effect Anisotropic Information Retrieval (AIR). With the aforementioned interpretation of distillation, We further utilize AIR to refine noisy labels by introducing a new self-distillation algorithm. To extract anisotropic information, we sequentially extract knowledge from the output of the network in the previous epoch to supervise the training in the next epoch. By dynamically adjusting the strength of the supervision, we can theoretically prove that the proposed self-distillation algorithm can recover the correct labels, and empirically the algorithm achieves the state-of-the-art on Fashion MNIST and CIFAR10. The benefit brought by our theoretical study is twofold. Firstly, the existing approach using large networks; often requires a validation set to early terminate the network training. However, our analysis shows that our algorithm can sustain long training without overfitting the noise which makes the proposed algorithm more user-friendly. Secondly, our analysis is based on an 2 -loss of the clean labels which enables the algorithm to generate a trained network with a bigger margin and hence generalize better. We summarize our contributions as follows • This paper aims to understand distillation theoretically (i.e. understand the regularization effect of distillation). Distillation works due to the soft targets generated by the teacher network. Based on the observation that the overparameterized network can exactly fit the one-hot labels which contain no dark knowledge, we theoretically justify that early stopping is essential for an overparameterized teacher network to extract dark knowledge from the hard labels. This provides a new understanding of the regularization effect of distillation. • This is the first attempt to theoretically understand the role of distillation in noisy label refinery using overparameterized neural networks. Inspired by, we utilize distillation to propose a self-distillation algorithm to train a neural network under label corruption. The algorithm is theoretically guaranteed to recover the unknown correct labels in terms of the 2 -loss rather than the previous 0-1-loss. This enables the algorithm to generate a trained network whose output has a bigger margin and hence generalizes better. 
Furthermore, our algorithm does not need a validation set to early stop during training, which makes it more hyperparameter friendly. The theoretical understanding of the overparameterized networks encourages us to use large models which empirically produce better . 2.1 NO EARLY STOPPING, NO DARK KNOWLEDGE As mentioned in the introduction, an overparameterized teacher network is able to extract dark knowledge from the on-hot hard labels because of early stopping. In this section we present an experiment to verify this effect of early stopping, where we use a big model as a teacher to teach a smaller model as the student. In this experiment, we train a WRN-28 Zagoruyko & Komodakis (2016b) Figure 1: A good teacher may not be able to produce a good student. To maximize the effectiveness of distillation, you should early stop your epoch at a proper time. As we can see, the teacher model does not suffer from overfitting during the training, while it does not always educate a good student model either. As suggested by Yang et al. (2018a), a more tolerant teacher educates better students. The teacher model trained with 80 epochs produces the best which indicates that early stopping of the bigger model can extract more informative information for distillation. In this paper, we introduce a new concept called Anisotropic Information Retrieval (AIR), which means to exploit the discrepancy of the convergence speed of different types of information during the training of an overparameterized neural network. An important observation of AIR is that informative information tends to converge faster than non-informative and unwanted information such as noise. Selective bias of an iterative algorithm to approximate a function has long been discovered in different areas. For example, observed that iterative linear equation solver fits the low frequency component first, and they proposed a multigrid algorithm to exploit this property. In image processing,;;; proposed inverse scale space methods that recover image features earlier in the iteration and noise comes back later. In kernel learning, early stopping of gradient descent is equivalent to the ridge regression;; , which means that bias from the eigenspace corresponding to the larger eigenvalues is reduced quicker by the gradient descent. Under the neural network setting,; observed that neural networks find low frequency patterns more easily. discovered that noisy labels can slow down the training.; studied the memorization effects of the deep networks, and revealed that, during training, neural networks first memorize the data with clean labels and later with the wrong labels. In the next subsection, we will characterize AIR of overparametrized neural networks using the. , the authors introduced the Neural Tangent Kernel to characterize the trajectory of gradient descent algorithm learning infinitely wide neural networks. Denote f (θ, x) ∈ R as the output of neural network with θ ∈ R n being the trainable parameter and x ∈ R p the input. Consider training the neural network using the 2 -loss on the dataset {( We use the gradient descent θ t+1 = θ t − η∇l(θ t) to train the neural network. Let u t = (f (θ t, x i)) i∈[n] ∈ R n be the network outputs on all the data {x i} at iteration t and y = (y i) i∈ [n]. It was shown by Du et al. (2018b);; that the evolution of the error u t − y can be formulated in a quasi-linear form, It is known thatĤ t − H t = o with respect to d, where. Note that H * is a symmetric positive semi-definite matrix. 
Assume that λ 1 > · · · > λ n ≥ 0 are its n eigenvalues and e 1, e 2, · · ·, e n are the corresponding eigenvectors. The eigenvectors are orthogonal < e i, e j >= 0 and. Consider the evolution of the projection of the loss function in different eigenspaces We can see that the component lies in the eigenspace with a larger eigenvalue converges faster. Therefore, AIR describes the phenomenon that the gradient descent algorithm searches for information components corresponding to different eigenspaces at different rates.;;; has shown that that one possible reason of neural network's good generalization property is that neural network fits useful information faster. Thus in our paper, we regard informative information as the eigenspaces associated with the largest few eigenvalues of NTK. Figure 2: Components of label noise in the largest five eigensapces of NTK are decreasing. We denote the projection of supervision signal to the eigenspace with a larger eigenvalue as useful information. In Figure 2, We calculate the ratio of the172norm of the label vector provided by the self-distillation algorithm lies in the top-5 eigenspace as a representative of informative information. We can see that the informative information decreases when the noise level increases. This motivates us to further explore how "Dark Knowledge" helps with label refinery. Supervised learning requires high quality labels. However, due to noisy crowd-sourcing platforms and data augmentation pipeline , it is hard to acquire entirely clean labels for training. On the other hand, successive deep models often have huge capacities with millions or even billions of parameters. Such huge capacity enables the network to memorize all the labels, right or wrong , which makes learning deep neural networks with noisy labels a challenging task.;; pointed out that neural networks often fit the clean labels before the noisy ones during training. theoretically showed that early stopping can clean up label noise with overparametrized neural networks. This, together with our understanding of distillation with AIR, inspired us to use distillation for noisy label refinery. We shall introduce a new self-distillation algorithm with theoretically guaranteed recovery of the clean labels under suitable assumptions. Training deep models on datasets with label corruption is an important and challenging problem that has attracted much attention lately. , the authors proposed to regularize the Local Intrinsic Dimensionality of deep representations to detect label noise. proposed a joint optimization framework to simultaneously optimize the network parameters and output labels. In the literature of distillation, first utilized distillation to refine noisy labels of ImageNet, while Yang et al. (2018b) introduced a distillation method to complete teacherstudent training in one generation. The latter is most related to our proposed self-distillation algorithm. However, the difference is that their model aims to ensemble diverse models in one training trajectory but ours aims to utilize AIR to refine noisy labels during the training. It is known that the label noise lies in the eigenspaces associated to small eigenvalues;. used early stopping to remove label noise. However, early stopping is hard to tune and sometimes leads to unsatisfactory . 
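The eigenspace picture above is easy to probe numerically: given a kernel matrix H and a label vector y, one can measure how much of y lies in the top eigenspaces (as in Figure 2) and how fast each component would decay under the linearized gradient-descent dynamics. The helper below is a sketch; the step size eta, the choice of top_k = 5, and the use of the plain l2-norm ratio are assumptions.

```python
import numpy as np

def air_diagnostics(H, y, eta=0.1, top_k=5, t_steps=(0, 100, 1000)):
    """How much of the label vector sits in the top eigenspaces of H, and how the
    residual components would shrink under linearized gradient descent.

    Along eigenvector e_i the residual decays roughly as (1 - eta * lam_i)^t, so
    the large-eigenvalue (informative) directions are fit first."""
    lam, E = np.linalg.eigh(H)            # ascending order
    lam, E = lam[::-1], E[:, ::-1]        # sort eigenpairs descending
    coeffs = E.T @ y                      # components of y in the eigenbasis
    top_ratio = np.linalg.norm(coeffs[:top_k]) / np.linalg.norm(y)
    residual_norm = {t: np.linalg.norm((1.0 - eta * lam) ** t * coeffs) for t in t_steps}
    return top_ratio, residual_norm
```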
In this section, we proposed a self-distillation algorithm with an excellent empirical performance and a theoretical guarantee to recover the correct labels under certain conditions but without the requirement of early stopping. From the perspective of AIR, we observe that the knowledge learned in early epochs is informative information (i.e. the eigenspaces associated with the largest few eigenvalues of NTK) and can be used to refine the training for later epochs. In other words, the algorithm distills knowledge sequentially to guide the training of the model in later epochs by the knowledge distilled by the model from earlier epochs. The informative information learned during early epochs is, in some sense, "low frequency information", which is the core factor to enable the model to generalize well. A nice property of the self-distillation algorithm is that it generates the final model in one generation (i.e. single-round training), which has almost no additional computational cost compared to normal training. The proposed self-distillation algorithm is given by Algorithm 1. Here, the function h(·) in the algorithm is the label function. It can either be a hard label function such as hardmax, or a soft label function such as softmax with a certain temperature. The choice of h(·) and interpolation coefficient α t depends on the usage of the self-distillation algorithm. If Randomly initialize the network. we want to clean up label noise, we normally choose h(·) to be hardmax or softmax with a low temperature. The weight α t is chosen to be adaptively decreasing corresponding to the increase of our confidence on the learned model at current epoch. The introduction of h(·) helps to boost AIR and the information gained from the previous epoch. In this section, we provide a theoretical justification of the performance of the self-distillation algorithm with overparameterized neural networks. Here, we only consider binary classification task with label ∈ {−1, +1}. • We consider a dataset with n data:. (x i, y i) are the input data and its associated label seen by the model whileỹ i is the unobserved ground truth label. The pair (x i, y i) with y i =ỹ i is called a clean data, otherwise it is called a corrupted data. • We assume that {x i} i∈ [n] contains points with unit Euclidean norm and has K clusters. The data is randomly i.i.d sampled from a distribution P. Let n l be the number of points in the lth cluster. In our analysis, we ssume that number of data in each cluster is balanced in the sense that n l ≥ c low n K for constant c low > 0 and all of the data is bounded, i.e. • For each of the K clusters, we assume that all the input data lie within the Euclidean ball B(c l,), where c l is the center with unit Euclidean norm and > 0 is the radius. • Assume that the data in the same cluster has the same ground truth labelỹ. For the lth cluster, we denote ρ l the proportion of the data with wrong labels. Let ρ = max{ρ i : i ∈ [K]} and assume that ρ < 1 2. • A dataset satisfying the above assumptions is called an (, ρ) dataset. The above definition of dataset follows that of the previous work;. It is reasonable to assume that ρ < 1 2 in order to ensure the correct labels dominate each cluster. In this work, we consider two-layers neural networks. For input data x ∈ R d, the output of the neural network f is: where W ∈ R k×d is the weight matrix and φ is the activation function applied to W x entry-wise. 
We suppose k is even and fix the output layer by assigning half of the entries T, we simply denote the output vector on the data matrix as Following the previous work on training overparameterized neural network Du et al. (2018b), we consider the MSE loss Definition 2. For a data matrix D ∈ m×d, we denote λ(D) the small eigenvalue of the neural network covariance matrix The above definition reveals the matching score of the model and data. We denote C = [c 1, c 2, . . ., c K] T the matrix composed by the center of the cluster. We denote Λ = min(λ(C), λ(X)) for simplification. We are now ready to present our main theorem that establishes the convergence of the proposed self-distillation algorithm to the ground truth labels under certain conditions. Theorem 1. Assume that |φ|, |φ (·)| and |φ (·)| are bounded with upper bound Γ ≥ 1. We fix a learning rate η = 1 2Γ 2 n for the gradient descent. Assume that the sequence α t monotonically decreases to 0. Furthermore, we have two slow-decay conditions on α t where ) and T 2 = inf{t : For the self-distillation algorithm, if the following two conditions for the radius and the width k are satisfied then for random initialization W 0 ∼ N k×d, with probability 1 − δ, we have: where W t is the parameter generated by the full-batch self-distillation algorithm at iteration t. Theorem 2. Combining Theorem 1 and , with failure probability δ ∈, using the self-distillation algorithm and neural network described in Theorem 1 and under the same condition, we have Compared to previous on noisy label , our made the following improvements. Firstly, our algorithm fits the ground truth labels without the help of early stopping as long as a mild condition on α t is satisfied. Secondly, while previous work only ensures the algorithm to yield correct class labels on training set, our state the 2 convergence of the outputs to the ground truth labels. As a , the solution that our algorithm finds tends to have larger margin which leads to better generalization. This is shown in Theorem 2. and will be supported by our empirical studies in the next subsection. 3.4 NOISY LABEL REFINERY Figure 4: Training on CIFAR10 with 40% noise injection. The normal training suffers from over-fitting, while selfdistillation does not. (Note that we are conducting cosine learning rate scheduling. The learning rate is extremely small at the end of learning.) In this section, we conduct experiments on the self-distillation algorithm. In the experiments, we applied our algorithm on corrupted Fashion MNIST and CI-FAR10. At noise level p, every data in the original training dataset is chosen and assigned a symmetric noisy label with probability p. We test our algorithm for p = 0.2, 0.4, 0.6 and 0.8. The test accuracy is calculated with respect to the ground truth labels. We adopted the shake-shake network of with 32 channels and cross entropy loss. We trained the network by momentum stochastic gradient descent (SGD) with batch size 128, momentum 0.9 and a weight decay of 1e-4. We schedule the learning rate following cosine learning rate with a maximum learning rate 0.2 and minimum learning rate 0. In order to ensure convergence, we trained the models for 600 epochs. , mean subtraction, horizontal random flip, 32 × 32 random crops after padding with 4 pixels on each side from the padded image is performed as the data augmentation process. For testing, We only do evaluation on the original 32 × 32 image. 
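A sketch of one training epoch of the self-distillation scheme (Algorithm 1) is given below. The targets interpolate the observed, possibly noisy, one-hot labels with h(·) applied to the model's own predictions, with weight alpha_t on the observed labels that decreases over training; here h(·) is taken to be a low-temperature softmax, the soft-label variant mentioned above. For brevity the sketch queries the current network just before each update rather than caching the previous epoch's outputs, and it uses a soft-target cross-entropy rather than the squared loss of the theory; both are simplifying assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def self_distillation_epoch(model, loader, optimizer, alpha_t, temperature=0.5, n_classes=10):
    """One epoch of the self-distillation update: targets mix the observed labels
    (weight alpha_t) with h(.) of the model's own predictions (weight 1 - alpha_t)."""
    model.train()
    for x, y_noisy in loader:
        with torch.no_grad():
            prev_probs = F.softmax(model(x) / temperature, dim=1)   # h(f(x)) as soft labels
        y_onehot = F.one_hot(y_noisy, n_classes).float()
        target = alpha_t * y_onehot + (1.0 - alpha_t) * prev_probs
        logits = model(x)
        loss = torch.mean(torch.sum(-target * F.log_softmax(logits, dim=1), dim=1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```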
In self-distillation, we adaptively adjust α t by setting 1 − α t = λ * accuracy, where accuracy is the accuracy calculated on the current batch. The direct ratio λ is the only tuning parameter. For CIFAR10, We simply set λ as 1 when the noisy level is low, such as p = 0, 0.2 and 0.4. When the noisy level p = 0.6 and 0.8, we take λ = 1.5. For Fashion MNIST, We simply set λ as 0.6, 1, 1, 1.4, 1.6 respectively when p = 0, 0.2, 0.4, 0.6, 0.8. We report the average final test accuracy of 3 runs for Fashion MNIST and CIFAR10 in Figure 5: Self-distillation always gains information. Distillation can benefit from more than just early stopping. Distillation has the ability to enhance the AIR and thus can extract more dark knowledge from the data via early stopping. For example, enhanced AIR by adjusting the temperature in the softmax layer. In self-distillation, the label function h(·) is proposed to amplify the information gained in the earlier epochs so that the knowledge gained in the earlier epochs is preserved. Thus, self-distillation does not require early stopping which makes the algorithm more user-friendly. On the other hand, the selfdistillation algorithm dynamically enhances AIR which enables it to achieve 2 convergence and thus better generalization. Again, we use the ratio of the norm of the label vector which lies in the top-5 eigenspace as a representative of informative information. We call the subtraction of informative information corresponding to the label vector provided by self-distillation algorithm and original label vector as information gain. From 5 we can see that the information gain is mostly larger than zero during the training of self-distillation algorithm. This phenomenon indicates that the supervision signal of self-distillation algorithm gains more information than directly using the noisy label. To sum up, a well-designed distillation algorithm can enjoy a regularization effect beyond early stopping and is able to gain more knowledge from the data. This paper provided an understanding of distillation using overparameterized neural networks. We observed that such neural networks posses the property of Anisotropic Information Retrieval (AIR), which means the neural network tends to fit the infomrative information (i.e. the eigenspaces associated with the largest few eigenvalues of NTK) first and the non-informative information later. Through AIR, we further observed that distillation of the Dark Knowledge is mainly due to early stopping. Based on this new understanding, we proposed a new self-distillation algorithm for noisy label refinery. Both theoretical and empirical justifications of the performance of the new algorithm were provided. Our analysis is based on the assumption that the teacher neural network is overparameterized. When the teacher network is not overparameterized, the network will be biased towards the label even without early stopping. It is still an interesting and unclear problem that whether the bias can provide us with more information. For label refinery, our analysis is mostly based on the symmetric noise setting. We are interested in extending our analysis to the asymmetric setting. A PROOF DETAILS A.1 NEURAL NETWORK PROPERTIES As preliminaries, we first discuss some properties of the neural network. We begin with the jacobian of the one layer neural network x → v φ(W x), the Jacobian matrix with respect to W takes the form First we borrow Lemma 6.6, 6.7, 6.8 from and Theorem 6.7, 6.8 from. 
T be a data matrix made up of data with unit Euclidean norm. Assuming that λ(X) > 0, the following properties hold., at random Gaussian initialization W 0 ∼ N k×d, with probability at least 1 − δ, we have T in whichx i corresponds to the center of cluster including x i. What's more, we define the matrix of cluster center C = [c 1, c 2, . . ., c K] T. Assuming that λ(C) > 0, the following properties hold. •, at random Gaussian initialization W 0 ∼ N k×d, with probability at least 1 − δ, we have • range(J(W,X)) ⊂ S + for any parameter matrix W. Then, we gives out the perturbation analysis of the Jacobian matrix. Lemma 3. Let X be a -clusterable data matrix with its center matrixX. For parameter matrices W,W, we have Proof. We bound J(W, X) − J(W,X) by The first term is bounded by Lemma 1. As to the second term, we bound it by Combining the inequality above, we get Lemma 4. Let X be a -clusterable data matrix with its center matrixX. We assume W 1, W 2 have a upper bound c √ k. Then for parameter matrices W 1, W 2,W 1,W 2, we have Proof. By the definition of average Jacobian, we have A.2 PROVE OF THE THEOREM First, we introduce the proof idea of our theorem. Our proof of the theorem divides the learning process into two stages. During the first stage, we aim to prove that the neural network will give out the right classification, i.e. the 0-1-loss converges to 0. The proof in this part is modified from. Furthermore, we proved that training 0-1-loss will keep 0 until the second stage starts and the margin at the first stage will larger than 1−2ρ 2. During the second stage, we prove that the neural networks start to further enlarge the margin and finally the 2 loss starts to converge to zero. has shown that this dynamic can be illustrated by the average Jacobian. Definition 3. We define the average Jacobian for two parameters W 1 and W 2 and data matrix X as The residualr = f (θ) − y, r = f (θ) − y obey the following equation r = (I − ηC(θ))r In our proof, we project the residual to the following subspace Definition 4. Let {x i} n i=1 be a -clusterable dataset and {x i} n i=1 be the associated cluster centers, that is,x i = c l iff x i is from lth cluster. We define the support subspace S + as a subspace of dimension K, dictated by the cluster membership as follows. Let Λ l ⊂ {1, 2, · · ·, n} be the set of coordinates i such that = c l. Then S + is characterized by Definition 5. We define the minimum eigenvalue of a matrix B on a subspace S σ min (B, S) = min where P S is the projection to the space S. Recall the generation process of the dataset Definition 6. (Clusterable Dataset Descriptions) • We assume that {x i} i∈[n] contains points with unit Euclidean norm and has K clusters. Let n l be the number of points in the lth cluster. Assume that number of data in each cluster is balanced in the sense that n l ≥ c low n K for constant c low > 0. • For each of the K clusters, we assume that all the input data lie within the Euclidean ball B(c l,), where c l is the center with unit Euclidean norm and > 0 is the radius. • A dataset satisfying the above assumptions is called an -clusterable dataset. First, we reduct the dataset to its cluster center, i.e. = 0 for -clusterable dataset. Lemma 6. We fix the label function be a -clusterable dataset and {x i} n i=1 be the associated cluster centers, that is,x i = c l iff x i is from lth cluster. We denote the data matrix X andX. We denote α =. We set the learning rate η = min(where Θ is maximum of the residual norm during the optimization. 
We suppose along the optimization path we have α ≤ J(W,X)v ≤ β for all v ∈ S +. We set T 1 = log 1− ηα 2 4 1−2ρ 8 r0 2, wherer 0 = P S+ (f (W 0,X) − y 0 ) is the projected residual on the space S +. Then ∀t ≥ T 1, we have • The neural network can learn the true label sgn (f) (W t,X) =ỹ. • The weight vector will be close to its initialization for all iterations Proof. We denote C 1 max t≥0 2 √ n(α t − α t+1). We denote J t = J(W t,X). Follow the step of gradient descent, we have We denote G t = J(W t+1, W t,X)J(W t,X) T. Then the dynamic of gradient descent can be written by We consider the dynamic of the residual r t r t+1 = (I − ηG t)r t + y t − y t+1. We project the residual on S +r t+1 = (I − ηG t)r t +ȳ t −ȳ t+1. Thus, the norm of the residual can be bounded by Utilizing the following lemma Lemma 7. (Claim2.) Let P S+ be the projection matrix to S +, then the following inequality holds Therefore, as long as η ≤ α Lβ rt 2 holds, we have When By simple calculation, we have After T 1 = log 1− As a , all the data have been classified correctly at iteration T 1 and the following inequality holds For the next step, we use induction to prove that For t = T 1, it has already been proved. We assume the claim is correct for arbitrary t, we establish the induction for t + 1. By applying projection to S + on equation 7, we have By applying lemma 7, we have Given that h(f t) =ỹ, we haveȳ Thus, for f t+1 we have By the definition of h(·), we deduce that h(f t+1) =ỹ. Back to equation 8, we have Finally, we estimate the total variation of the parameter. Combining equation 11 and 12, we can conclude that inequality holds for all t ≥ 0. After taking sum on both sides for t = 0, 1, 2,..., we have By simple calculation, we get Combining equation 6, we have Then we further to adapt the above theorem to the -clusterable dataset by a pertubation analysis. Since we have a simple inequality α a ≥ α b, in the following discussion, we simply replace the α a in the previous by α b, the also holds. be a -clusterable dataset and {x i} n i=1 be the associated cluster centers, that is,x i = c l iff x i is from lth cluster. We denote the data matrix X andX. For the same initialization W 0 =W 0, we run the self-distillation algorithm on X and tX respectively. We denote the parameter matrix W t andW t for t ≥ 0. We denote α = c low nΛ. We set the learning rate η = min(where Θ is maximum of the residual norm during the optimization. We denote c √ k the upper bound of the Frobenius norm of the parameter matrxi and we set M = c + 1. We set T 2 = inf{t : α t < 1 24 √ n}. Then if the following conditions hold Proof. We introduce the following notations. We can conclude the following inequalities from lemma 3 and lemma 4 Thus the parameters are updated by gradient descent, we have Also, we have To sum up, if we can guarantee that p t ≤ 1−2ρ 4 holds for t ≤ T 2, we have For r t 2, we have r t 2 ≤ Tr t 2 + r t − Tr t 2 Thus we have the following inequality for p t We claim that if the following conditions for and k hold It is obviously when t = 0. We further suppose the inequalities hold for an arbitrary t satisfying t < T 2, we have because of the condition on k ensures that LΘd t ≤ M ΓΘ √ n. When it comes to p t+1, we have because of the conditions on and k ensure that Ld T2 ≤ M Γ √ n and p t ≤ Θ. Now we are ready to finalize the proof of our main theorem. Proof. We denote C 2 max s≥T2 2 √ n(α s − α s+1). We take Θ = (C 3 Γ log 8 δ + 1) √ n. C 3 is the constant in Hoeffding's inequality. 
By lemma 6, we have Θ ≥ max t≥0 T r t 2 with probability 1 − δ 4. Combining lemma 6 and 8, we have f (W t, X) =ỹ for T 1 ≤ t ≤ T 2 and Similar to the proof in 6, we consider the gradient descent on original dataset X after T 2. We proof the following claim by induction For s = T 2, it has already been proved. We assume the claim is correct for arbitrary s, we establish the induction for s + 1. By equation 7, we have By applying lemma 7, we have f (W s+1,X) −ȳ s 2 ≤ r s 2 ≤ 5 8 (1 − 2ρ) + 1 24. Given that h(f (W s, X)) =ỹ, we have y s (i)ỹ(i) ≥ 1 − 2α T2, i = 1, 2,..., n. Thus, for f (W s+1, x i) we have As a , we have f (W s+1, X) =ỹ. Furthermore, we can bound r s+1 2 by r s+1 2 ≤ (I − ηG s)r s 2 + (1 − α s) h(f (W s, X)) − h(f (W s+1, X)) 2 + (α s − α s+1) ȳ − h(f (W s+1, X)) 2 ≤(1 − ηα To sum up, we have the following inequality holds for all s ≥ T 2 r s+1 2 ≤ (1 − ηα 2 2) r s 2 + 2 √ n(α s − α s+1). After taking sum on both sides for s ≥ T 2, we have By lemma 1 and lemma 2, as long as k ≥ To ensure α lower bounding the eigenvalue of the gram matrix, we need to verify that That is to say Another condition related to R is the condition on M. We require By Bernstein's inequality, we have with probability 1 − δ/4. On the condition that k ≥ R 2 (acutually one can show that we can choose M = d + C 4 log 8 δ + 2. We take these constants to the lemma 8. Firstly, for we have ---512 bn fc. ---10 dropout Table 2 : Architecture of the student network. After each convolution layer, there is a Rectified Linear Unit(ReLU) layer. Experiment In Section2.3 and Section3.5 For these two experiments, we modify CIFAR10 to a binary classification task. We choose class 2, 7 to be the positive class and the others to be negative. We train resnet56 with MSE loss. We set batch size 128, momentum 0.9, weight decay 5e − 4 and learning rate 0.1. In the experiment in section2.3, we fetch a batch of data (batch size=128) from the testset and randomly corrupted the label by noise level 0, 0.1, 0.2, 0.3, 0.4, 0.5. We plot the ratio of the norm of the label vector which lies in the subspaces corresponding to top-5 eigenvalues of NTK. In the experiment in section2.3, we calculate the ratio of the norm of the label vector which lies in the subspaces corresponding to top-5 eigenvalues of NTK firstly. We calculate the ratio of the norm of the label vector provided by the self-distillation algorithm lies in the top-5 eigenspace. We calculate the difference of the latter ratio and the former ratio and called it information gain. We plot the information gain of the first 1500 iterations.
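To make the measurement described above concrete, the following is a minimal NumPy sketch of the quantity being plotted: the fraction of the (possibly noise-corrupted) label vector's norm that lies in the span of the top-5 eigenvectors of a kernel Gram matrix. It is only illustrative: a linear-kernel Gram matrix on random features stands in for the empirical NTK of the trained resnet56, and the noise levels mirror those listed above; all names are ours, not the paper's code.

```python
import numpy as np

def top_k_eigenspace_ratio(gram, labels, k=5):
    """Fraction of the label vector's norm lying in the span of the
    top-k eigenvectors of a symmetric kernel / NTK Gram matrix."""
    eigvals, eigvecs = np.linalg.eigh(gram)   # eigenvalues in ascending order
    coeffs = eigvecs[:, -k:].T @ labels       # coordinates in the top-k eigenspace
    return np.linalg.norm(coeffs) / np.linalg.norm(labels)

def corrupt_labels(labels, noise_level, rng):
    """Flip each +/-1 label independently with probability noise_level."""
    flip = rng.random(labels.shape) < noise_level
    return np.where(flip, -labels, labels)

# Illustrative usage: a linear-kernel Gram matrix on random features stands
# in for the empirical NTK of the trained network described above.
rng = np.random.default_rng(0)
features = rng.normal(size=(128, 32))
gram = features @ features.T
labels = rng.choice([-1.0, 1.0], size=128)
for noise in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    noisy = corrupt_labels(labels, noise, rng)
    print(noise, round(top_k_eigenspace_ratio(gram, noisy, k=5), 3))
```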
We theoretically analyze the regularization effect of distillation. We show that early stopping is essential in this process. From this perspective, we develop a distillation method for learning with corrupted labels with theoretical guarantees.
513
scitldr
Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations (\textit{output} kernel) and the per-task model parameters at each round, assuming data arrives in a streaming fashion. We propose a novel algorithm called \textit{Online Output Kernel Learning Algorithm} (OOKLA) for lifelong learning setting. To avoid the memory explosion, we propose a robust budget-limited versions of the proposed algorithm that efficiently utilize the relationship between the tasks to bound the total number of representative examples in the support set. In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning. Our empirical over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines. Instead of learning individual models, learning from multiple tasks leverages the relationships among tasks to jointly build better models for each task and thereby improve the transfer of relevant knowledge between the tasks, especially from information-rich tasks to information-poor ones. Unlike traditional multitask learning, where the tasks are presented simultaneously and an entire training set is available to the learner , in lifelong learning the tasks arrives sequentially BID27 ). This paper considers a continuous lifelong learning setting in which both the tasks and the examples of the tasks arrive in an online fashion, without any predetermined order. Following the online setting, particularly from BID24 BID7, at each round t, the learner receives an example from a task, along with the task identifier and predicts the output label for the example. Subsequently, the learner receives the true label and updates the model(s) as necessary. This process is repeated as we receive additional data from the same or different tasks. Our approach follows an error-driven update rule in which the model for a given task is updated only when the prediction for that task is in error. Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. A lifelong learning agent must provide an efficient way to learn new tasks faster by utilizing the knowledge learned from the previous tasks and also not forgetting or significantly degrading performance on the old tasks. The goal of a lifelong learner is to minimize errors as compared to the full ideal hindsight learner, which has access to all the training data and no bounds on memory or computation. This paper addresses lifelong multitask learning by jointly re-estimating the inter-task relations from the data and the per-task model parameters at each round, assuming data arrives in a streaming fashion. We define the task relationship matrix as output kernels in Reproducing Kernel Hilbert Space (RKHS) on multitask examples. We propose a novel algorithm called Online Output Kernel Learning Algorithm (OOKLA) for lifelong learning setting. For a successful lifelong learning with kernels, we need to address two key challenges: learn the relationships between the tasks (output kernel) efficiently from the data stream and bound the size of the knowledge to avoid memory explosion. 
The key challenge in learning with a large number of tasks is to adaptively learn the model parameters and the task relationships, which potentially change over time. Without manageability-efficient updates at each round, learning the task relationship matrix automatically may impose a severe computational burden. In other words, we need to make predictions and update the models in an efficient real time manner. We propose simple and quite intuitive update rules for learning the task relationship matrix. When we receive a new example, the algorithm updates the output kernel when the learner made a mistake by computing the similarity between the new example and the set of representative examples (stored in the memory) that belongs to a specific task. If the two examples have similar (different) labels and high similarity, then the relationship between the tasks is increased (decreased) to reflect the positive (negative) correlation and vice versa. To avoid the memory explosion associated with the lifelong learning setting, we propose a robust budget-limited version of the proposed algorithm that efficiently utilizes the relationship between the tasks to bound the total number of representative examples in the support set. In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning. It is worth noting that the problem of lifelong multitask learning is closely related to online multitask learning. Although the objectives of both online multitask learning and lifelong learning are similar, one key difference is that the online multitask learning, unlike in the lifelong learning, may require that the number of tasks be specified beforehand. In recent years, online multitask learning has attracted extensive research attention BID0; BID10; BID16 BID7; BID24 BID17. We evaluate our proposed methods with several state-of-the-art online learning algorithms for multiple tasks. Throughout this paper, we refer to our proposed method as online multitask learning or lifelong learning. There are many useful application areas for lifelong learning, including optimizing financial trading as market conditions evolve, email prioritization with new tasks or preferences emerging, personalized news, and spam filtering, with evolving nature of spam. Consider the latter, where some spam is universal to all users (e.g. financial scams), some messages might be useful to certain affinity groups, but spam to most others (e.g. announcements of meditation classes or other special interest activities), and some may depend on evolving user interests. In spam filtering each user is a "task," and shared interests and dis-interests formulate the inter-task relationship matrix. If we can learn the matrix as well as improving models from specific spam/not-spam decisions, we can perform mass customization of spam filtering, borrowing from spam/not-spam feedback from users with similar preferences. The primary contribution of this paper is precisely the joint learning of inter-task relationships and its use in estimating per-task model parameters in a lifelong learning setting. Most existing work in online learning of multiple task focuses on how to take advantage of task relationships. 
To achieve this, BID16 imposed a hard constraint on the K simultaneous actions taken by the learner in the expert setting, BID1 used matrix regularization, and BID10 proposed a global loss function, as an absolute norm, to tie together the loss values of the individual tasks. Different from existing online multitask learning models, our paper proposes an intuitive and efficient way to learn the task relationship matrix automatically from the data, and to explicitly take into account the learned relationships during model updates. BID7 assumes that task relationships are available a priori. However often such taskrelation prior knowledge is either unavailable or infeasible to obtain for many applications especially when the number of tasks K is large BID28 ) and/or when the manual annotation of task relationships is expensive BID15 ). BID24 formulated the learning of task relationship matrix as a Bregman-divergence minimization problem w.r.t. positive definite matrices. The model suffers from high computational complexity as semi-definite programming is required when updating the task relationship matrix at each online round. We show that with a different formulation, we can obtain a similar but much cheaper updating rule for learning the inter-task weights. BID17 proposed an efficient method for learning the task relationship matrix using the cross-task performance measure, but their approach learns only the positive correlation between the tasks. Our proposed approach learns positive and negative correlations between the tasks for robust transfer of knowledge from the previously learned tasks. Recent work in output kernel learning estimate the task covariance matrix in RKHS space, inferred it directly from the data BID12 BID25 BID14 ). The task covariance matrix is called the output kernel defined on the tasks, similar to the scalar kernel on the inputs. Most recently, BID14 showed that for a class of regularization functions, we can efficiently learn this output kernel. Unfortunately most of the proposed methods for learning output kernels require access to the entire data for the learning algorithm, a luxury unavailable in online learning and especially in the lifelong learning setting. Unlike in online multitask learning, most lifelong learning approaches use a single model for all the tasks or reuse the models from the previous tasks to build a model for the new task BID2 FORMULA0 ). These approaches either increase the computation time on iterations where we encounter a novel task or reduce the prediction power of the model learned from the previous tasks due to catastrophic forgetting. To the best of our knowledge, relationships among the tasks has not been successfully exploited in the lifelong learning setting due to the difficulty in learning a positive semi-definite task relationship matrix in large-scale applications. This paper provides an efficient way to learn the task relationship matrix in the lifelong learning setting. Let ((x t, i t), y t ) be the example received by the learner from the task i t (at the time step t) where we assume that the x t ∈ X and y t is its corresponding true label. The task i t can be a new task or one seen by the learner in the previous iterations. We denote by [N] the consecutive integers ranging from 1 to N. In this paper, we do not assume that the number of tasks is known to the learner ahead of time, an important constraint in lifelong learning problems. Let K be the number of tasks seen so far until the current iteration t. 
For brevity, we consider a binary classification problem for each task y t ∈ {−1, +1}, but the methods generalize to multi-class cases and are also applicable to regression tasks. We assume that the learner made a mistake if y t =ŷ t whereŷ t is the predicted label. Our approach follows a mistake-driven update rule in which the model for a given task is updated only on rounds where the learner predictions differ from the true label. Let K: X × X → R (kernel on input space) and Ω: N × N → R (output kernel) be symmetric, positive semi-definite (p.s.d) multitask kernel functions and denote H as their corresponding RKHS of functions with the norm · H on multitask examples BID26; BID12 BID14 ). Using the above notation, we can define a kernel representation of an example based on a set of representative examples collected on the previous iterations (prototypes). Formally, given an example x ∈ X, its kernel representation can be written using this set:x −→ {K(x, x s): s ∈ S} S is the set of stored examples for which the learner made a mistake in the past. The set S is called the support set. The online classification function is then defined as the weighted sum of the kernel combination of the examples in the support set. To account for the examples from the different tasks, we consider both the kernel on the input space K and the output kernel Ω in our classification function. DISPLAYFORM0 We set α s = y s. The predicted label for a new example is computed from the linear combination of the labels of the examples from the support set S weighted by their input similarity K and the task similarity Ω to the new example. Using the kernel trick, one can write: DISPLAYFORM1 Note that, in the above representation, we need to learn both the support set S and the output kernel Ω from the data. As explained in the previous section, for a successful lifelong learning with kernels, we need to address two key challenges: learn the relationships between the tasks (output kernel) efficiently from the data arriving in an online fashion and bound the size of the support set S to avoid memory explosion. We address these two challenges in the following sections. DISPLAYFORM2 (or) DISPLAYFORM3 end end Our objective function for the lifelong learning problem is given as follows: DISPLAYFORM0 where (·) is some loss function such as hinge loss or logistic loss, R(·) is the regularization on the task relationship matrix Ω and λ is the regularization parameter. Note that f in the above equation depends on Ω. In order to reduce the time taken for each time-step, we require an efficient update to the task relationship matrix Ω. Following the work of BID14 in the batch setting, we consider a subset of regularization functions R for which we can efficiently learn the task covariance matrix. Consider the dual function of the above equation, at time-step t (see BID4 ; BID14): DISPLAYFORM1 When we consider the entry-wise l p norm between Ω and Ω (t−1) from the previous iteration as our regularization i.e., R(Ω, DISPLAYFORM2 we get the update function in Equation 2. Similarly, if we consider the generalized KL-divergence between Ω and Ω (t−1) i.e., R(Ω, FIG1) itk, we get the update function in Equation 3. Unlike in the previous work, we update only the row (and the corresponding column) of the task relationship matrix Ω specific to the task i t, which significantly reduces the time taken per example. DISPLAYFORM3 We can see that the update equations are simple and quite intuitive. 
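The extracted text omits the displayed equations, so the sketch below is an illustrative reconstruction based only on the verbal description: the score for task i is assumed to take the form f_i(x) = Σ_{s∈S} y_s · Ω[i, i_s] · K(x, x_s), and the two Ω-update rules (Equations 2 and 3 in the original) are approximated by an additive and a multiplicative update driven by label agreement and input similarity, scaled by λ. The function names, the RBF kernel choice, and the exact update formulas are assumptions, not the paper's equations. A typical usage would pass integer task ids and a K×K NumPy array for Ω initialized to the identity.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    # input-space kernel K(x, x'); the specific kernel is an illustrative choice
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def predict(x, task, support, omega, kernel=rbf_kernel):
    """Score and predicted label for example x of the given task.

    support: list of (x_s, y_s, task_s) triples (the support set S).
    omega:   2D array of inter-task similarities, Omega[i, j].
    Assumed form: f_i(x) = sum_s y_s * Omega[i, task_s] * K(x, x_s).
    """
    score = sum(y_s * omega[task, task_s] * kernel(x, x_s)
                for x_s, y_s, task_s in support)
    return (1 if score >= 0 else -1), score

def update_output_kernel(omega, x_t, y_t, task_t, support, lam,
                         kernel=rbf_kernel, rule="sum"):
    """Mistake-driven update of the row/column of Omega for task_t.

    The formulas below only instantiate the verbal description: raise or
    lower Omega[task_t, k] according to label agreement and input
    similarity with task k's stored examples, scaled by lambda."""
    for k in {task_s for _, _, task_s in support}:
        s = sum(y_t * y_s * kernel(x_t, x_s)
                for x_s, y_s, task_s in support if task_s == k)
        if rule == "sum":
            omega[task_t, k] += s / lam       # additive ("sum") update
        else:
            omega[task_t, k] *= np.exp(s / lam)  # multiplicative ("exp") update
        omega[k, task_t] = omega[task_t, k]   # keep Omega symmetric
    return omega
```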
For a given new example (x t, i t) at round t, the algorithm updates Ω itk (for some k ∈ [K]) by computing the similarity between the new example and the examples in the support set S that belongs to the task k. If the two examples have similar (different) labels and high similarity K(x t, x s), then the Ω itk is increased to reflect the positive (negative) correlation and vice versa. A value close to 0 implies no significant relationship between the tasks. The update to the Ω itk is normalized by the regularization parameter λ for scaling. It is worth noting that our update equations do not violate the p.s.d constraints on Ω in Equation 5.If Ω from the previous iteration is a p.s.d matrix and the update is a p.s.d matrix (as it is computed using the Gram matrix of the example from the previous iteration), the sum and Hadamard product of two p.s.d matrices satisfy the p.s.d constraint (using the Schur Product Theorem).Algorithm 1 outlines the key steps in our proposed method. We write f ((x t, i t)) as f it (x t) for notational convenience. At each time-step t, the learner receives an example x t ∈ X and predicts the output label y t usingŷ t = sign(f it (x t)). We update both the support set S and the output kernel Ω it· when the learner makes a mistake. DISPLAYFORM4 Find an example to remove arg max DISPLAYFORM5 as in Algorithm 1. end end Algorithm 3: Two-Stage Budgeted Learning Initialize: DISPLAYFORM6 In Algorithm 1, we can see that both the classification function f and the update equations for Ω use the support set S. When the target function changes over time, the support set S grows unboundedly. This leads to serious computational and runtime issues especially in the lifelong learning setting. The most common solution to this problem is to impose a bound on the number of examples in the support set S. There are several budget maintenance strategies proposed recently BID6; BID11; BID18 ). Unfortunately these schemes cannot be directly used in our setting due to the output kernels in our learning formulation. BID5 proposed multitask variants of these schemes but they are impractical for the lifelong learning setting. We follow a simple support set removal schemes based on BID8. In single-task setting, when the number of examples in the support set S exceeds the limit (say B), a simple removal scheme chooses an example x r with the highest confidence from S. The confidence of an example x r is measured using y r f(x r) after removing x r from the support set S. DISPLAYFORM0 We extend the above approach to the multitask and lifelong learning settings. Since the support set S is shared by all the tasks, we choose an example x r with high confidence to remove from each task function f k, weighted by the relationship among the tasks. The objective function to choose the example is shown in Equation 6. We show in the experiment section that this simple approach is efficient and performs significantly better than the state-of-the-art budget maintenance strategies. Algorithm 2 shows pseudocode of the proposed budgeted learning algorithm. In lifelong learning setting, the number of tasks is typically large. The support set S may have hundreds or thousands of examples from all the tasks. Each task does not use all the examples from the support set S. For example, in movie recommendations task, recommendation for each user (task) can be characterized by just a few movies (subset of examples) in the support set S. 
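A small sketch of the single-task removal rule just described: when |S| exceeds the budget B, drop the example the model is most confident about after its own removal, i.e. the one with the highest y_r · f(x_r) computed on the reduced support set. The `predict_fn` argument is a placeholder for a scoring function such as the `predict` sketch above with Ω held fixed; the multitask objective in Equation 6 additionally weights these confidences by the task relationships, which this simplified version omits.

```python
def leave_one_out_confidence(r, support, predict_fn):
    """Confidence y_r * f(x_r), evaluated after removing x_r from S."""
    x_r, y_r, task_r = support[r]
    reduced = support[:r] + support[r + 1:]
    _, score = predict_fn(x_r, task_r, reduced)
    return y_r * score

def shrink_support_set(support, budget, predict_fn):
    """While |S| > budget, remove the example with the highest
    leave-one-out confidence (the example the model needs least)."""
    while len(support) > budget:
        r_best = max(range(len(support)),
                     key=lambda r: leave_one_out_confidence(r, support, predict_fn))
        support = support[:r_best] + support[r_best + 1:]
    return support
```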
Motivated by this observation, we propose a two-stage budgeted learning algorithm for the lifelong learning setting. Algorithm 3 shows pseudocode of the proposed two-stage budgeted learning algorithm. In addition to the support set S, we maintain task-specific support set T k. We choose the budget for each task (say L) where L <<< B. Similar to the removal strategies for S, we remove an example from T k when |T k | > L and replace with an example from the set S − T k. The proposed two-stage approach provides better runtime complexity compared to the budgeted algorithm in Algorithm 2. Since only a subset of tasks may hold an example from S, the removal step in Equation 6 requires only a subset of tasks for choosing an example. This improves the runtime per iteration significantly when the number of tasks is large. One may consider a different budget size for each task L k based on the complexity of the task. In addition, the proposed two-stage budgeted learning algorithm provides an alternative approach to using state-of-the-art budget maintenance strategies. For example, it is easier to use the Projectron algorithm BID18 ) on T k, rather than on S. We will further explore this line of research in our future work. In this section, we evaluate the performance of our algorithms. All reported are averaged over 10 random runs on permutations of the training data. Unless otherwise specified, all model parameters are chosen via 5-fold cross validation. We use three benchmark datasets, commonly used for evaluating online multitask learning. Details are given below:Newsgroups Dataset 2 consists of 20 tasks generated from two subject groups: comp and talk.politics. We paired two newsgroups, one from each subject (e.g.,comp.graphics vs talk.politics.guns), for each task. In order to account for positive/negative correlation between the tasks, we randomly choose one of the newsgroups as positive (+) or negative (−) class. Each post in a newsgroup is represented by a vocabulary of approximately 60K unique features. 3 We use the dataset obtained from ECML PAKDD 2006 Discovery challenge for the spam detection task. We used the task B challenge dataset, which consists of labeled training data from the inboxes of 15 users. We consider each user as a single task and the goal is to build a personalized spam filter for each user. Each task is a binary classification problem: spam (+) or non-spam (−) and each example consists of approximately 150K features representing term frequency of the word occurrences. Some spam is universal to all users (e.g. financial scams), but some messages might be useful to certain affinity groups and spam to most others. Such adaptive behavior of each user's interests and dis-interests can be modeled efficiently by utilizing the data from other users to learn per-user model parameters. 4 We also evaluated our algorithm on product reviews from amazon. The dataset contains product reviews from 25 domains. We consider each domain as a binary classification task. Reviews with rating > 3 were labeled positive (+), those with rating < 3 were labeled negative (−), reviews with rating = 3 are discarded as the sentiments were ambiguous and hard to predict. Similar to the previous datasets, each example consists of approximately 350K features representing term frequency of the word occurrences. 
We choose 2000 examples (100 posts per task) for 20 Newsgroups, 1500 emails for spam (100 emails per user inbox) and 2500 reviews for sentiment (100 reviews per domain) as training set for our experiments. Note that we intentionally kept the size of the training data small to simulate the lifelong learning setting and drive the need for learning from previous tasks, which diminishes as the training sets per task become large. Since these datasets have a class-imbalance issue (with few (+) examples as compared to (−) examples), we use average Area Under the ROC Curve (AU C) as the performance measure on the test set. To evaluate the performance of our proposed algorithm (OOKLA), we use the three datasets (Newsgroups, Spam and Sentiment) for evaluation and compare our proposed methods to 5 baselines. We implemented Perceptron and Passive-Aggressive algorithm (PA) BID9 for online multitask learning. Both Perceptron and PA learn independent model for each task. These two baselines do not exploit the task-relationship or the data from other tasks during model update. Next, we implemented two online multitask learning related to our approach: FOML -initializes Ω with fixed weights BID7, Online Multitask Relationship Learning (OMTRL) BID24 -learns a task covariance matrix along with task parameters. Since OMTRL requires expensive calls to SVD routines, we update the task-relationship matrix every 10 iterations. In addition, we compare our proposed methods against the performance of Online Smooth Multitask Learning (OSMTL) which learns a probabilistic distribution over all tasks, and adaptively refines the distribution over time BID17. We implement two versions of our proposed algorithm with different update rules for the task-relationship matrix: OOKLA-sum (Equation 2 OOKLA with sum update) OOKLA-exp (Equation 3 OOKLA with exponential update) as shown in Algorithm 1. TAB0 summarizes the performance of all the above algorithms on the three datasets. In addition to the AU C scores, we report the average total number of support vectors (nSV) and the CPU time taken for learning from one instance (Time).From the table, it is evident that both OOKLA-sum and OOKLA-exp outperform all the baselines in terms of both AU C and nSV. This is expected for the two default baselines (Perceptron and PA). The update rule for FOML is similar to ours but using fixed weights. The justify our claim that learning the task-relationship matrix adaptively leads to improved performance. As expected, both OOKLA and OSMTL consume less or comparable CPU time than the subset of baselines which take into account learning inter-task relationships. Unlike in the OMTRL algorithm that recomputes the task covariance matrix every iteration using expensive SVD routines, the task-relationship matrix in our proposed methods (and OSMTL) are updated independently for each task. We implement the OSMTL with exponential update for our experiments as it has shown to perform better than the other baselines. One of the major drawbacks of OSMTL is that it learn only the positive correlations between the tasks. The performance of OSMTL worsens when the tasks are negatively correlated. As we can see from the table, our proposed methods outperform OSMTL significantly in the Newsgroup dataset. TAB1 compares the proposed methods with different budget schemes and budget sizes in terms of test set AU C scores and the runtime. We use OOKLA-sum for this experiment. We set the value of B to {50, 100, 150} for all the datasets. 
We compare our proposed budgeted learning algorithm (Algorithm 2) with the following state-of-the-art algorithms for online budgeted learning: TAB1 shows both the test set AUC scores (first line) and time taken for learning from one instance (including the removal step). It is evident from the table, our proposed budgeted learning algorithm for online multitask learning significantly outperforms the other state-of-the-art budget schemes on most settings. Our proposed algorithm uses the relationship between the tasks efficiently to choose the next example for removal. Finally, we evaluate the performance of the proposed two-stage budgeted scheme compared to the Algorithm 2. To study the effect of different budget sizes L, we compute the cumulative mistake rate which uses all the examples from the support set S. We observe similar trend in the test set AUC scores. On average, we achieved over 16% improvement in running time compared to the budget maintenance scheme in Algorithm 2. We believe that the time consumption and the performance improvement will be even better for applications with larger numbers of tasks. We proposed a novel lifelong learning algorithm using output kernels. The proposed method efficiently learns both the model and the inter-task relationships at each iteration. Our update rules for learning the task relationship matrix, at each iteration, were motivated by the recent work in output kernel learning. In order to handle the memory explosion from an unbounded support set in the lifelong learning setting, we proposed a new budget maintenance scheme that utilizes the task relationship matrix to remove the least-useful (high confidence) example from the support set. In addition, we proposed a two-stage budget learning scheme based on the intuition that each task only requires a subset of the representative examples in the support set for efficient learning. It provides a competitive and efficient approach to handle large number of tasks in many real-life applications. The effectiveness of our algorithm is empirically verified over several benchmark datasets, outperforming several competitive baselines both in the unconstrained case and the budget-limited case, where selective forgetting was required.
a novel approach for online lifelong learning using output kernels.
514
scitldr
Minecraft is a videogame that offers many interesting challenges for AI systems. In this paper, we focus in construction scenarios where an agent must build a complex structure made of individual blocks. As higher-level objects are formed of lower-level objects, the construction can naturally be modelled as a hierarchical task network. We model a house-construction scenario in classical and HTN planning and compare the advantages and disadvantages of both kinds of models. Minecraft is an open-world computer game, which poses interesting challenges for Artificial Intelligence BID0 BID12, for example for the evaluation of reinforcement learning techniques BID21. Previous research on planning in Minecraft focused on models to control an agent in the Minecraft world. Some examples include learning planning models from a textual description of the actions available to the agent and their preconditions and effects BID4, or HTN models from observing players' actions BID15., on the other hand, focused on online goal-reasoning for an agent that has to navigate in the minecraft environment to collect resources and/or craft objects. They introduced several propositional, numeric BID7 and hybrid PDDL+ planning models BID8.In contrast, we are interested in construction scenarios, where we generate instructions for making a given structure (e.g. a house) that is composed of atomic blocks. Our longterm goal is to design a natural-language system that is able to give instructions to a human user tasked with completing that construction. As a first step, in the present paper we consider planning methods coming up with what we call a construction plan, specifying the sequence of construction steps without taking into account the natural-language and dialogue parts of the problem. For the purpose of construction planning, the Minecraft world can be understood as a Blocksworld domain with a 3D environment. Blocks can be placed at any position having a non-empty adjacent position. However, while obtaining a sequence of "put-block" actions can be sufficient for an AI agent, communicating the plan to a human user requires more structure in order to formulate higher-level instructions like build-row, or build-wall. The objects being constructed (e.g. rows, walls, or an entire house) are naturally organized in a hierarchy where high-level objects are composed of lower-level objects. Therefore, the task of constructing a high-level object naturally translates into a hierarchical planning network (HTN) BID19 BID20 BID22 BID6.We devise several models in both classical PDDL planning BID5 BID13 ) and hierarchical planning for a simple scenario where a house must be constructed. Our first baseline is a classical planning model that ignores the high-level objects and simply outputs a sequence of place-blocks actions. This is insufficient for our purposes since the ing sequence of actions can hardly be described in natural language. However, it is a useful baseline to compare the other models. We also devise a second classical planning model, where the construction of high-level objects is encoded via auxiliary actions. HTN planning, on the other hand, allows to model the object hierarchy in a straightforward way, where there is a task for building each type of high-level object. The task of constructing each high-level object can be decomposed into tasks that construct its individual parts. Unlike in classical planning, where the PDDL language is supported by most/all planners, HTN planners have their own input language. 
Therefore, we consider specific models for two individual HTN planners: the PANDA planning system BID3 BID2 and SHOP2 BID14. We consider a simple scenario where our agent must construct a house in Minecraft. We model the Minecraft environment as a 3D grid, where each location is either empty or has a block of a number of types: wood, stone, or dirt. FIG0 shows the hierarchy of objects of our construction scenario. For the high-level structure the house consists of four stone walls, a stone roof, and a door. The walls and the roof are further decomposed into single rows that need to be built out of individual blocks. The door consists of two gaps, i.e., empty positions inside one of the walls. As our focus is on the construction elements we abstract low-level details away. For example, we avoid encoding the position of the agent and assume that all positions are always reachable. We also assume Minecraft's creative mode, where all block types are always available so we do not need to keep track of which blocks are there in the inventory. This is a very simplistic model, where planning focuses simply on the construction actions (i.e. placing or removing blocks), of high-level structures. Nevertheless, it can still pose some challenges to modern planners, specially due to the huge size of the Minecraft environment. Our first model is a classical planning model in the PDDL language that consists of only two actions: putblock(?location, ?block-type) and remove-block(?location, ? block-type) where there is a different location for each of the x-y-z coordinates in a 3D grid. The goal specifies what block-type should be in each location. As blocks cannot be placed in the air, the precondition of put-block requires one of the adjacent locations of? location to be non-empty. Other than that, blocks of any type can always be added or removed at any location. The goal is simply a set of block at facts. A limitation of this simple model is that it completely ignores the high-level structure of the objects being constructed. As there is no incentive to place blocks in certain order, a high-level explanation of the plan may be impossible. To address this, we introduce auxiliary actions that represent the construction of high-level objects. Figure 2 shows the auxiliary actions that represent building a wall. The attributes of the wall are specified in the initial state via attributes expressed by predicates wall dir, wall length, wall height, wall type, and current wall loc. In order to avoid the huge amount of combinations of walls that could be constructed of any dimensions and in any direction, the walls that are relevant for the construction at hand are specified in the initial state via these predicates. These three actions decompose the construction of a wall into several rows. Action begin wall ensures that no other high-level object is being constructed at the moment and adds the fact constructing wall to forbid the construction of any other wall (or roof) until the current wall has been finished. Action build row in wall ensures that a row of the given length will be built on the corresponding location and direction by adding predicates (building row) and (rest row ?loc ? len ?dir ?t). Simultaneously, it updates the location for the rest of the wall to be built and decreases its height by one. (not (constructing wall)) (not (current wall ? w))))Figure 2: Auxiliary PDDL actions to build a wall. 
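As a brief aside before the remaining auxiliary wall actions below, the base-level rule that both models ultimately bottom out in can be sketched in Python (not PDDL): put-block is applicable only if the target cell is empty and some adjacent cell is occupied, so blocks are never placed in the air. The dictionary-based world representation and the function names are illustrative assumptions.

```python
def neighbors(loc):
    x, y, z = loc
    return [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]

def can_put_block(world, loc):
    """put-block precondition: the target cell is empty and at least one
    adjacent cell is non-empty. `world` maps (x, y, z) -> block type;
    absent keys mean 'empty', and ground cells are assumed present."""
    return loc not in world and any(n in world for n in neighbors(loc))

def put_block(world, loc, block_type):
    if can_put_block(world, loc):
        world[loc] = block_type
        return True
    return False

def remove_block(world, loc):
    # remove-block: used, e.g., to carve the two-cell door gap out of a wall
    return world.pop(loc, None) is not None
```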
When the height is zero, the action end wall becomes applicable, which finishes the construction of the wall. In the goal we then use the predicates wall at and roof at that force the planner to use these constructions, instead of a set of block at facts as we did in the simple model. HTN models encode the construction of high-level objects in a straightforward way by defining tasks such as build house, build wall and build row. These tasks will then be decomposed with methods until only primitive tasks will be left, in our case place-block and remove-block. We consider specific models for two individual HTN planners: the PANDA planning system BID3 BID2 ) and SHOP2 BID14. PANDA uses an HTN formalism BID9, which allows combining classical and HTN planning. The predicates describing the world itself, i.e. the relations between different locations remain the same as in the PDDL model, as do the place-block and remove-block primitive actions. On top of this, high-level objects are described as an HTN where each object corresponds to a task, without requiring to express their attributes with special predicates as we did in the PDDL model. Specifically, we defined tasks that correspond to building a house, a wall, a roof, a row of blocks, and the door. FIG1 shows the methods used to decompose the task of building a wall. These methods work in a recursive fashion over the height of the wall. For walls with height one, the build wall 1 method is used to build them. For walls with larger height, the build wall 2 method decomposes the task of building them into building a row in the current location and building the rest of the wall (i.e., a wall of height-1) in the location above the previous one. These subtasks are ordered, so that walls are always built from bottom to top. The methods for buildrow and buildroof work in the same fashion, while buildhouse only has one method decomposing the house into four walls, the roof, and the door. The task builddoor also has just one method stating which two blocks have to be removed to form a door. Choosing this way of modeling the door by first forcing the planner to place two blocks and later removing them again may seem inefficient, but for communication with a human user this may be preferable over indicating that these positions should remain empty in the first place. The SHOP2 model follows a similar hierarchical task structure as the PANDA model, having methods for decomposing the house into walls, a wall into rows and rows into single blocks. Since one of the advantages of SHOP2 is that it can call arbitrary LISP functions, we can represent the locations using integers as coordinates and replace the predicates used in PANDA and PDDL to express their relations by simple arithmetic operations. This also allows us to compute the end point of rows of any given length in a given direction, which means we can construct the walls by alternating the direction of the rows. Based on this, we define two different recursive decompositions of walls as shown in FIG2. In the first method we simply build the row starting in the current location, while in the second method we change the direction of the row we want to build and identify the position that would previously have been the end of the row by replacing the x-coordinate with x`length´1. Since this computation is different for each direction, we need separate methods for them. 
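The two recursive wall decompositions just described can be sketched in Python, reusing `put_block` from the aside above. `build_wall` mirrors the PANDA methods (a wall of height 1 is a single row; a taller wall is a row at the current location plus a wall of height-1 one position above), while `build_wall_alternating` mirrors the SHOP2 variant that reverses the row direction on each level, using the end point x + length - 1 (reading the garbled "x`length´1" in the text as this arithmetic). Coordinate conventions are our assumptions, not the planners' encodings.

```python
def build_row(world, start, direction, length, block_type):
    # lay `length` blocks from `start` along `direction` = (dx, dy)
    x, y, z = start
    dx, dy = direction
    for i in range(length):
        put_block(world, (x + i * dx, y + i * dy, z), block_type)

def build_wall(world, start, direction, length, height, block_type):
    """Bottom-up recursion mirroring the two PANDA methods: a height-1
    wall is a row; a taller wall is a row here plus a wall of height-1
    one position above."""
    build_row(world, start, direction, length, block_type)
    if height > 1:
        x, y, z = start
        build_wall(world, (x, y, z + 1), direction, length, height - 1,
                   block_type)

def build_wall_alternating(world, start, direction, length, height, block_type):
    """SHOP2-style variant: reverse the row direction on every level, so
    the next row starts above the end point of the previous one."""
    loc, d = start, direction
    for _ in range(height):
        build_row(world, loc, d, length, block_type)
        x, y, z = loc
        end_x = x + (length - 1) * d[0]
        end_y = y + (length - 1) * d[1]
        loc, d = (end_x, end_y, z + 1), (-d[0], -d[1])
```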
Apart from this, the decomposition structure is the same as with PANDA, building the walls, roof, and rows incrementally using a recursive structure. To evaluate the performance of common planners on our models 1, we scale them with respect to two orthogonal parameters: the size of the construction, and the size of the cubic 3D world we are considering. We use different planners for each model. For the classical planning models we use the LAMA planner BID16. The PANDA planning system implements several algorithms, including plan space POCL-based search methods BID3 BID2, SAT-based approaches, and forward heuristic search. We use a configuration using heuristic search with the FF heuristic, which works well on our models. For SHOP2, we use the depthfirst search configuration BID14. All experiments were run on an Intel i5 4200U processor with a time limit of 30 minutes and a memory limit of 2GB.In our first experiment, we scale the size of the house starting with a 3ˆ3ˆ3 house and increasing one parameter (length, width, and height) at a time (4ˆ3ˆ3, 4ˆ41 3, . . ., 9ˆ9ˆ9.). The size of the 3D world is kept as small as possible to fit the house with some slack, so initially is set to 5ˆ5ˆ5 and is increased by one unit in each direction every three steps, once we have scaled the house in all dimensions. The upper row of FIG3 shows the search and total time of the planners on the different models. The construction size in the x-axis refers to the number of blocks that need to be placed in the construction. All planners scale well with respect to search time, solving problems of size up to 9ˆ9ˆ9 in just a few seconds. The non-hierarchical PDDL planning model (PDDL blocks) that only uses the place-block and remove-block actions without any hierarchical information is the one with worst search performance. Moreover, it also in typically longer plans that build many "support" structures to place a block in a wall without one of the adjacent blocks in the wall being there yet. However, there is a huge gap between search and total time for the PANDA and PDDL models, mostly due to the overhead of the grounding phase. SHOP2 does not do any preprocessing or grounding so it is not impacted by this. For the PANDA and PDDL models, total time significantly increases every three problems, whenever the world size is increased. This suggests that, somewhat counterintuitively, the size of the world environment has a greater impact on these planners' performance than the size of the construction. In the PDDL based approaches, the number of operators and facts produced in the preprocessing shows a similar trend so the planner's performance seems directly influenced by the size of the grounded task. For PANDA, on the other hand, we observe a linear increase in the number of facts and only a comparatively small increase in the number of operators. To test more precisely what is the impact of increasing the world size, we ran a second set of experiments where we kept the size of the house fixed at 5ˆ5ˆ5 and just increased the size of the world. As shown in the bottom part of FIG3 the performance of SHOP2 is not affected at all, since it does not require enumerating all possible locations. Search time for PANDA also stays mostly constant, but the overhead in the preprocessing phase dominates the total time. This contrasts with the number of operators and facts, which is not affected by the world size at all. 
The PDDL based models are also affected in terms of preprocessing time, due to a linear increase in the number of facts and operators with respect to world size, but to a lesser degree. However, search time increases linearly with respect to the world size due to the overhead caused in the heuristic evaluation. We have introduced several models of a construction scenario in the Minecraft game. Our experiments have shown that, even in the simplest construction scenario which is not too challenging from the point of view of the search, current planners may struggle when the size of the world increases. This is a serious limitation in the Minecraft domain, where worlds with millions of blocks are not unrealistic. Lifted planners like SHOP2 perform well. However, it must be noted that they follow a very simple search strategy, which is very effective on our models where any method decomposition always leads to a valid solution. However, it may be less effective when other constraints must be met and/or optimizing quality is required. For example, if some blocks are removed from the ground by the user, then some additional blocks must be placed as auxiliary structure for the main construction. Arguably, this could be easily fixed by changing the model so that whenever a block cannot be placed in a target location, an auxiliary tower of blocks is built beneath the location. However, this increases the burden of writing new scenarios since suitable task decompositions (along with good criteria of when to select each decomposition) have to be designed for all possible situations. This makes the SHOP2 model less robust to unexpected situations that were not anticipated by the domain modeler. PANDA, on the other hand, supports insertion of primitive actions BID9, allowing the planner to consider placing additional blocks, e.g., to build supporting structures that do not correspond to any task in the HTN. This could help to increase the robustness of the planner in unexpected situations where auxiliary structures that have not been anticipated by the modeler are needed. However, this is currently only supported by the POCL-plan-based search component and considering all possibilities for task insertion significantly slows down the search and it runs out of memory in our scenarios. This may point out new avenues of research on more efficient ways to consider task insertion. In related Minecraft applications, cognitive priming has been suggested as a possible solution to keep the size of the world considered by the planner at bay BID17. In construction scenarios, however, large parts of the environment can be relevant so incremental grounding approaches may be needed to consider different parts of the scenario at different points in the construction plan. Our models are still a simple prototype and they do not yet capture the whole complexity of the domain. We plan to extend them in different directions in order to capture how hard it is to describe actions or method decompositions in natural language. For example, while considering the position of the user is not strictly necessary, his visibility may be important because objects in his field of view are easier to describe in natural language. 
How to effectively model the field of vision is a challenging topic, which may lead to combinations with external solvers like in the planning modulo theories paradigm BID10.Another interesting extension is to consider how easy it is to express the given action in natural language and for example by reducing action cost for placing blocks near objects that can be easily referred to. Such objects could be landmarks e.g. blocks of a different type ("put a stone block next to the blue block") or just the previously placed block (e.g., "Now, put another stone block on top of it").
We model a house-construction scenario in Minecraft in classical and HTN planning and compare the advantages and disadvantages of both kinds of models.
515
scitldr
Attacks on natural language models are difficult to compare due to their different definitions of what constitutes a successful attack. We present a taxonomy of constraints to categorize these attacks. For each constraint, we present a real-world use case and a way to measure how well generated samples enforce the constraint. We then employ our framework to evaluate two state-of-the art attacks which fool models with synonym substitution. These attacks claim their adversarial perturbations preserve the semantics and syntactical correctness of the inputs, but our analysis shows these constraints are not strongly enforced. For a significant portion of these adversarial examples, a grammar checker detects an increase in errors. Additionally, human studies indicate that many of these adversarial examples diverge in semantic meaning from the input or do not appear to be human-written. Finally, we highlight the need for standardized evaluation of attacks that share constraints. Without shared evaluation metrics, it is up to researchers to set thresholds that determine the trade-off between attack quality and attack success. We recommend well-designed human studies to determine the best threshold to approximate human judgement. Advances in deep learning have led to impressive performance on many tasks, but models still make mistakes. Models are particularly vulernable to adversarial examples, inputs designed to fool models . demonstrated that image classification models could be fooled by perturbations indistinguishable to humans. Due to the importance of natural language processing (NLP) tasks, a large body of research has focused on applying the concept of adversarial examples to text, including (; ; ; ; ; ; ; ; ; ; a). The importance of tasks such as spam and plagiarism detection highlights the need for robust NLP models. However, there are fundamental differences between image and text data. Unlike images, two different sequences of text are never entirely indistinguishable. This raises the question: if indistinguishable perturbations aren't possible, what are adversarial examples in text? We observe that each work from recent literature has a slightly different definition of what constitutes an adversarial example in natural language. Comparing the success rate of two attacks is meaningless if the attacks use different methods to evaluate the same constraints or define different constraints altogether. In this paper, we build on to introduce a taxonomy of constraints specific to adversarial examples in natural language. To the best of our knowledge, our work provides the first comprehensive framework for categorizing and evaluating attack constraints in natural language. We discuss use cases and propose standardized evaluation methods for each of these constraints. We then apply our evaluation methods to the synonym-substitution based attacks of and. These attacks claimed to preserve the syntax and semantics of the original sentence, while remaining non-suspicious to a human interpreter. However, we find that most of their adversarial examples contain additional grammatical errors, and human surveys reveal that many adversarial examples also change the meaning of the sentence and/or do not appear to be written by humans. These call into question the ubiquity of synonym-based adversarial examples and emphasize the need for more careful evaluation of attack approaches. Lastly, we discuss how previous works rely on arbitrary thresholds to determine the semantic similarity of two sentences. 
These thresholds can be tuned by the researcher to make their methods seem more successful with little penalty in quantitative metrics. Thus, we highlight the importance of standardized human evaluations to approximate the true threshold value. Any method that introduces a novel approach to measure semantic similarity should support their choice of threshold with defensible human studies. The three main contributions of this paper are: • We formally define and categorize constraints on adversarial examples in text, and introduce evaluation methods for each category. • Using these categorizations and evaluation methods, we quantitatively disprove claims that stateof-the-art synonym-based substitutions preserve semantics and grammatical correctness. • We show the sensitivity of attack success rate to changes in semantic similarity thresholds set by researchers. We assert that perturbations which claim semantic similarity should use standardized human evaluation studies with precise wording to determine an appropriate threshold. Research has shown that current deep neural network models lack the ability to classify adversarial examples correctly . Two images that appear identical to a human but are different by a minuscule amount can be assigned grossly different classification scores by a model. These adversarial examples expose issues that arise from the black-box nature of deep learning predictions. Studying them is key to eventually building secure applications based on deep learning. The search for adversarial examples can be framed as a simple optimization problem: maximizing the change in prediction while minimizing the change in input (; ; b;). Past work in the image domain explored applying a small factor of the gradient to move an input in the "worst-case" direction. applied a similar approach for text by mapping sentences into latent space. Since then, most approaches have been more heuristic-driven due to the difficulty of maintaining syntactic and semantic similarity otherwise. We categorize these attacks into four groups. Attacks by Character Substitution: Recently, multiple studies proposed to attack natural language classification models by replacing words with deliberately misspelled words (; ;). These studies used character replacements to change a word into one that the model doesn't recognize. Such replacements are designed to create character sequences that a human would easily correct into the intended words. Character replacements, while effective in certain circumstances, can be detected with any spellchecking software. demonstrated a way to train a model that correctly classifies these misspelled inputs. Attacks by Word Insertion or Removal: and devised a way to determine the most important words in the input and then used heuristic-driven rules to generate perturbed inputs by adding or removing important words. appended extra sentences to paragraphs that were inputs to reading comprehension tasks. These three studies each point out interesting flaws in natural language models, but do not produce adversarial examples that retain the semantics of the original input. Attacks by Paraphrase: The bulk of recent work has defined adversarial examples in natural language as any sequences that both fool the model and share the meaning of the original input . Thus, if document x is of class y, then any paraphrase of x that is not of class y is a bona-fide adversarial example. 
Some of these works proposed to use neural machine translation systems to paraphrase input text to create semantics-preserving perturbations. However, these systems have difficulty generating diverse, syntactically correct paraphrases. Their paraphrases often have bad grammar and overall have trouble fooling the model. Attacks by Synonym Substitution: Due to the difficulty of generating paraphrases in general, several works (e.g., Papernot et al., 2016a) have developed easier ways to generate a subset of all paraphrases: by replacing a certain number of words from the input with synonyms. One study proposed generating word-level swaps with a population-based optimization algorithm. The study aimed to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models. In order to select the best replacement word, the study computed the nearest neighbors of a selected word according to the distance in the counter-fitted embedding space. It also used the Google one billion words language model to filter out words that do not fit within the context. Another recent system, TextFooler, was proposed to attack three DNN models, including the powerful pre-trained BERT and the widely used convolutional and recurrent neural networks. The method identifies the most important words in the input sequence and replaces them one-by-one with synonyms until the model output changes. The study used counter-fitted word embeddings to select the best replacement word and then utilized the Universal Sentence Encoder to measure a similarity score between the perturbed and original inputs. Other work used a technique called Interval Bound Propagation (IBP) to train models that withstand synonym attacks. IBP encourages models to make the same prediction on all sequences within a predefined neighborhood. In this case, it ensures that sentences that are a few synonym swaps away from each other have similar prediction scores. However, IBP is currently only feasible for simple models: feedforward networks, CNNs, and LSTMs with only a few layers. Perhaps more problematic, training with IBP causes a substantial drop in accuracy on the test set. Prior work defines a broad framework to determine whether a machine learning model is secure, but does not specialize it to adversarial examples on text. Other work laid out a set of potential constraints for the attack space when generating adversarial examples, each of which is useful in different real-world scenarios. A third line of work defined a framework for evaluating attacks on machine translation models, focusing on meaning-preservation constraints, but restricted its definitions to sequence-to-sequence models. When constructing adversarial examples, it is important to define the space of inputs available to the attacker. A successful adversarial example is defined as one that fools the model, but other constraints are often defined to restrict the outputs that an attacker may produce. For a given classifier F and a test input x, we denote y_gt as the ground-truth output, F(x) as the model output, and C_0, C_1, ..., C_n as a set of functional constraints, each of which represents whether an attack satisfies a certain constraint. An example x_adv is considered adversarial if F(x_adv) ≠ y_gt and every constraint C_i(x, x_adv) is satisfied. Adversarial examples thus aim to achieve two goals at the same time: to induce misclassification by achieving F(x_adv) ≠ y_gt (we focus on untargeted attacks), and to restrict x_adv. For instance, C_0(x, x_adv) requires that morphology be preserved between x and x_adv. Unlike with images, text perturbations are never indistinguishable.
The attacker must decide what makes two natural language inputs indistinguishable based on the scenario. This increases the importance of clearly defining constraints. Different sets of constraints are appropriate for different use cases. We extend on the categorization of attack spaces for adversarial examples introduced by to introduce a taxonomy of constraints for natural language specifically, and discuss possible motivations for studying each constraint. We propose a method for evaluating whether adversarial examples meet each constraint. We define three mutually exclusive categories of semantics-related constraints on attacks in natural language: altering an input sequence while changing as few characters as possible (morphologicallypreserving), altering an input sequence while retaining its meaning (semantics-preserving), and crafting an input sequence which contains a specific message (semantics-constrained). We then define two constraints which may be used in addition to any semantic constraints: a requirement for adversarial examples to be grammatically correct, and the requirement to appear to have been written by a human (non-suspicious). Appendix A.1 categorizes a selection of prior work into these groups. 3.1 MORPHOLOGY-PRESERVING PERTURBATION In some situations, the attacker is willing to change the semantics of the input as long as all changed sentences read the same to a human. These character substitutions may change the semantics of the sentence, as long as the reader can figure out the intended words from the context. If no defense is provided for morphological perturbations, an attacker can easily transmit a message by adding or a deleting a handful of characters . A perturbation is morphology-preserving if satisfying C(x, x adv):= {d(x, x adv) ≤ }, where d measures the morphological distance between two strings. One common choice of d for morphological distance is d edit (x, x adv), the edit distance between x and x adv. Edit distance is the minimum number of operations (insertions, deletions, or substitutions) required to transform the string x adv back into x. 3.2 SEMANTICS-PRESERVING PERTURBATION The attacker starts with an input sequence and is allowed to modify the input sequence however they want as long as the semantics is preserved. A use case for this class of adversarial examples is tricking plagiarism detection software. An attacker wants to preserve as much of the original document as possible but still avoid detection. An adversarial example is semantics-preserving if a human labeler agrees that any changes between the input sequence x and the modified sequence x adv do not alter the meaning of the sequence. We define d sem (x, x adv) as the score given by humans when asked to rate if the meaning is preserved on a Likert scale of 1-5, where 1 is "Strongly Agree" and 5 is "Strongly Disagree" . A perturbation is semantics-preserving if: We propose = 2 as a general rule: on average, humans should either "agree" or "strongly agree" that x and x adv have the same meaning. Evaluation of semantics-preserving perturbations should ideally be done by humans, but machine evaluation of semantic similarity can be used as a proxy for human judgment, as explored by. Automated metrics of meaning should always be supported by human studies. The attacker may generate any input sequence as long as it contains the semantic content that the attacker intends. 
This is distinct from the semantics-preserving case because there is no starting input sequence to perturb; the attacker can convey their message by any means possible. In these cases, the attacker can write anything, as long as it conveys the intended meaning. As discussed in prior work, one use case for semantics-constrained input is to fool a spam classifier. The attacker finds some way to relay their message that evades spam detection. The attacker may create any sequence as long as it contains the semantic content they desire. Examples with this constraint are more difficult to generate than semantics-preserving perturbations because the search space is much bigger and it is challenging to check that the model produces incorrect output without human review. Evaluation of semantics-constrained input is more difficult than evaluation of semantics-preserving perturbations, as there is no starting point to compare to. There will never be an automated way to tell if an email is spam with 100% accuracy. Whether an input is semantics-constrained must be determined by a human judge. 3.4 SYNTACTIC CONSTRAINT Under this constraint, the attacker is restricted to inputs that are grammatically valid. Perturbations that introduce grammar errors often aren't semantics-preserving. However, in many cases, a human reader will recognize that x and x adv have the same meaning even if x adv introduces additional syntactic errors. There may be use cases for both semantics-preserving perturbations and semantics-constrained inputs with and without syntactic constraints: • Semantics-preserving perturbation with syntactic constraint: Consider a student who wishes to alter a copied assignment to evade plagiarism detection. They must perturb the assignment while preserving the meaning, but introducing grammatical errors would get them a bad grade. • Semantics-preserving perturbation without syntactic constraint: An attacker who wishes to illegally distribute a PDF online must retain the content in the PDF, but readers of the PDF are indifferent to whether grammar errors are introduced as long as the meaning is unchanged. Similar to the case of illegal video streaming discussed in prior work, the consumers of the PDF may be thought of as colluding with the attacker. • Semantics-constrained input with syntactic constraint: An adversary wishing to evade a fake news classifier must generate an article with correct grammar. • Semantics-constrained input without syntactic constraint: Someone wishing to post offensive content on social media may be willing to purposely misspell words or insert grammatical errors in order to get their content onto a platform. Evaluation of whether adversarial examples follow the grammatical constraint requires a way of measuring the number of grammar errors in a document. Define G(s) as the number of syntactic errors in an input sequence s. When applying a semantics-preserving perturbation from x to x adv, the grammatical constraint may be represented as G(x adv) − G(x) ≤ ε, where ε represents the number of additional grammatical errors the adversary is willing to introduce. When generating input from scratch, the constraint becomes simply G(x adv) ≤ ε, where ε is the total number of grammatical errors the adversary is willing to create. Methods which claim to introduce no grammatical errors imply ε = 0. Evaluation of G(x) and G(x adv) may be done by humans, but pragmatically it is more convenient and consistent to use an automatic grammar checker such as LanguageTool.
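Both the morphological distance d_edit from Section 3.1 and the error count G(·) above are mechanical to compute. The following is a minimal sketch of the two checks, assuming the open-source language_tool_python wrapper around LanguageTool (it downloads a local LanguageTool instance on first use); the function names and ε arguments are illustrative, not taken from any particular attack implementation.

```python
import language_tool_python

_tool = language_tool_python.LanguageTool("en-US")  # assumed wrapper around the LanguageTool checker

def edit_distance(x: str, x_adv: str) -> int:
    """d_edit(x, x_adv): minimum insertions, deletions, substitutions (Section 3.1)."""
    m, n = len(x), len(x_adv)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                               # deletion
                         cur[j - 1] + 1,                            # insertion
                         prev[j - 1] + (x[i - 1] != x_adv[j - 1]))  # substitution
        prev = cur
    return prev[n]

def grammar_errors(text: str) -> int:
    """G(text): number of rule violations flagged by LanguageTool."""
    return len(_tool.check(text))

def satisfies_morphology(x: str, x_adv: str, eps: int) -> bool:
    return edit_distance(x, x_adv) <= eps                     # d_edit(x, x_adv) <= eps

def satisfies_syntax(x: str, x_adv: str, eps: int) -> bool:
    return grammar_errors(x_adv) - grammar_errors(x) <= eps   # G(x_adv) - G(x) <= eps
```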
3.5 NON-SUSPICIOUS CONSTRAINT The non-suspicious constraint specifies that the adversarial example must appear to be humanwritten to other humans. It has some overlap with the syntactic constraint, but they do not always go together. Sentences without grammar errors may still not seem human-written. Conversely, grammatically incorrect sentences could always have been written by a human. An example use case of an attack with the non-suspicious constraint is the plagiarism example discussed above: the student would not want to make modifications to their assignment such that the teacher would be able to tell the assignment was not human-written. A case where the non-suspicious constraint does not apply is the illegal PDF as mentioned earlier, as the consumers of the PDF do not care if it has been altered as long as meaning is preserved. Evaluation of the non-suspicious constraint must be done by humans. We propose a method in which humans are shown a shuffled mix of real and adversarial sequences and must guess whether each one is real or computer-altered, or computer-generated in the case of semantics-constrained input. Define the portion of people who correctly identify an adversarial example x adv as computeraltered as R(x adv) An adversarial example is considered to meet the non-suspicious constraint if R(x adv) <, where 0 < ≤ 1. A smaller value of enforces more rigorous standards on what must appear to be written by a human. We now apply the evaluation framework discussed in Section 3 to evaluate the effectiveness of the attack techniques from and. We chose these these works because: • They claim to find semantics-preserving perturbations which adhere to the syntactic constraint, and imply adherence to the non-suspicious constraint. However, our inspection of the adversarial perturbations revealed that many introduced syntax errors or did not preserve semantics. • They report high attack success rates 1. • These methods cover two of the most effective models for text classification: LSTM and BERT. To generate examples for evaluation, we attacked BERT using's method and attacked a WordLSTM using's method. We only considered classification tasks as our focus is on evaluating the adversarial examples. We evaluate both methods on the IMDB 2 and Yelp polarity document-level sentiment classification datasets and's method on the MR sentence-level sentiment classification dataset . We ran experiments to evaluate whether the generated examples met three constraints: syntax, semanticspreservation and non-suspicion. We evaluate fulfillment of the syntactic constraint using a grammar checker. These programs are easy to use, free, and cover a wide range of grammatical rules over many languages. We chose to use LanguageTool, an open-source proofreading tool . LanguageTool ships with thousands of human-curated rules for the English language and provides a downloadable server interface for analyzing sentences. We ran each of the generated (x, x adv) pairs through LanguageTool and compared the amount of detected errors to determine if the syntactic constraint as defined in Equation 3 was fulfilled. LanguageTool detected more grammatical errors in x adv for 52.1% of pairs across the five datasets. There is a clear linear relationship between the number of words changed and the number of grammatical errors induced (Figure 1). Clearly, the majority of generated examples don't fulfill the syntactic constraint. 
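The aggregation behind the 52.1% figure is straightforward; a sketch is below, assuming per-pair error counts were computed as in the previous snippet. The helper names are ours.

```python
from statistics import correlation  # available in Python 3.10+

def syntactic_violation_rate(error_counts):
    """error_counts: list of (G(x), G(x_adv)) pairs.
    Fraction of pairs in which the perturbation introduced additional grammar errors."""
    worse = sum(1 for g_x, g_adv in error_counts if g_adv > g_x)
    return worse / len(error_counts)

def words_changed_vs_errors(records):
    """records: list of (words_changed, G(x), G(x_adv)) triples.
    Correlation between the number of swapped words and the errors introduced."""
    swapped = [w for w, _, _ in records]
    introduced = [g_adv - g_x for _, g_x, g_adv in records]
    return correlation(swapped, introduced)
```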
A savvy defender who wants to build a system robust against these types of substitution attacks may notice that adversarial examples contain strange phrasings and unnatural errors that a human would almost never make. LanguageTool detected errors in several categories that appeared far more often in adversarial samples than in the original, human-generated content. As discussed in Section 3.2, tools for automatically evaluating semantic similarity exist, but human judgement is essential to confirm such automatic evaluations. Both attack studies include human evaluation, but each relied on a single scale on which users rated the similarity of x and x adv, and the rated similarity was relatively high. We believe that this phrasing did not generate an accurate measure of semantic similarity, as users may have considered two sentences with clearly different meanings to be "similar" due to their small morphological distance. Instead, we ask users to determine whether the changes between x and x adv preserve the meaning of the passage. We set up our own survey to evaluate human judgement on the adversarial examples from the two attack methods. We enlisted workers from Amazon Mechanical Turk to label 3907 data points across the five datasets. Our survey randomized the order of x and x adv and displayed them side-by-side. To evaluate whether the perturbation was semantics-preserving as defined in Equation 2, we asked users to rate whether they agreed that the changes between the two passages preserved the meaning of the passage on a scale of 1 (Strongly Agree) to 5 (Strongly Disagree). Average scores as shown in Figure 2 vary by dataset, but users' answers generally average out to around 3 ("Not Sure"). Clarifying the survey question makes it clear that many of the examples are not semantics-preserving. Additionally, we note that due to acquiescence bias, labelers are generally more likely to agree than disagree when presented with a statement 3. To test this effect, we negated the question: instead of asking whether the changes from x to x adv "preserved" the meaning of x, we asked whether the differences "changed" the meaning. Table 3: Likert scale response: Does changing from x to x adv change meaning? Table 3 shows that averages are now closest to "2 - Agree" rather than "3 - Not sure". Thus, inverting the question resulted in generated examples being rated as less semantics-preserving. Future human evaluation studies should use similar practices to minimize the effects of this bias. The final claim we investigated was whether adversarial examples generated by these two attack techniques were truly non-suspicious. We presented examples one by one to humans and asked if they were real or computer-altered as proposed in Section 3.5. A method that consistently generated non-suspicious adversarial examples would produce inputs that appear to be human-written. In this case, human labelers would average an accuracy of 50%. If humans could consistently discern perturbed inputs from real inputs, they would achieve 100% on this task. As this is a time-consuming task for long documents, we only evaluated adversarial examples generated on the sentence-level MR dataset. We sampled 100 (x, x adv) pairs and then used Mechanical Turk to get 10 guesses for each example. Humans achieved 69.2% accuracy on this task. Table 4 presents the confusion matrix of results from the survey. Interestingly, workers guessed that the examples were real 62.2% of the time, but when they guessed that examples were computer-altered they were right 75.4% of the time.
Thus while some perturbed examples are non-suspicious, there are some which workers identify with high precision. Original 814 431 Perturbed 186 570 Sometimes, prior work has chosen the same metric d but different thresholds for. Synonym substitution attacks provide one example. These attacks consider two sentences (x, x adv) that differ by a single word (w ∈ x, w adv ∈ x adv) to have the same meaning if w adv is one of the closest neighbors in the counter-fitted embedding space. In other words, indicates how many synonyms they allowed. At least three different such attacks rely on this metric: Alzantot et al. uses = 8, Kuleshov et al. uses = 15, and Jin et al. (2019 uses = 50. Without an agreed-upon standard value, choice of is at the researcher's discretion. This is problematic because there is a direct correlation between the value chosen for and the success of an attack. used an additional distance metric d defined as the cosine similarity between embeddings encoded by the Universal Sentence Encoder (USE) in order to determine if a synonym swap preserves semantic similarity . Figure 3 shows accuracy under BERT under attack by's method as the maximum allowed cosine similarity between two sentences' USE embeddings increases 4. As becomes more strict, the attack becomes less successful. Figure 4 plots the accuracy under attack as the number of synonyms considered for each substitution increases. An attack that is more lenient with its standard for what constitutes a synonym is more successful. For any distance metric, there is no perfect value of. Whether two sentences are truly paraphrases is often subjective. We cannot expect any automated evaluation tool to perfectly separate paraphrases from non-paraphrases. However, there will be a range of values for that correlate the most closely with the judgement of humans. Without using standardized human studies like the one in Section 4.2 to verify the linguistic standard provided by the choice of, generated adversarial examples may not be useful at all. The correct range of values of will vary by metric. Every study that presents a new way to measure semantic similarity should perform a human study to approximate the proper value for. With consistent values for, researchers will be able to easily compare methods that use the same metrics. We introduced a framework for evaluating fulfillment of attack constraints in natural language. Applying this framework to synonym substitution attacks raised concerns about the semantic preservation, syntactic accuracy, and conspicuity of the adversarial examples they generate. Future work may expand our hierarchy to categorize and evaluate different attack constraints in natural language. Standardized terminology and evaluation metrics will make it easier for defenders to determine which attacks they must protect themselves from-and how. It remains to be seen how robust BERT is when subject to synonym attacks which rigorously preserve semantics and syntax. It is up to future research to determine how prevalent adversarial examples are throughout the broader space of paraphrases. A.1 CATEGORIZATION OF NLP ATTACKS Past research in attacks on textual models has provided many disparate sets of constraints for adversarial examples. Here, we select a small slice of the past work and categorize it based on the taxonomy outlined in Section 3. Constraint on Meaning Syntactic Constraint Non-suspicious Synonym Substitution. (; ;) semantics-preserving perturbation Character Substitution. 
(; ;) morphology-preserving perturbation Word Insertion or Removal. ) semantics-preserving perturbation General Paraphrase. (; ;) semantics-preserving perturbation Good Word Attack. semantics-preserving perturbation Sentence Insertion. semantics-preserving perturbation A.2 DETAILS ABOUT HUMAN STUDIES. Our experiments relied on labor crowd-sourced from Amazon Mechanical Turk to provide labels for two tasks. We used five datasets: MIT and Yelp datasets from and MIT, Yelp, and Movie Review datasets from . We added a limitation so that only workers with "Masters" status on Mechanical Turk could complete our tasks. We estimated that each task would take approximately 15 seconds to complete, so we paid workers $.05 per label to ensure a fair wage of $12 per hour. For both tasks, we allotted workers 3 minutes per assignment. Task 1: Real vs. fake? In Section 4.3, we present from our Mechanical Turk survey where we asked users to guess if a sample of text was real or Computer-altered. We restricted this task to a single dataset, Movie Review. We chose Movie Review because it had an average sample length of 20 words, much shorter than Yelp or IMDB. We made this restriction because of the time-consuming nature of classifying long samples as Real or Fake. Task 2: Semantic Similarity. In Section 4.2, we present from two Mechanical Turk questionnaires to judge semantic similarity or dissimilarity. For each task, we show x and x adv, in a random order. We added a custom script to highlight the different characters between the two sequences. For both tasks, we provided the following description: "Compare two short pieces of English text and determine if they mean different things or the same." We then prompted labelers: "Changing from one of these sentences to the other X the meaning," where X was "changes" or "preserves". Phrasing matters. Mechanical Turk comes with a set of pre-designed questionnaire interfaces. These include one titled "Semantic Similarity" which asks users to rate a pair of sentences on a scale from "Not Similar At All" to "Highly Similar." Examples generated by synonym attacks benefit from this question formulation because humans tend to rate two sentences that share many words as "Similar" due to their small morphological distance, even if they have different meanings. Cognitive biases in the Likert scale. It was interesting to compare from the surveys within Task 2. Since all we did was negate the question, the should have been inverses: i.e., every label that said "Agree" for the first formulation of the question should have said "Disagree" for the second, etc. However, we found that labelers, at least on the Mechanical Turk, tend to select "Agree" regardless of the question. When asked if the change from x to x adv "changes" meaning, 68.3% of labelers selected "Agree". Surprisingly, when asked if the change from x to x adv "preserves" meaning, 43.6% of labelers still answered with "Agree"! Overall, 55.9% of our 7,816 responses, from two opposite surveys, were simply "Agree". We believe that this is an instance of a phenomenon called acquiescence bias, where respondents to a survey gravitate towards answers that are generally positive . Notes for future surveys. In the future, we would try to filter out bad labels by mixing a small number of ground-truth "easy" data points into our dataset and rejecting the work of labelers who performed poorly on this set. 
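A hypothetical sketch of the gold-question filter suggested above: seed the task with easy items whose labels are known and drop workers whose accuracy on them is low. The names and the cutoff are ours.

```python
def filter_workers(labels, gold, min_accuracy=0.8):
    """labels: {worker_id: {item_id: label}}; gold: {item_id: correct_label}."""
    kept = {}
    for worker, answers in labels.items():
        scored = [answers[i] == gold[i] for i in gold if i in answers]
        if scored and sum(scored) / len(scored) >= min_accuracy:
            kept[worker] = answers  # keep only workers who pass the gold questions
    return kept
```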
A.3 LANGUAGETOOL ERROR TYPES LanguageTool presents a helpful error message along with each detected grammatical rule violation. This table displays sample error messages alongside the error codes from Figure 2. For example, code PRP_RB_NO_VB appears 20 times more frequently in adversarial samples than it does in real, human-crafted input. This specific error often appears when the attack substitutes a verb with a noun.
6. TO_NON_BASE (12.0): The verb after "to" should be in the base form: 'constitute'.
7. PRP_PAST_PART (8.7): Possible grammatical error. You used a past participle without using any required verb ('be' or 'have'). Did you mean 'was', 'were'?
8. AFFORD_VB (7.5): This verb is used with the infinitive: 'to better', 'to well'.
We present a framework for evaluating adversarial examples in natural language processing and demonstrate that generated adversarial examples are often not semantics-preserving, syntactically correct, or non-suspicious.
516
scitldr
In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different}interpretations. We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches. Predictions made by machine learning algorithms play an important role in our everyday lives and can affect decisions in technology, medicine, and even the legal system . As the algorithms become increasingly complex, explanations for why an algorithm makes certain decisions are ever more crucial. For example, if an AI system predicts a given pathology image to be malignant, then the doctor would want to know what features in the image led the algorithm to this classification. Similarly, if an algorithm predicts an individual to be a credit risk, then the lender (and the borrower) might want to know why. Therefore having interpretations for why certain predictions are made is critical for establishing trust and transparency between the users and the algorithm .Having an interpretation is not enough, however. The explanation itself must be robust in order to establish human trust. Take the pathology predictor; an interpretation method might suggest that a particular section in an image is important for the malignant classification (e.g. that section could have high scores in saliency map). The clinician might then focus on that section for investigation, treatment or even look for similar features in other patients. It would be highly disconcerting if in an extremely similar image, visually indistinguishable from the original and also classified as malignant, a very different section is interpreted as being salient for the prediction. Thus, even if the predictor is robust (both images are correctly labeled as malignant), that the interpretation is fragile would still be highly problematic in deployment. Our contributions. The fragility of prediction in deep neural networks against adversarial attacks is an active area of research BID4;; ). In that setting, fragility is exhibited when two perceptively indistinguishable images are assigned different labels by the neural network. In this paper, we extend the definition of fragility to neural network interpretation. 
More precisely, we define the interpretation of neural network to be fragile if perceptively indistinguishable images that have the same prediction label by the neural network are given substantially different interpretations. We systematically The fragility of feature-importance maps. We generate feature-importance scores, also called saliency maps, using three popular interpretation methods: simple gradient (a), DeepLIFT (b) and integrated gradient (c). The top row shows the the original images and their saliency maps and the bottom row shows the perturbed images (using the center attack with = 8, as described in Section 3) and the corresponding saliency maps. In all three images, the predicted label has not changed due to perturbation; in fact the network's (SqueezeNet) confidence in the prediction has actually increased. However, the saliency maps of the perturbed images are meaningless.investigate two classes of interpretation methods: methods that assign importance scores to each feature (this includes simple gradient , DeepLift , and integrated gradient ), as well as a method that assigns importances to each training example: influence functions . For both classes of interpretations, we show that targeted perturbations can lead to dramatically different interpretations (FIG0).Our findings highlight the fragility of interpretations of neural networks, which has not been carefully considered in literature. Fragility directly limits how much we can trust and learn from the interpretations. It also raises a significant new security concern. Especially in medical or economic applications, users often take the interpretation of a prediction as containing causal insight ("this image is a malignant tumor likely because of the section with a high saliency score"). An adversary could minutely manipulate the input to draw attention away from relevant features or onto his/her desired features. Such attacks might be especially hard to detect as the actual labels have not changed. While we focus on image data here because most of the interpretation methods have been motivated by images, the fragility of neural network interpretation could be a much broader problem. Fig. 2 illustrates the intuition that when the decision boundary in the input feature space is complex, as is the case with deep nets, a small perturbation in the input can push the example into a region with very different loss contours. Because the feature importance is closely related to the gradient which is perpendicular to the loss contours, the importance scores can also be dramatically different. We provide additional analysis of this in Section 5. This first class of methods explains predictions in terms of the relative importance of features in a test input sample. Given the sample x t ∈ R d and the network's prediction l, we define the score of the predicted class S l (x t) to be the value of the l-th output neuron right before the softmax operation. We take l to be the class with the max score; i.e. the predicted class. Feature-importance methods seek to find the dimensions of input data point that most strongly affect the score, and in doing so, these methods assign an absolute saliency score to each input feature. Here we normalize the scores for each image by the sum of the saliency scores across the features. 
This ensures that any perturbations that we design change not the absolute feature saliencies (which may still preserve DISPLAYFORM0 This training point has a large influence on the loss at + This training point has a large influence on the loss at Figure 2 : Intuition for why interpretation is fragile. Consider a test example x t ∈ R 2 (black dot) that is slightly perturbed to a new position x t + δ in input space (gray dot). The contours and decision boundary corresponding to a loss function (L) for a two-class classification task are also shown, allowing one to see the direction of the gradient of the loss with respect to the input space. Neural networks with many parameters have decision boundaries that are roughly piecewise linear with many transitions. We illustrate that points near the transitions are especially fragile to interpretability-based analysis. A small perturbation to the input changes the direction of ∇ x L from being in the direction of x 1 to being in the direction of x 2, directly affecting feature-importance analyses. Similarly, a small perturbation to the test image changes which training image, when up-weighted, has the largest influence on L, directly affecting exemplar-based analysis.the ranking of different features), but their relative values. We summarize three different methods to calculate the normalized saliency score, denoted by R(x t).Simple gradient method Introduced in BID2 and applied to deep neural networks in , the simple gradient method applies a local linear approximation of the model to detect the sensitivity of the score to perturbing each of the input dimensions. Given input x t ∈ R d, the score is defined as: DISPLAYFORM0 Integrated gradients A significant drawback of the simple gradient method is the saturation problem discussed by;. introduced the integrated gradients method where the gradients of the score with respect to M scaled versions of the input are summed and then multiplied by the input. Letting x 0 be the reference point and ∆x t = x t − x 0, the feature importance vector is calculated by: DISPLAYFORM1 which is then normalized for our analysis. Here the absolute value is taken for each dimension. DeepLIFT DeepLIFT is an improved version of layer-wise relevance propagation (LRP) method BID1. LRP methods decompose the score S l (x t) backwards through the neural network. In each step, the score from the last layer is propagated to the previous layer, with the score being divided proportionally to magnitude of the activations of the neurons in the previous layer. The scores are propagated to the input layer, and the is a relevance score assigned to each of the input dimensions. DeepLIFT defines a reference point in the input space and propagates relevance scores proportionally to the changes in the neuronal activations from the reference. We use DeepLIFT with the Rescale rule; see for details. A complementary approach to interpreting the of a neural network is to explain the prediction of the network in terms of its training examples, {(x i, y i)}. DISPLAYFORM0 where z i def = (x i, y i) and z t is defined analogously. L(z,θ) is the loss of the network with parameters set toθ for the (training or test) data point z. Hθ DISPLAYFORM1 is the empirical Hessian of the network calculated over the training examples. The training examples with the highest influence are understood as explaining why a network made a particular prediction for a test example. 
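For concreteness, a minimal PyTorch sketch of the first two feature-importance maps described above (simple gradient and integrated gradients) is given below. It assumes model(x) returns the pre-softmax scores for a batch; the normalization and reference point follow the text, but the code itself is an illustration rather than the authors' implementation.

```python
import torch

def simple_gradient(model, x, label):
    x = x.detach().clone().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, label]          # S_l(x), pre-softmax score of the predicted class
    grad, = torch.autograd.grad(score, x)
    sal = grad.abs()
    return sal / sal.sum()                           # normalized saliency R(x)

def integrated_gradients(model, x, label, x0=None, M=100):
    x0 = torch.zeros_like(x) if x0 is None else x0   # reference point
    dx = x - x0
    total = torch.zeros_like(x)
    for k in range(1, M + 1):                        # gradients at M scaled versions of the input
        xk = (x0 + (k / M) * dx).detach().requires_grad_(True)
        score = model(xk.unsqueeze(0))[0, label]
        grad, = torch.autograd.grad(score, xk)
        total += grad
    sal = (dx * total / M).abs()                     # multiply summed gradients by Δx, take |.| per dimension
    return sal / sal.sum()
```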
We consider two natural metrics for quantifying the similarity between interpretations for two different images. As shown in Fig. 3, these metrics can be used to evaluate the effectiveness of a targeted attack on interpretability.• Spearman's rank order correlation: Because interpretation methods rank all of the features or training examples in order of importance, it is natural to use the rank correlation to compare the similarity between interpretations.• Top-k intersection: In many settings, only the most important features or interpretations are of interest. In these settings, we can compute the size of the intersection of the k most important features before and after perturbation. Problem statement For a given fixed neural network N and input data point x t, the feature importance and influence function methods that we have described produce an interpretation I(x t ; N). For feature importance, I(x t ; N) is a vector of feature scores; for influence function I(x t ; N) is a vector of scores for training examples. We would like to devise efficient perturbations to change the interpretability of a test image. Yet, the perturbations should be visually imperceptible and should not change the label of the prediction. Formally, we define the problem as: DISPLAYFORM0 where D(·) measures the change in interpretation (e.g. how many of the top-k pixels are no longer the top-k pixels of the saliency map after the perturbation) and > 0 constrains the norm of the perturbation. In this paper, we carry out three kinds of input perturbations. Random sign perturbation As a baseline, we generate random perturbations in which each pixel is randomly perturbed by ±. This is used to measure robustness against untargeted perturbations. Iterative attacks against feature-importance methods In Algorithm 1 we define two adversarial attacks against feature-importance methods, each of which consists of taking a series of steps in the direction that maximizes a differentiable dissimilarity function between the original and perturbed interpretation. The top-k attack seeks to perturb the saliency map by decreasing the relative importance of the k most important features of the original image. When the input data are images, the center of mass of the saliency map often captures the user's attention. The mass-center attack is designed to in the maximum spatial displacement of the center of mass of the saliency scores. Both of these attacks can be applied to any of the three feature-importance methods. We can obtain effective adversarial images for influence functions without resorting to interative procedures. We linearize FORMULA3 around the values of the current inputs and parameters. If we further constrain the L ∞ norm of the perturbation to, we obtain an optimal single-step perturbation: DISPLAYFORM0 Algorithm 1 Iterative Feature-Importance Attacks Input: test image x t, maximum norm of perturbation, normalized feature importance function R(·), number of iterations P, step size α Define a dissimilarity function D to measure the change between interpretations of two images: DISPLAYFORM1 where B is the set of the k largest dimensions a of R(x t), and C(·) is the center of saliency mass b. 
DISPLAYFORM2 Perturb the test image in the direction of signed gradient c of the dissimilarity function: DISPLAYFORM3 If needed, clip the perturbed input to satisfy the norm constraint: DISPLAYFORM4.., x P }, return the element with the largest value for the dissimilarity function and the same prediction as the original test image.a The goal is to damp the saliency scores of the k features originally identified as the most important. b The center of mass is defined for a W × H image as: DISPLAYFORM5 In some networks, such as those with ReLUs, this gradient is always 0. To attack interpretability in such networks, we replace the ReLU activations with their smooth approximation (softplus) when calculating the gradient and generate the perturbed image using this approximation. The perturbed images that are effective adversarial attacks against the original ReLU network, as discussed in Section 4.The attack we use consists of applying the negative of the perturbation in to decrease the influence of the 3 most influential training images of the original test image 1. Of course, this affects the influence of all of the other training images as well. We follow the same setup for computing the influence function as was done by the authors of. Because the influence is only calculated with respect to the parameters that change during training, we calculate the gradients only with respect to parameters in the final layer of our network (InceptionNet, see Section 4). This makes it feasible for us to compute exactly, but it gives us the perturbation of the input into the final layer, not the first layer. So, we use standard back-propagation to calculate the corresponding gradient for the input test image. We then take the sign of this gradient as the perturbation and clip the image to produce the adversarial test image. Data sets and models To evaluate the robustness of feature-importance methods, we used two image classification data sets: ILSVRC2012 (ImageNet classification challenge data set) and CIFAR-10 . For the ImageNet classification data set, we used a pre-trained SqueezeNet 2 model introduced by BID5. For the CIFAR-10 data set we trained our own convolutional network, whose architecture is presented in Appendix A.1 In other words, we generate the perturbation given by: − sign(DISPLAYFORM0 where z (i) is the i th most influential training image of the original test image. 2 https://github.com/rcmalli/keras-squeezenet For both data sets, the are examined using simple gradient, integrated gradients, and DeepLIFT feature importance methods. For DeepLIFT, we used the pixel-wise and the channelwise mean images as the CIFAR-10 and ImageNet reference points respectively. For the integrated gradients method, the same references were used with parameter M=100. We ran all iterative attack algorithms for P = 300 iterations with step size α = 0.5.To evaluate the robustness of influence functions, we followed a similar experimental setup to that of the original authors: we trained an InceptionNet v3 with all but the last layer frozen (the weights were pre-trained on ImageNet and obtained from Keras 3). The last layer was trained on a binary flower classification task (roses vs. sunflowers), using a data set consisting of 1,000 training images 4. This data set was chosen because it consisted of images that the network had not seen during pre-training on ImageNet. The network achieved a validation accuracy of 97.5% on this task. 
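Before the results, a small sketch of the two interpretation-similarity metrics from Section 2.3 and of the random sign baseline, as we understand them; the function names are ours.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlation(sal_a, sal_b):
    rho, _ = spearmanr(sal_a.ravel(), sal_b.ravel())
    return rho

def top_k_intersection(sal_a, sal_b, k=1000):
    top_a = set(np.argsort(sal_a.ravel())[-k:])      # indices of the k most salient features
    top_b = set(np.argsort(sal_b.ravel())[-k:])
    return len(top_a & top_b) / k

def random_sign_perturbation(x, eps):
    return x + eps * np.sign(np.random.randn(*x.shape))  # untargeted +/- eps baseline
```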
Results for feature-importance methods From the ImageNet test set, 512 correctly-classified images were randomly sampled for evaluation purposes. Examples of the mass-center attack against three feature importance methods were presented in FIG0 Figure 3: Evaluation metrics vs subjective change We generate snapshots of the perturbed image and its simple gradient saliency maps along with iterations of mass-center attack to visualize the gradual change in saliency map with its corresponding the rank-correlation and top-1000 intersection metrics. In FIG1, we present aggregated over all 512 images. We compare different attack methods using top-1000 intersection and rank correlation methods. In all the images, the attacks does not change the original predicted label of the image. Random sign perturbation already causes significant changes in both top-1000 intersection and rank order correlation. For example, with L ∞ = 8, on average, there is less than 30% overlap in the top 1000 most salient pixels between the original and the randomly perturbed images across all three of interpretation methods. This suggests that the saliency of individual or small groups of pixels can be extremely fragile to the input and should be interpreted with caution. With targeted perturbations, we observe more dramatic fragility. Even with a perturbation of L ∞ = 2, the interpretations change significantly. Both iterative attack algorithms have similar effects on feature importance of test images when measured on the basis of rank correlation or top-1000 intersection. In Appendix D, we show an additional metric: the displacement of the center of mass between the original and perturbed saliency maps. Empirically, we find this metric to correspond most strongly with intuitive perceptions of the similarity between two saliency maps. Not surprisingly, we found that the center attack method was more effective than the top-k attack at moving the center of mass of the saliency maps. Comparing the fragility of neural network interpretation among the three different methods, we found that the integrated gradients method was Across 512 correctly-classified ImageNet images, we find that the top-k and center attacks perform similarly in top-1000 intersection and rank correlation measures, and are far more effective than the random sign perturbation at demonstrating the fragility of interpretability, as characterized through top-1000 intersection (top) as well as rank order correlation (bottom). This is true for (a) the simple gradient method, (b) DeepLift, and (c) the integrated gradients method.the most robust to both random and adversarial attacks. Similar for CIFAR-10 can be found in Appendix D.Results for influence functions We evaluate the robustness of influence functions on a test data set consisting of 200 images of roses and sunflowers. Fig. 5 shows a representative test image to which we have applied the gradient sign attack. Although the prediction of the image does not change, the most influential training examples selected according to, as explanation for the prediction, change entirely from images of sunflowers and yellow petals that resemble the input image to those of red and pink roses that do not. Additional examples can be found in Appendix E.In Fig. 6, we compare the random perturbations and gradient sign attacks across all of the test images. 
We find that the gradient sign-based attacks are significantly more effective at decreasing the rank correlation of the influence of the training images, as well as distorting the top-5 influential images. For example, on average, with a targeted perturbation of magnitude = 8, only 2 of the top 5 most influential training images remain as the top 5 most influential images after the visually imperceptible perturbation. The influences of the training images before and after an adversarial attack are essentially uncorrelated. However, we find that even random attacks can have a nonnegligible effect on influence functions, on average reducing the rank correlation to 0.8 (≈ 10). In this section, we try to understand the source of interpretation fragility. The question is whether fragility a consequence of the complex non-linearities of a deep network or a characteristic present even in high-dimensional linear models, as is the case for adversarial examples for prediction BID4. To gain more insight into the fragility of gradient based interpretations, let S(x; W) denote the score function of interest; x ∈ R d is an input vector and W is the weights of the neural network, which is fixed since the network has finished training. We are interested in the Figure 5: Gradient sign attack on influence functions. An imperceptible perturbation to a test image can significantly affect exemplar-based interpretability. The original test image is that of a sunflower that is classified correctly in a rose vs. sunflower classification task. The top 3 training images identified by influence functions are shown in the top row. Using the gradient sign attack, we perturb the test image (with = 8) to produce the leftmost image in the second row. Although the image is even more confidently predicted as a sunflower, influence functions suggest very different training images by means of explanation: instead of the sunflowers and yellow petals that resemble the input image, the most influential images are pink/red roses. The plot on the right shows the influence of each training image before and after perturbation. The 3 most influential images (targeted by the attack) have decreased in influence, but the influences of other images have also changed. Figure 6: Comparison of random and targeted perturbations on influence functions. Here, we show the averaged of applying random (green) and gradient sign-based (orange) perturbations to 200 test images on the flower classification task. While random attacks affect interpretability, the effect is small and generally doesn't affect the most influential images. On the other hard, a targeted attack can significantly affect (a) the rank correlation and (b) even change the make-up of the 5 most influential images. Even at the maximal level of noise, the changes to the perturbed images were visually imperceptible, and prediction confidence was not significantly changed (the mean change was < 1% for random attacks and < 5% for targeted attacks at the highest level of noise).Hessian H whose entries are H i,j = ∂S ∂xi∂xj. The reason is that the first order approximation of gradient for some input perturbation direction δ ∈ R d is: DISPLAYFORM0 First, consider a linear model whose score for an input x is S = w x. Here, ∇ x S = w and ∇ 2 x S = 0; the feature-importance vector w is robust, because it is completely independent of x. Thus, some non-linearity is required for interpretation fragility. 
A simple network that is susceptible to adversarial attacks on interpretations consists of a set of weights connecting the input to a single neuron followed by a non-linearity (e.g. softmax): S = g(w x).We can calculate the change in saliency map due to a small perturbation in x → x + δ. The first-order approximation for the change in saliency map will be equal to: H · δ = ∇ 2 x S · δ. In particular, the saliency of the i th feature changes by (∇ 2 x S · δ) i and furthermore, the relative change DISPLAYFORM1 For the simple network, this relative change is: DISPLAYFORM2 where we have used g (·) and g (·) to refer to the first and second derivatives of g(·). Note that g (w x) and g (w x) do not scale with the dimensionality of x because in general, independent from the dimensionality, x and w are 2 -normalized or have fixed 2 -norm due to data preprocessing and weight decay regularization. However, if we choose δ = sign(w), then the relative change in the saliency grows with the dimension, since it is proportional to the 1 -norm of w. When the input is high-dimensional-which is the case with images-the relative effect of the perturbation can be substantial. Note also that this perturbation is exactly the sign of the first right singular vector of the Hessian ∇ 2 x S, which is appropriate since that is the vector that has the maximum effect on the gradient of S. A similar analysis can be carried out for influence functions (see Appendix F).For this simple network, the direction of adversarial attack on interpretability, sign(w) is the same as the adversarial attack on prediction. This means that we cannot perturb interpretability independently of prediction. For more complex networks, this is not the case and in Appendix G we show this analytically for a simple case of a two-layer network. As an empirical test, in FIG3, we plot the distribution of the angle between ∇ x S and v 1 (the first right singular vector of H which is the most fragile direction of feature importance) for 1000 CIFAR10 images (Details of the network in Appendix A). In FIG3, we plot the equivalent distribution for influence functions, computed across all 200 test images. The confirms that the steepest direction of change in interpretation and prediction are generally orthogonal, justifying how the perturbations can change the interpretation without changing the prediction. Related works To the best of our knowledge, the notion of adversarial examples has not previously been studied in the context of interpretation of neural networks. Adversarial attacks to the input that changes the prediction of a network have been actively studied. demonstrated that it is relatively easy to fool neural networks into making very different predictions for test images that are visually very similar to each other. BID4 introduced the Fast Gradient Sign Method (FGSM) as a one-step prediction attack. This was followed by more effective iterative attacks Interpretation of neural network predictions is also an active research area. Post-hoc interpretability is one family of methods that seek to "explain" the prediction without talking about the details of black-box model's hidden mechanisms. These included tools to explain predictions by networks in terms of the features of the test example (; ; ;), as well as in terms of contribution of training examples to the prediction at test time . 
These interpretations have gained increasing popularity, as they confer a degree of insight to human users of what the neural network might be doing .Conclusion This paper demonstrates that interpretation of neural networks can be fragile in the specific sense that two similar inputs with the same predicted label can be given very different interpretations. We develop new perturbations to illustrate this fragility and propose evaluation metrics as well as insights on why fragility occurs. Fragility of neural network interpretation is orthogonal to fragility of the prediction-we demonstrate how perturbations can substantially change the interpretation without changing the predicted label. The two types of fragility do arise from similar factors, as we discuss in Section 5. Our focus is on the interpretation method, rather than on the original network, and as such we do not explore how interpretable is the original predictor. There is a separately line of research that tries to design simpler and more interpretable prediction models BID0.Our main message is that robustness of the interpretation of a prediction is an important and challenging problem, especially as in many applications (e.g. many biomedical and social settings) users are as interested in the interpretation as in the prediction itself. Our raise concerns on how interpretations of neural networks are sensitive to noise and can be manipulated. Especially in settings where the importance of individual or a small subset of features are interpreted, we show that these importance scores can be sensitive to even random perturbation. More dramatic manipulations of interpretations can be achieved with our targeted perturbations, which raise security concerns. We do not suggest that interpretations are meaningless, just as adversarial attacks on predictions do not imply that neural networks are useless. Interpretation methods do need to be used and evaluated with caution while applied to neural networks, as they can be fooled into identifying features that would not be considered salient by human perception. Our demonstrate that the interpretations (e.g. saliency maps) are vulnerable to perturbations, but this does not imply that the interpretation methods are broken by the perturbations. This is a subtle but important distinction. Methods such as saliency measure the infinitesimal sensitivity of the neural network at a particular input x. After a perturbation, the input has changed tox = x + δ, and the salency now measures the sensitivity at the perturbed input. The saliency correctly captures the infinitesimal sensitivity at the two inputs; it's doing what it is supposed to do. The fact that the two ing saliency maps are very different is fundamentally due to the network itself being fragile to such perturbations, as we illustrate with Fig. 2.While we focus on image data (ImageNet and CIFAR-10), because these are the standard benchmarks for popular interpretation tools, this fragility issue can be wide-spread in biomedical, economic and other settings where neural networks are increasingly used. Understanding interpretation fragility in these applications and develop more robust methods are important agendas of research. We trained the following structure using ADAM optimizer with default parameters. The ing test accuracy using ReLU activation was 73%. For the experiment in FIG3, we replaced ReLU activation with Softplus and retrained the network (with the ReLU network weights as initial weights). The ing accuracy was 73%. 3 × 3 conv. 
96 ReLU 3 × 3 conv. 96 ReLU 3 × 3 conv. 96 Relu Stride 2 3 × 3 conv. 192 ReLU 3 × 3 conv. 192 ReLU 3 × 3 conv. 192 Relu Stride 2 1024 hidden sized feed forward Here we provide three more examples from ImageNet. For each example, all three methods of feature importance are attacked by random sign noise and our two targeted adversarial algorithms. Figure 12: Center-shift for three feature importance methods on ImageNet: As discussed in the paper, among our three measurements, center-shift measure was the most correlated measure with the subjective perception of change in saliency maps. The in Appendix B also show that the center attack which ed in largest average center-shift, also in the most significant subjective change in saliency maps. Random sign perturbations, on the other side, did not substantially change the global shape of the saliency maps, though local pockets of saliency are sensitive. Just like rank correlation and top-1000 intersection measures, the integrated gradients method is the most robust method against adversarial attacks in the center-shift measure. FIG0: Results for adversarial attacks against CIFAR10 feature importance methods: For CIFAR10 the mass-center attack and top-k attack with k=100 achieve similar for rank correlation and top-100 intersection measurements and both are stronger than random perturbations. Mass-center attack moves the center of mass more than two other perturbations. Among different feature importance methods, integrated gradients is more robust than the two other methods. Additionally, for CIFAR10 show that images in this data set are more robust against adversarial attack compared to ImageNet images which agrees with our analysis that higher dimensional inputs are tend to be more fragile. In this appendix, we provide additional examples of the fragility of influence functions, analogous to Fig. 5. Here, we demonstrate that increasing the dimension of the input of a simple neural network increases the fragility of that network with respect to influence functions, analogous to the calculations carried out for importance-feature methods in Section 5. Recall that the influence of a training image z i = (x i, y i) on a test image z = (x, y) is given by: DISPLAYFORM0 We restrict our attention to the term in that is dependent on x, and denote it by J def = ∇ θ L. J represents the infinitesimal effect of each of the parameters in the network on the loss function evaluated at the test image. Now, let us calculate the change in this term due to a small perturbation in x → x + δ. The firstorder approximation for the change in J is equal to: DISPLAYFORM1 For the simple network defined in Section 5, this evaluates to (replacing θ with w for consistency of notation): DISPLAYFORM2 where for simplicity, we have taken the loss to be L = |y − g(w x)|, making the derivatives easier to calculate. Furthermore, we have used g (·) and g (·) to refer to the first and second derivatives of g(·). Note that g (w x) and g (w x) do not scale with the dimensionality of x because x and w are generalized L 2 -normalized due to data preprocessing and weight decay regularization. However, if we choose δ = sign(w), then the relative change in the saliency grows with the dimension, since it is proportional to the L 1 -norm of w. Consider a two layer neural network with activation function g(·), input x ∈ R d, hidden vector u ∈ R h, and score function S), we have: DISPLAYFORM0 where w j = ||w j || 2ŵj. 
We have: DISPLAYFORM1 Now for an input sample x perturbation δ, for the change in feature importance: DISPLAYFORM2 which is equal to: DISPLAYFORM3 We further assume that the input is high-dimensional so that h < d and for i = j we have w j · w i = 0. For maximizing the 2 norm of saliency difference we have the following perturbation direction:δ m = argmax ||δ||=1 ||∇ x S(x + δ) − ∇ x S(x)|| =ŵ k where: k = argmax|v j g (w j .x)| × ||w k || 2 2 comparing which to the direction of feature importance: DISPLAYFORM4 we conclude that the two directions are not parallel unless g = g which is not the case for many activation functions like Softplus, Sigmoid, etc. The analyses and experiments in this paper have demonstrated that small perturbations in the input layers of deep neural networks can have large changes in the interpretations. This is analogous to classical adversarial examples, whereby small perturbations in the input produce large changes in the prediction. In that setting, it has been proposed that the Lipschitz constant of the network be constrained during training to limit the effect of adversarial perturbations . This has found some empirical success BID3.Here, we propose an analogous method to upper-bound the change in interpretability of a neural network as a of perturbations to the input. Specifically, consider a network with K layers, which takes as input a data point we denote as y 0. The output of the i th layer is given by y i+1 = f i (y i) for i = 0, 1... K − 1. We define S def = f K−1 (f K−2 (. . . f 0 (y 0)...)) to be the output (e.g. score for the correct class) of our network, and we are interested in designing a network whose gradient S = ∇ y0 S is relatively insensitive to perturbations in the input, as this corresponds to a network whose feature importances are robust. A natural quantity to consider is the Lipschitz constant of S with respect to y 0. By the chain rule, the Lipschitz constant of S is DISPLAYFORM0 Now consider the function f i (·), which maps y i to y i+1. In the simple case of the fully-connected network, which we consider here, f i (y i) = g i (W i y i), where g i is a non-linearity and W i are the trained weights for that layer. Thus, the Lipschitz constant of the i th partial derivative in FORMULA24 is the Lipschitz constant of DISPLAYFORM1 which is upper-bounded by ||W i || 2 · L(g i (·)), where ||W || denotes the operator norm of W (its largest singular value) 5. This suggests that a conservative upper ceiling for is DISPLAYFORM2 Because the Lipschitz constant of the non-linearities g i (·) are fixed, this suggests that a regularization based on the operator norms of the weights W i may allow us to train networks that are robust to attacks on feature importance. The calculations in this Appendix section is meant to be suggestive rather than conclusive, since in practice the Lipschitz bounds are rarely tight.
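A hedged sketch of the regularizer this appendix suggests, penalizing the operator (spectral) norm of each fully-connected weight matrix so that the product bound above stays small. It assumes a recent PyTorch with torch.linalg and is our illustration of the idea, not a method evaluated here.

```python
import torch

def operator_norm_penalty(model, coeff=1e-3):
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            # largest singular value = operator norm ||W||_2 (differentiable via SVD)
            penalty = penalty + torch.linalg.matrix_norm(module.weight, ord=2)
    return coeff * penalty

# usage sketch: loss = task_loss + operator_norm_penalty(model)
```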
Can we trust a neural network's explanation for its prediction? We examine the robustness of several popular notions of interpretability of neural networks including saliency maps and influence functions and design adversarial examples against them.
517
scitldr
Stochastic AUC maximization has garnered an increasing interest due to better fit to imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data. In this paper, we consider stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle point reformulation of a surrogated loss of AUC, the problem can be cast into a {\it non-convex concave} min-max problem. The main contribution made in this paper is to make stochastic AUC maximization more practical for deep neural networks and big data with theoretical insights as well. In particular, we propose to explore Polyak-\L{}ojasiewicz (PL) condition that has been proved and observed in deep learning, which enables us to develop new stochastic algorithms with even faster convergence rate and more practical step size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition with adaptive convergence rate. Our experimental demonstrate the effectiveness of the proposed algorithms. Deep learning has been witnessed with tremendous success for various tasks, including computer vision (; ; ;), speech recognition (; ;), natural language processing (; ;), etc. From an optimization perspective, all of them are solving an empirical risk minimization problem in which the objective function is a surrogate loss of the prediction error made by a deep neural network in comparison with the ground-truth label. For example, for image classification task, the objective function is often chosen as the cross entropy between the probability distribution calculated by forward propagation of a convolutional neural network and the vector encoding true label information (; ;), where the cross entropy is a surrogate loss of the misclassification rate. However, when the data is imbalanced, this formulation is not reasonable since the data coming from minor class have little effect in this case and the model is almost determined by the data from the majority class. To address this issue, AUC maximization has been proposed as a new learning paradigm . Statistically, AUC (short for Area Under the ROC curve) is defined as the probability that the prediction score of a positive example is higher than that of a negative example (; 1983). Compared with misclassification rate and its corresponding surrogate loss, AUC is more suitable for imbalanced data setting . Several online or stochastic algorithms for time based on a new sampled/received training data. Instead of storing all examples in the memory, employ reservoir sampling technique to maintain representative samples in a buffer, based on which their algorithms update the model. To get optimal regret bound, their buffer size needs to be O(√ n), where n is the number of received training examples. design a new algorithm which is not buffer-based. Instead, their algorithm needs to maintain the first-order and second-order statistics of the received data to compute the stochastic gradient, which is prohibitive for high dimensional data. Based on a novel saddle-point reformulation of a surrogate loss of AUC proposed by , there are several studies (; ;) trying to design stochastic primal-dual algorithms. employ the classical primal-dual stochastic gradient and obtain O(1/ √ t) convergence rate. add a strongly convex regularizer, invoke composite mirror descent and achieve O(1/t) convergence rate. 
leverage the structure of the formulation, design a multi-stage algorithm and achieve O(1/t) convergence rate without strong convexity assumptions. However, all of them only consider learning a linear model, which in a convex objective function. Non-Convex Min-max Optimization. Stochastic optimization of non-convex min-max problems have received increasing interests recently (; ; ; ;). When the objective function is weakly convex in the primal variable and is concave in the dual variable, design a proximal guided algorithm in spirit of the inexact proximal point method , which solves a sequence of convexconcave subproblems constructed by adding a quadratic proximal term in the primal variable with a periodically updated reference point. Due to the potential non-smoothness of objective function, they show the convergence to a nearly-stationary point for the equivalent minimization problem. In the same vein as , design an algorithm by adopting the block alternating minimization/maximization strategy and show the convergence in terms of the proximal gradient. When the objective is weakly convex and weakly concave, propose a proximal algorithm which solves a strongly monotone variational inequality in each epoch and establish its convergence to stationary point. consider non-convex non-concave min-max games where the inner maximization problem satisfies a PL condition, based on which they design a multi-step deterministic gradient descent ascent with convergence to a stationary point. It is notable that our work is different in that (i) we explore the PL condition for the outer minimization problem instead of the inner maximization problem; (ii) we focus on designing stochastic algorithms instead of deterministic algorithms. Leveraging PL Condition for Minimization. PL condition is first introduced by Polyak , which shows that gradient descent is able to enjoy linear convergence to a global minimum under this condition. show that stochastic gradient descent, randomized coordinate descent, greedy coordinate descent are able to converge to a global minimum with faster rates under the PL condition. If the objective function has a finite-sum structure and satisfies PL condition, there are several non-convex SVRG-style algorithms (; ; ; ; ;), which are guaranteed to converge to a global minimum with a linear convergence rate. However, the stochastic algorithms in these works are developed for a minimization problem, and hence is not applicable to the min-max formulation for stochastic AUC maximization. To the best of our knowledge, is the only work that leverages an equivalent condition to the PL condition (namely quadratic growth condition) to develop a stochastic primal-dual algorithm for AUC maximization with a fast rate. However, as mentioned before their algorithm and analysis rely on the convexity of the objective function, which does not hold for AUC maximization with a deep neural network. Finally, we notice that PL condition is the key to many recent works in deep learning for showing there is no spurious local minima or for showing global convergence of gradient descent and stochastic gradient descent methods (; ; ; ; b; a; ; ; ;). Using the square loss, it has also been proved that the PL condition holds globally or locally for deep linear residual network , deep linear network, one hidden layer neural network with Leaky ReLU activation . 
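As a self-contained illustration of why the PL condition is so useful in these analyses (this toy example is not taken from the paper), rank-deficient least squares is a standard objective that is not strongly convex yet satisfies PL, and gradient descent on it exhibits the linear convergence in function value that the condition guarantees.

```python
import numpy as np

# f(w) = 0.5 * ||A w - b||^2 with a rank-deficient A is NOT strongly convex,
# but it satisfies the PL condition with mu = smallest nonzero eigenvalue of
# A^T A, so gradient descent still converges linearly in function value.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 8))   # rank-2, 8 parameters
b = A @ rng.standard_normal(8)                                   # ensures min f = 0

eigs = np.linalg.eigvalsh(A.T @ A)
L, mu = eigs.max(), eigs[eigs > 1e-8].min()    # smoothness and PL constants
f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)

w, vals = np.zeros(8), []
vals.append(f(w))
for _ in range(200):
    w -= (1.0 / L) * (A.T @ (A @ w - b))       # gradient step with 1/L step size
    vals.append(f(w))

# PL theory predicts f(w_t) - f* <= (1 - mu/L)^t * (f(w_0) - f*); here f* = 0.
print(vals[50], (1 - mu / L) ** 50 * vals[0])   # observed value vs. PL upper bound
```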
Several studies (; ; ; b;) consider the trajectory of (stochastic) gradient descent on learning neural networks, and their analysis imply the PL condition in a certain form. For example, Du et al. (2018b) show that when the width of a two layer neural network is sufficiently large, a global optimum would lie in the ball centered at the initial solution, in which PL condition holds. extends this insight further to overparameterized deep neural networks with ReLU activation, and show that the PL condition holds for a global minimum around a random initial solution. Let · denote the Euclidean norm. A function f (x) is ρ-weakly convex if f (x) + ρ 2 x 2 is convex, where ρ is the so-called weak-convexity parameter. A function f (x) satisfies PL condition with 2, where x * stands for the optimal solution of f. Let z = (x, y) ∼ P denote a random data following an unknown distribution P, where x ∈ X represents the feature vector and y ∈ Y = {−1, +1} represents the label. Denote by Z = X × Y and by p = Pr(y = 1) = E y I [y=1], where I(·) is the indicator function. The area under the curve (AUC) on a population level for a scoring function h: X → R is defined as AUC(h) = Pr (h(x) ≥ h(x)|y = 1, y = −1), where z = (x, y) and z = (x, y) are drawn independently from P. By employing the squared loss as the surrogate for the indicator function that is a common choice used by previous studies , the AUC maximization problem can be formulated as min where H denotes a hypothesis class. All previous works of AUC maximization assume h(x) = w x for simplicity. Instead, we consider learning a general nonlinear model parameterized by w, i.e. h(w; x), which is not necessarily linear or convex in terms of w (e.g., h(w; x) can be a score function defined by a neural network with weights denoted by w). Hence, the corresponding optimization problem becomes min The following proposition converts the original optimization problem into a saddle-point problem, which is similar to Theorem 1 in . For completeness, the proof is included in the supplement. Proposition 1. The optimization problem is equivalent to min where z = (x, y) ∼ P, and Remark: It is notable that the min-max formulation is more favorable than the original formulation for developing a stochastic algorithm that updates the model parameters based on one example or a mini-batch of samples. For stochastic optimization of, one has to carefully sample both positive and negative examples, which is not allowed in an online setting. It is notable that in the classical batch-learning setting, p becomes the ratio of positive training examples and the expectation in becomes average over n individual functions. However, our algorithms are applicable to both batch-learning setting and online learning setting. It is clear that min w P (w) = min v φ(v) and P (w) ≤ φ(v) for any v = (w, a, b). The following assumption is made throughout the paper., where µ > 0 and v * is the optimal solution where v * is the global minimum of φ. The first condition is inspired by a PL condition on the objective function P (w) for learning a deep neural network. The following lemma establishes the connection. Algorithm 1 Proximally Guided Algorithm (PGA) 1: Initializev 0 = 0 ∈ R d+2,ᾱ 0 = 0, the global index j = 0 2: for k = 1,..., K do 3: 10: end for 11: Sample τ uniformly randomly from {1, . . ., K} 12: returnv τ,ᾱ τ Lemma 1. Suppose ∇ w h(w; x) ≤L for all w and x. If P (w) satisfies PL condition, i.e. 
there exists Remark: The PL condition of P (w) could be proved for learning a neural network similar to existing studies, which is not the main focus of this paper. Nevertheless, In Appendix A.7, we provide an example for AUC maximization with one-hidden layer neural network. Warmup. We first discuss the algorithms and their convergence of applied to the considered min-max problem. They have algorithms for problems in batch-learning setting and online learning setting. Since the algorithms for the batch-learning setting have complexities scaling with n, we will concentrate on the algorithm for the online learning setting. The algorithm is presented in Algorithm 1, which is a direct application of Algorithm 2 of to an online setting. Since their analysis requires the domain of the primal and the dual variable to be bounded, hence we add a ball constraint on the primal variable and the dual variable as well. As long as R 1 and R 2 is sufficiently large, they should not affect the solution. The convergence of Algorithm 1 is stated below. Remark: Under the condition φ(v) is smooth and the returned solution is within the added bounded ball constraint, the above implies We can see that this complexity under the PL condition of φ(v) is worse than the typical complexity of stochastic gradient descent method under the PL condition (i.e., O(1/)) . It remains an open problem how to design a stochastic primal-dual algorithm for solving min v max α F (v, α) in order to achieve a complexity of O(1/) in terms of minimizing φ(v). A naive idea is to solve the inner maximization problem of α first and the use SGD on the primal variable v. However, this is not viable since exact maximization over α is a non-trivial task. In this section, we present two primal-dual algorithms for solving the min-max optimization problem with corresponding theoretical convergence . For simplicity, we first assume the positive ratio p is known in advance, which is true in the batch-learning setting. Handling the unknown p in an online learning setting is a simple extension, which will be discussed in Section 4.3. The proposed algorithms follow the same proximal point framework proposed in , i.e., we Draw a minibatch {z j, . . ., solve the following convex-concave problems approximately and iteratively: where γ < 1/L to ensure that the new objective function becomes convex and concave, and v 0 is periodically updated. Algorithm 2. Similar to Algorithm 1, it has a nested loop, where the inner loop is to approximately solve a regularized min-max optimization problem using stochastic primal-dual gradient method, and the outer loop updates the reference point and learning rate. One key difference is that PPD-SG uses a geometrically decaying step size scheme, while Algorithm 1 uses a polynomially decaying step size scheme. Another key difference is that at the end of k-th outer loop, we update the dual variableᾱ k in Step 12, which is motivated by its closed-form solution givenv k. In particular, the givenv k, the dual solution that optimizes the inner maximization problem is given by: In the algorithm, we only use a small number of samples in Step 11 to compute an estimation of the optimal α givenv k. These differences are important for us to achieve lower iteration complexity of PPD-SG. Next, we present our convergence of PPD-SG. where E k−1 stands for the conditional expectation conditioning on all the stochastic events until v k−1 is generated. Theorem 2. Suppose the same conditions in Lemma 2 hold. 
Set 2 ) Lη0 in Algorithm 2, where O(·) hides logarithmic factor of L, µ,, G, σ. Remark: The above complexity is similar to that of for solving nonconvex minimization problem under the PL condition up to a logarithmic factor. Compared with the complexity of Algorithm 1 discussed earlier, i.e., O(1/(µ 3 3)), the above complexity in the ) is much better -it not only improves the dependence on but also improves the dependence on µ. Our second algorithm named Proximal Primal-Dual Adagrad (PPD-Adagrad) is a AdaGrad-style algorithm. Since it only differs from PPD-SG in the updates of the inner loop, we only present the inner loop in Algorithm 3. The updates in the inner loop are similar to the adaptive updates of traditional AdaGrad . We aim to achieve an adaptive convergence by using PPD-AdaGrad. The analysis of PPD-AdaGrad is inspired by the analysis of AdaGrad for non-convex minimization problems . The key difference is that we have to carefully deal with the primal-dual updates for the non-convex min-max problem. We summarize the convergence of PPD-AdaGrad below. where E k−1 stands for the conditional expectation conditioning on all the stochastic events until v k−1 is generated. Theorem 3. Suppose the same conditions as in Lemma 3 hold. Set The number of iterations is at most O, and the required number of samples is at, where O(·) hides logarithmic factors of L, µ,, δ. Remark: When the cumulative growth of stochastic gradient is slow, i.e., α < 1/2, the number of iterations is less than that in Theorem 2, which exhibits adaptive iteration complexity. It is notable that the setting of η k, T k, m k depends on unknown parameters µ, L, etc., which are typically unknown. One heuristic to address this issue is that we can decrease η k by a constant factor larger than 1 (e.g., 2 or 5 or 10), and similarly increase T k and m k by a constant factor. Another heuristic is to decrease the step size by a constant factor when the performance on a validation data saturates . Variants when p is unknown. In the online learning setting when p is unknown, the stochastic gradients of f in both v and α are not directly available. To address this issue, we can keep unbiased estimators for both p and p(1 − p) which are independent of the new arrived data, and update these estimators during the optimization procedure. All values depending on p and p(1 − p) (i.e., F, g v, g α) are estimated by substituting p and p(1 − p) by p and p(1 − p) (i.e.,F,ĝ v,ĝ α) respectively. The approach for keeping unbiased estimatorp and p(1 − p) during the optimization is described in Algorithm 4, where j is the global index, and m is the number of examples received. Extensions to multi-class problems. In the previous analysis, we only consider the binary classification problem. We can extend it to the multi-class setting. To this end, we first introduce the definition of AUC in this setting according to . Suppose there are c classes, we have c scoring functions for each class, namely h(w 1 ; x),..., h(w c ; x). We assume that these scores are normalized such that c k=1 h(w c ; x) = 1. Note that if these functions are implemented by a deep neural network, they can share the lower layers and have individual last layer of connections. The AUC is defined as Similar to Proposition 1, we can cast the problem into where ij. Then we can modify our algorithms to accommodate the multiple class pairs. We can also add another level of sampling of class pairs into computing the stochastic gradients. 
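A rough sketch of the proposed proximal primal-dual updates is given below. The exact saddle-point objective and proximal subproblem are not preserved in this extract, so the expression for F follows the standard squared-loss saddle-point reformulation from the literature the paper builds on and should be treated as an assumption; the toy network, minibatch sampler, step sizes, and proximal constant are likewise placeholders, and the closed-form dual reset at the end of each stage is omitted.

```python
import torch
import torch.nn as nn

# Hypothetical scoring network h(w; x); the paper's experiments use ResNets.
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
a = torch.zeros((), requires_grad=True)      # proxy for E[h | y = +1]
b = torch.zeros((), requires_grad=True)      # proxy for E[h | y = -1]
alpha = torch.zeros((), requires_grad=True)  # dual variable
p_pos = 0.1                                  # positive-class ratio (assumed known)
gamma = 0.5                                  # proximal constant (gamma < 1/L)
eta = 0.1                                    # stage-wise step size, halved each stage

def saddle_F(x, y):
    """Squared-loss AUC saddle objective (assumed form)."""
    s = net(x).squeeze(-1)
    pos, neg = (y == 1).float(), (y == 0).float()
    return ((1 - p_pos) * ((s - a) ** 2 * pos).mean()
            + p_pos * ((s - b) ** 2 * neg).mean()
            + 2 * (1 + alpha) * (p_pos * (s * neg).mean()
                                 - (1 - p_pos) * (s * pos).mean())
            - p_pos * (1 - p_pos) * alpha ** 2)

primal = list(net.parameters()) + [a, b]
for stage in range(5):                        # outer loop: refresh reference point
    v_ref = [v.detach().clone() for v in primal]
    for t in range(200):                      # inner loop (the paper also grows its length per stage)
        x = torch.randn(32, 20)               # stand-in minibatch
        y = (torch.rand(32) < p_pos).float()
        prox = sum(((v - r) ** 2).sum() for v, r in zip(primal, v_ref)) / (2 * gamma)
        loss = saddle_F(x, y) + prox
        grads = torch.autograd.grad(loss, primal + [alpha])
        with torch.no_grad():
            for v, g in zip(primal, grads[:-1]):
                v -= eta * g                  # descent on the primal variables (w, a, b)
            alpha += eta * grads[-1]          # ascent on the dual variable
    eta *= 0.5                                # geometrically decaying step size
```

The structure mirrors the description above: a proximally regularized convex-concave subproblem is solved approximately by stochastic primal-dual steps, after which the reference point is updated and the step size decays geometrically.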
In this section, we present some empirical to verify the effectiveness of the proposed algorithms. We compare our algorithms (PPD-SG and PPD-AdaGrad) with three baseline methods including PGA (Algorithm 1), Online AUC method (OAUC) that directly employs the standard primal-dual stochastic gradient method with a decreasing step size for solving the min-max formulation, and the standard stochastic gradient descent (SGD) for minimizing cross-entropy loss. Comparing with PGA and OAUC allows us to verify the effectiveness of the proposed algorithms for solving the same formulation, and comparing with SGD allows us to verify the effectiveness of maximizing AUC for imbalanced data. We use a residual network with 20 layers to implement the deep neural network for all algorithms. We use the stagewise step size strategy as in for SGD, i.e. the step size is decreased by 10 times at 40K, 60K. For PPD-SG and PPD-AdaGrad, we set We conduct the comparisons on four benchmark datasets, i.e., Cat&Dog (C2), CIFAR10 (C10), CIFAR100 (C100), STL10. STL10 is an extension of CIFAR10 and the images are acquired from ImageNet. Cat&Dog is from Kaggle containing 25,000 images of dogs and cats and we choose an 80:20 split to construct training and testing set. We use 19k/1k, 45k/5k, 45k/5k, 4k/1k training/validation split on C2, C10, C100, and STL10 respectively. For each dataset, we construct multiple binary classification tasks with varying imbalanced ratio of number negative examples to number of positive examples. For details of construction of binary classification tasks, please refer to the Appendix A.8. We report the convergence of AUC on testing data in Figure 1, where the title shows the ratio of the majority class to the minority class. The about the convergence of AUC versus the time in seconds are also presented in Figure 3. From the we can see that for the balanced settings with ratio equal to 50%, SGD performs consistently better than other methods on C2 and CIFAR10 data. However, it is worse than AUC optimization based methods on CIFAR100 and STL10. For imbalanced settings, AUC maximization based methods are more advantageous than SGD in most cases. In addition, PPD-SG and PPD-AdaGrad are mostly better than other baseline algorithms. In certain cases, PPD-AdaGrad can be faster than PPD-SG. Finally, we observe even better performance (in Appendix) by a mixed strategy that pre-trains the model with SGD and then switchs to PPD-SG. In this paper, we consider stochastic AUC maximization problem when the predictive model is a deep neural network. By abuilding on the saddle point reformulation and exploring Polyak-Łojasiewicz condition in deep learning, we have proposed two algorithms with state-of-the-art complexities for stochastic AUC maximization problem. We have also demonstrated the efficiency of our proposed algorithms on several benchmark datasets, and the experimental indicate that our algorithms converge faster than other baselines. One may consider to extend the analysis techniques to other problems with the min-max formulation. Proof. It suffices to prove that Note that the optimal values of a, b, α are chosen as a * 2, (c) comes from the standard analysis of primal-dual stochastic gradient method. Denote E k−1 by taking the conditional expectation conditioning on all the stochastic events until v k−1 is generated. 
Taking E k−1 on both sides and noting thatĝ k t is an unbiased estimator of g k t for ∀t, k, we have By the update ofᾱ k−1, 2L-Lipschitz continuity of E [h(w; x)|y = −1] − E [h(w; x)|y = 1], and noting that α, then we have We can see that φ k (v) is convex and smooth function since γ ≤ 1/L. The smoothness parameter of φ k isL = L+γ −1. Define s k = arg min v∈R d+2 φ k (v). According to Theorem 2.1.5 of , we have Combining with Lemma 2 yields Note that φ k (v) is (γ −1 − L)-strongly convex, and γ = 1 2L, we have Plugging in s k into Lemma 2 and combining yield 2 ), rearranging the terms, and noting that Combining and yields Taking expectation on both sides over all randomness untilv k−1 is generated and by the tower property, we have is L-smooth and hence is L-weakly convex, so we have where (a) and (b) hold by the definition of φ k. Rearranging the terms in yields where (a) holds by using a, b ≤ 1 2 (a 2 + b 2), and (b) holds by the PL property of φ. Combining and, we can see that As a , we have Published as a conference paper at ICLR 2020 2 ), by the setting of η k, we set The required number of samples is A.4 PROOF OF LEMMA 3 2, (c) holds by Jensen's inequality. Now we bound I and II separately. Define Combining and, we have By Lemma 4 of and setting δ ≥ max t ĝ k t ∞, we know that T k 2, and hence Denote E k−1 by taking the conditional expectation conditioning on filtration F k−1, where F k−1 is the σ-algebra generated by all random variables untilv k−1 is generated. Taking E k−1 on both sides of, and employing yields where the equality holds sincev k−1 − s k is measurable with respect to F k−1. Note that where (By setting, then T k is a stopping time which is bounded almost surely. By stopping time argument, we have E k−1 (II) = 0, and hence A.5 PROOF OF THEOREM 3 We can see that φ k (v) is convex and smooth function since γ ≤ 1/L. The smoothness parameter of φ k isL = L+γ −1. Define s k = arg min v∈R d+2 φ k (v). According to Theorem 2.1.5 of , we have Combining with Lemma 3 yields Note that Plugging in s k into Lemma 3 and combining yield, rearranging the terms, and noting that Combining and yields Taking expectation on both sides over all randomness untilv k−1 is generated and by the tower property, we have Note that φ(v) is L-smooth and hence is L-weakly convex, so we have where (a) and (b) hold by the definition of φ k. Rearranging the terms in yields where (a) holds by using a, b ≤ 1 2 (a 2 + b 2), and (b) holds by the PL property of φ. Combining and, we can see that which implies that As a , we have, and note that when τ ≥ 1,, and hence, we can see that the total iteration complexity is. The required number of samples is A.6 PROOF OF LEMMA 1 Proof. For any fixed w, define (a * w, b * w) = arg min a,b φ(w, a, b) (φ(w, a, b) is strongly convex in terms of (a, b), so the argmin is well-defined and unique). Note that we can write σ(w x) and σ(w x) as aw x and bw x respectively, and it is obvious that a 2 ≥ min(c We construct the datasets in the following ways: For CIFAR10/STL10, we label the first 5 classes as negative ("-") class and the last 5 classes as positive ("+") class, which leads to a 50/50 class ratio. For CIFAR100, we label the first 50 classes as negative ("-") class and the last 50 classes as positve ("+") class. For the imbalanced cases, we randomly remove 90%, 80%, 60% data from negative samples on all training data, which lead to 91/9, 83/17, 71/29 ratio respectively. For testing data, we keep them unchanged. 
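The class-relabeling and negative-subsampling construction described above is straightforward to reproduce; the following sketch (with stand-in labels in place of the actual CIFAR-10 label array) shows one way to do it.

```python
import numpy as np

def make_imbalanced_binary(labels, n_classes=10, drop_neg_frac=0.9, seed=0):
    """Relabel a multi-class training set into a binary AUC task (first half of
    the classes -> negative, second half -> positive) and randomly drop a
    fraction of the negative examples, as in the construction above."""
    rng = np.random.default_rng(seed)
    y = (labels >= n_classes // 2).astype(int)          # 0 = negative, 1 = positive
    neg_idx = np.where(y == 0)[0]
    keep_neg = rng.choice(neg_idx,
                          size=int(len(neg_idx) * (1 - drop_neg_frac)),
                          replace=False)
    keep = np.concatenate([np.where(y == 1)[0], keep_neg])
    rng.shuffle(keep)
    return keep, y                                      # indices to keep + binary labels

labels = np.random.randint(0, 10, size=45_000)          # stand-in for CIFAR-10 labels
keep, y = make_imbalanced_binary(labels, drop_neg_frac=0.9)
print("positive ratio:", y[keep].mean())                # ~0.91 when 90% of negatives are dropped
```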
Model pretraining is effective in many deep learning tasks, and thus we further evaluate the performance of the proposed methods on pretrained models. We first train the model using SGD for up to 2000 iterations with an initial step size of 0.1, and then continue training using PPD-SG. We denote this method as PPD-SG+pretrain, and the results are shown in Figure 2. The parameters are tuned in the same range as in Section 5. We observe that pretraining helps the convergence of the model and achieves better performance in terms of AUC in most cases. To investigate the effect of labeling order, we also randomly partition the classes into positive and negative halves. For the CIFAR10 and STL10 datasets, we randomly partition the 10 classes into two labels (i.e., randomly select 5 classes as the positive label and the other 5 classes as the negative label). For the CIFAR100 dataset, we randomly partition the 100 classes into two labels (i.e., randomly select 50 classes as the positive label and the other 50 classes as the negative label). After that, we randomly remove 95% and 90% of the negative samples from the training data, which leads to 20:1 and 10:1 ratios, respectively. For testing data, we keep them unchanged. We also add AdaGrad for minimizing the cross-entropy loss as a new baseline. The corresponding experimental results are included in Figure 3. We can see that PPD-AdaGrad and PPD-SG converge faster than the other baselines.
The paper designs two algorithms with state-of-the-art complexities for stochastic AUC maximization with a deep neural network as the predictive model, and verifies them with empirical studies.
518
scitldr
Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute. The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation. Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical. Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward. Unfortunately, without tricks like resetting to points along the trajectory, HER might take a very long time to discover how to reach certain areas of the state-space. In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms. Furthermore, our method can be used when only trajectories without expert actions are available, which can leverage kinestetic or third person demonstration. Reinforcement Learning (RL) has shown impressive in a plethora of simulated tasks, ranging from attaining super-human performance in video-games BID18 BID35 and board-games , to learning complex locomotion behaviors BID34 BID4. Nevertheless, these successes are shyly echoed in real world robotics (Riedmiller et BID36 . This is due to the difficulty of setting up the same learning environment that is enjoyed in simulation. One of the critical assumptions that are hard to obtain in the real world are the access to a reward function. Self-supervised methods have the power to overcome this limitation. A very versatile and reusable form of self-supervision for robotics is to learn how to reach any previously observed state upon demand. This problem can be formulated as training a goal-conditioned policy BID14 BID27 that seeks to obtain the indicator reward of having the observation exactly match the goal. Such a reward does not require any additional instrumentation of the environment beyond the sensors the robot already has. But in practice, this reward is never observed because in continuous spaces like the ones in robotics, the exact same observation is never observed twice. Luckily, if we are using an off-policy RL algorithm BID17 BID11, we can "relabel" a collected trajectory by replacing its goal by a state actually visited during that trajectory, therefore observing the indicator reward as often as we wish. This method was introduced as Hindsight Experience Replay BID0 or HER.In theory these approaches could learn how to reach any goal, but the breadth-first nature of the algorithm makes that some areas of the space take a long time to be learned BID7 . This is specially challenging when there are bottlenecks between different areas of the statespace, and random motion might not traverse them easily BID5 . Some practical examples of this are pick-and-place, or navigating narrow corridors between rooms, as illustrated in Fig. 5 in appendix depicting the diverse set of environments we work with. In both cases a specific state needs to be reached (grasp the object, or enter the corridor) before a whole new area of the space is discovered (placing the object, or visiting the next room). 
This problem could be addressed by engineering a reward that guides the agent towards the bottlenecks, but this defeats the purpose of trying to learn without direct reward supervision. In this work we study how to leverage a few demonstrations that traverse those bottlenecks to boost the learning of goal-reaching policies. Learning from Demonstrations, or Imitation Learning (IL), is a well-studied field in robotics BID15 BID25 BID2. In many cases it is easier to obtain a few demonstrations from an expert than to provide a good reward that describes the task. Most of the previous work on IL is centered around trajectory following, or doing a single task. Furthermore it is limited by the performance of the demonstrations, or relies on engineered rewards to improve upon them. In this work we study how IL methods can be extended to the goal-conditioned setting, and show that combined with techniques like HER it can outperform the demonstrator without the need of any additional reward. We also investigate how the different methods degrade when the trajectories of the expert become less optimal, or less abundant. Finally, the method we develop is able to leverage demonstrations that do not include the expert actions. This is very convenient in practical robotics where demonstrations might have been given by a motion planner, by kinestetic demonstrations (moving the agent externally, and not by actually actuating it), or even by another agent. To our knowledge, this is the first framework that can boost goal-conditioned policy learning with only state demonstrations. We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple M = (S, A, P, r, ρ 0, γ, H), where S is a state set, A is an action set, P: S × A × S → R + is a transition probability distribution, γ ∈ is a discount factor, and H is the horizon. Our objective is to find a stochastic policy π θ that maximizes the expected discounted reward within the DISPLAYFORM0 We denote by τ = (s 0, a 0, ...,) the entire state-action trajectory, where s 0 ∼ ρ 0 (s 0), a t ∼ π θ (a t |s t), and s t+1 ∼ P(s t+1 |s t, a t). In the goal-conditioned setting that we use here, the policy and the reward are also conditioned on a "goal" g ∈ S. The reward is r(s t, a t, s t+1, g) = 1 s t+1 == g, and hence the return is the γ h, where h is the number of time-steps to the goal. Given that the transition probability is not affected by the goal, g can be "relabeled" in hindsight, so a transition (s t, a t, s t+1, g, r = 0) can be treated as (s t, a t, s t+1, g = s t+1, r = 1). Finally, we also assume access to D trajectories (s DISPLAYFORM1 that were collected by an expert attempting to reach a goal g j sampled uniformly among the feasible goals. Those trajectories must be approximately geodesics, meaning that the actions are taken such that the goal is reached as fast as possible. In this section we describe the different algorithms we compare to pure Hindsight Experience Replay BID0 . See the Appendix to prior work on adding a Be- havioral Cloning loss to the policy update as in BID19 . Here we propose a novel expert relabeling technique, we formulate for the first time a goal-conditioned GAIL algorithm, and propose a method to train it with state-only demonstrations. The expert trajectories are collected by asking the expert to reach a specific goal g j . But they are also valid trajectories to reach any other state visited within the demonstration! 
This is the key motivating insight to propose a new type of relabeling: if we have the transitions (s DISPLAYFORM0 t+k) as also coming from the expert! This can be understood as a type of data augmentation leveraging the assumption that the tasks we work on are quasi-static. It will be particularly effective when not many demonstrations are available. In FIG2 we compare the final performance of two agents for Four Rooms environment, one trained with pure Behavioral Cloning, and the other one also using expert relabeling. The compounding error in Behavioral Cloning might make the policy deviate arbitrarily from the demonstrations, and it requires too many demonstrations when the state dimension increases. The first problem is less severe in our goalconditioned case because in fact we do want to visit and be able to purposefully reach all states, even the ones that the expert did not visited. But the second drawback will become pressing when attempting to scale this method to practical robotics tasks where the observations might be high-dimensional sensory input like images. Both problems can be mitigated by using other Imitation Learning algorithms that can leverage additional rollouts collected by the learning agent in a self-supervised manner, like GAIL BID13. In this section we extend the formulation of GAIL to tackle goal-conditioned tasks, and then we detail how it can be combined with HER BID0, which allows to outperform the demonstrator and generalize to all goals. We call this algorithm goal-GAIL.First of all, the discriminator needs to also be conditioned on the goal D ψ (a, s, g). Once the discriminator is fitted, we can run our favorite RL algorithm on the DISPLAYFORM0 In our case we used the offpolicy algorithm DDPG BID17 to allow for the relabeling techniques outlined above. In the goalconditioned case we also supplement with the indicator reward r DISPLAYFORM1 h. This combination is slightly tricky because now the fitted Q φ does not have the same clear interpretation it has when only one of the two rewards is used BID6. Nevertheless, both rewards are pushing the policy towards the goals, so it shouldn't be too conflicting. Furthermore, to avoid any drop in final performance, the weight of the reward coming from GAIL (δ GAIL) can be annealed. See Appendix for details. Both Behavioral Cloning and GAIL use state-action pairs from the expert. This limits the use of the methods, combined or not with HER, to setups where the exact same agent was actuated to reach different goals. Nevertheless, much more data could be cheaply available if the action was not required. For example, kinestetic demonstration or third-person imitation . The main insight we have here is that we can replace the action in the GAIL formulation by the next state s, and in most environments this should be as informative as having access to the action directly. Intuitively, given a desired goal g, it should be possible to determine if a transition s → s is taking the agent in the right direction. The loss function to train a discriminator able to tell apart the current agent and demonstrations (always transitioning towards the goal) is simply: DISPLAYFORM0 We are interested in answering the following questions:1. Can the use of demonstrations accelerate the learning of goal-conditioned tasks without reward? 2. Is the Expert Relabeling an efficient way of doing dataaugmentation on the demonstrations? 
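Before turning to the evaluation, here is a minimal sketch of the goal-conditioned, state-only discriminator and of one plausible way to combine its output with the sparse indicator reward. The exact discriminator loss and reward combination are not preserved in this extract, so the binary cross-entropy form, the ε-ball indicator, and the δ_GAIL weighting below are assumptions; hidden sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalDiscriminator(nn.Module):
    """D_psi(s, s', g): scores whether a transition looks like an expert
    transition towards the goal g (state-only variant, no actions needed)."""
    def __init__(self, state_dim, goal_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, s_next, g):
        return self.net(torch.cat([s, s_next, g], dim=-1)).squeeze(-1)

def discriminator_loss(D, expert_batch, agent_batch):
    """Binary cross-entropy: expert transitions -> 1, agent transitions -> 0."""
    e_logit = D(*expert_batch)
    a_logit = D(*agent_batch)
    return (F.binary_cross_entropy_with_logits(e_logit, torch.ones_like(e_logit))
            + F.binary_cross_entropy_with_logits(a_logit, torch.zeros_like(a_logit)))

def shaped_reward(D, s, s_next, g, delta_gail=0.1):
    """One plausible reward supplement (an assumption, not the paper's exact
    formula): sparse goal-reaching indicator plus a weighted discriminator term."""
    with torch.no_grad():
        gail_r = torch.sigmoid(D(s, s_next, g))                  # high when "expert-like"
    indicator = (torch.norm(s_next - g, dim=-1) < 0.05).float()  # epsilon-ball proxy for s' == g
    return indicator + delta_gail * gail_r
```

In practice the agent batches fed to the discriminator and to the off-policy learner can both be relabeled in hindsight, and the expert batches can additionally be expert-relabeled as described above.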
We evaluate these questions in two different simulated robotic goal-conditioned tasks that are detailed in the next subsection. All the use 20 demonstrations. All curves have 5 random seeds and the shaded area is one standard deviation Experiments are conducted in two continuous environments in MuJoCo BID33. The performance metric we use in all our experiments is the percentage of goals in the feasible goal space the agent is able to reach. A point mass is placed in an environment with four rooms connected through small openings. The action space is continuous and specifies the desired change in state space which corresponds to the goal space. Pick and Place: A fetch robot needs to pick a block and place it in a desired point in space as described in BID19. The control is four-dimensional, corresponding to a change in position of the end-effector and a change in gripper opening. The goal space is the position of the block. In goal-conditioned tasks, HER BID0 should eventually converge to a policy able to reach any desired goal. Nevertheless, this might take a long time, specially in environments where there are bottlenecks that need to be traversed before accessing a whole new area of the goal space. In this section we show how the methods introduced in the previous section can leverage a few demonstrations to improve the convergence speed of HER. This was already studied for the case of Behavioral Cloning by BID19, and in this work we show we also get a benefit when using GAIL as the Imitation Learning algorithm. In both environments, we observe that running GAIL with relabeling (GAIL+HER) considerably outperforms running each of them in isolation. HER alone has a very slow convergence, although as expected it ends up reaching the same final performance if run long enough. On the other hand GAIL by itself learns fast at the beginning, but its final performance is capped. This is because despite collecting more samples on the environment, those come with no reward of any kind indicating what is the task to perform (reach the given goals). Therefore, once it has extracted all the information it can from the demonstrations it cannot keep learning and generalize to goals further from the demonstrations. This is not an issue anymore when combined with HER, as our show. Here we show that the Expert Relabeling technique introduced in Section 3.1 is beneficial in the goal-conditioned imitation learning framework. As shown in FIG4, our expert relabeling technique brings considerable performance boosts for both Behavioral Cloning methods and goal-GAIL in both environments. We also perform a further analysis of expert relabeling in the four-rooms environment. We see in FIG2 that without the expert relabeling, the agent fails to learn how to reach many intermediate states visited in the middle of a demonstration. Behavioral Cloning and standard GAIL rely on the stateaction (s, a) tuples from the expert. Nevertheless there are many cases in robotics where we only have access to observation-only demonstrations. In this section we want to emphasize that all the obtained with our goal-GAIL method and reported in FIG3 and FIG4 do not require actions that the expert took. Surprisingly, in the four rooms environment, despite the more restricted information goal-GAIL has access to, it outperforms BC combined with HER. This might be due to the superior imitation learning performance of GAIL, and also to the fact that these tasks might be possible to solve by only matching the state-distribution of the expert. 
With GAIL conditioned only on current state but not action (as also done in other non-goal-conditioned works BID8), we observe that the discriminator learns a very well shaped reward that encourages the agent to go towards the goal, as pictured in Fig. 6 in appendix. See the Appendix for more details. In the above sections we assumed access to optimal experts. Nevertheless, in practical applications the experts might have a more erratic behavior. In this section we study how the different methods perform with a sub-optimal expert. To do so we collect trajectories attempting goals g by modifying our optimal expert π * (a|s, g) in two ways: We add noise α to the optimal actions and make it be -greedy. The sub-optimal expert is then a = 1[DISPLAYFORM0) and u is a uniformly sampled random action. In FIG5 we observe that approaches that copy the action of the expert, like Behavioral Cloning, greatly suffer under a sub-optimal expert. On the other hand, discriminator-based methods are able to leverage noisier experts. A possible explanation is that a discriminator approach can give a positive signal as long as the transition is "in the right direction", without trying to exactly enforce a single action. Under this lens, having some noise in the expert might actually improve the performance of these adversarial approaches, as it has been observed in many generative models literature (Goodfellow et al.). Hindsight relabeling can be used to learn useful behaviors without any reward supervision for goal-conditioned tasks, but they are inefficient when the state-space is large or includes exploration bottlenecks. In this work we show how only a few demonstrations can be leveraged to improve the convergence speed of these methods. We introduce a novel algorithm, goal-GAIL, that converges faster than HER and to a better final performance than a naive goal-conditioned GAIL. We also study the effect of doing expert relabeling as a type of data augmentation on the provided demonstrations, and demonstrate it improves the performance of our goal-GAIL as well as goal-conditioned Behavioral Cloning. We emphasize that our goal-GAIL method only needs state demonstrations, without using expert actions like other Behavioral Cloning methods. Finally, we show that goal-GAIL is robust to sub-optimalities in the expert behavior. Imitation Learning can be seen as an alternative to reward crafting to train desired behaviors. There are many ways to leverage demonstrations, from Behavioral Cloning BID22 ) that directly maximizes the likelihood of the expert actions under the training agent policy, to Inverse Reinforcement Learning that extracts a reward function from those demonstrations and then trains a policy to maximize it BID38 BID3 BID8. Another formulation close to the later introduced by BID13 is Generative Adversarial Imitation Learning (GAIL), explained in details in the next section. Originally, the algorithms used to optimize the policy were on-policy methods like Trust Region Policy Optimization BID29, but recently there has been a wake of works leveraging the efficiency of off-policy algorithms without loss in stability BID1 BID26 BID28 BID16. This is a key capability that we are going to exploit later on. Unfortunately most work in the field cannot outperform the expert, unless another reward is available during training BID34 BID9 BID32, which might defeat the purpose of using demonstrations in the first place. 
Furthermore, most tasks tackled with these methods consist on tracking expert state trajectories BID37 BID21, but can't adapt to unseen situations. In this work we are interested in goal-conditioned tasks, where the objective is to be able to reach any state upon demand. This kind of multi-task learning are pervasive in robotics, but challenging if no reward-shaping is applied. Relabeling methods like Hindsight Experience Replay BID0 unlock the learning even in the sparse reward case BID6. Nevertheless, the inherent breath-first nature of the algorithm might still make very inefficient learning to learn complex policies. To overcome the exploration issue we investigate the effect of leveraging a few demonstrations. The closest prior work is by BID19, where a Behavioral Cloning loss is used with a Q-filter. We found that a simple annealing of the Behavioral Cloning loss BID23 works better. Furthermore, we also introduce a new relabeling technique of the expert trajectories that is particularly useful when only few demonstrations are available. We also experiment with Goal-conditioned GAIL, leveraging the recently shown compatibility with off-policy algorithms. For a more comprehensive review of related work, please see Appendix. The most direct way to leverage demonstrations DISPLAYFORM0 is to construct a data-set D of all state-action-goal tuples (s j t, a j t, g j), and run a supervised regression algorithm. In the goal-conditioned case and assuming a deterministic policy π θ (s, g), the loss is: DISPLAYFORM1 This loss and its gradient are computed without any additional environments samples from the trained policy π θ. This makes it particularly convenient to combine a gradient descend step based on this loss with other policy updates. In particular we can use a standard offpolicy Reinforcement Learning algorithm like DDPG BID17, where we fit the Q φ (a, s, g), and then estimate the gradient of the expected return as: DISPLAYFORM2 ). The improvement guarantees with respect to the task reward are lost when we combine the BC and the deterministic policy gradient updates, but this can be side-stepped by either applying a Q-filter: 1 Q(s t, a t, g) > Q(s t, π(s t, g), g) to the BC loss as proposed in BID19, or by annealing it as we do in our experiments, which allows the agent to eventually outperform the expert. All possible variants we study are detailed in Algorithm 1 as presented in appendix. In particular, α = 0 falls back to pure Behavioral Cloning, β = 0 removes the BC component, p = 0 doesn't relabel agent trajectories, δ GAIL = 0 removes the discriminator output from the reward, and EX-PERT RELABEL indicates whether the here explained expert relabeling should be performed. In the two environments, i.e. Four Rooms environment and Fetch Pick & Place, the task horizons are set to 300 and 100 respectively. The discount factors are γ = 1 − 1 H. In all experiments, the Q function, policy and discriminator are paramaterized by fully connected neural networks with two hidden layers of size 256. DDPG is used for policy optimization and hindsight probability is set to p = 0.8. The initial value of the behavior cloning loss weight β is set to 0.1 and is annealed by 0.9 per 250 rollouts collected. The initial value of the discriminator reward weight δ GAIL is set to 0.1. 
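A compact sketch of the combined update described above: the deterministic policy gradient term from DDPG plus the goal-conditioned behavioral-cloning term with weight β, optionally masked by the Q-filter. The callables `policy` and `Q` are hypothetical stand-ins for the paper's networks, and `Q` is assumed to return a tensor of shape (batch,).

```python
import torch

def actor_loss(policy, Q, agent_batch, demo_batch, beta, use_q_filter=False):
    """Combined DDPG + goal-conditioned behavioral-cloning objective (a sketch).
    policy(s, g) returns a deterministic action; Q(s, a, g) returns shape (batch,).
    beta is the BC weight, annealed during training (e.g. x0.9 every 250 rollouts)."""
    s, g = agent_batch                               # (relabeled) agent experience
    ddpg_term = -Q(s, policy(s, g), g).mean()        # deterministic policy gradient term

    s_e, a_e, g_e = demo_batch                       # demonstrations, possibly expert-relabeled
    bc_err = ((policy(s_e, g_e) - a_e) ** 2).sum(dim=-1)
    if use_q_filter:                                 # keep only demos the critic prefers over pi
        with torch.no_grad():
            keep = (Q(s_e, a_e, g_e) > Q(s_e, policy(s_e, g_e), g_e)).float()
        bc_err = bc_err * keep
    return ddpg_term + beta * bc_err.mean()
```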
We found empirically that there is no need to anneal δ_GAIL. For the experiments with a sub-optimal expert in Section 4.5, ε is set to 0.4 and 0.5, and σ_α is set to 1.5 and 0.3, respectively, for the Four Rooms environment and Fetch Pick & Place. We trained the discriminator in three settings:
• current state and goal: (s, g)
• current state, next state and goal: (s, s′, g)
• current state, action and goal: (s, a, g)
We compare the three different setups in Fig. 7 and 8.
We tackle goal-conditioned tasks by combining Hindsight Experience Replay and Imitation Learning algorithms, showing faster convergence than the first and higher final performance than the second.
519
scitldr
Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to `effortlessly' extract desired representations from many large-scale datasets. However, generalization bounds for this setting is still missing. In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss which utilizes the \emph{Herbst Argument} for the log-Sobolev inequality to bound the moment generating function of the learners risk. Deep neural networks are ubiquitous across disciplines and often achieve state of the art (e.g., ; ;). Albeit neural networks are able to encode highly complex input-output relations, in practice, they do not tend to overfit . This tendency to not overfit has been investigated in numerous works on generalization bounds (; ; a; 2019; ; ;). Indeed, many generalization bounds apply to neural networks. However, most of these bounds assume that the loss function is bounded (a; ;). Unfortunately, this assumption excludes the popular negative log-likelihood (NLL) loss, which is instrumental to Bayesian neural networks that have been used extensively to calibrate model performance and provide uncertainty measures to the model prediction. In this work we introduce a new PAC-Bayesian generalization bound for NLL loss of deep neural networks. Our work utilizes the Herbst argument for the logarithmic-Sobolev inequality in order to bound the moment-generating function of the model risk. Broadly, our PACBayesian bound is comprised of two terms: The first term is dominated by the norm of the gradients with respect to the input and it describes the expressivity of the model over the prior distribution. The second term is the KL-divergence between the learned posterior and the prior, and it measures the complexity of the learning process. In contrast, bounds for linear models or bounded loss functions lack the term that corresponds to the expressivity of the model over the prior distribution and therefore are the same when applied to shallow and deep models. We empirically show that our PAC-Bayesian bound is tightest when we learn the mean and variance of each parameter separately, as suggested by in the context of Bayesian neural networks (BNNs). We also show that the proposed bound holds different insights regarding model architecture, optimization and prior distribution selection. We demonstrate that such optimization minimizes the gap between risk and the empirical risk compared to the standard Bernoulli dropout and other Bayesian inference approximation while being consistent with the theoretical findings. Additionally, we explore in-distribution and out-of-distribution examples to show that such optimization produces better uncertainty estimates than the baseline. PAC-Bayesian bounds for the NLL loss function are intimately related to learning Bayesian inference . Recently many works applied various posteriors in Bayesian neural networks.; introduce a Bayesian inference approximation using Monte Carlo (MC) dropout, which approximates a Gaussian posterior using Bernoulli dropout. introduced Gaussian dropout which effectively creates a Gaussian posterior that couples between the mean and the variance of the learned parameters. 
explored the relation of this posterior to log-uniform priors, while suggests to take a full Bayesian perspective and learn separately the mean and the variance of each parameter. Our work uses the bridge between PAC-Bayesian bounds and Bayesian inference, as described by , to find the optimal prior parameters in PAC-Bayesian setting and apply it in the Bayesian setting. Most of the literature regarding Bayesian modeling involves around a two-step formalism : a prior is specified for the parameters of the deep net; given the training data, the posterior distribution over the parameters is computed and used to quantify predictive uncertainty. Since exact Bayesian inference is computationally intractable for neural networks, approximations are used, including; Hernández-;;;. In this study we follow this two-step formalism, particularly we follow a similar approach to in which we learn the mean and standard deviation for each parameter of the model using variational Bayesian practice. Our experimental validation emphasizes the importance of learning both the mean and the variance. Generalization bounds provide statistical guarantees on learning algorithms. They measure how the learned parameters w perform on test data given their performance on the training data S = {(x 1, y 1),..., (x m, y m)}, where x i is the data instance and y i is its corresponding label. The performance of the learning algorithm is measured by a loss function (w, x, y). The risk of a learner is its average loss, when the data instance and its label are sampled from their true but unknown distribution D. We denote the risk by L D (w) = E (x,y)∼D (w, x, y). The empirical risk is the average training set loss L S (w) = 1 m m i=1 (w, x i, y i). PAC-Bayesian theory bounds the risk of a learner E w∼q L D (w) when the parameters are averaged over the learned posterior distribution q. The parameters of the posterior distribution are learned from the training data S. In our work we focus on the following PAC-Bayesian bound: Theorem 1 . Let KL(q||p) = q(w) log(q(w)/p(w))dw be the KLdivergence between two probability density functions p, q. For any λ > 0 and for any δ ∈ and for any prior distribution p, with probability at least 1 − δ over the draw of the training set S, the following holds simultaneously for any posterior distribution q: PAC-Bayesian theory is intimately connected to Bayesian inference when considering the negative log-likelihood loss function (w, x, y) = − log p(y|x, w) and λ = m. proved that the optimal posterior in this setting is q(w) = p(w|S). Bayesian inference considers the posterior p(y|x, S) = p(w|S)p(y|x, w)dw, at test time for a data instance x, which corresponds to the risk of the optimal posterior. Unfortunately, the optimal posterior is rarely available, and PAC-Bayes relies on the approximated posterior q. Coincidently, the approximated posterior and its KL-divergence from the prior distribution are instrumental to the evidence lower bound (ELBO), which is extensively used in Bayesian neural networks (BNNs) to bound the log-likelihood While the right hand side of a PAC-Bayesian bound, with the negative log-likelihood loss and λ = m, is identical to the right hand side of the ELBO bound in term of learning, they serve different purposes. One is used for bounding the risk while the other is used for bounding the marginal loglikelihood. Nevertheless, the same algorithms can be used to optimize BNNs and PAC-Bayesian intuitions and components can influence the practice of Bayesian neural networks. 
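Since the right-hand side shared by the PAC-Bayes bound (with λ = m) and the ELBO is exactly what gets optimized in practice, a minimal sketch of that training objective for a factorized Gaussian posterior, with a fixed prior N(0, σ_p²I) as used in the experiments below, may be helpful; the layer sizes and initialization constants are illustrative.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior q(w) = N(mu, sigma^2)
    per weight and a fixed prior p(w) = N(0, sigma_p^2)."""
    def __init__(self, d_in, d_out, sigma_p=0.1):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.05)
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), math.log(sigma_p)))
        self.sigma_p = sigma_p

    def forward(self, x):
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterized sample w ~ q
        return F.linear(x, w)

    def kl(self):
        sigma = self.log_sigma.exp()
        # KL( N(mu, sigma^2) || N(0, sigma_p^2) ), summed over the weights
        return (torch.log(self.sigma_p / sigma)
                + (sigma ** 2 + self.mu ** 2) / (2 * self.sigma_p ** 2) - 0.5).sum()

layers = nn.ModuleList([BayesLinear(784, 256), BayesLinear(256, 10)])

def pac_bayes_objective(x, y, m):
    """Mean NLL under a posterior sample plus KL(q||p)/m -- the shared
    right-hand side of the PAC-Bayes bound (lambda = m) and of the ELBO."""
    logits = layers[1](F.relu(layers[0](x)))
    nll = F.cross_entropy(logits, y)
    kl = sum(layer.kl() for layer in layers)
    return nll + kl / m
```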
It is challenging to derive a PAC-Bayesian bound for the negative log-likelihood (NLL) loss as it requires a bound on the log-partition function log E w∼p, In cases where the loss function is uniformly bounded by a constant, e.g., the zero-one loss, the log-partition function is bounded as well. Unfortunately, the NLL loss is unbounded, even when y is discrete. For instance, consider fully connected case, where the input vector of the (k)-th layer is a function of the parameters of all previous layers, i.e., x k (W 0, . . ., W k−1). The entries of x k are computed from the response of its preceding layer, i.e., W k−1 x k, followed by a transfer function σ(·), i.e.,, if the rows in W k consist of the vector rx k then the NLL loss increases with r, and is unbounded when r → ∞. Our main theorem shows that for smooth loss functions, the log-partition function is bounded by the expansion of the loss function, i.e., the norm of its gradient with respect to the data x. This property is appealing since these gradients often decay rapidly for deep neural networks, as we demonstrate in our experimental evaluation. Consequently deep networks enjoy tighter generalization bounds than shallow networks. Our proof technique follows the Herbst Argument for bounding the log-partition function using the Log-Sobolev inequality for Gaussian distributions . Theorem 2. Assume (x, y) ∼ D and x given y follows the Gaussian distribution. Let (w, x, y) be a smooth loss function (e.g., the negative log-likelihood loss). For any δ ∈ and for any real number λ > 0, with probability at least 1 − δ over the draw of the training set S the following holds simultaneously for any posterior probability density function: The Gaussian assumption for the data generating distribution D can be relaxed to any log-concave distribution, using , Corollary 2.5. We use the Gaussian assumption to avoid notational overhead. Broadly, the proposed bound is comprised of two terms: The first term is the log-partition function which is dominated by the norm of the gradients with respect to the input, namely E (x,ŷ)∼D e α(− (w,x,ŷ)) dα, and it describes the expressivity of the model over the prior distribution. The second term is the KL-divergence between the learned posterior and the prior, and it measures the complexity of the learning process. The proof starts with Eq. and uses the Herbst Argument and the Log-Sobolev inequality to bound the moment-generating function Specifically, the proof consists of three steps. First we use the statistical independence of the training samples to decompose the moment generating function Then we use the Herbst argument to bound the function M (] and obtain the following bound: Finally we use the log-Sobolev inequality for Guassian distributions, The above theorem can be extended to settings for which x is sampled from any log-concave distribution, e.g., the Laplace distribution. The log-concave setting modifies the gradient norm and the log-Sobolev constant 2 in Eq. that corresponds to Gaussian distributions, cf.. We avoid this generalization to simplify our mathematical derivations. A detailed description of the proof can be found on Section 8.1 in the Appendix. The bound in Theorem 2 is favorable when applied to deep networks since their gradients w.r.t. data often decay rapidly. Nevertheless we can also apply our technique to shallow nets trained with NLL loss. We obtain PAC-Bayesian bounds for multi-class logistic regression. 
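Before specializing to logistic regression, here is a rough sketch of how the gradient-norm quantity that dominates the first (expressivity) term of Theorem 2 can be estimated by Monte Carlo: sample weights from the prior, differentiate the NLL with respect to the inputs, and average the squared norms. The `model_sampler` helper is hypothetical, the data loader is assumed to yield float tensors, and the α-integral and λ-dependent weighting from the theorem are omitted, so this is only a proxy for the bound's first term.

```python
import torch
import torch.nn.functional as F

def input_grad_norm_term(model_sampler, data_loader, n_prior_samples=5):
    """Monte-Carlo estimate of E_{(x,y)} E_{w~p} || grad_x (-log p(y|x,w)) ||^2.
    model_sampler() is a hypothetical callable returning a network whose
    weights are drawn from the prior distribution p."""
    total, count = 0.0, 0
    for x, y in data_loader:
        x = x.requires_grad_(True)
        for _ in range(n_prior_samples):
            net = model_sampler()
            nll = F.cross_entropy(net(x), y, reduction='sum')
            g, = torch.autograd.grad(nll, x)
            total += (g.flatten(1) ** 2).sum(dim=1).sum().item()  # per-example squared norms
            count += x.shape[0]
    return total / count
```

Tracking this quantity over architectures of different depths is one way to check empirically the claim that deeper models, whose input gradients decay rapidly, enjoy a smaller expressivity term.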
The NLL loss for multiclass logistic regression takes the form:, where x ∈ R d is the data instance, y ∈ {1, . . ., k} are the possible labels, and W ∈ R k×d is the matrix of parameters. The bound in Theorem 2 takes the form: given y follows the Gaussian distribution. Let (w, x, y) = − log p(y|x, w) be the negative log-likelihood loss for k−class logistic regression. For any δ ∈, for any λ > 0 and for any prior density function with variance σ 2 p ≤ m/16λ 2, with probability at least 1 − δ over the draw of the training set S the following holds simultaneously for any posterior probability density function: Full proof can be found on Section 8.2 in the Appendix, while we sketch the main steps of the proof below. The above corollary shows that PAC-Bayesian bound for classification using the NLL loss can achieve rate of λ = m. This augments the PAC-Bayesian for regression using the NLL loss for regression, i.e., the square loss, of. The PAC-Bayesian bound for logistic regression is derived by applying Theorem 2. We begin by realizing the gradient of log p(y|x, w) with respect to x. We denote by w y the y−th row of the parameter matrix W. Thus ∇ x log p(y|w, x) = ŷ p(ŷ|x, w)(w y − wŷ), and the gradient norm is upper bounded as follows: ∇ x log p(y|w, x) 2 ≤ 2 y w y 2. Plugging this into Eq. we obtain the following bound: Finally, whenever λσ p ≤ m/8 we derive the bound A detailed description of the proof can be found on Section 8.2 in the Appendix. In this section we study the derived bound empirically. We start with an ablation study of the proposed bound using classification and regression models. Next, we present our for multiclass classification tasks using different datasets and different architectures. We conclude the section, with an analysis of the models' uncertainty estimates using for in-distribution examples and outof-distribution examples. All suggested models follows the a Bayesian Neural Networks (BNN) perspective, in which we learn the mean and standard deviation for each learnable parameter in the network where we define N (0, σ 2 p I) to be the prior over weights. 6.1 ABLATION Effect of σ p. We start by exploring the effect of σ p on the models' performance and the proposed generalization bound. For that, we trained several models using σ p ∈ {0.05, 0.1, 0.2, 0.3} using the MNIST and datasets. All were obtained using fully connected layers with ReLU as non-linear activation function. We optimized the NLL loss function using Stochastic Gradient Descent (SGD) for 50 epochs with a learning rate of 0.01 and momentum of 0.9. For each model we compute the average train and test loss and accuracy together with the absolute difference between the training loss and the test loss, denoted as Generalization Loss. Moreover, we compute the generalization bound as stated in Eq. for all settings. Results are summarized in Table 1. Although σ p = 0.2 reaches slightly better generalization bound on MNIST dataset, σ p = 0.1 performs better over all calculated metrics, i.e., average loss and accuracy, both on MNIST and Fashion-MNIST. Notice, for Fashion-MNIST we observed slightly better generalization gap while using σ p = 0.05, however, its loss and accuracy are worse comparing to σ p = 0.1. Effect of λ. Recall, we bound the moment generating function using the norm of the functions' gradient with respect to the data x (Eq.). To construct tighter generalization bounds, we would like to set λ → m. However, in Eq. λ appears in both numerator and denominator. 
It is hence not clear whether the bound will converge, which depends on the model architecture, which is represented by the norm of its gradient. In other words, models with lower gradient norm could benefit from larger values of λ, hence tighter generalization bounds. To further explore this property we trained five different models with different number of layers. We look into both classification models while optimizing the NLL loss function, and regression tasks while optimizing the Mean Squared Error (MSE) loss function. For classification we used MNIST and Fashion-MNIST datasets, while for regression we use the Boston Housing dataset (for the regression models, were obtained using 5-fold cross validation). Except for the linear models, we force all models to have roughly the same numbers of parameters (∼80K for MNIST, ∼800K for Fashion-MNIST, ∼1500 for regression). For all models we set ReLU as non-linear activation functions. We optimize all models for 50 epochs using SGD with learning rate of 0.01 and momentum of 0.9. Based on of the prior paragraph, in all reported settings we set σ p = 0.1. Results are reported in Table 2. It can be seen that deeper models produce tighter generalization bounds on all three datasets. When considering model performance on down-stream classification task we notice that in general, models with better generalization bounds perform slightly better in terms of loss and accuracy. One possible explanation is that deeper models have smaller gradients w.r.t. the input. To validate that we further computed the average squared gradient norm w.r.t. the input as a function of the model depth, for both MNIST and Fashion-MNIST datasets. It can be seen from Figure 1a that indeed the gradients decay rapidly as we add more layers to the network. Next, we present in Figure 1b the generalization bound as a function of λ for MNIST models. We explored λ ∈ [√ m, m] and stopped the plot once the bound can no longer be computed. Experiments using Fashion-MNIST produce similar plot and can be found on Section 8.7 in the Appendix. Weights visualization. Since we consider Bayesian Neural Networks (BNNs) and optimize the KLdivergence between the prior and the posterior over the weights, we can visualize the average mean and standard deviation (STD) of the posterior as a function of the model depth. Figure 2a presents this for MNIST and Fashion-MNIST models using four and five depth levels. As expected, we can see that the average mean over the weights is zero for all layers while weights STD approaches 0.1. For the MNIST models (Figure 2a top row), we observed the standard deviation are ∼0.7 and not 0.1. We suspect this behaviour is due to fast optimization, hence the models do not have much signal to push the model towards the prior distribution. Notice, in all settings the average STD of the model weights decreases on the last layer. We observed a similar behavior also for the other models. 6.2 CLASSIFICATION Next, we compare BNN models against two commonly used baselines. The first baseline is a softmax model using the same architecture as the BNN while adding dropout layers. The second base- line is a Bayesian approximation using Monte Carlo Dropout , denoted as MC-Dropout, using different dropout rates and weight decay value of 1e-5. To evaluate these approaches we conducted multi-class classification experiments using three classification benchmarks: MNIST, Fashion-MNIST, and CIFAR-10 . 
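For both the BNN and the MC-Dropout baseline, test-time predictions are obtained by averaging several stochastic forward passes (20 samples in the experiments below). A minimal sketch, assuming model(x) returns logits and re-samples weights (or keeps dropout active) on every call.

import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_predict(model, x, n_samples=20):
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0)   # averaged predictive distribution, shape (batch, classes)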
We report train set and test set loss and accuracy, together with their generalization gaps (e.g., the difference between the test and training loss and accuracy). Notice, as oppose to our are reported for multi-class classification and not for binary classification. For completeness, we report binary classification on Section 8.4 in the Appendix. The premise beyond these type of experiments is to preset the benefit of learning the mean and STD separately for each of the models' parameters. Results are reported in Table 7. For BNN and MC-Dropout models, we sample 20 times from the posterior distribution and average their outputs to produce the model output. We also sampled more times, however, we did not see any significant differences. We observe that BNN models achieve comparable to both baselines but with lower loss and accuracy generalization gaps. Throughout the experiments we use dropout value of 0.3 for Softmax and MC-Dropout models and σ p = 0.1 for BNN models. We chose these values after grid search over different dropout values for all baseline models. A detailed description of all implementation details together with for more dropout rates can be found on Sections 8.4, 8.5, and 8.3 in the Appendix. Lastly, we evaluated the uncertainty estimates of BNN models against softmax models and MCDropout models. We experimented with both in-distribution and out-of-distribution examples. The purpose of the following experiments is to demonstrate that following the Bayesian approach together with the carefully picked prior can lead to better uncertainty estimates. In-Distribution Examples. In the context of in-distribution examples we follow the suggestion of and calculate the Expected Calibration Error (ECE) and Maximum Calibration Error (MCE) for all three models. Figure 2b provides visual representation of the . Results suggest that BNNs produce better calibrated outputs for all settings, with two exception of ECE for MNIST and MCE for CIFAR10. Out-of-Distribution Examples. Next, we evaluated the uncertainty estimates using OOD examples. We apply a model trained using dataset A to OOD examples from dataset B. We trained models on MNIST, Fashion-MNIST and CIFAR-10 and assess prediction confidence using OOD examples from MNIST, Fashion-MNIST, NotMNIST , and SVHN . Results are summarized in Table 8. More OOD experiments using different dropout and prior rates can be found on Sections 8.3, 8.6 in the Appendix. All models performed at the chance level (∼ 10% for 10 classes) for both OOD train and test sets. When considering the loss, we observe significantly higher values for the softmax and MC-Dropout models. These two findings imply that the softmax and MC-Dropout models are overly confident and tend to output a high probability for the max label. Hence, we measure the average entropy for all models. We expect BNNs to have higher entropy, due to the fact that it produces better uncertainty estimates, i.e., its' predictions for OOD samples are closer to a uniform distribution. Indeed, reported in Table 8 confirm this intuition. In the following study we present a new PAC-Bayesian generalization bound for learning a deep net using the NLL loss function. The proof relies on bounding the log-partition function using the squared norm of the gradients with respect to the input. Experimental validation shows that the ing bound provides insight for better model optimization and prior distribution search. 
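The calibration and out-of-distribution comparisons above rely on two standard quantities, sketched below; the number of confidence bins is an illustrative choice and not prescribed by the text.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE over equal-width confidence bins (MCE replaces the sum by a max)."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def mean_predictive_entropy(probs, eps=1e-12):
    # higher entropy = closer to uniform, i.e. less overconfident on OOD inputs
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())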
We demonstrate that learning the mean and STD for all parameters together with optimize prior over the parameters leads to better uncertainty estimates over the baselines and makes it harder to overfit. Proof. We begin by using the statistical independence of the training samples to decompose the following function: Next we represent the moment generating function M (where the last equality follows from a change of integration variable and the integral limits. K (α) refers to the derivative at α. We then compute K (α) and K: Concluding the Herbst argument we obtain the following equality: Combining Eq. with Eq. we derive: Finally we apply the log-Sobolev inequality for Gaussian distributions (cf. , Chapter 2), as described in Eq.. To complete the proof we combine Eq. with Eq. to obtain: Proof. To apply Theorem 2 we start by realizing the gradient of log p(y|x, w) with respect to x. We denote by w y the y−th row of the parameter matrix W. Thus ∇ x log p(y|w, x) = ŷ p(ŷ|x, w)(w y − wŷ). Using the convexity of the norm function we upper bound the gradient norm: Next we use the fact that the gradient norm upper bound is independent of x to simplify the moment generating function bound in Theorem 2. Since (w, x, y) = − log p(ŷ|x, w), we use the bound in Eq.: Thus we are able to simplify Theorem 2 as follows Finally, we recall that p is the prior density function N (0, σ 2 p). Since the parameters are statistically independent, this expectation decomposes to its kd parameters: And the follows from the fact for √ 2 and the follows. The architechtures described in this sub-section are used for the multi-class, binary, and uncertainty estimates experiments. We use multilayer perceptrons for the MNIST dataset, while we use convolutional neural networks (CNNs) for both Fashion-MNIST and CIFAR-10. A detailed description of the architectures is available in Table 5. We optimize the NLL loss function using SGD with a learning rate of 0.01 and a momentum value of 0.9 in all settings. We use mini-batches of size 128 and did not use any learning rate scheduling. For the MC-Dropout models we experienced with different weight decay values, however found that 1e-5 provides the best validation loss, hence choose this value. 8.4 BINARY CLASSIFICATION Experiments in this sub-section were conducted to show consistency with. We follow the same setting in which we use the MNIST dataset, where we group digits into label zero, and labels into label one. All experiments in this subsection were conducted using multilayer perceptrons with one hidden layer consisting of 300 hidden neurons. We use the Rectified Linear Unit (ReLU) as our activation function . We optimize the negative log-likelihood loss function using stochastic gradient descent (SGD) with a learning rate of 0.1 and a momentum value of 0.9. We did not use any learning rate scheduling. SGD is run in mini-batches of size 128. Each model was trained for 20 epochs. We compared BNN to softmax models with dropout rates chosen from the set {0.0, 0.3, 0.5}. Hereby, a dropout with a rate of 0.0 means no dropout at all. In addition to the training set and test set loss and accuracy, we measure the generalization loss, while setting L D (w) to be the average test set loss. In the same manner, we measure the generalization accuracy, while using the zero-one loss instead of the negative log-likelihood loss. Table 6 summarizes the . 

All models achieve comparable accuracy levels; however, the softmax models suffer from larger generalization errors both in terms of loss and accuracy. Notice that, as expected, using higher Bernoulli-dropout rates mitigates the generalization gap. Here we report results for multi-class classification for BNN and the baselines. Table 7 summarizes the results. The main purpose of these additional experiments is to explore more dropout and σ p values for different models. Figure 3: Analysis of the proposed bound as a function of network depth. We report the generalization bound as a function of λ for different deep net depth levels using the Fashion-MNIST dataset.
We derive a new PAC-Bayesian Bound for unbounded loss functions (e.g. Negative Log-Likelihood).
520
scitldr
Data augmentation techniques, e.g., flipping or cropping, which systematically enlarge the training dataset by explicitly generating more training samples, are effective in improving the generalization performance of deep neural networks. In the supervised setting, a common practice for data augmentation is to assign the same label to all augmented samples of the same source. However, if the augmentation results in a large distributional discrepancy among them (e.g., rotations), forcing their label invariance may be too difficult to solve and often hurts the performance. To tackle this challenge, we suggest a simple yet effective idea of learning the joint distribution of the original and self-supervised labels of augmented samples. The joint learning framework is easier to train, and enables an aggregated inference combining the predictions from different augmented samples for improving the performance. Further, to speed up the aggregation process, we also propose a knowledge transfer technique, self-distillation, which transfers the knowledge of augmentation into the model itself. We demonstrate the effectiveness of our data augmentation framework on various fully-supervised settings including the few-shot and imbalanced classification scenarios.
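A minimal sketch of the joint-label idea for the rotation case mentioned above: the classifier predicts the joint (class, rotation) label, and aggregated inference averages the class scores over the four rotated copies. All names and shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def rotations(x):  # x: (B, C, H, W) -> four rotated copies
    return [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]

def joint_loss(model, x, y):
    """model outputs num_classes * 4 logits, one per (class, rotation) pair."""
    losses = []
    for r, xr in enumerate(rotations(x)):
        joint_y = y * 4 + r                       # joint (class, rotation) label
        losses.append(F.cross_entropy(model(xr), joint_y))
    return sum(losses) / 4

def aggregated_predict(model, x, num_classes):
    logits = torch.stack([model(xr).view(x.size(0), num_classes, 4)[:, :, r]
                          for r, xr in enumerate(rotations(x))], dim=0)
    return logits.mean(dim=0).argmax(dim=1)      # average class scores over rotations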
We propose a simple self-supervised data augmentation technique which improves performance of fully-supervised scenarios including few-shot learning and imbalanced classification.
521
scitldr
Long short-term memory (LSTM) networks allow to exhibit temporal dynamic behavior with feedback connections and seem a natural choice for learning sequences of 3D meshes. We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes. To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape. A two branch LSTM based network architecture is chosen to learn the representations and dynamics of the crash during the simulation. The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer. It uses an encoder LSTM to map an input sequence into a fixed length vector representation. On this representation one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed. The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics. Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well. Data driven virtual product design is nowadays an essential tool in the automotive industry saving time and resources during the development process. For a new car model, numerical crash simulations are performed where design parameters are changed to study their effects on physical and functional properties of the car such as firewall intrusion, weight, or cost . Since one simulation run takes a couple of hours on a compute cluster, running a large number of simulation is not feasible. Therefore, a system that is able to use a limited dataset and predict new simulations would make the development process faster and more efficient. The rise of deep neural networks (DNNs) in recent years encourages further research and industrial usages. Besides manifold research for autonomous driving, it is natural for the automotive industry to seek and evaluate the possible applications of DNNs also in the product design stages. As an example, we investigate car crash tests, in which for example the plate thickness of certain parts strongly influences the bending behavior of structural beams and as a also the intrusion of the firewall into the passenger compartment. Here, numerical crash simulations for different variations of such thicknesses are used as a dataset for learning. The aim is to design a system based on a DNN architecture that learns the crash behavior and would be able to imitate the crash dynamics. Car crash simulations are based on a mathematical model of the plastic deformations and other physical and mechanical effects. They are defined on a computing mesh of currently up to three million points and up to a hundred time steps are stored. Each data instance is a simulation run-of pre-selected parts and/or time steps-that is very high dimensional. Working with this data directly exasperates any machine learning (ML) method, but a transformation of this data presented in allows to obtain a new representation that uses only a small number of coefficients to represent the high resolution numerical solutions. 
The transformed representation is employed here to compress the mesh geometries to feature sets suitable for neural networks, while avoiding to directly handle geometries in the machine learning method. This way, a network designed for video prediction and embedding based on a long short-term memory (LSTM) based architecture can be adapted for mesh data. Since LSTM is a recurrent neural network that allows to exhibit temporal dynamic behavior with feedback connections, it is a natural choice for learning the 3D sequences. The aim is that the network learns the observed crash behavior including translation, rotation, or deformation of the parts in the model. Since the contribution of this paper is using DNNs for analyzing car crash data, the related works are categorized into a group of publications in which DNNs are extended for 3D graphics and one that concerns the use of ML techniques for analyzing car crash simulations. For the latter, one typically uses different embedding techniques to obtain a low dimensional representation for the intrinsic underlying data space and to cluster simulations with similar characteristics together (; ; ; ;). The majority of publications about 3D DNN tried to extend CNN for 3D space and focus on description learning and shape correspondence, also known as geometric deep learning,;;;;; ) and some developed CNN filters for unorganized point clouds (a; b). The very active research is so far very compute resource consuming and there is no extension of ConvLSTM for 3D space to our knowledge, but for prediction one would need an LSTM (or GAN) approach. However, a couple of very recent works introduce new feature sets and architectures for mesh embedding using autoencoders and LSTM (b; ; a). The feature representation is using local shape deformations obtained by solving an optimization problem at each node and a global optimization for compensating for rotations. They have shown that after training the network, a sequences of 3D shapes as an animation can be generated by doing operations in the latent space. The bidirectional LSTM architecture is shown to outperform autoeconders (a). An LSTM based learning network has also been proposed in , where the obtained feature representation is then taken as the temporal data to be feed into a CNN that takes the features and represents them in a lower dimensional latent space. This information is subsequently feed into the LSTM module. Data collection is a bottleneck for deep learning applications. If the training data is not diverse enough, the network would neither be able to properly learn the intrinsic data space nor be able to return reasonable output for before unseen data. We focus on surface mesh data from numerical simulations of car crashes, where in the industrial setting the car model is divided into several physcical car parts. As a simple car model we use a Chevrolet C2500 pick-up truck, a model with around 60,000 nodes from the National Crash Analysis Center 1. The data stems from numerical crash simulation 2 of a frontal crash for random variations of the plate thickness of nine structural components, a setup similar to. The thickness variations in different deformation behavior. Figure 1 shows a snapshot of the crash simulation for the truck model. The geometries of the car model and parts are available in a regular mesh format and the correspondence of vertices between different simulations and over the time of the simulation are known by their node id. 
Therefore, instead of working with the meshes one can simply work with vertices and treat them like organized point clouds, while being able to recover the mesh and connectivity at any time. Now, the features for training the network are obtained from these point clouds and the network outputs a feature vector that is later post-processed to a point cloud or mesh. Instead of working directly with 3D surface meshes, a set of features is extracted for training the network. Such a feature set should be able to represent the dynamics of the crash efficiently. a compact representation for deforming shapes has been presented for the car crash case. The approach is based on the property that the Laplace Beltrami Operator (LBO) on a surface is invariant to isometric deformations. That is, the LBO is the same if a surface mesh is de- Figure 1: A snapshot of the crash simulation of the 3D model used for data collection with a zoom unto the studied longitudinal beams, from (left). The four selected parts (beams) after the crash for one selected simulation, later used for illustration (right). The colors of the parts are used to display error behavior per part. formed in such a way that it is neither stretched nor teared apart, which is the case for the considered simulations, which additionally all start from the same geometry. Consequently the eigenvectors, which form an orthogonal basis, do not change under an isometric transformation and can be used to represent mesh functions such as the deformations (as three functions, one each for x, y, z). The representation is obtained by projecting mesh functions onto the common orthogonal basis to compute so-called spectral coefficients. It turns out that most of the variations of the deformed shapes are concentrated in a small number of coefficients. Therefore an efficient representation is obtained using few spectral coefficients. As shown by Brezis & Gómez- using suitable assumptions, for the L 2 -approximation of functions controlled in the H 1 -Sobolevnorm the orthonormal basis stemming from the Laplace operator provides an optimal approximation in a certain sense; this can be extended to the LBO and functions in the Sobolev space H 2,2. Furthermore, the spectral representation can be understood as a mesh surface analogy to the Fourier decomposition of signals. This representation is introduced here for training LSTMs and allows to bypass the complexity of dealing with large meshes directly. To formalise, in B n×m we collect the first m unit eigenvectors of the LBO, where n is the number of points in the considered 3D shape. The spectral coefficients C 3×m at one time step are obtained by where R 3×n contains the x, y, z coordinates of the n points from the considered 3D shape. Four B matrices are required for four distinct parts in the dataset, which are four distinct geometries. Recovering the 3D shapes from their spectral coefficients is possible by since B n×m is orthonormal. R is an approximation of R and by choosing larger m, the approximation error gets smaller. In other words, the more eigenvectors are used in B n×m, the more details are saved in the spectral representation. Figure 2 visualizes the localization and histogram of average error for reconstructing four selected parts from their spectral coefficients. The averaging is done over 205 × 10 samples for each part independently. Note that the bounding box of the entire data is × [−600, 600] ×, determined over simulation time. 
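Given a precomputed orthonormal eigenbasis B of the LBO for a part's reference mesh, the projection and reconstruction described by the two equations above reduce to simple matrix products. A sketch under that assumption:

import numpy as np

def encode(R, B):
    """R: (3, n) vertex coordinates, B: (n, m) orthonormal eigenvectors -> C: (3, m)."""
    return R @ B

def decode(C, B):
    """Approximate reconstruction R_hat: (3, n) from the spectral coefficients."""
    return C @ B.T

# Example: keeping m = 40 coefficients per coordinate for an n-vertex part.
# R_hat = decode(encode(R, B[:, :40]), B[:, :40]) approximates R; a larger m
# reduces the reconstruction error at the cost of a longer feature vector.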
Comparing the histogram with the tight bounding box's dimensions shows that the reconstruction error is not very high for m = 40, while the part with blue color coding seems to have more overall error. Note that the error is localized mostly toward the front of the parts, this area goes through very large deformations during the crash. Note that with m = 100 the observed maximal observed error would go down from 40 to 20. For frame prediction in video and to obtain latent representations a number of quite interesting applications in robotics and computer vision have been developed. Different DNN architectures have been investigated very extensively. was one of the first proposals of an unsupervised LSTM based representation learning method for video. The logic behind the choice is that the same operation must be applied at each step to propagate dynamics to the next step. This implies that the underlying dynamics of the dataset remains the same, i.e. the same dynamics acting on any state, at anytime, should produce the next state . Others extended this work by adding convolutional operations in LSTM, introducing new recurrent networks, or using model based architectures (; ; ; ;). Car crash simulation data can basically be considered a sequence of 3D geometries. In analogy to the video processing case, one can see each geometry in the sequence as a video frame. Inspired from the work on video prediction, we use a similar architecture for the case of car crash representation learning. We choose a two branch LSTM architecture with reconstruction and prediction decoders from. The encoder LSTM maps an input sequence of k time steps into a fixed length vector representation. This representation is decoded using a decoder LSTM to perform the reconstruction of the input sequence of size k. Receiving a few steps l, l < k, from the beginning of the sequence, the other decoder LSTM predicts future sequences of length k − l, see Figure 3, where we here use l = k/2 for simplicity. The architecture is in shown to have a better performance for predicting future frames compared to other approaches. The design of the prediction encoder-decoder LSTM is same as that of the autoencoder LSTM, except that it is like a supervised method in which the decoder LSTM predicts future behavior that comes after the input sequence while the encoders hidden state will capture information about the representation of the input sequence. In comparing to an autoencoder LSTM that receives the entire k time steps of the simulations and reconstruct them again, the prediction encoder-decoder LSTM receives just the first l steps and predicts the remaining time steps. Therefore, the input sequence acts as seed points for generating the rest of the sequence. This compositional architecture is supposed to address the shortcomings that one has to confront by the use of an autoencoder or encoder-predictor alone. Namely, on the one hand an autoencoder LSTM suffers from a bias by memorizing the inputs, which is not sufficient for predicting future frames. On the other hand, an encoder-decoder LSTM suffers from the bias to save information mostly about the last few frames since these carry more information for predicting the future frames, but then the representation at the end of the encoder will ignore large parts of the input. In the compositional architecture, the model is trained to also predict all of the input sequence, therefore it cannot just store information about the last few frames . 
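One possible reading of this compositional architecture is sketched below in Keras (the framework used in the experiments that follow): a shared encoder LSTM is applied both to the full sequence and to the seed, one decoder reconstructs the input sequence, and the other predicts the remaining steps. The read-out layers and the exact wiring are assumptions; the unit counts follow the experiments described below.

from tensorflow import keras
from tensorflow.keras import layers

n_feat = 4 * 3 * 40        # 4 parts x 3 coordinates x 40 spectral coefficients
k, l = 10, 5               # full sequence length and seed length (l = k/2)

full_in = keras.Input(shape=(k, n_feat))      # all k time steps
seed_in = keras.Input(shape=(l, n_feat))      # first l time steps only
encoder = layers.LSTM(1000, activation="relu")   # shared encoder LSTM

# reconstruction branch: reproduce the k input steps
rec = layers.RepeatVector(k)(encoder(full_in))
rec = layers.LSTM(1500, activation="relu", return_sequences=True)(rec)
rec = layers.TimeDistributed(layers.Dense(n_feat))(rec)

# prediction branch: generate the remaining k - l steps from the seed
pred = layers.RepeatVector(k - l)(encoder(seed_in))
pred = layers.LSTM(2000, activation="relu", return_sequences=True)(pred)
pred = layers.TimeDistributed(layers.Dense(n_feat))(pred)

model = keras.Model([full_in, seed_in], [rec, pred])
model.compile(optimizer="adam", loss="mse")   # joint MSE over both branches
# model.fit([x_full, x_seed], [x_full, x_future], epochs=100, ...)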
It is worth to mention that the dataset contains the translation, rotation, and deformation of car parts during the crash. Therefore the network would learn the entire degrees of freedom of the crash dynamics and would be able to imitate the crash by the prediction decoder after receiving the initial time steps as seeds. Figure 3: The two branches LSTM autoencoder network architecture. The data sequence of f s are feed into the network in two lengths and a joint encoder is learned. The top decoder is learning the reconstruction, while the bottom decoder is performing future prediction. Although the 3D truck model consists of several parts, only the left and right structural beam, which are made of two parts each, are considered for evaluating our approach. These beams are structurally very important parts of the car and typically investigated by engineers, therefore it is justified to concentrate on these parts. Overall 205 different simulations were performed. For each simulation, ten snapshots equally distributed in time are selected and the deformation of the two beams are extracted from it. Therefore a data sample consists of the four selected parts during ten time steps. Together, the dataset after the extraction from the crash simulations contains 205 samples and each full sample is a long concatenated vector of 4 × 3 × 40 × 10 elements. The entire dataset is divided into training and test set of 105 and 100 samples, respectively. Each data point, i.e. each time step of a simulation, is normalized by the l 2 norm of the features at t = 0 beforehand. Note that one could consider a different scenario in which each part, out of the four, over all ten time steps is a data sample, that is treat the for beams with individual machine learning models. Since the dynamics of the two beams are coupled during the crash, considering them together can reduce the error for learning the crash dynamics by the network and we thereforce consider this setup. We employ the two branch LSTM from Figure 3, where the implementation is done with Keras. In our experiments, the encoder component of the network has 1000, the decoder has 1500, and the prediction part has 2000 LSTM units without any convolutional layer, as defining a convolutional layer for 3D graphics is not trivial and existing approaches are resource intensive. Further, we use ReLU as the activation function. Using ADAM with default parameters the entire network is trained together, which includes the encoder, reconstruction, and prediction parts over 100 iterations, until the mean squared error (over all parts) reaches the order of 10 −7 and becomes stable. The training phase takes about 30 minutes on an Intel i7-7700 CPU@3.60GHz × 8. The achieved minimum during training is a trade-off between reconstruction and prediction loss functions. The prediction part of the network receives the first five time steps as inputs and generates the next five steps of the crash (which are both a vector of 4 × 3 × 40 × 5 elements). For each grid point j we compute in the following the least squares error at time t and averaged over the simulations, as well as accumulated over time: where s is the number of simulations and S i j (t),Ŝ i j (t) are the three-dimensional (one for each direction) original mesh function and its reconstruction (or prediction) for simulation i at time t. Figure 4 shows the histograms of the reconstruction and prediction error for the training and test datasets of all four parts, only here for the spectral coefficients. 
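The per-node error just introduced (the squared Euclidean distance per node and time step, averaged over simulations, and its accumulation over time) can be written compactly. A sketch assuming arrays of original and reconstructed/predicted node positions:

import numpy as np

def node_errors(S, S_hat):
    """S, S_hat: arrays of shape (s, T, n, 3) = (simulations, time, nodes, xyz)."""
    per_time = ((S - S_hat) ** 2).sum(axis=-1).mean(axis=0)   # (T, n): mean over simulations
    accumulated = per_time.sum(axis=0)                          # (n,): summed over time
    return per_time, accumulated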
The reconstruction error is larger than the prediction error since the reconstruction part recovers the entire k = 10 time steps while the prediction part just generates the last l = 5 time steps. and have reported a bifurcation for this car crash dataset. That is, two different bending behavior for the beams arise in the simulation data, which is due to changes in the plate thicknesses. One can cluster the 205 simulations into two groups of 71 and 134 for the two bending behaviors. It gives an unique opportunity to investigate the performance of the proposed system by analyzing and visualizing the error for each branch of bifurcation individually. Moreover, it is possible to see if and how well the network can recognize this bifurcation and how the spectral coefficients can preserve this. The 105 simulations used for training include samples from both branches of the bifurcation, as do the remaining 100 for testing. The distances between ground truth and the outputs of two decoders, after decompression and re-normalization from the spectral coefficient back to 3D shapes, are visualized on a template to present the spatial localization of the average error of each bifurcation branch and its histogram is also shown for a better comparison (all for the testing dataset). Figure 5 gives the accumulated error over time. Overall, the reconstruction and prediction errors are small in relation to the bounding box of the data, the network based on the spectral coefficients is able to learn the complex structural mechanics. It can be seen in first two histograms of Figure 5 that the errors accumulation for the reconstruction in the two branches is similar, which can also be seen in the first and second row in Figure 5. The part color coded red shows somewhat higher error values, while the blue part has more mid-range errors. The third and fourth row in Figure 5 for the prediction have very different error localization. Considering the localization and histograms of the average error over time for the reconstruction branch of the network, one can observe that the error behavior stays roughly the same over time, i.e. the localization of error and histograms show that the error stay the same. Regarding the localization and histogram of the average error over time for the prediction branch of the network, now the error increases in time and the behavior changes at the eighth time step. Here, one can observe that the localization of the error changes and the histograms show that the error increases more strongly for the first branch, see Figure 7 in the Appendix. The encoder LSTM weights can be used for visualizing the intrinsic underlying space of the dataset. Therefore, in order to see how well the encoder learned the representation and dynamics of the car crash simulations, the weights are visualized using different markers for the two branches of bifurcation. Due to the encoder having 1000 layers, to which each sample is mapped, one needs to use some visualization techniques for embedding from higher dimensions to a lower dimensional space. We use t-SNE and show the in Figure 6. As can be seen, the bifurcation is shown as two well-separated clusters. There are a few points from each reconstruction branch 1 reconstruction branch 2 prediction branch 1 prediction branch 2 bifurcation branch that are misaligned with the rest of their branch members. 
This might be because of using the spectral coefficients as an approximation which leads to losing some details about the bifurcation, therefore increasing m might rectify the issue. Or a 2D embedding is not sufficient for properly visualization the encoder LSTM weights. Nevertheless, this 2D visualization proves that the network was able to learn the complex dynamics of the crash since the bifurcation is well represented by the encoder weights. Video frames prediction has been in the center of attention of researchers for a while, but there has been only very few extensions of these works to the 3D case so far. The problem is addressed here by introducing spectral coefficients to encode functions on the geometry together with a two branch LSTM based architecture without any convolutional layer, which has already proven to be feasible for video embedding and future frames prediction. The employed LBO basis and the ing spectral coefficients provide a trade-off between accuracy and required computational resources. We encode the 3D shapes by a set of features using the eigenvectors of the LBO. For empirical evaluation, a dataset is employed from a set of numerical simulations of a car during crash under different design conditions, i.e. plate thickness variations. The appearance of a bifurcation during the crash in the dataset, motivates an error analysis done for both groups to see how good the network performs in the presence of a bifurcation. In both branches, the network is able to perform very good predictions, while we observe different error localisations for reconstruction versus prediction. Moreover, the 2D visualization of the reconstruction branch shows the bifurcation as two clusters. In any case, from a relatively small number of data, the proposed network using spectral coefficients is able to learn complex dynamical structural mechanical behaviors. Future work could go toward scaling the pipeline for learning the crash dynamics of the entire car and larger mesh sizes, which increases the needed computational effort. On the other hand, one might be able to use smaller number of eigenvectors by not simply selecting the first few ones, but those with a large variance in the spectral coefficients of the data set. Furthermore, in practical settings, re-meshing of the parts can take place, here using spectral coefficients can ease this step since one can encode shapes with different vertices number to fixed size feature vectors, as long as the geometry is (approximately) isometric. Still, there is the overall question, if and how a trained network can be evaluated for changed geometries (relevant question for any 3D DNN approach introduced so far) or different crash setups. Moreover, adding design parameters could also improve the accuracy but requires modifications of the networks architecture. For practical applications, as each crash simulation requires hours of heavy computation running computational solvers on a large cluster, a system that is able to learn the representation of experiments with very few training data and generate the predicted simulation for new design parameters would save much resources. Moreover, the ultimate goal of research along this direction would be a data driven system that receives very little information about the simulation (like design parameters) and output the crash sequences with minimum error. Another application of the current system could be feasibility detectors while running the simulation on the compute cluster. 
Using the network, one could check if the simulation goes well or if for some reason it should be terminated. At the current stage of the system, one would be able to generate the parts of the future simulation simply by extrapolating the learned spectral coefficients, using a few initial time steps, which are already computed on the cluster, as inputs. If the distance between the network predictions and the simulation gets very large over the iterations, the simulation can be terminated since it failed the feasibility check. Further, related works such as introduce a specific feature set and LSTM autoencoders, where a graph convolution operation is also required. This approach could be applied to car crash data under the assumption that the local optimization can still be applied to large deformations such as the ones occurring in our applications. Further, the resulting features are long vectors, which results in 8 hours of learning on a CPU/GPU system for a data set similar in size to ours, whereas we need 30 minutes. Nevertheless, a comparison of these two approaches will be worthwhile future work. A APPENDIX (Figure panels: "time step 6" through "time step 10", shown for two rows of results.)
A two branch LSTM based network architecture learns the representation and dynamics of 3D meshes of numerical crash simulations.
522
scitldr
The purpose of an encoding model is to predict brain activity given a stimulus. In this contribution, we attempt to estimate a whole-brain encoding model of auditory perception in a naturalistic stimulation setting. We analyze data from an open dataset, in which 16 subjects watched a short movie while their brain activity was being measured using functional MRI. We extracted feature vectors aligned with the timing of the audio from the movie, at different layers of a Deep Neural Network pretrained on the classification of auditory scenes. fMRI data was parcellated using hierarchical clustering into 500 parcels, and encoding models were estimated using a fully connected neural network with one hidden layer, trained to predict the signals for each parcel from the DNN features. Individual encoding models were successfully trained and predicted brain activity on unseen data, in parcels located in the superior temporal lobe, as well as dorsolateral prefrontal regions, which are usually considered as areas involved in auditory and language processing. Taken together, this contribution extends previous attempts at estimating encoding models, by showing the ability to model brain activity using a generic DNN (i.e., not specifically trained for this purpose) to extract auditory features, suggesting a degree of similarity between internal DNN representations and brain activity in naturalistic settings. One important motivation for incorporating machine learning in neuroscientific discovery is the establishment of predictive models, as opposed to models based on statistical inference. While the latter are unable to generalize to a new dataset, the former aim at successful generalization. In particular, encoding models aim at predicting brain activity given a model of the stimulus presented to the subject. A successful model should enable generalization to unseen data, enabling a better understanding of the underlying brain functions. Furthermore, an accurate encoding model could potentially be used to enhance machine learning, by providing an auxiliary source of training data, as recent evidence suggests that actual brain activity can guide machine learning. In this study, we tested whether a pretrained network could be used to estimate encoding models, in the case of naturalistic auditory perception. We downloaded the ds001110 dataset version 3 on OpenNeuro, in which 36 subjects watched a 20 minute long movie in an fMRI scanner. MRI data was collected on a 3 T full-body scanner (Siemens Skyra) with a 20-channel head coil. Functional images were acquired using a T2*-weighted echo planar imaging pulse sequence (TR 1500 ms, in-plane resolution 3 by 3 mm). Anatomical images were acquired using a T1-weighted magnetization-prepared rapid-acquisition gradient echo (MPRAGE) pulse sequence (0.89 mm3 resolution). More details can be found in the original paper. In this study, we report only data from 16 subjects who watched the same episode from the TV show "Sherlock". At the time of submitting this paper, OpenNeuro included only raw data. Therefore, we used a preprocessed version 1. An overview of our method is presented in figure 1, and consists of three steps 2. First, we extracted the audio track from the movie, and extracted feature vectors from all seven convolutional layers of SoundNet.
The 20 minute long audio file was fed as input, and we performed interpolations according to the width of the feature maps in each layer, in order to realign the obtained feature vectors with the temporal resolution (1.5 second) of the fMRI signal. This procedure yielded a total of 946 feature vectors, for each SoundNet layer. Next, in order to reduce the dimensionality of the fMRI data, we applied hierarchical clustering using Ward criterion to parcellate each individual brain into 500 regions of interests (ROI). We subsequently realigned the fMRI data on the beginning of the movie, yielding 946 vectors of dimension 500 for each subject. Finally, encoding models were estimated separately for each subject and layer of SoundNet. We trained fully connected neural networks with one hidden layer to predict brain activity in the 500 ROI simultaneously. We report when varying the number of neurons in the hidden layer. We used ReLu activation for the hidden layer, and linear activation for the output layer. We used a learning rate of 0.001 and a L 2 penalty of 0.0001. Cross validation on the data was performed using four folds without shuffling the data (in order to ensure that the train and test data were as distant as possible temporally), and for each training fold, 10% of the data was kept for validation. We used the Adam optimiser with batches of size 50. Mean Square Error (MSE) was used as a loss function, and we applied an early stopping criterion that stops when validation MSE is not improving for 10 consecutive epochs. The final metric we use for evaluating the is the R 2 score on the test set, indicating the quality of predicting fMRI data. Additionally, we also perfomed a control analysis in which we estimated 100 null encoding models by extrating feature vectors from an untrained SoundNet, using the exact same procedure as described above. This analysis enables us to estimate the chance level of our dataset, as well as the gain obtained when using the pretrained network. 3 Results and discussion 3.1 Which layers enable the training of an encoding model? To begin with, as expected, the null models could not yield any significant training for any SoundNet layer, as indicated by R 2 scores of less than 1e −6 ± 1e −5. Next, the first four layers of SoundNet could not be used to sucessfully train encoding models, as all R 2 were less than 0.03. In the following analysis, we focus on from layers conv5, conv6 and conv7. Table 1 depicts the influence of the number of neurons in the hidden layer on the maximum over all ROI of R 2 scores, averaged across subjects and CV folds. The best were obtained using 1000 neurons in the hidden layer, which enables succesfully training an encoding model on conv5, conv6 and conv7. Furthermore, we noticed that maximum R 2 scores across folds were much higher number of neurons 50 100 500 1000 1500 Table 1: Hyper parameter exploration: average (standard error of the mean) across all subjects of maximum R 2 score, as a function of transfered layer, and number of neurons in hidden layer. than the average over folds, and we found that the second fold consistently yielded low R 2 scores (R 2 < 0.05), for all subjects, for conv5 and conv6. Regarding conv7, the first two folds yielded an average of R 2 = 0.15 across all subjects. While this issue would demand closer inspection of the data, we suspect a systematic bias in the feature vectors for conv5 and conv6 in the first half of the video. 
Nonetheless, we obtained good generalization for the other three folds, especially for conv7. As a consequence, in the next section we will select the fold that yielded the maximum R 2 in order to interpret the spatial maps. For all sixteen subjects, brain activity in ROIs including the superior and middle temporal gyri could be predicted with R 2 > 0.25. The corresponding SoundNet layer for which the R 2 was maximal was conv7 for 14 subjects, conv6 for one subject and conv5 for one subject. This suggest that the information in the last layers of SoundNet is linked to the fMRI activity in regions previously associated with general purpose auditory processing. Furthermore, for eleven out of sixteen subjects, brain activity in ROIs located in the left dorsolateral prefrontal cortex was predicted with R 2 > 0.15 (8 out of those eleven had a R 2 > 0.25). The corresponding layer was conv7 for 9 subjects and conv5 for 2 subjects. The left dorsolateral prefrontal cortex has been previously associated to verbal encoding in numerous studies, suggesting that the information in conv5 and conv7 might be linked to verbal content in the original stimuli, to a certain extent. Note that we also found less than five subjects for which brain activity in medial regions of the Default Mode Network, such as the medial prefrontal cortex, or the anterior cingulate cortex, could be predicted, but this didn't seem like a consistent pattern across the majority of subjects. We depict We were able to train encoding models on individual subjects to predict brain activity using the deepest layers of SoundNet, using less than 20 minutes of fMRI data. The obtained models best predicted the activity in brain areas that are part of a language-related network. However, the current study has the following limitations. First, we extracted features from the auditory part of the stimuli, while the modeled brain activity involves many other brain functions, namely visual perception, as well as higher level cognitive functions such as memory and emotional responses. This probably explains why we obtain R 2 = 0.5 in the best case. Providing a richer stimuli representation using more general purpose feature extractors would probably enable a more complete model of brain activity. Second, we estimated brain parcellations on single subject data using only 20 minutes of MRI, which might not be enough to obtain a reliable set of ROIs. Further studies should use either more repetitions on each subject, or attempt at learning parcellations across subjects, after having spatially normalized each individual to a template. Third, we didn't find a clear relationship between spatial extent of our encoding models as a function of the SoundNet layer. This could be due to the fact that SoundNet was trained independently of the brain data, and was never optimized for encoding models. One possible avenue would be to perform fine tuning, or retrain from scratch, in order to optimize the estimation of encoding models. Finally, in our approach we ignored the temporal dynamics of both the feature vectors and the fMRI data, as well as the dependencies between ROIs implied by brain connectivity. In future studies, we will consider the use of recurrent neural networks, as well as graph representation learning, in order to tackle those issues.
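A hedged sketch of one per-subject encoding model as described in the methods: a one-hidden-layer network mapping SoundNet-layer features to the 500 parcel time courses, trained with MSE and early stopping, and scored with R² over four unshuffled folds. The data shapes, the epoch budget, and all variable names are illustrative assumptions.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def encoding_model_r2(X, Y, n_hidden=1000):
    """X: (946, d) SoundNet features, Y: (946, 500) parcel time courses."""
    per_fold_r2 = []
    for train, test in KFold(n_splits=4, shuffle=False).split(X):
        model = keras.Sequential([
            layers.Dense(n_hidden, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-4)),
            layers.Dense(Y.shape[1], activation="linear"),
        ])
        model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
        model.fit(X[train], Y[train], validation_split=0.1, batch_size=50,
                  epochs=200, verbose=0,
                  callbacks=[keras.callbacks.EarlyStopping(patience=10)])
        pred = model.predict(X[test], verbose=0)
        per_fold_r2.append(r2_score(Y[test], pred, multioutput="raw_values"))
    return np.stack(per_fold_r2)   # (4 folds, 500 ROIs); take the max over ROIs as in Table 1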
Feature vectors from SoundNet can predict brain activity of subjects watching a movie in auditory and language related brain regions.
523
scitldr
In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution. For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images. We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning. Deep generative models have achieved remarkable success in recent years. One of the most successful models is the generative adversarial network (GAN) BID7, which employs a two player min-max game. The generative model, G, samples the noise vector z ∼ p(z) and generates the sample G(z). The discriminator, D(x), is trained to identify whether a point x comes from the data distribution or the model distribution; and the generator is trained to maximally confuse the discriminator. The cost function of GAN is DISPLAYFORM0 GANs can be viewed as a general framework for learning implicit distributions BID18 BID12. Implicit distributions are probability distributions that are obtained by passing a noise vector through a deterministic function that is parametrized by a neural network. In the probabilistic machine learning problems, implicit distributions trained with the GAN framework can learn distributions that are more expressive than the tractable distributions trained with the maximum-likelihood framework. Variational autoencoders (VAE) BID13 BID20 are another successful generative models that use neural networks to parametrize the posterior and the conditional likelihood distributions. Both networks are jointly trained to maximize a variational lower bound on the data log-likelihood. One of the limitations of VAEs is that they learn factorized distributions for both the posterior and the conditional likelihood distributions. In this paper, we propose the "implicit autoencoder" (IAE) that uses implicit distributions for learning more expressive posterior and conditional likelihood distributions. Learning a more expressive posterior will in a tighter variational bound; and learning a more expressive conditional likelihood distribution will in a global vs. local decomposition of information between the prior and the conditional likelihood. This enables the latent code to only capture the information that we care about such as the high-level and abstract information, while the remaining low-level information of data is separately captured by the noise vector of the implicit decoder. Implicit distributions have been previously used in learning generative models in works such as adversarial autoencoders (AAE) BID16, adversarial variational Bayes (AVB) , ALI , BiGAN BID5 and other works such as BID12 BID22. The global vs. 
local decomposition of information has also been studied in previous works such as PixelCNN autoencoders (van den), PixelVAE BID9, variational lossy autoencoders BID4, PixelGAN autoencoders BID15, or other works such as BID2 BID8 BID0. In the next section, we first propose the IAE and then establish its connections with the related works. Let x be a datapoint that comes from the data distribution p data (x). The encoder of the implicit autoencoder (FIG0) defines an implicit variational posterior distribution q(z|x) with the function z = f φ (x,) that takes the input x along with the input noise vector and outputsẑ. The decoder of the implicit autoencoder defines an implicit conditional likelihood distribution p(x|z) with the functionx = g θ (ẑ, n) that takes the codeẑ along with the latent noise vector n and outputs a reconstruction of the imagex. In this paper, we refer toẑ as the latent code or the global code, and refer to the latent noise vector n as the local code. Let p(z) be a fixed prior distribution, p(x, z) = p(z)p(x|z) be the joint model distribution, and p(x) be the model distribution. The variational distribution q(z|x) induces the joint data distribution q(x, z), the aggregated posterior distribution q(z), and the inverse posterior/encoder distribution q(x|z) as follows: DISPLAYFORM0 Maximum likelihood learning is equivalent to matching the model distribution p(x) to the data distribution p data (x); and learning with variational inference is equivalent to matching the joint model distribution p(x, z) to the joint data distribution q(x, z). The entropy of the data distribution H data (x), the entropy of the latent code H(z), the mutual information I(x; z), and the conditional entropies H(x|z) and H(z|x) are all defined under the joint data distribution q(x, z) and its marginals p data (x) and q(z). Using the aggregated posterior distribution q(z), we can define the joint reconstruction distribution r(x, z) and the aggregated reconstruction distribution r(x) as follows: DISPLAYFORM1 Note that in general we have r(x, z) = q(x, z) = p(x, z), q(z) = p(z), and r(x) = p data (x) = p(x).We now use different forms of the aggregated evidence lower bound (ELBO) to describe the IAE and establish its connections with VAEs and AAEs. DISPLAYFORM2 Mutual Info. DISPLAYFORM3 Entropy of Data DISPLAYFORM4 Entropy of Data See Appendix A for the proof. The standard formulation of the VAE (Equation 4) only enables us to learn factorized Gaussian posterior and conditional likelihood distributions. The AAE BID16 (Equation 5) and the AVB BID17 enable us to learn implicit posterior distributions, but their conditional likelihood distribution is still a factorized Gaussian distribution. However, the IAE enables us to learn implicit distributions for both the posterior and the conditional likelihood distributions. Similar to VAEs and AAEs, the IAE (Equation 6) has a reconstruction cost function and a regularization cost function, but trains each of them with a GAN. The IAE reconstruction cost is E z∼q(z) KL(q(x|z) p(x|z)). The standard VAE uses a factorized decoder, which has a very limited stochasticity. Thus, the standard VAE performs almost deterministic reconstructions by learning to invert the deterministic mapping of the encoder. The IAE, however, uses a powerful implicit decoder to perform stochastic reconstructions, by learning to match the expressive decoder distribution p(x|z) to the inverse encoder distribution q(x|z). 
We note that there are other variants of VAEs that can also learn expressive decoder distributions by using autoregressive neural networks. We will discuss these models later in this section. Equation 8 contrasts the reconstruction cost of standard autoencoders that is used in VAEs/AAEs, with the reconstruction cost of IAEs. DISPLAYFORM5 We can see from Equation 8 that similar to IAEs, the reconstruction cost of the autoencoder encourages matching the decoder distribution to the inverse encoder distribution. But in autoencoders, the cost function also encourages minimizing the conditional entropy H(x|z), or maximizing the mutual information I(x, z). Maximizing the mutual information in autoencoders enforces the latent code to capture both the global and local information. In contrast, in IAEs, the reconstruction cost does not penalize the encoder for losing the local information, as long as the decoder can invert the encoder distribution. In order to minimize the reconstruction cost function of the IAE, we re-write it in the form of a distribution matching cost function between the joint data distribution and the joint reconstruction distribution KL(q(x, z) r(x, z)) (Equation 7). This KL divergence is approximately minimized with the reconstruction GAN. The IAE has also a regularization cost function KL(q(z) p(z)) that matches the aggregated posterior distribution with a fixed prior distribution. This is the same regularization cost function used in AAEs (Equation 5), and is approximately minimized with the regularization GAN. Note that the last term in FIG4 is the entropy of the data distribution that is fixed. Training Process. We now describe the training process. We pass a given point x ∼ p data (x) through the encoder and the decoder to obtainẑ ∼ q(z) andx ∼ r(x). We now train the discriminator of the reconstruction GAN to identify the positive example (x,ẑ) from the negative example (x,ẑ). Suppose this discriminator function at its optimality is D * (x, z). We try to confuse this discriminator by backpropagating through the negative example (x,ẑ) 1 and updating the encoder and decoder weights. More specifically, the generative loss of the reconstruction GAN is T * (x, z) = − log D * (x, z), which defines the reconstruction cost of the IAE. We use the re-parametrization trick to update the encoder and decoder weights by computing the unbiased Monte Carlo estimate of the gradient of the reconstruction cost T * (x, z) with respect to (φ, θ) as follows: DISPLAYFORM6 We call this process the adversarial reconstruction. Similarly, we train the discriminator of the regularization GAN to identify the positive example z ∼ p(z) from the negative exampleẑ ∼ q(z). This discriminator now defines the regularization cost function, which can provide us with a gradient to update only the encoder weights. We call this process the adversarial regularization. Optimizing the adversarial regularization and reconstruction cost functions encourages p(x|z) = q(x|z) and p(z) = q(z), which in the model distribution capturing the data distribution p(x) = p data (x).We note that in this work, we use the original formulation of GANs BID7 to match the distributions. As a , the gradient that we obtain from the adversarial training, only approximately follows the gradient of the variational bound on the data log-likelihood. However, as shown in BID19, the objective of the GAN can be modified to optimize any f -divergence including the KL divergence. Bits-Back Interpretation of the IAE Objective. 
In Appendix B, we describe an information theoretic interpretation of the ELBO of IAEs (Equation 7) using the Bits-Back coding argument BID10 BID4 BID8. In IAEs, the dimension of the latent vector along with its prior distribution defines the capacity of the latent code, and the dimension of the latent noise vector along with its distribution defines the capacity of the implicit decoder. By adjusting these dimensions and distributions, we can have a full control over the decomposition of information between the latent code and the implicit decoder. In one extreme case, by removing the noise vector, we can have a fully deterministic autoencoder that captures all the information by its latent code. In the other extreme case, we can remove the global latent code and have an unconditional implicit distribution that can capture the whole data distribution by itself. The global vs. local decomposition of information in IAEs is further discussed in Appendix C from an information theoretic perspective. In IAEs, we can choose to only optimize the reconstruction cost or both the reconstruction and the regularization costs. In the following, we discuss four special cases of the IAE and establish connections with the related methods. In this case, we remove the noise vectors from the IAE, which makes both q(z|x) and p(x|z)deterministic. We then only optimize the reconstruction cost E z∼q(z) KL(q(x|z) p(x|z)). As a , similar to the standard autoencoder, the deterministic decoder p(x|z) learns to match to the inverse deterministic encoder q(x|z), and thus the IAE learns to perform exact and deterministic reconstruction of the original image, while the latent code is learned in an unconstrained fashion. In other words, in standard autoencoders, the Euclidean cost explicitly encouragesx to reconstruct x, and in case of uncertainty, performs mode averaging by blurring the reconstructions; however, in IAEs, the adversarial reconstruction implicitly encouragesx to reconstruct x, and in case of uncertainty, captures this uncertainty by the local noise vector (Case 3), which in sharp reconstructions. In the previous case, the latent code was learned in an unconstrained fashion. We now keep the decoder deterministic and add the regularization term which matches the aggregated posterior distribution to a fixed prior distribution. In this case, the IAE reduces to the AAE with the difference that the IAE performs adversarial reconstruction rather than Euclidean reconstruction. This case of the IAE defines a valid generative model where the latent code captures all the information of the data distribution. In order to sample from this model, we first sample from the imposed prior p(z) and then pass this sample through the deterministic decoder. In this case of the IAE, we only optimize KL(q(x, z) r(x, z)), while p(x|z) is a stochastic implicit distribution. Matching the joint distribution q(x, z) to r(x, z) ensures that their marginal distributions would also match; that is, the aggregated reconstruction distribution r(x) matches the data distribution p data (x). This model by itself defines a valid generative model in which both the prior, which in this case is q(z), and the conditional likelihood p(x|z) are learned at the same time. In order to sample from this generative model, we initially sample from q(z) by first sampling a point x ∼ p data (x) and then passing it through the encoder to obtain the latent codeẑ ∼ q(z). 
Then we sample from the implicit decoder distribution conditioned onẑ to obtain the stochastic reconstructionx ∼ r(x). If the decoder is deterministic (Case 1), the reconstructionx would be the same as the original image x. But if the decoder is stochastic, the global latent code only captures the abstract and high-level information of the image, and the stochastic reconstructionx only shares this high-level information with the original x. This case of the IAE is related to the PixelCNN autoencoder BID23, where the decoder is parametrized by an autoregressive neural network which can learn expressive distributions, while the latent code is learned in an unconstrained fashion. In the previous case, we showed that even without the regularization term, r(x) will capture the data distribution. But the main drawback of the previous case is that its prior q(z) is not a parametric distribution that can be easily sampled from. One way to fix this problem is to fit a parametric prior p(z) to q(z) once the training is complete, and then use p(z) to sample from the model. However, a better solution would be to consider a fixed and pre-defined prior p(z), and impose it on q(z) during the training process. Indeed, this is the regularization term that the ELBO suggests in Equation 7. By adding the adversarial regularization cost function to match q(z) to p(z), we ensure that r(x) = p data (x) = p(x). Now sampling from this model only requires first sampling from the pre-defined prior z ∼ p(z), and then sampling from the conditional implicit distribution to obtain x ∼ r(x). In this case, the information of data distribution is captured by both the fixed prior and the learned conditional likelihood distribution. Similar to the previous case, the latent code captures the high-level and abstract information, while the remaining local and low-level information is captured by the implicit decoder. We will empirically show this decomposition of information on different datasets in Section 2.1.1 and Section 2.1.2. This decomposition of information has also been studied in other works such as PixelVAE BID9, variational lossy autoencoders BID4, PixelGAN autoencoders BID15 and variational Seq2Seq autoencoders BID2. However, the main drawback of these methods is that they all use autoregressive decoders which are not parallelizable, and are much more computationally expensive to scale up than the implicit decoders. Another advantage of implicit decoders to autoregressive decoders is that in implicit decoders, the local statistics is captured by the local code representation; but in autoregressive decoders, we do not learn a vector representation for the local statistics. Connections with ALI and BiGAN. In ALI BID6 and BiGAN BID5 models, there are two separate networks that define the joint data distribution q(x, z) and the joint model distribution p(x, z). The parameters of these networks are trained using the gradient that comes from a single GAN that tries to match these two distributions. However, in the IAE, similar to VAEs or AAEs, the encoder and decoder are stacked on top of each other and trained jointly. So the gradient that the encoder receives comes through the decoder and the conditioning vector. 
In other words, in the ALI model, the input to the conditional likelihood is the samples of the prior distribution, whereas in the IAE, the input to the conditional likelihood is the samples of the variational posterior distribution, while the prior distribution is separately imposed on the aggregated posterior distribution by the regularization GAN. This makes the training dynamic of IAEs similar to that of autoencoders, which encourages better reconstructions. Recently, many variants of ALI have been proposed for improving its reconstruction performance. For example, the HALI BID1 ) uses a Markovian generator to achieve better reconstructions, and ALICE BID14 augments the ALI's cost by a joint distribution matching cost function between (x,x) and (x, x), which is different from our reconstruction cost. In this section, we show that the IAE can learn a global vs. local decomposition of information between the latent code and the implicit decoder. We use the Gaussian distribution for both the global and local codes, and show that by adjusting the dimensions of the global and local codes, we can have a full control over the decomposition of information. FIG1 shows the performance of the IAE on the MNIST dataset. By removing the local code and using only a global code of size 20D FIG1, the IAE becomes a deterministic autoencoder. In this case, the global code of the IAE captures all the information of the data distribution and the IAE achieves almost perfect reconstructions. By decreasing the global code size to 10D and using a 100D local code FIG1 ), the global code retains the global information of the digits such as the label information, while the local code captures small variations in the style of the digits. By using a smaller global code of size 5D FIG1 ), the encoder loses more local information and thus the global code captures more abstract information. For example, we can see from FIG1 that the encoder maps visually similar digits such as {3, 5, 8} or {4, 9} to the same global code, while the implicit decoder learns to invert this mapping and generate stochastic reconstructions that share the same high-level information with the original images. Note that if we completely remove the global code, the local code captures all the information, similar to the standard unconditional GAN. Figure 3 shows the performance of the IAE on the SVHN dataset. When using a 150D global code with no local code (Figure 3b), similar to the standard autoencoder, the IAE captures all the information by its global code and can achieve almost perfect reconstructions. However, when using a 75D global code along with a 1000D local code (Figure 3c), the global code of the IAE only captures the middle digit information as the global information, and loses the left and right digit information. At the same time, the implicit decoder learns to invert the encoder distribution by keeping the middle digit and generating synthetic left and right SVHN digits with the same style of the middle digit. Figure 4 shows the performance of the IAE on the CelebA dataset. When using a 150D global code with no local code (Figure 4b), the IAE achieves almost perfect reconstructions. But when using a 50D global code along with a 1000D local code (Figure 4c), the global code of the IAE only retains the global information of the face such as the general shape of the face, while the local code captures the local attributes of the face such as eyeglasses, mustache or smile. 
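To make the encoder/decoder parametrization and the adversarial reconstruction/regularization training step of Section 2 concrete, the following is a minimal PyTorch sketch under stated assumptions: fully connected networks with hypothetical layer sizes stand in for the image architectures used in the experiments, the prior is a standard Gaussian, and the names (ImplicitEncoder, ImplicitDecoder, JointDiscriminator, CodeDiscriminator, train_step) are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=1000):
    # Hypothetical two-layer MLP used throughout this sketch.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class ImplicitEncoder(nn.Module):
    """Implicit posterior q(z|x): z_hat = f_phi(x, eps), eps ~ N(0, I)."""
    def __init__(self, x_dim, global_dim, eps_dim=100):
        super().__init__()
        self.eps_dim = eps_dim
        self.net = mlp(x_dim + eps_dim, global_dim)
    def forward(self, x):
        eps = torch.randn(x.size(0), self.eps_dim, device=x.device)
        return self.net(torch.cat([x, eps], dim=1))

class ImplicitDecoder(nn.Module):
    """Implicit conditional likelihood p(x|z): x_hat = g_theta(z_hat, n); n is the local code."""
    def __init__(self, x_dim, global_dim, local_dim=1000):
        super().__init__()
        self.local_dim = local_dim
        self.net = mlp(global_dim + local_dim, x_dim)
    def forward(self, z):
        n = torch.randn(z.size(0), self.local_dim, device=z.device)
        return self.net(torch.cat([z, n], dim=1))

class JointDiscriminator(nn.Module):
    """Reconstruction GAN: scores (x, z) pairs."""
    def __init__(self, x_dim, global_dim):
        super().__init__()
        self.net = mlp(x_dim + global_dim, 1)
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

class CodeDiscriminator(nn.Module):
    """Regularization GAN: scores latent codes z."""
    def __init__(self, global_dim):
        super().__init__()
        self.net = mlp(global_dim, 1)
    def forward(self, z):
        return self.net(z)

def train_step(x, enc, dec, d_rec, d_reg, opt_ae, opt_d):
    bce = F.binary_cross_entropy_with_logits
    ones = torch.ones(x.size(0), 1, device=x.device)
    zeros = torch.zeros(x.size(0), 1, device=x.device)

    # Discriminator step: positives are (x, z_hat) and z ~ p(z); negatives are (x_hat, z_hat) and z_hat.
    z_hat = enc(x).detach()
    x_hat = dec(z_hat).detach()
    z_prior = torch.randn(x.size(0), z_hat.size(1), device=x.device)  # assumed N(0, I) prior
    d_loss = (bce(d_rec(x, z_hat), ones) + bce(d_rec(x_hat, z_hat), zeros) +
              bce(d_reg(z_prior), ones) + bce(d_reg(z_hat), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder/decoder step: confuse both discriminators (adversarial reconstruction + regularization).
    z_hat = enc(x)
    x_hat = dec(z_hat)
    g_loss = bce(d_rec(x_hat, z_hat), ones) + bce(d_reg(z_hat), ones)
    opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
    return d_loss.item(), g_loss.item()
```

Here opt_d is assumed to hold the parameters of the two discriminators and opt_ae those of the encoder and decoder; detaching z_hat and x_hat in the discriminator step keeps the two updates separate, and the generator step backpropagates through the negative example as in the adversarial reconstruction procedure described above.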
In IAEs, by using a categorical global code along with a Gaussian local code, we can disentangle the discrete and continuous factors of variation, and perform clustering and semi-supervised learning. Clustering. In order to perform clustering with IAEs, we change the architecture of FIG0 by using a softmax function in the last layer of the encoder, as a continuous relaxation of the categorical global code. The dimension of the categorical code is the number of categories that we wish the data to be clustered into. The regularization GAN is trained directly on the continuous output probabilities of the softmax simplex, and imposes the categorical distribution on the aggregated posterior distribution. This adversarial regularization imposes two constraints on the encoder output. The first constraint is that the encoder has to make confident decisions about the cluster assignments. The second constraint is that the encoder must distribute the points evenly across the clusters. As a , the global code only captures the discrete underlying factors of variation such as class labels, while the rest of the structure of the image is separately captured by the Gaussian local code of the implicit decoder. Figure 5 shows the samples of the standard GAN and the IAE trained on the mixture of Gaussian data. Figure 5b shows the samples of the GAN, which takes a 7D categorical and a 10D Gaussian noise vectors as the input. Each sample is colored based on the one-hot noise vector that it was generated from. We can see that the GAN has failed to associate the categorical noise vector to different mixture components, and generate the whole data solely by using its Gaussian noise vector. Ignoring the categorical noise forces the GAN to do a continuous interpolation between different mixture components, which in reducing the quality of samples. Figure 5c shows the samples of the IAE whose implicit decoder architecture is the same as the GAN. The IAE has a 7D categorical global code (inferred by the encoder) and a 10D Gaussian noise vector. In this case, the inference network of the IAE learns to cluster the data in an unsupervised fashion, while its generative path learns to condition on the inferred cluster labels and generate each mixture component using the stochasticity of the Gaussian noise vector. This example highlights the importance of using discrete latent variables for improving generative models. A related work is the InfoGAN BID3, which uses a reconstruction cost in the code space to prevent the GAN from ignoring the categorical noise vector. The relationship of InfoGANs with IAEs is discussed in details in Section 3. The style (local noise vector) is drawn from a Gaussian distribution and held fixed across each row. FIG3 shows the clustering performance of the IAE on the MNIST dataset. The IAE has a 30D categorical global latent code and a 10D Gaussian local code. Each column corresponds to the conditional samples from one of the learned clusters (only 20 are shown). The local code is sampled from the Gaussian distribution and held fixed across each row. We can see that the discrete global latent code of the network has learned discrete factors of variation such as the digit identities, while the writing style information is separately captured by the continuous Gaussian noise vector. This network obtains about 5% error rate in classifying digits in an unsupervised fashion, just by matching each cluster to a digit type. Semi-Supervised Learning. 
The IAE can be used for semi-supervised classification. In order to incorporate the label information, we set the number of clusters to be the same as the number of class labels and additionally train the encoder weights on the labeled mini-batches to minimize the cross-entropy cost. On the MNIST dataset with 100 labels, the IAE achieves the error rate of 1.40%. In comparison, the AAE achieves 1.90%, and the Improved-GAN BID21 achieves 0.93%. On the SVHN dataset with 1000 labels, the IAE achieves the error rate of 9.80%. In comparison, the AAE achieves 17.70%, and the Improved-GAN achieves 8.11%. In this section, we describe the "Flipped Implicit Autoencoder" (FIAE), which is a generative model that is very closely related to IAEs. Let z be the latent code that comes from the prior distribution p(z). The encoder of the FIAE FIG4 ) parametrizes an implicit distribution that uses the noise vector n to define the conditional likelihood distribution p(x|z). The decoder of the FIAE parametrizes an implicit distribution that uses the noise vector to define the variational posterior distribution q(z|x). In addition to the distributions defined in Section 2, we also define the joint latent reconstruction distribution s(x, z), and the aggregated latent reconstruction distribution s(z) as follows: DISPLAYFORM0 The objective of the standard variational inference is minimizing KL(q(x, z) p(x, z)), which is the variational upper-bound on KL(p data (x) p(x)). The objective of FIAEs is the reverse KL divergence KL(p(x, z) q(x, z)), which is the variational upper-bound on KL(p(x) p data (x)). The FIAE optimizes this variational bound by splitting it into a reconstruction term and a regularization term as follow: DISPLAYFORM1 Cond. Entropy DISPLAYFORM2 where the conditional entropy H(z|x) is defined under the joint model distribution p(x, z). Similar to the IAE, the FIAE has a regularization term and a reconstruction term (Equation 14 and Equation 15). The regularization cost uses a GAN to train the encoder (conditional likelihood) such that the model distribution p(x) matches the data distribution p data (x). The reconstruction cost uses a GAN to train both the encoder (conditional likelihood) and the decoder (variational posterior) such that the joint model distribution p(x, z) matches the joint latent reconstruction distribution s(x, z).Connections with ALI and BiGAN. In ALI BID6 and BiGAN BID5 ) models, the input to the recognition network is the samples of the real data p data (x); however, in FIAEs, the recognition network only gets to see the synthetic samples that come from the simulated data p(x), while at the same time, the regularization cost ensures that the simulated data distribution is close the real data distribution. Training the recognition network on the simulated data in FIAEs is in spirit similar to the "sleep" phase of the wake-sleep algorithm BID11, during which the recognition network is trained on the samples that the network "dreams" up. One of the flaws of training the recognition network on the simulated data is that early in the training, the simulated data do not look like the real data, and thus the recognition path learns to invert the generative path in part of the data space that is far from the real data distribution. As the , the reconstruction GAN might not be able to keep up with the moving simulated data distribution and get stuck in a local optimum. However, in our experiments with FIAEs, we did not find this to be a major problem. Connections with InfoGAN. 
InfoGANs BID3, similar to FIAEs, train the variational posterior network on the simulated data; however, as shown in Equation 13, InfoGANs use an explicit reconstruction cost function (e.g., Euclidean cost) on the code space for learning the variational posterior. In order to compare FIAEs and InfoGANs, we train them on a toy dataset with four data-points and use a 2D Gaussian prior (Figure 8 and Figure 9). Each colored cluster corresponds to the posterior distribution of one data-point. In InfoGANs, using the Euclidean cost to reconstruct the code corresponds to learning a factorized Gaussian variational posterior distribution (Figure 8b) 2. This constraint on the variational posterior restricts the family of the conditional likelihoods that the model can learn by enforcing the generative path to learn a conditional likelihood whose true posterior could fit to the factorized Gaussian approximation of the posterior. For example, we can see in Figure 8a that the model has learned a conditional likelihood whose true posterior is axis-aligned, so that it could better match the factorized Gaussian variational posterior (Figure 8b). In contrast, the FIAE can learn an arbitrarily expressive variational posterior distribution (Figure 9b), which enables the generative path to learn a more expressive conditional likelihood and true posterior (Figure 9a).One of the main flaws of optimizing the reverse KL divergence is that the variational posterior will have the mode-covering behavior rather than the mode-picking behavior. For example, we can see from Figure 8b that the Gaussian posteriors of different data-points in InfoGAN have some overlap; but this is less of a problem in the FIAE (Figure 9b), as it can learn a more expressive q(z|x). This mode-averaging behavior of the posterior can be also observed in the wake-sleep algorithm, in which during the sleep phase, the recognition network is trained using the reverse KL divergence objective. The FIAE objective is not only an upper-bound on KL(p(x) p data (x)), but is also an upper-bound on KL(p(z) q(z)) and KL(p(z|x) q(z|x)). As a , the FIAE matches the variational posterior q(z|x) to the true posterior p(z|x), and also matches the aggregated posterior q(z) to the prior p(z). For example, we can see in Figure 9b that q(z) is very close to the Gaussian prior. However, the InfoGAN objective is theoretically not an upper-bound on KL(p(x) p data (x)), KL(p(z) q(z)) or KL(p(z|x) q(z|x)). As a , in InfoGANs, the variational posterior q(z|x) need not be close to the true posterior p(z|x), or the aggregated posterior q(z) does not have to match the prior p(z). Reconstruction. In this section, we show that the variational posterior distribution of the FIAE can invert its conditional likelihood function by showing that the network can perform reconstructions of the images. We make both the conditional likelihood and the variational posterior deterministic by removing both noise vectors n and. FIG0 shows the performance of the FIAE with a code size of 15 on the test images of the MNIST dataset. The reconstructions are obtained by first passing the image through the recognition network to infer its latent code, and then using the inferred latent code at the input of the conditional likelihood to generate the reconstructed image. Clustering. Similar to IAEs, we can use FIAEs for clustering. 
We perform an experiment on the MNIST dataset by choosing a discrete categorical latent code z of size 10, which captures the digit identity; and a continuous Gaussian noise vector n of size 10, which captures the style of the digit. The variational posterior distribution q(z|x) is also parametrized by an implicit distribution with a Gaussian noise vector of size 20, and performs inference only over the digit identity z. Once the network is trained, we can use the variational posterior to cluster the test images of the MNIST dataset. This network achieves the error rate of about 2% in classifying digits in an unsupervised fashion by matching each categorical code to a digit type. We observed that when there is uncertainty in the digit identity, different draws of the noise vector in different one-hot vectors at the output of the recognition network, showing that the implicit decoder can efficiently capture the uncertainty. In this paper, we proposed the implicit autoencoder, which is a generative autoencoder that uses implicit distributions to learn expressive variational posterior and conditional likelihood distributions. We showed that in IAEs, the information of the data distribution is decomposed between the prior and the conditional likelihood. When using a low dimensional Gaussian distribution for the global code, we showed that the IAE can disentangle high-level and abstract information from the low-level and local statistics. We also showed that by using a categorical latent code, we can learn discrete factors of variation and perform clustering and semi-supervised learning. In this section, we describe an information theoretic interpretation of the ELBO of IAEs (Equation 7) using the Bits-Back coding argument BID10 BID4 BID8. Maximizing the variational lower bound is equivalent to minimizing the expected description length of a source code for the data distribution p data (x) when the code is designed under the model distribution p(x). In order to transmit x, the sender uses a two-part code. It first transmits z, which ideally would only require H(z) bits; however, since the code is designed under p(z), the sender has to pay the penalty of KL(q(z) p(z)) extra bits to compensate for the mismatch between q(z) and p(z). After decoding z, the receiver now has to resolve the uncertainty of q(x|z) in order to reconstruct x, which ideally requires the sender to transmit the second code of the length H(x|z) bits. However, since the code is designed under p(x|z), the sender has to pay the penalty of E z∼q(z) KL(q(x|z) p(x|z)) extra bits on average to compensate for the fact that the conditional decoder p(x|z) has not perfectly captured the inverse encoder distribution q(x|z); i.e., the autoencoder has failed to achieve perfect stochastic reconstruction. But for a given x, the sender could use the stochasticity of q(z|x) to encode other information. Averaged over the data distribution, this would get the sender H(z|x) "bits back" that needs to be subtracted in order to find the true cost for transmitting x: DISPLAYFORM0 From Equation 25, we can see that the IAE only minimizes the extra number of bits required for transmitting x, while the VAE minimizes the total number of bits required for the transmission. Continuous Variables. The Bits-Back argument is also applicable to continuous random variables. Suppose x and z are real-valued random variables. 
Let h(x) and h(z) be the differential entropies of x and z; and H(x) and H(z) be the discrete entropies of the quantized versions of x and z, with the quantization interval of ∆x and ∆z. We have DISPLAYFORM1 The sender first transmits the real-valued random variable z, which requires transmission of H(z) = h(z) − log ∆z bits, as well as KL(q(z) p(z)) extra bits. As ∆z → 0, we will have H(z) → ∞, which, as expected, implies that the sender would need infinite number of bits to source code and send the real-valued random variable z. However, as we shall see, we are going to get most of these bits back from the receiver at the end. After the first message, the sender then sends the second message, which requires transmission of h(x|z) − log ∆x bits, as well as E z∼q(z) KL(q(x|z) p(x|z)) extra bits. Once the receiver decodes z, and form that decodes x, it can decode a secondary message of the average length H(z|x) = h(z|x) − log ∆z, which needs to be subtracted in order to find the true cost for transmitting x: DISPLAYFORM2 From Equation 28, we can interpret the IAE cost as the extra number of bits required for the transmission of x. In IAEs, the global code (prior) captures the global information of data, while the remaining local information is captured by the local noise vector (conditional likelihood). In this section, we describe the global vs. local decomposition of information from an information theoretic perspective. In order to transmit x, the sender first transmits z and then transmits the residual bits required for reconstructing x, using a source code that is designed based on p(x, z). If p(z) and p(x|z) are powerful enough, in theory, they can capture any q(x, z), and thus regardless of the decomposition of information, the sender would only need to send H data (x) bits. In this case, the ELBO does not prefer one decomposition of information to another. But if the capacities of p(z) and p(x|z) are limited, the sender will have to send extra bits due to the distribution mismatch, ing in the regularization and reconstruction errors. But now different decompositions of information will in different numbers of extra bits. So the sender has to decompose the information in a way that is compatible with the source codes that are designed based on p(z) and p(x|z). The prior p(z) that we use in this work is a low-dimensional Gaussian or categorical distribution. So the regularization cost encourages the sender to encode low-dimensional or simple concepts in z that is consistent with p(z); otherwise, the sender would need to pay a large cost for KL(q(z) p(z)). The choice of the information encoded in z would also affect the extra number of bits of E z∼q(z) KL(q(x|z) p(x|z)), which is the reconstruction cost. This is because the conditional decoder p(x|z) with its limited capacity is supposed to capture the inverse encoder distribution q(x|z). So the sender must encode the kind of information in z that after being observed, can maximally remove the stochasticity of q(x|z) so as to lower the burden on p(x|z) for matching to q(x|z). So the reconstruction cost encourages learning the kind of concepts that can remove as much uncertainty as possible from the data distribution. By balancing the regularization and reconstruction costs, the latent code learns global concepts which are low-dimensional or simple concepts that can maximally remove uncertainty from data. Examples of global concepts are digit identities in the MNIST dataset, objects in natural images or topics in documents. 
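For reference, the description-length accounting sketched in the two Bits-Back paragraphs above (whose displayed equations were also lost in extraction) reduces to the following identity, reconstructed from the surrounding text using H(z) + H(x|z) − H(z|x) = H_data(x):

```latex
\underbrace{H(z) + \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)}_{\text{first message}}
 \;+\; \underbrace{H(x|z) + \mathbb{E}_{z\sim q(z)}\!\left[\mathrm{KL}\!\left(q(x|z)\,\|\,p(x|z)\right)\right]}_{\text{second message}}
 \;-\; \underbrace{H(z|x)}_{\text{bits back}}
 \;=\; H_{\text{data}}(x) \;+\; \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right) \;+\; \mathbb{E}_{z\sim q(z)}\!\left[\mathrm{KL}\!\left(q(x|z)\,\|\,p(x|z)\right)\right]
```

In the continuous case the −log ∆z terms cancel between the first message and the bits back, leaving h(x) − log ∆x plus the same two KL terms; in both cases the IAE cost is exactly the extra bits beyond the (fixed) entropy of the data.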
There are two methods to implement how the reconstruction GAN conditions on the global code. Location-Dependent Conditioning. Suppose the size of the first convolutional layer of the discriminator is (batch, width, height, channels). We use a one layer neural network with 1000 ReLU hidden units to transform the global code of size (batch, global_code_size) to a spatial tensor of size (batch, width, height, 1). We then broadcast this tensor across the channel dimension to get a tensor of size (batch, width, height, channels), and then add it to the first layer of the discriminator as an adaptive bias. In this method, the latent vector has spatial and location-dependent information within the feature map. This is the method that we used in deterministic and stochastic reconstruction experiments. Location-Invariant Conditioning. Suppose the size of the first convolutional layer of the discriminator is (batch, width, height, channels). We use a linear mapping to transform the global code of size (batch, global_code_size) to a tensor of size (batch, channels). We then broadcast this tensor across the width and height dimensions, and then add it to the first layer of the discriminator as an adaptive bias. In this method, the global code is encouraged to learn the global information that is location-invariant such as the class label information. We used this method in all the clustering and semi-supervised learning experiments. The regularization discriminator in all the experiments is a two-layer neural network, where each layer has 2000 hidden units with the ReLU activation function. The architecture of the encoder, the decoder and the reconstruction discriminator for each dataset is as follows.
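A small PyTorch sketch of the two conditioning schemes described above may be useful. It uses the NCHW tensor layout rather than the (batch, width, height, channels) layout quoted in the text, and the module and argument names are hypothetical:

```python
import torch
import torch.nn as nn

class ConditionedDiscriminatorInput(nn.Module):
    """Adds the global code to the discriminator's first conv feature map as an adaptive bias,
    using either the location-dependent or the location-invariant scheme."""
    def __init__(self, code_dim, feat_h, feat_w, channels, mode="location_dependent"):
        super().__init__()
        self.mode = mode
        self.feat_h, self.feat_w, self.channels = feat_h, feat_w, channels
        if mode == "location_dependent":
            # code -> spatial map of shape (batch, 1, H, W), broadcast across channels
            self.proj = nn.Sequential(nn.Linear(code_dim, 1000), nn.ReLU(),
                                      nn.Linear(1000, feat_h * feat_w))
        else:  # "location_invariant"
            # code -> per-channel bias of shape (batch, C, 1, 1), broadcast across positions
            self.proj = nn.Linear(code_dim, channels)

    def forward(self, feat, code):
        # feat: (batch, C, H, W), the output of the discriminator's first convolutional layer
        if self.mode == "location_dependent":
            bias = self.proj(code).view(-1, 1, self.feat_h, self.feat_w)
        else:
            bias = self.proj(code).view(-1, self.channels, 1, 1)
        return feat + bias  # broadcasting fills in the remaining dimensions
```

In the location-dependent variant the projected code carries spatial structure and is broadcast across channels; in the location-invariant variant it carries only per-channel information and is broadcast across all spatial positions, which is why it favors position-independent global factors such as class labels.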
We propose a generative autoencoder that can learn expressive posterior and conditional likelihood distributions using implicit distributions, and train the model using a new formulation of the ELBO.
524
scitldr
Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge. Children are remarkable learners, and thus their inductive biases should interest machine learning researchers. To help learn the meaning of new words efficiently, children use the "mutual exclusivity" (ME) bias -the assumption that once an object has one name, it does not need another (Figure 1). In this paper, we examine whether or not standard neural networks demonstrate the mutual exclusivity bias, either as a built-in assumption or as a bias that develops through training. Moreover, we examine common benchmarks in machine translation and object recognition to determine whether or not a maximally efficient learner should use mutual exclusivity. The mutual exclusivity task used in cognitive development research . Children tend to associate the novel word ("dax") with the novel object (right). When children endeavour to learn a new word, they rely on inductive biases to narrow the space of possible meanings. Children learn an average of about 10 new words per day from the age of one until the end of high school , a feat that requires managing a tractable set of candidate meanings. A typical word learning scenario has many sources of ambiguity and uncertainty, including ambiguity in the mapping between words and referents. Children hear multiple words and see multiple objects within a single scene, often without clear supervisory signals to indicate which word goes with which object . The mutual exclusivity assumption helps to resolve ambiguity in how words maps to their referents. examined scenarios like Figure 1 that required children to determine the referent of a novel word. For instance, children who know the meaning of "cup" are presented with two objects, one which is familiar (a cup) and another which is novel (an unusual object). Given these two objects, children are asked to "Show me a dax," where "dax" is a novel nonsense word. Markman and Wachtel found that children tend to pick the novel object rather than the familiar object. Although it is possible that the word "dax" could be another word for referring to cups, children predict that the novel word refers to the novel object -demonstrating a "mutual exclusivity" bias that familiar objects do not need another name. This is only a preference; with enough evidence, children must eventually override this bias to learn hierarchical categories: a Dalmatian can be called a "Dalmatian," a "dog", or a "mammal" . As an often useful but sometimes misleading cue, the ME bias guides children when learning the words of their native language. It is instructive to compare word learning in children and machines, since word learning is also a widely studied problem in machine learning and artificial intelligence. There has been substantial (a) (b) Figure 2: Evaluating mutual exclusivity in a feedforward (a) and seq2seq (b) neural network. 
(a) After training on a set of known objects, a novel label ("dax") is presented as a one-hot input vector. The network maps this vector to a one-hot output vector representing the predicted referent, through an intermediate embedding layer and an optional hidden layer (not shown). A representative output vector produced by a trained network is shown, placing almost all of the probability mass on known outputs. (b) A similar setup for mapping sequences of labels to their referents. During the test phase a novel label "dax" is presented and the ME Score at that output position is computed. recent progress in object recognition, much of which is attributed to the success of deep neural networks and the availability of very large datasets . But when only one or a few examples of a novel word are available, deep learning algorithms lack human-like sample efficiency and flexibility . Insights from cognitive science and cognitive development can help bridge this gap, and ME has been suggested as a psychologically-informed assumption relevant to machine learning. In this paper, we examine standard neural networks to understand if they have an ME bias. Moreover, we analyze whether or not ME is a good assumption in lifelong variants of common translation and object recognition tasks. Children utilize a variety of inductive biases like mutual exclusivity when learning the meaning of words . Previous work comparing children and neural networks has focused on the shape bias -an assumption that objects with the same name tend to have the same shape, as opposed to color or texture . Children acquire a shape bias over the course of language development , and neural networks can do so too, as shown in synthetic learning scenarios and large-scale object recognition tasks (see also and for alternative findings). This bias is related to how quickly children learn the meaning of new words , and recent findings also show that guiding neural networks towards the shape bias improves their performance . In this work, we take initial steps towards a similar investigation of the ME bias in neural networks. Compared to the shape bias, ME has broader implications for machine learning systems; as we show in our analyses, the bias is relevant beyond object recognition. Closer to the present research, analyzed an ME-like effect in neural machine translation systems at the sentence level, rather than the word level considered in developmental studies and our analyses here. showed that neural machine translation systems often learn many-to-one sentence mappings that in meaning loss, such that two different sentences (meanings) in the source language are mapped to the same sentence (meaning) in the target language. Using a trained network, they show how a probabilistic pragmatics model can be used as a post-processor to preserve meaning and encourage one-to-one mappings. These sentence-level biases do not necessarily indicate how models behave at the word level, and we are interested in the role of ME during learning rather than as a postprocessing step. Nevertheless, Cohn-Gordon and Goodman's are important and encouraging, raising the possibility that ME could aid in training deep learning systems. In this section, we investigate whether or not standard neural network models have a mutual exclusivity bias. Paralleling the developmental paradigm , ME is analyzed by presenting a novel stimulus ("Show me the dax") and asking models to predict which outputs (meanings) are most likely. 
The strength of the bias is operationalized as the aggregate probability mass placed on the novel rather than the familiar meanings. Our analyses relate to classic experiments by Marcus on whether neural networks can generalize outside their training space (; . Marcus showed that a feedforward autoencoder trained on arbitrary binary patterns fails to generalize to an output unit that was never activated during training. Our aim is to study whether standard architectures can recognize and learn a more abstract pattern -a perfect one-to-one mapping between input symbols and output symbols. Specifically, we are interested in model predictions regarding unseen meanings given a novel input. We also test for ME using modern neural networks in two settings using synthetic data: classification (feedforward classifiers) and translation (sequence-to-sequence models; as reported in Appendix A). Synthetic data. We consider a simple one-to-one mapping task inspired by. Translating this into a synthetic experiment, input units denote words and output units denote objects. Thus, the dataset consists of 100 pairs of input and output patterns, each of which is a one-hot vector of length 100. Each input vector represents a label (e.g., 'hat', 'cup', 'dax') and each output vector represents a possible referent object (meaning). Figure 2a shows the input and output patterns for the'dax' case, and similar patterns are defined for the other 99 input and output symbols. A one-to-one correspondence between each input symbol and each output symbol is generated through a random permutation, and there is no structure to the data beyond the arbitrary one-to-one relationship. Models are trained on 90 name-referent pairs and evaluated on the remaining 10 test pairs. No model can be expected to know the correct meaning of each test name -there is no way to know from the training data -but several salient patterns are discoverable. First, there is a precise one-toone relationship exemplified by the 90 training items; the 10 test items can be reasonably assumed to follow the same one-to-one pattern, especially if the network architecture has exactly 10 unused input symbols and 10 unused output symbols. Second, the perfect one-to-one relationship ensures a perfect ME bias in the structure of the data. Although the learner does not know precisely which new output symbol a new input symbol refers to, it should predict that the novel input symbol will correspond to one of the novel output symbols. An ideal learner should discover that an output unit with a known label does not need another -in other words, it should utilize ME to make predictions. Mutual exclusivity. We ask the neural network to "Show me the dax" by activating the "dax" input unit and asking it to select amongst possible referents (similar to Figure 1). The network produces a probability distribution over candidate referents (see Figure 2a), and can make relative (two object) comparisons by isolating the two relevant scores. To quantify the overall propensity toward ME, we define an "ME score" that measures the aggregate probability assigned to all of the novel output symbols as opposed to familiar outputs, corresponding to better performance on the classic forced choice ME task. Let us denote the training symbol by Y, drawn from the data distribution (X, Y) ∼ D and the held out symbols Y drawn from (X, Y) ∼ D. 
The mutual exclusivity score is the sum probability assigned to unseen output symbols Y when shown a novel input symbol x ∈ X ME Score = 1 averaged over each of the test items. An ideal learner that has discovered the one-to-one relationship in the synthetic data should have a perfect ME score of 1.0. In Figure 2a, the probability assigned to the novel output symbol is 0.01 and thus the corresponding ME Score is 0.01. The challenge is to get a high ME score for novel (test) items while also correctly classifying known (training) items. A wide range of standard neural networks are evaluated on the mutual exclusivity test. We use an embedding layer to map the input symbols to vectors of size 20 or 100, followed optionally by a hidden layer, and then by a 100-way softmax output layer. The networks are trained with different activation functions (ReLUs , TanH, Sigmoid), optimizers (Adam , Momentum, SGD), learning rates (0.1, 0.01, 0.001) and regularizers (weight decay, batch-normalisation , dropout (b), and entropy regularization (see Appendix B.1)). The models are trained to maximize log-likelihood. All together, we evaluated over 400 different models on the synthetic ME task. Figure 4: Ideal and untrained ME scores compared with the ME scores of a few learned models. Results. Several representative training runs with different architectures are shown in Figure 3. An ideal learner that has discovered the one-to-one pattern should have a mutual exclusivity of 1; for a novel input, the network would assign all the probability mass to the unseen output symbols. In contrast, none of the configurations and architectures tested behave in this way. As training progresses, the mutual exclusivity score (solid blue line; Figure 3) tends to fall along with the training loss (red line). In fact, almost all of the networks acquire a strong anti-mutual exclusivity bias, transitioning from a initial neutral bias to placing most or all of the probability mass on familiar outputs (seen in Figure 4). An exception to this pattern is the entropy regularized model, which maintains a score equivalent to an untrained network. In general, trained models strongly predict that a novel input symbol will correspond to a known rather than unknown output symbol, in contradiction to ME and the organizing structure of the synthetic data. Further informal experiments suggest our cannot be reduced to simply not enough data: these architectures do not learn this one-to-one regularity regardless of how many input/output symbols are provided in the training set. Even with thousands of training examples demonstrating a one-toone pattern, the networks do not learn this abstract principle and fail to capture this defining pattern in the data. Other tweaks were tried in an attempt to induce ME, including eliminating the bias units or normalizing the weights, yet we were unable to find an architecture that reliably demonstrated the ME effect. The show that standard neural networks fail to reason by mutual exclusivity when trained in a variety of typical settings. The models fail to capture the perfect one-to-one mapping (ME bias) seen in the synthetic data, predicting that new symbols map to familiar outputs in a many-to-many fashion. Although our focus is on neural networks, this characteristic is not unique to this model class. We posit it more generally affects flexible models trained to maximize log-likelihood. 
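The ME score formula above was garbled in extraction ("ME Score = 1"); as the text describes, it is the probability mass a novel input places on the unseen output symbols, averaged over the test items. A minimal PyTorch sketch of the synthetic one-to-one task and this measurement follows; the embedding size, learning rate, and number of steps are illustrative assumptions rather than any exact configuration from the sweep of 400 models:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# 100 symbols, a random one-to-one label-to-referent mapping,
# 90 pairs for training and 10 held out for the ME test.
num_symbols = 100
perm = torch.randperm(num_symbols)               # label i maps to referent perm[i]
train_x, test_x = torch.arange(90), torch.arange(90, 100)
train_y = perm[train_x]

model = nn.Sequential(nn.Embedding(num_symbols, 20), nn.Linear(20, num_symbols))
opt = torch.optim.Adam(model.parameters(), lr=0.001)

def me_score(model, novel_inputs, seen_outputs):
    """Aggregate probability mass placed on unseen referents for novel labels."""
    with torch.no_grad():
        probs = F.softmax(model(novel_inputs), dim=1)
    unseen_mask = torch.ones(num_symbols, dtype=torch.bool)
    unseen_mask[seen_outputs] = False
    return probs[:, unseen_mask].sum(dim=1).mean().item()

for step in range(1000):
    loss = F.cross_entropy(model(train_x), train_y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 100 == 0:
        print(step, loss.item(), me_score(model, test_x, train_y))
```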
In a trained network, the optimal activation value for an unused output node is zero: for any given training example, increasing value of an unused output simply reduces the available probability mass for the Name Languages Sentence Pairs Vocabulary Size IWSLT'14 Eng.-Vietnamese ∼133K 17K(en), 7K(vi) WMT'14 Eng.-German ∼4.5M 50K(en), 50K(de) WMT'15 Eng.-Czech ∼15.8M 50K(en), 50K(cs) target output. Using other loss functions could in different outcomes, but we also did not find that weight decay and entropy regularization of reasonable values could fundamentally alter the use of novel outputs. In the next section, we investigate if the lack of ME could hurt performance on common learning tasks such as machine translation and image classification. Mutual exclusivity has implications for a variety of common learning settings. Mutual exclusivity arises naturally in lifelong learning settings, which more realistically reflect the "open world" characteristics of human cognitive development. Unlike epoch-based learning, a lifelong learning agent does not assume a fixed set of concepts and categories. Instead, new concepts can be introduced at any point during learning. An intelligent learner should be sensitive to this possibility, and ME is one means of intelligently reasoning about the meaning of novel stimuli. Children and adults learn in an open world with some probability of encountering a new class at any point, resembling the first epoch of training a neural net only. Moreover, the distribution of categories is neither uniformly distributed nor randomly shuffled . To simulate these characteristics, we construct lifelong learning scenarios using standard benchmarks as described below. In this section, we investigate if mutual exclusivity could be a helpful bias when training machine translation models in a lifelong learning paradigm. From the previous experiments, we know that the type of sequence-to-sequence (seq2seq) models used for translation acquire an anti-ME bias over the course of training (see Appendix A). Would a translation system benefit from assuming that a single word in the source sentence maps to a single word in the target sentence, and vice-versa? This assumption is not always correct since synonymy and polysemy are prevalent in natural languages, and thus the answer to whether or not ME holds is not absolute. Instead, we seek to measure the degree to which this bias holds in lifelong learning on real datasets, and compare this bias to the inductive biases of models trained on these datasets. The data for translation provides a somewhat natural distribution over the frequency at which different words are observed (there are words that appear much more frequently than the others). This allows us to use a single pass through the dataset as a proxy for lifelong translation learning. Datasets. We analyze three common datasets for machine translation, each consisting of pairs of sentences in two languages (Table 1). The vocabularies are truncated based on word frequency in accordance with the standard practices for training neural machine translation models . Mutual exclusivity. There are several ways to operationalize mutual exclusivity in a machine translation setting. Mutual exclusivity could be interpreted as whether a new word in the source sentence ("Xylophone" in English) is likely to be translated to a new word in the target sentence ("Xylophon" in German), as opposed to a familiar word. 
Since the word alignments are difficult to determine and not provided with the datasets, we instead measure a reasonable proxy: if a new word is encountered in the source sequence, is a new word also encountered in the target sentence? For a source sentence S and an arbitrary novel word N S, and a target sentence T and a novel word N T, we measure a dataset's ME Score as the conditional probability P (N T ∈ T |N S ∈ S). A hypothetical translation model could compute whether or not N S ∈ S by checking if the present word is absent from the vocabulary-so-far during the training process. Thus this conditional probability is an easily-computable cue for determining whether or not a model should expect a novel output word. For the three datasets, we consider both forward and backward translation to get six scenarios for analysis. The probability P (N T ∈ T |N S ∈ S) is estimated for a sample of 100 randomly shuffled sequences of the dataset sentence pairs. See Appendix B.3 for details on calculating the base rate P (N T ∈ T). Table 2: Number of sentences after which the ME Score P (NT ∈ T |NS ∈ S) falls below threshold. Results and Discussion. The measures of conditional probability in the six scenarios are shown in Table 2. There is a consistent pattern through the trajectory of early learning: the conditional probability P (N T ∈ T |N S ∈ S) is high initially for thousands of initial sentence presentations, but then wanes as the network encounters more samples from the dataset. For a large part of the initial training, a seq2seq model would benefit from predicting that previously unseen words in the source language are more likely to map to unseen words in the target language. Moreover, this conditional probability is always higher than the base rate of encountering a new word, indicating that conditioning on the novelty of the input provides substantial additional signal for predicting novelty in the output. Nevertheless, even the base rate suggests that a model should expect novel words with some regularity in our settings. This is in stark contrast to the synthetic showing that seq2seq models quickly acquire an anti-ME assumption (see Appendix A), and their expectation of mapping novel inputs to novel outputs decays rapidly as training progresses (Appendix Figure 7). Similar to translation, we examine if object classifiers would benefit from reasoning by mutual exclusivity during training processes that mirror lifelong learning. To study this, when selecting an image for training, we sample the class from a power law distribution (see Appendix B.2) such that the model is more likely to see certain classes . Ideally, we would model the probability that an object belongs to a novel class based on its similarity to previous samples seen by the model (e.g., outlier detection). Identifying that an image belongs to a novel class is non-trivial, and instead we calculate the base rate for classifying an image as "new" while a learner progresses through the dataset. The set of classes not seen by the model are referred to as "new" here. This measure can be seen as a lower bound on the usefulness of ME through the standard training process, since this calculation assumes a blind learner that is unaware of any novelty signal present in the raw image. Datasets. This section examines the Omniglot dataset and the ImageNet dataset . The Omniglot dataset has been widely used to study few-shot learning, consisting of 1623 classes of handwritten characters with 20 images per class. 
The ImageNet dataset consists of about 1.2 million images from 1000 different classes. Mutual exclusivity. To measure ME throughout the training process, we examine if an image encountered for the first time while training belongs to a class that has not been seen before. This is operationalized as the probability of encountering an image from a new class N as a function of the number of images seen so far t, P (N |t) (see Appendix B.5). This analysis is agnostic to the content of the image and whether or not it is a repeated item; it only matters whether or not the class is novel. As before, the analysis is performed using ten random runs through the dataset. We contrast the statistics of the datasets by comparing them to the ME Score (Equation 1) of neural network classifiers trained on the datasets. See Appendix B.4 for details about the models and their training procedures. The probability mass assigned to the unseen classes by the network is recorded after each optimizer step, as computed using Equation 1. Results and Discussion. The are summarized in Figure 6 and Table 3. The probability that a new image belongs to an unseen class P (N |t) is higher than the ME score of the classifier through most of the learning phase. Comparing the statistics of the datasets to the inductive biases in the classifiers, the ME score for the classifiers is substantially lower than the baseline ME measure in the dataset, P (N |t) (Table 3). For instance, the ImageNet classifier drops its ME score below 0.05 after about 8,960 images, while the approximate ME measure for the dataset shows that new classes are encountered at above this rate until at least 111,000 images. We found that higher learning rates can force the probabilities assigned to unseen classes to zero on ImageNet after just a single gradient step. These suggest that neural classifiers, with their bias favoring frequent rather than infrequent outputs for novel stimuli, are not well-suited to lifelong learning challenges where such inferences are critical. Although we examined classifiers trained in an online fashion, we would expect similar when we train them using replay or epoch-based training setups, where repeated presentation of past examples would only strengthen the anti-ME bias. These classifiers are hurt by their lack of ME and their failure to consider that new stimuli likely map to new classes. Ideally, a learning algorithm should be capable of leveraging the image content, combined with its own learning maturity, to decide how strongly it should reason by ME. Instead, standard models and training procedures do not provide these capabilities and do not utilize this important inductive bias observed in cognitive development. Children use the mutual exclusivity (ME) bias to learn the meaning of new words efficiently, yet standard neural networks learn very differently. Our show that standard deep learning algorithms lack the ability to reason with ME, including feedforward networks and recurrent sequenceto-sequence models trained to maximize log-likelihood with common regularizers. Beyond simply lacking this bias, these networks learn an anti-ME bias, preferring to map novel inputs to familiar and frequent (rather than unfamiliar) output classes. Our also show that these characteristics The plots show the probability that a new input image belongs to an unseen class P (N |t), as a function of the number of images t seen so far during training (blue), with its standard deviation. 
Our results also show that these characteristics are poorly matched to more realistic lifelong learning scenarios where novel classes can appear at any point, as demonstrated in the translation and classification experiments presented here. Neural nets may be currently stymied by their lack of ME bias, ignoring a powerful assumption about the structure of learning tasks. Mutual exclusivity is relevant elsewhere in machine learning. Recent work has contrasted the ability of humans and neural networks to learn compositional instructions from just one or a few examples, finding that neural networks lack the ability to generalize systematically (;). The authors suggest that people rely on ME in these learning situations, and thus few-shot learning approaches could be improved by utilizing this bias as well. In our analyses, we show that neural networks tend to learn the opposite bias, preferring to map novel inputs to familiar outputs. More generally, ME can be generalized from applying to "novel versus familiar" stimuli to instead handling "rare versus frequent" stimuli (e.g., in translation, rare source words may map to rare target words). The utility of reasoning by ME could be extended to early stages of epoch-based learning too. For example, during epoch-based learning, neural networks take longer to acquire rare stimuli and patterns of exceptions, often mishandling these items for many epochs by mapping them to familiar responses. Another direction for future work is studying how the ME bias should interact with hierarchical categorization tasks. We posit that the ME assumption will be increasingly important as learners tackle more continual, lifelong, and large-scale learning challenges. Mutual exclusivity is an open challenge for deep neural networks, but there are promising avenues for progress. The ME bias will not be helpful for every problem, but it is equally clear that the status quo is sub-optimal: models should not have a strong anti-ME bias regardless of the task and dataset demands. Ideally, a model would decide autonomously how strongly to use ME (or not) based on the demands of the task. For instance, in our synthetic example, an ideal learner would discover the one-to-one correspondence and use this perfect ME bias as a meta-strategy. If the dataset has more many-to-one correspondences, it would adopt another meta-strategy. This meta-strategy could even change depending on the stage of learning, yet such an approach is not currently available for training models. Previous cognitive models of word learning have found ways to incorporate the ME bias (; ; ;), although in ways that do not generalize to training deep neural networks. While successful in some domains, these models are highly simplified or require built-in mechanisms for implementing ME, making them so far impractical for use in realistic settings. As outlined above, it would be ideal to acquire an ME bias via meta learning or learning to learn, with the advantage of calibrating the bias to the dataset itself rather than assuming its strength a priori. For example, the meta learning model of seems capable of learning an ME bias, although it was not specifically probed in this way. Recent work by demonstrated that neural nets can learn to reason by ME if trained explicitly to do so, showing these abilities are within the repertoire of modern tools.
However acquiring ME is just one step toward the goal proposed here: using ME to facilitate efficient lifelong learning or large-scale classification and translation. In , standard deep neural networks do not naturally reason by mutual exclusivity, but designing them to do so could lead to faster and more flexible learners. There is a compelling case for building models that learn through mutual exclusivity. Synthetic data. We evaluate if a different class of models, sequence-to-sequence (seq2seq) neural networks , take better advantage of ME structure in the data. This popular class of models is used in machine translation and other natural language processing tasks, and thus the nature of their inductive biases is critical for many applications. As in the previous section, we create a synthetic dataset that has a perfect ME bias: each symbol in the source maps to exactly one symbol in the target. We also have a perfect alignment in the dataset, so that each token in the source corresponds to the token at the same position in the target. The task is illustrated in Figure 2b. We consider translation from sequences of words to sequences of referent symbols. The dataset consists of 20 label-referent pairings. Ten pairs are used to train the model and familiarize the learning algorithm with the task. The remaining ten pairs were used in the test phase. To train the model, we generate 1000 sequences of words whose lengths range from 1 to 5. To generate sequences for the test phase, words in the training sequences are replaced with new words with a probability of 0.2. Thus, 1000 sequences are used to test for ME. The ME score is evaluated using Equation 1 at positions in the output sequence where the corresponding input is new. As shown in Figure 2b, the ME score is evaluated in the second position which corresponds to the novel word "dax," using the probability mass assigned to the unseen output symbols. We probe a seq2seq model that has a recurrent encoder using Gated Recurrent Units (GRUs) and a GRU decoder. Both the encoder and decoder had embedding and hidden sizes of 256. Dropout (a) with a probability of 0.5 was used during training. The networks are trained using an Adam optimizer with a learning rate of 0.001 and a log-likelihood loss. Two versions of the network were trained with and without attention. Results. Results are shown in Figure 7 and confirm our findings from the feedforward network. The ME score falls to zero within a few training iterations, and the networks fail to assign any substantial probability to the unseen classes. The networks achieve a perfect score on the training set, but cannot extrapolate the one-to-one mappings to unseen symbols. Again, not only do seq2seq models fail to show a mutual exclusivity bias, they acquire a anti-mutual exclusivity bias that goes against the structure of the dataset. B ADDITIONAL DETAILS The entropy regularizer is operationalized by subtracting the entropy of prediction from the loss. Thus, the model is penalized for being overly confident about its prediction. We found that the entropy regularizer produces an ME score that stays constant across training, at the cost of the model being less confident about predictions made for seen classes. To simulate a naturalistic lifelong learning scenario, we try to sample a few classes more frequently than the others. To do this, we sample the classes for training using a power law distribution. 
Weights are assigned to the classes using the following formula, where c is the index of the class. After the class for training is chosen, we uniformly sample from the images of that class for training. The base rate for machine translation is defined as the probability of observing a new word in the target at a particular time t in training. We go through the unseen sentences in the corpus from the target language at time t to compute the probability of sampling a sentence with at least one new word. Thus, the base rate at time t in training is defined as:

P(new in target at t) = (# of unseen sentences in target with new words) / (# of unseen sentences)

For Omniglot, a convolutional neural network was trained on 1623-way classification. The architecture consists of 3 convolutional layers (each consisting of 5 × 5 kernels and 64 feature maps), a fully connected layer (576 × 128) and a softmax classification layer. It was trained with a batch size of 16 using an Adam optimizer and a learning rate of 0.001. For ImageNet, a ResNet18 model was trained on 1000-way classification with a batch size of 256, using an Adam optimizer and a learning rate of 0.001. For classification, we calculate the score P(N|t) for the model by adding the probabilities the model assigns to all the "new" (unseen) classes when iterating through the remaining corpus (similar to Equation 1). For the dataset, we compute P(N|t) by sampling all unseen images in the corpus and computing the proportion from "new" classes given their ground truth labels.
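Since the power-law weighting formula and the exact scoring loops are not reproduced in this excerpt, the sketch below only illustrates the quantities described above: power-law class sampling (the exponent is a placeholder), the dataset-side P(N|t), and an Equation-1-style classifier ME score. All function names and defaults are assumptions.

```python
import numpy as np

def power_law_class_weights(num_classes: int, exponent: float = 1.0) -> np.ndarray:
    """Sampling weight for class index c (1-based): w_c proportional to c ** (-exponent).
    The exponent actually used is not given in this excerpt; 1.0 is a placeholder."""
    c = np.arange(1, num_classes + 1, dtype=float)
    w = c ** (-exponent)
    return w / w.sum()

def dataset_new_class_rate(labels_in_order) -> np.ndarray:
    """P(N | t): among images not yet presented, the proportion whose ground-truth
    class has not yet been seen, computed before each presentation."""
    labels = list(labels_in_order)
    seen, rates = set(), []
    for t, y in enumerate(labels):
        remaining = labels[t:]
        rates.append(np.mean([lbl not in seen for lbl in remaining]))
        seen.add(y)
    return np.asarray(rates)

def classifier_me_score(probs: np.ndarray, seen_classes: set) -> float:
    """Equation-1-style score: softmax mass the model assigns to unseen classes."""
    unseen = [k for k in range(probs.shape[-1]) if k not in seen_classes]
    return float(probs[..., unseen].sum(axis=-1).mean())
```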
Children use the mutual exclusivity (ME) bias to learn new words, while standard neural nets show the opposite bias, hindering learning in naturalistic scenarios such as lifelong learning.
525
scitldr
Cortical neurons process and integrate information on multiple timescales. In addition, these timescales or temporal receptive fields display functional and hierarchical organization. For instance, areas important for working memory (WM), such as prefrontal cortex, utilize neurons with stable temporal receptive fields and long timescales to support reliable representations of stimuli. Despite of the recent advances in experimental techniques, the underlying mechanisms for the emergence of neuronal timescales long enough to support WM are unclear and challenging to investigate experimentally. Here, we demonstrate that spiking recurrent neural networks (RNNs) designed to perform a WM task reproduce previously observed experimental findings and that these models could be utilized in the future to study how neuronal timescales specific to WM emerge. Previous studies have shown that higher cortical areas such as prefrontal cortex operate on a long timescale, measured as the spike-count autocorrelation decay constant at rest. These long timescales have been hypothesized to be critical for performing working memory (WM) computations, but it is experimentally challenging to probe the underlying circuit mechanisms that lead to stable temporal properties. Recurrent neural network (RNN) models trained to perform WM tasks could be a useful tool if these models also utilize units with long heterogeneous timescales and capture previous experimental findings. However, such RNN models have not yet been identified. In this study, we construct a spiking RNN model to perform a WM task and compare the emerging timescales with the timescales derived from the prefrontal cortex of rhesus monkeys trained to perform similar WM tasks. We show that both macaque prefrontal cortex and the RNN model utilize units/neurons with long timescales during delay period to sustain stimulus information. In addition, the number of units with long timescales was significantly reduced in the RNN model trained to perform a non-WM task, further supporting the idea that neuronal timescales are task-specific and functionally organized. We employed a spiking RNN model based on leaky integrate-and-fire (LIF) units recurrently connected to one another. These units are governed by: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. where τ m is the membrane time constant (10 ms), v i (t) is the membrane voltage of unit i at time t, x i (t) is the synaptic input current that unit i receives at time t, I ext is the external input current, and R is the leak resistance (set to 1). The synaptic input current (x) is modeled using a single-exponential synaptic filter applied to the presynaptic spike trains: where τ s i is the synaptic decay time constant of unit i, w ij defines the synaptic connectivity strength from unit j to unit i, and the second summation refers to the spike train produced by unit j. We used the method that we previously developed in to construct LIF RNNs that performed a delayed match-to-sample task (DMS; Figure 1A top). Briefly, we trained several continuous-variable rate RNNs to perform the DMS task using a gradient descent algorithm, and the trained networks were then mapped to LIF networks. In total, we trained 40 RNNs of 200 units (80% excitatory and 20% inhibitory units) to perform the task. 
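The displayed governing equations are not reproduced in this excerpt, so the sketch below assumes the standard leaky integrate-and-fire form consistent with the definitions above (τ_m = 10 ms, R = 1, single-exponential synaptic filter); the spike threshold, reset value, and Euler step size are placeholders rather than the trained model's actual settings.

```python
import numpy as np

def simulate_lif_rnn(w, tau_s, i_ext, dt=1e-3, tau_m=0.01, R=1.0,
                     v_thresh=1.0, v_reset=0.0):
    """Euler simulation of a LIF RNN with single-exponential synapses.

    Assumed dynamics (standard form; the displayed equations are omitted here):
        tau_m * dv_i/dt = -v_i + R * (x_i + I_ext)
        tau_s_i * dx_i/dt = -x_i + sum_j w_ij * s_j(t)
    w: (n, n) recurrent weights; tau_s: scalar or per-unit decay constants in
    seconds; i_ext: (steps, n) external input."""
    steps, n = i_ext.shape
    v = np.full(n, v_reset)
    x = np.zeros(n)
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        s_prev = spikes[t - 1].astype(float) if t > 0 else np.zeros(n)
        x += dt * (-x / tau_s) + w @ s_prev            # synaptic filter + spike-driven jumps
        v += (dt / tau_m) * (-v + R * (x + i_ext[t]))  # membrane update
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = v_reset                             # reset after a spike
    return spikes
```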
The synaptic decay constants (τ s) were optimized and constrained to vary between 20 ms and 125 ms, but the major findings presented here did not change when the synaptic decay constants were not optimized (i.e. fixed to a constant value; see Section 4). All the units from the trained RNNs that satisfied the preprocessing criteria were pooled for the spike-count autocorrelation analysis (see Section 4). The task began with a 1 s of fixation period (i.e. no external input) followed by two sequential input stimuli (each stimulus lasting for 0.25 s) separated by a delay period (0.75 s). The input signal was set to either -1 or +1 during the stimulus window. If the two sequential stimuli had the same sign (-1/-1 or +1/+1), the network was trained to produce an output signal approaching +1 after the offset of the second stimulus (Figure 1A). If the stimuli had opposite signs (-1/+1 or +1/-1), the network produced an output signal approaching -1 (Figure 1A). To ensure that the spiking RNN model we employed here is a valid model for investigating neuronal timescales observed in the prefrontal cortex, we compared the findings from our model to the findings obtained from a publicly available dataset. The dataset contains single-neuron spike train recordings from ventral and dorsal prefrontal cortex of four rhesus monkeys performing DMS tasks (Figure 1B). The spike trains of 3257 neurons recorded in the dorsolateral prefrontal cortex (dlPFC) were analyzed for the spike-count autocorrelation analysis. More details regarding the dataset and the tasks can be found in. 4 Neuronal timescales specific to working memory Spike-count autocorrelation decay time constants. To characterize the temporal receptive field, we computed the decay time constant of the spike-count autocorrelation for each unit/neuron during the fixation period. For each unit/neuron, we first binned the spike trains (during the fixation period) over multiple trials using a non-overlapping 50-ms moving window. Since the fixation period duration was 1 s for both RNN and experimental models, this ed in a [Number of Trials × 20] spike-count matrix for each unit/neuron. For the experimental data, the minimum number of trials required for a neuron to be considered for analysis was 11 trials. The average number of trials from all the neurons included in the analysis was 84.8 ± 34.5 trials. For the RNN model, we generated 50 trials for each unit. Next, a Pearson's correlation coefficient (ρ) was computed between two time bins (i.e. two columns in the spike-count matrix) separated by a lag (∆). The coefficient was calculated for all possible pairs with the maximum lag of 650 ms. The coefficients were averaged for each lag value, and an exponential decay function was fitted across the average coefficient values (ρ) using the LevenbergMarquardt nonlinear least-squares method: where A and B are the amplitude and the offset of the fit, respectively. The timescale (σ) defines how fast the autocorrelation decays. The following three inclusion criteria (commonly used in previous experimental studies) were applied to the RNN model and the experimental data: 0 < σ ≤ 500 ms, A > 0, and a first decrease in ρ earlier than ∆ = 150 ms. In addition, the fitting was started after a first decrease in autocorrelation. As shown in Figure 2 (left), the timescales extracted from the untrained RNNs (sparse, random Gaussian connectivity weights; 2769 units from 40 RNNs satisfied the inclusion criteria) were right-skewed. 
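Before contrasting these with the trained networks, here is a minimal sketch of the timescale-fitting procedure just described: 50-ms spike counts, lag-wise Pearson correlations up to 650 ms, and an exponential decay fit started after the first decrease in the autocorrelation. The exact functional form of the fit is not shown in this excerpt, so ρ(Δ) = A(exp(−Δ/σ) + B) is assumed here, consistent with the stated parameters A, B, and σ.

```python
import numpy as np
from scipy.optimize import curve_fit

def autocorr_timescale(spike_counts, bin_ms=50, max_lag_ms=650):
    """spike_counts: (trials, bins) matrix of 50-ms spike counts from the
    fixation period. Returns the fitted decay constant sigma in ms."""
    n_bins = spike_counts.shape[1]
    lags, rhos = [], []
    for lag in range(1, max_lag_ms // bin_ms + 1):
        cs = [np.corrcoef(spike_counts[:, i], spike_counts[:, i + lag])[0, 1]
              for i in range(n_bins - lag)]
        lags.append(lag * bin_ms)
        rhos.append(np.nanmean(cs))
    lags, rhos = np.array(lags, dtype=float), np.array(rhos)

    # assumed fit form: rho(lag) = A * (exp(-lag / sigma) + B)
    def decay(d, A, sigma, B):
        return A * (np.exp(-d / sigma) + B)

    start = int(np.argmax(np.diff(rhos) < 0)) + 1   # start fitting after the first decrease
    (A, sigma, B), _ = curve_fit(decay, lags[start:], rhos[start:],
                                 p0=(0.5, 100.0, 0.0), maxfev=10000)
    return sigma, A, B
```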
On the other hand, the trained RNNs (841 units from 40 RNNs satisfied the inclusion criteria) and the experimental data (959 units from 4 monkeys satisfied the inclusion criteria) were heavily left-skewed, suggesting that both trained model and data contained predominantly units with long timescales (Figure 2). The distributions and the average autocorrelation values from the RNN model and the experimental data were within those previously reported. Units with long neuronal timescales encode information robustly. Next, we investigated to see if the units/neurons with longer timescales were involved with more stable coding using cross-temporal decoding analysis. For each cue stimulus identity, the trials of each neuron were divided into two splits in an interleaved manner (i.e. even vs odd trials). All possible pairwise differences (in instantaneous firing rates) between cue conditions were computed within each split. Finally, a Fisher-transformed Pearson correlation coefficient was computed between the pairwise differences of the first split at time t 1 and the differences of the second split at time t 2. A high Fisher-transformed correlation value (i.e. high discriminability) represents a reliable cue-specific difference present in the network population. We performed the above analysis on short and long neuronal timescale subgroups from the experimental data and the RNN model. A unit/neuron was assigned to the short σ group if its timescale was smaller than the lower quartile value. The upper quartile was used to identify units/neurons with large autocorrelation decay time constants. There were 122 short σ and 128 long σ neurons for the experimental data. For the RNN model, there were 210 units in each subgroup. The cross-temporal discriminability matrices (Figure 3) indicate that stronger cue-specific differences across the delay period were present in the long σ subgroup compared to the short σ subgroup for both experimental data and the RNN model. These are consistent with the previous experimental findings, and suggest that longer neuronal timescales correspond to more stable coding. Task-specific temporal receptive fields. Murray et al. demonstrated that the hierarchical organization of the neuronal timescales from different cortical areas closely tracks the anatomical hierarchical organization. For instance, sensory areas important for detecting incoming stimuli house predominantly neurons with short timescales. On the other hand, higher cortical areas including prefrontal areas may require neurons with stable temporal receptive fields that are capable of encoding and integrating information on a longer timescale. Timescale (ms) To investigate if such functional specialization also emerges in our spiking model, we trained another group of spiking RNNs (40 RNNs) on a simpler task that did not require WM. The task paradigm is modeled after a Go-NoGo task, and required the RNNs to respond immediately after the cue stimulus: output approaching -1 for the "-1" cue and +1 for the "+1" cue. Each cue stimulus lasted for 125 ms. This specific task paradigm, which we refer to as Go-NoGo task, was chosen since it does not involve WM, and it has been widely used to study how primary sensory areas process sensory information. Apart from the task paradigm, all the other model parameters were same as the RNNs trained to perform the DMS task. 
The autocorrelation decay timescales extracted from the RNNs trained to perform the Go-NoGo task were significantly shorter than the timescales obtained from the working memory RNNs (Figure 4). The RNNs contained fewer units with long timescales compared to the DMS RNNs (Figure 4A), and the timescales averaged by network were also significantly faster than the average timescales from the DMS networks (Figure 4B). In addition, the average autocorrelation timecourse for the Go-NoGo networks decayed faster than the one from the DMS RNNs and resembled the timecourses obtained from the primary somatosensory cortex and the medial-temporal area in the visual cortex (see Figure 1C in ). These findings indicate that the neuronal timescales of our RNN models are task-specific and possibly organized in a hierarchical fashion. Timescale (ms) Effects of synaptic decay constants on neuronal timescales. In this section, we demonstrate that the timescales that we obtained from the two RNN models (Go-NoGo and DMS) in the previous section are not largely driven by the synaptic decay time constants (τ s) that we optimized to construct the spiking RNNs. For each task model, we trained 40 RNNs with the synaptic decay constants fixed to 125 ms. Even though all the units now had τ s = 125 ms, the timescale distributions from both models were largely preserved (Figure 5A), and the hierarchy was also maintained (Figure 5B). For the Go-NoGo model, the average timescale values (averaged by network) obtained from the τ s optimized RNNs were significantly smaller than the timescales computed from the RNNs with τ s = 125 ms: 93.8 ± 27.6 ms and 109.1 ± 21.0 ms for the τ s optimized and fixed RNNs, respectively. On the other hand, the average timescales from the τ s optimized DMS RNNs were significantly larger than the ones extracted from the fixed τ s networks: 157.0 ± 32.0 ms and 141.1 ± 27.6 ms for the τ s optimized and fixed RNNs, respectively. Therefore, increasing the synaptic decay time constant for all the units to 125 ms did not necessarily lead to increased neuronal timescales, suggesting that connectivity patterns and structures might play a bigger role in governing task-specific timescales. In this study, we employed a spiking RNN model of WM to investigate if the model exhibits and utilizes heterogeneous timescales for prolonged integration of information. We validated the model using an experimental dataset obtained from rhesus monkeys trained on WM tasks: the model and the primate prefrontal cortex both displayed similar heterogeneous neuronal timescales and incorporated units/neurons with long timescales to maintain stimulus information. The timescales from the RNN model trained on a non-WM task (Go-NoGo task) were markedly shorter, since units with long timescales were not required to support the simple computation. Future works include characterizing the network dynamics and the circuit motifs of the DMS RNN model to elucidate connectivity structures required to give rise to the diverse, stable temporal receptive fields specific to WM.
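As a supplementary illustration of the cross-temporal decoding analysis described above (Figure 3), the following sketch computes a discriminability matrix from two interleaved trial splits; the array layout and the clipping before the Fisher transform are assumptions made for the sake of a runnable example.

```python
import numpy as np

def cross_temporal_discriminability(rates_split1, rates_split2):
    """rates_split*: arrays of shape (conditions, units, time) holding
    trial-averaged instantaneous firing rates for the two interleaved splits.
    Returns a (time, time) matrix of Fisher-transformed correlations between the
    pairwise condition differences of split 1 at t1 and split 2 at t2."""
    n_cond, _, n_time = rates_split1.shape
    pairs = [(i, j) for i in range(n_cond) for j in range(i + 1, n_cond)]

    def pairwise_diffs(rates, t):
        # concatenate every condition-difference pattern at time t
        return np.concatenate([rates[i, :, t] - rates[j, :, t] for i, j in pairs])

    disc = np.zeros((n_time, n_time))
    for t1 in range(n_time):
        d1 = pairwise_diffs(rates_split1, t1)
        for t2 in range(n_time):
            d2 = pairwise_diffs(rates_split2, t2)
            r = np.corrcoef(d1, d2)[0, 1]
            disc[t1, t2] = np.arctanh(np.clip(r, -0.999, 0.999))  # Fisher transform
    return disc
```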
Spiking recurrent neural networks performing a working memory task utilize long heterogeneous timescales, strikingly similar to those observed in prefrontal cortex.
526
scitldr
Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes and learning to classify a novel class typically requires re-training. In this work, we address the problem of Low-shot network-expansion learning. We introduce a learning framework which enables expanding a pre-trained (base) deep network to classify novel classes when the number of examples for the novel classes is particularly small. We present a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged. We term this learning hard distillation, since we preserve the response of the network on the old classes to be equal in both the base and the expanded network. We show that since only a small number of weights needs to be trained, the hard distillation excels for low-shot training scenarios. Furthermore, hard distillation avoids detriment to classification performance on the base classes. Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes training data with only a negligible degradation relative to learning with the full training set. In many real life scenarios, a fast and simple classifier expansion is required to extend the set of classes that a deep network can classify. For example, consider a cleaning robot trained to recognize a number of objects in a certain environment. If the environment is modified with an additional novel object, it is desired to be able to update the classifier by taking only a few images of that object and expand the robot classifier. In such a scenario, the update should be a simple procedure, based on a small collection of images captured in a non-controlled setting. Furthermore, such a low-shot network update should be fast and without access the entire training set of previously learned data. A common solution to classifier expansion is fine-tuning the network BID6. However fine-tuning requires keeping a large amount of base training data in memory, in addition to collecting sufficient examples of the novel classes. Otherwise, fine-tuning can lead to degradation of the network accuracy on the base classes, also known as catastrophic forgetting BID0. In striking contrast, for some tasks, humans are capable of instantly learning novel categories. Using one or only a few training examples humans are able to learn a novel class, without compromising previously learned abilities or having access to training examples from all previously learned classes. We consider the classifier expansion problem under the following constraints:1. Low-shot: very few samples of the novel classes are available. 2. No forgetting: preserving classification performance on the base classes. 3. Small memory footprint: no access to the base classes training data. In this work we introduce a low-shot network expansion technique, augmenting the capability of an existing (base) network trained on base classes by training additional parameters that enables to classify novel classes. The expansion of the base network with additional parameters is performed in the last layers of the network. To satisfy low-shot along with no-forgetting constraints, we present a hard distillation framework. Distillation in neural networks BID5 is a process for training a target network to imitate another network. 
A loss function is added to the target network so that its output matches the output of the mimicked network. In standard soft distillation the trained network is allowed to deviate from the mimicked network. Whereas hard distillation enforces that the output of the trained network for base classes matches the output of the mimicked network as a hard constraint. We achieve hard distillation by keeping the weights of the base network intact, and learn only the newly added weights. Network expansion with hard distillation yields a larger network, distilling the knowledge of the base network in addition to augmented capacity to classify novel classes. We show that in the case of low-shot (only 1-15 examples of a novel class), hard distillation outperforms soft distillation. Moreover, since the number of additional parameters in the expanded network is small, the inference time of the new network is nearly identical to the base network. To maintain a small memory footprint, we refrain from saving the entire training set. Instead, we present a compact generative model, consisting of a collection of generative models fitted in the feature space to each of the base classes. We use a Gaussian Mixture Model (GMM) with small number of mixtures, and show it inflicts a minimal degradation in classification accuracy. Sampling from the generative GMM model is fast, reducing the low-shot training time and allowing fast expansion of the network. We define a benchmark for low-shot network expansion. The benchmark is composed of a series of tests of increasing complexity, ranging from simple tasks where base and novel classes are from different domains and to difficult tasks where base and novel classes are from the same domain and shares objective visual similarities. We perform a comprehensive set of experiments on this challenging benchmark, comparing the performance of the proposed to alternative methods. To summarize, the main contributions of the paper are:1. A novel hard-distillation solution to a low-shot classifier expansion problem 2. GMM as a sufficient generative model to represent base classes in a feature space 3. A new benchmark for the low-shot classifier expansion problem 2 RELATED WORKS A common solution to the class-incremental learning problem is to use a Nearest-Neighbors (NN) based classifier in feature space. A significant advantage of a NN-based classifier is that it can be easily extended to classify a novel class, even when only a single example of the class is available (one-shot learning). However NN-based classifiers require keeping in the memory significant amount of training data from the base classes. BID7 proposed to use Nearest Class Mean (NCM) classifier, where each class is represented by a single prototype example which is the mean feature vector of all class examples. One major disadvantage of NCM and NN-based methods is that they are based on a fixed feature representation of the data. To overcome this problem BID7 proposed to learn a new distance function in the feature space using metric learning. The ideas of metric learning combined with the NN classifier resonate with recent work by on Matching Networks for one-shot learning, where both feature representation and the distance function are learned end-to-end with attention and memory augmented networks. The problem we consider in this paper is different from the one discussed by. 
We aim to expand existing deep classifier trained on large dataset to classify novel classes, rather than to create a general mechanism for one-shot learning. BID3 presented an innovative low-shot learning mechanism, where they proposed a Squared Gradient Magnitude regularization technique for an improved fixed feature representation learning designed for low-shot scenarios. They also introduced techniques to hallucinate additional training examples for novel data classes. In contrast, we present a method which aims to maximize performance in low-shot network expansion given a fixed representation, allowing expanding the representation based on novel low-shot data. Furthermore, in our work, we demonstrate the ability to expand the network without storing the entire base classes training data. Recently, BID9 proposed iCaRL -(Incremental Classifier and Representation Learning), to solve the class-incremental learning problem. iCaRL is based on Nearest-Mean-of-Exemplars classifier, similar to the NCM classifier of BID7. In the iCaRL method, the feature representation is updated and the class means are recomputed from a small stored number of representative examples of the base classes. During the feature representation update, the network parameters are updated by minimizing a combined classification and distillation loss. The iCaRL method was introduced as a class-incremental learning method for large training sets. In Section 4 we discuss its adaptation to low-shot network expansion and compare it to our method. BID11 proposed the Progressive Network for adding new tasks without affecting the performance of old tasks. They propose freezing the parameters that were trained on old tasks and expand the network with a additional layers when training a new task. BID15 proposed the Progressive learning technique which solves the problem of online sequential learning in extreme learning machines paradigm (OS-ELM). The purpose of their work is to incrementally learn the last fully-connected layer of the network. When a sample from a novel class arrives, the last layer is expanded with additional parameters. The Progressive learning solution updates the last layer only sequentially and only works in the ELM framework (does not update internal layers of the network). In another work BID14 proposed an incremental learning technique which augments the base network with additional parameters in last fully connected layer to classify novel classes. Similar to iCaRL, they perform soft distillation by learning all parameters of the network. Instead of keeping historical training data, they propose phantom sampling -hallucinating data from past distribution modeled with Generative Adversarial Networks. In this work we propose a solution that borrows ideas from freeze-and-expand paradigm, improved feature representation learning, network distillation and modeling past data with a generative model. We propose to apply expansion to the last fully connected layer of a base network to enable classification on novel classes, and to deeper layers to extend and improve the feature representation. However, in contrast to other methods BID9; BID15, we do not retrain the base network parameters, but only the newly introduced weights of the expansion. Moreover, the extended feature representation is learned from samples of base and novel classes. 
In contrast to BID3, where the improved feature representation is learned from simulating low-shot scenarios on the base classes only, before the actual novel data is available. Finally, in order to avoid keeping all historical training data, we use Gaussian Mixture Model of the feature space as a generative model for base classes. Assume a deep neural network is trained on K base classes with the full set of training data. This base network can be partitioned into two subnetworks: a feature extraction network and a classification network. The feature extraction network f rep maps an input sample x into a feature representation v ∈ R N. The classification network f cls maps feature vectors v into a vector of approximated class posterior probabilities P (k|v) which correspond to each one of K classes. The whole network can be represented as composition of two networks f net (x) = f cls (f rep (x)). For example, if the classification network consists of the last fully connected layer (FC) followed by softmax, then DISPLAYFORM0 N is class's k weights vector, and Z is the normalization factor. In the following we discuss how the pre-learned feature representation of feature extraction network can be leveraged to classify additional classes in low-shot scenario with only relatively minor changes to the classification subnetwork. First, we discuss how to expand the classification network to classify one additional class. We can expand f cls from a K-class classifier into K + 1 class classifier by adding a new weight vector w K+1 ∈ R N to the last FC layer. Thus, the K + 1 class probability is DISPLAYFORM0 v, where Z is a new normalization factor for K + 1 classes. We would like to preserve classification accuracy on the base classes to avoid catastrophic forgetting. To that end, during training we constrain to optimize of the w K+1 weights, while the vectors {w i} K i=1 are kept intact. We refer to this paradigm as hard distillation. By preserving the base classes weight vectors, we guarantee that as a of the last classification layer expansion the only new errors that can appear are between the novel class and the base classes, but not among the base classes. Moreover, the small number of newly learned parameters helps avoid over-fitting, which is especially important in low-shot scenarios. Similarly, we can expand the classification network to classify more than one novel class. are the number of input feature to F C2 before the expansion. Due to the small memory footprint constraint, we are unable to keep the entire training data of the base classes. As an alternative, we can use a generative model of the base classes and during training draw samples from the model. There are various approaches to this task, such as , , Pixel CNN van den , or conventional methods of non-parametric kernel density estimation BID4. However, it is usually hard to generate accurate samples from past learned distributions in the image domain, and these methods still require a significant amount of memory to store the model network parameters. Furthermore, since training typically requires thousands of samples, we prefer a generative model that allows fast sampling to reduce the low-shot phase training time. In our work, we use the Gaussian Mixture Model (GMM) density estimator as an approximate generative model of the data from the base classes. However, instead of approximating the generative distribution of the image data, we approximate a class conditional distribution of its feature representation. 
Thus, we model a GMM DISPLAYFORM0 where M is the number of mixtures for each bass class. In order to satisfy the small memory footprint constraint, we use a GMM which assumes feature independence, i.e., the covariance matrix Σ i of each Gaussian mixture is diagonal. We denote this model as Deep Feature GMM. If we have K classes, and the feature vectors dimensionality is N, the memory requirements for storing information about base classes is O(M KN). The feature representation v, which we learn a generative model for, can be from the last fully connected layer or from deeper layers. In Section 4.5, we evaluate the effectiveness of the use of the Deep Features GMM, showing that despite its compact representation, there is a minimal degradation in accuracy when training a classifier based only on data that is generated from the Deep Features GMM, compared to the accuracy obtained on the full training data. We apply standard data augmentation (random crop, horizontal flip, and color noise) to the input samples of the novel classes and create 100 additional samples variants from each of the novel class samples. These samples are passed through the feature extraction network f rep to obtain their corresponding feature representation. Note that new samples and their augmented variants are passed through f rep only once. As described in Section 3.1, we expand the classification subnetwork f cls and train the expanded network to classify novel classes in addition to the base classes. FIG0 illustrates the proposed method in the case where f cls is the last fully connected layer. As mentioned above, we only learn the N dimensional vector w K+1, which augments the K × N weight matrix of the FC layer. Each training batch is composed of base classes feature vectors drawn from the Deep Features GMM models learned from the base classes training data and the available samples of a novel class. The training batch is balanced to have equal number of generations/samples per class. Since the forward and backward passes are carried out by only the last FC layers, each iteration can be done very rapidly. We use SGD with gradient dropout (see below) to learn w K+1. More specifically, the weights update is done by: DISPLAYFORM0 where µ is the momentum factor, α is the learning rate and M is a binary random mask with probability p of being 1 (M is randomly generated throughout the low-shot training). That is, the gradient update is applied to a random subset of the learned weights. This SGD with gradient dropout is our heuristic variant for Noisy SGD proposed by BID1 which provably helps to escape saddle points. In Section 4.3 we demonstrate the contribution of the gradient dropout when only a few novel labeled samples are available. The procedure described in the previous subsections expands the last classification layer, but does not change the feature representation space. In some cases, especially in those which the novel classes are similar to the base classes, it is desirable to update and expand the feature representation. To expand the feature representation, we add new parameters to deeper layers of the network. This of course requires an appropriate expansion of all subsequent layers. To satisfy the hard distillation constraints, we enforce that the feature representation expansion does not affect the network output for the base classes. All weights in subsequent layers which connects the expanded representation to the base classes are set to zero and remain unchanged during learning. 
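Before the exact update rule given below, here is a minimal sketch of the procedure just described: per-class diagonal-covariance GMMs fit on base-class deep features, balanced batches mixing GMM samples with the pre-extracted novel features, and a hard-distillation update that touches only w_{K+1} (the binary mask stands in for the gradient dropout specified next). It assumes integer class keys 0..K-1, omits bias terms, and uses scikit-learn and PyTorch purely for illustration.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def fit_deep_feature_gmms(features_by_class, n_mixtures=20):
    """One diagonal-covariance GMM per base class, fit on its deep features."""
    return {c: GaussianMixture(n_components=n_mixtures,
                               covariance_type="diag").fit(f)
            for c, f in features_by_class.items()}

def train_novel_weight_vector(gmms, novel_feats, w_base, steps=1000,
                              per_class=16, lr=0.01, keep_prob=0.5):
    """Hard distillation: the K frozen base rows (w_base, a (K, N) float tensor)
    are never updated; only w_{K+1} is learned. Each batch mixes GMM-sampled base
    features with the pre-extracted (augmented) novel-class features. The binary
    mask on the gradient stands in for the gradient-dropout update given below."""
    K, N = w_base.shape
    w_novel = torch.zeros(N, requires_grad=True)
    opt = torch.optim.SGD([w_novel], lr=lr, momentum=0.9)
    for _ in range(steps):
        feats = [gmms[c].sample(per_class)[0] for c in range(K)]   # base classes
        feats.append(novel_feats[np.random.choice(len(novel_feats), per_class)])
        v = torch.tensor(np.concatenate(feats), dtype=torch.float32)
        y = torch.arange(K + 1).repeat_interleave(per_class)
        logits = v @ torch.cat([w_base, w_novel.unsqueeze(0)]).t()
        loss = F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        w_novel.grad *= torch.bernoulli(torch.full_like(w_novel, keep_prob))
        opt.step()
    return w_novel.detach()
```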
In FIG0 (b) we demonstrate an expansion of two last fully connected layers. The F C2 weight matrix is zero padded to adjust to the new added weights in F C1. Only the expansion to F C2 uses the new added features in F C1. The details of the representation learning expansion can be found in Appendix D. In this section, we evaluate the proposed low-shot network expansion method on several classification tasks. We design a benchmark which measures the performance of several alternative low-shot methods in scenarios that resemble real-life problems, starting with easier tasks (Scenario 1) to harder tasks (Scenario 2 & 3). In each experiment, we use a standard dataset that is partitioned to base classes and novel classes. We define three scenarios:Scenario 1 Generic novel classes: unconstrained novel and base classes which can be from different domains. Scenario 2 Domain specific with similar novel classes: base and novel classes are drawn from the same domain and the novel classes share visual similarities among themselves. Scenario 3 Domain specific with similar base and novel classes: base and novel classes are drawn from the same domain and each novel class shares visual similarities with one of the base classes. In each scenario we define five base classes (learned using the full train set) and up to five novel classes, which should be learned from up to 15 samples only. We compare the proposed method to several alternative methods for low-shot learning described in Section 4.2. Dataset for Scenario 1 For the task of generic classification of the novel classes we use the ImageNet dataset BID10, such that the selected classes were not part of the ILSVRC2012 1000 classes challenge. Each class have at least 1000 training images and 250 test images per class. The randomly selected 5 partition of 5 base classes and 5 novel classes. Dataset for Scenario 2 and Scenario 3 For these scenarios we use the UT-Zappos50K BID17 shoes dataset for fine-grained classification. We choose 10 classes representing different types of shoes each having more than 1,000 training images and 250 test images. To define similarity between the chosen classes, we fine-tune the base network (VGG-19) on the selected classes with the full dataset, and we use the confusion matrix as a measure of similarity between classes. Using the defined similarities, we randomly partition the 10 classes to 5 base and 2 novel classes, where for Scenario 2 we enforce similarity between novel classes, and for Scenario 3 we enforce similarity between novel and base classes. The confusion matrix representing the visual similarities and an illustration of the similarities between the base and novel classes is presented in FIG0 in Appendix C. In the proposed method we use the VGG-19 network BID12 trained on ImageNet ILSVRC2012 BID10 1000 classes as a feature extraction subnetwork f rep. In all three scenarios for training the classification subnetwork f cls on the base classes, we fine-tune the last two fully-connected layers of VGG-19 on the 5 selected base classes, while freezing the rest of the layers of f rep.We denote the method proposed in Section 3 as Generative Low-Shot Network Expansion: Gen-LSNE. We compare our proposed method to NCM BID7, and to the Prototype-kNN method which is an extension of NCM and the soft distillation based method inspired by iCaRL method , adapted for the low-shot scenario. We compare the proposed method to NCM classifier proposed by BID7. 
Additionally, we extend the NCM classifier by using multiple prototypes for each class, as in the Prototype-kNN classifier BID4. Both NCM and Prototype-kNN are implemented in a fixed feature space of the FC2 layer of the VGG-19 network. In our implementation of the Prototype-kNN, we fit a Deep Features GMM model with 20 mixtures for each of the base classes. We extract feature representation of all of the available samples from the novel classes. The Deep Features GMM centroids of the base feature vectors and the novel feature vectors of the samples are considered as a prototypes of each class. We set k for Prototype-kNN classifier to be the smallest number of prototypes per class (the number of prototypes in the novel classes is lower than the number of mixture in the base classes). The Prototype-kNN classification rule is the majority vote among k nearest neighbors of the query sample. If the majority vote is indecisive, that is, there are two or more classes with the same number of prototypes among the k nearest neighbors of query image, we repeat classification with k = 1. We want to measure the benefit of the hard distillation constraint in low-shot learning scenario. Thus, we formulate a soft distillation based method inspired by iCaRL BID9 and methods described by BID14 and BID15 as an alternative to the proposed method. In the iCaRL method, feature representation is updated by re-training the whole representation network. Since in low-shot scenario we have only a small number of novel class samples, updating the whole representation network is infeasible. Using the soft distillation method, we adapt to the low-shot scenario by updating only the last two fully connected layers F C1, F C2, but still use a combination of distillation and classification loss as in the iCaRL method. The iCaRL method stores a set of prototype images and uses the Nearest Mean Exemplar (NME) classifier at the final classification stage. In order to provide a fair comparison with the hard distillation method and uphold our memory restriction, we avoid storing prototypes in image domain, and use the proposed Deep-Features GMM as a generative model for the base-classes. Using NME classifier with prototypes of the base classes is in fact a special case of Prototype-kNN with k = 1. Therefore, in soft distillation method instead of NME we use a learned expanded network with additional parameters in last fully connected layers, which aligns with BID14 and BID15, and in our proposed hard-distillation method in Section 3.1. To summarize, soft distillation applies a distillation loss and allows the F C1, F C2 layers to adjust to the new data, while the proposed hard-distillation freezes F C1, F C2 and trains only the new (expanded) parameters without using a distillation loss. We denote the soft distillation based methods as Soft-Dis in the presented . In Section 3.3 we proposed using gradient dropout regularization on SGD as a technique to improve convergence and overcome overfitting in a low-shot scenario. We perform ablation experiments to assess the importance of the gradient dropout and train using both soft distillation (Soft-Dis) and proposed hard distillation (Gen-LSNE) with and without gradient dropout regularization. Scenario 1: Generic novel classes In this experiment, the base classification network is trained on five base classes and then expanded to classify two novel classes chosen at random. 
For each of the five class partitions (Section 4.1), we perform five trials by randomly drawing two novel classes from the five novel classes available in the partition. The results are an average of 25 trials. The results of this experiment are presented in Figure 2. In Figure 6 in Appendix B we present detailed results of the test error on the base and novel classes separately. Prototype-kNN and the Soft-Dis methods perform better on the base classes. However, our method is significantly better on the novel classes and the overall test error is considerably improved, particularly when the number of samples is small. In addition, we see a significant gain in accuracy delivered by the gradient dropout when the number of novel samples is lower than 3 samples. Furthermore, gradient dropout also improves the results of the Soft-Dis method. The Prototype-kNN method is unable to effectively utilize the new available novel samples, however it best preserves the base class accuracy when the number of novel samples and base class prototypes is high (above 5, see Figure 6). Since the number of prototypes used is equal to the number of novel samples used, the addition of novel samples/base class prototypes generally has a greater impact on the preservation of the base class accuracy. We assume that since the spread of the base class prototypes is better than that of the novel classes, some novel samples are misclassified as a similar base class. NCM generally performs considerably better than Prototype-kNN, despite the use of less information from the base classes. However, NCM is unable to effectively utilize more novel samples when they are available. Gen-LSNE significantly outperforms NCM with a single novel sample, and overall outperforms all the tested methods with nine or fewer samples per novel class. In this section we explore the effect of expansion of deeper layers, as described in Section 3.4. We partition the datasets as defined in Section 4.1 into five base and five novel classes, and we test a 10-class classification task. We expand the feature representation which is obtained after the FC1 layer with 5 new features. The size of the feature representation after the FC1 layer of VGG-19 is of dimension 4k. Thus, FC1 is expanded with 4k · 5 new weights. The results are averaged over the 5 trials. FIG3 shows the obtained results; we denote +5Inner as the experiments with the additional five shared representation features. We see a marginal gain in Scenario 1. However, we observe a significant gain in Scenarios 2 and 3 when the number of samples increases (especially Scenario 2). Observe that Gen-LSNE significantly outperforms the alternative tested methods in almost all of the tested cases. Note that the addition of the five shared additional representation features has no observable effect on Soft-Dis. In the Deep-features GMM evaluation experiment, we feed the full training data to the base network and collect the feature vectors before FC1, i.e., two FC layers before the classification output. We fit a GMM model to the feature vectors of each of the base classes with a varying number of mixtures. We train the two last FC layers of the base network from randomly initialized weights, where the training is based on generating feature vectors from the fitted GMM. We measure the top-1 accuracy on the test set of the networks trained with GMM models and the base network trained with full training data on the datasets defined in Section 4.1.
The difference in top-1 accuracy between the network trained with full data and the networks trained with GMM models represent degradation caused by compressing the data into simple generative model. The of the experiment presented in the TAB3 demonstrate that learning with samples from GMM models cause only a negligible degradation relative to learning with a full training set. We observe that the degradation in accuracy is not monotonic in the number of GMM mixtures, and that for many practical purposes a small number of mixture may be sufficient. We have introduced Gen-LSNE, a technique for low-shot network expansion. The method is based on hard-distillation, where pre-trained base parameters are kept intact, and only a small number of parameters are trained to accommodate the novel classes. We presented and evaluated the advantages of hard-distillation: (i) it gains significant increased accuracy (up to 20%) on the novel classes, (ii) it minimizes forgetting: less than 3% drop in accuracy on the base classes, (iii) small number of trained parameters avoids overfitting, and (iv) the training for the expansion is fast. We have demonstrated that our method excels when only a few novel images are provided, rendering our method practical and efficient for a quick deployment of the network expansion. We have also presented Deep-Features GMM for effective base class memorization. This computationally and memory efficient method allows training the network from a generative compact feature-space representation of the base classes, without storing the entire training set. In basic setting of a hard distillation, novel class classification is based on the base representation. We have also presented an extended version of network expansion, where the representation is updated by training the last deeper layers. We have shown that when more images of the novel classes are available, the adjusted representation is effective. Generally speaking, the representation can be improved as more samples are available, but at the expense of longer training, larger memory footprint, and risking forgetting. In the future we would like to continue exploring hard-distillation methods, in extremely low-shot classifier expansion, where only one or a handful of images of the novel class are provided. An additional research direction would be to maximize the information gain that is obtained from each novel image, aspiring towards human low-shot understanding. The fully connected layer FC1 is parametrized with weight matrix W ∈ R N ×V, where V is the dimensionality of input feature vector ν and N is the dimensionality of the output feature representation vector v. As was described in Section 3.4 we want to extend the feature representation v ∈ R N with E additional dimensionsṽ ∈ R E. We expand the weight matrix W with E × V additional weights as shown in FIG0. We denote the expanded weights as W exp ∈ R E×V. The expanded weights are to be learned from the novel data. We draw a random set of S novel examples. Let DISPLAYFORM0 denote the mean response of the FC1 to the set of novel examples. Let DISPLAYFORM1 be the E indexes of maximum elements of R. We initialize the expansion to FC1 layer W exp in the following manner: DISPLAYFORM2 where w j ∈ R V is the j th row of matrix W, ∼ N (0, std(W)), and α is a weight constant (in our experiments α = 0.25). This initialization allows the expansion of the feature representationṽ to have non zero responses (after ReLU) with respect to the novel samples. 
Since we operate in a Low-Shot scenario, where only few samples of novel classes are available, this weights initialization plays crucial role in convergence of FC1 extended weights. We initialize the subsequent layer FC2 in the following manner: let W ∈ R K×N be the weight matrix of FC2, where N is the dimensionality of the feature representation vector v and K is the number of base classes. Since v was expanded with additional E featuresṽ, and we want to allow classification of L novel classes, the dimension of expanded W will be (K + L) × (N + E). As was mentioned in Section 3.4 and illustrated in FIG0, the hard distillation constraint requires that W will be zero-padded with K × E zeros to avoid influence of the expanded featuresṽ on the output of the base network. In contrast, the expansion of W which we denote W exp ∈ R L×(N +E) should be encouraged to produce larger responses toṽ to improve learning. We initialize the expansion of FC2 layer W exp in the following manner: DISPLAYFORM3 where u ∈ {−1, 1} with probability 0.5 and Γ is an amplification parameter. In our experiments we used Γ = 2. We found that this initialization technique is crucial in assuring convergence of the added weights and the ability of the new weights to improve classification in low-shot setting.
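The displayed initialization equations are omitted from this excerpt, so the sketch below reconstructs them from the prose: the new FC1 rows copy α-scaled versions of the rows whose units respond most strongly to the novel samples plus Gaussian noise of scale std(W), and the new FC2 block uses random ±1 entries amplified by Γ. The Γ · std(W) magnitude and the ReLU in the response computation are assumptions rather than the exact forms used.

```python
import numpy as np

def init_fc1_expansion(W, novel_inputs, E, alpha=0.25):
    """W: (N, V) existing FC1 weights; novel_inputs: (S, V) inputs from the random
    novel-sample set. New rows are alpha-scaled copies of the E rows whose units
    respond most strongly (ReLU assumed) to the novel samples, plus Gaussian noise."""
    responses = np.maximum(novel_inputs @ W.T, 0).mean(axis=0)     # mean response per unit
    top = np.argsort(responses)[-E:]                               # E maximally responding rows
    noise = np.random.normal(0.0, W.std(), size=(E, W.shape[1]))
    return np.vstack([W, alpha * W[top] + noise])                  # (N + E, V)

def init_fc2_expansion(W2, E, L, gamma=2.0):
    """W2: (K, N) existing FC2 weights. Base rows are zero-padded over the E new
    features (hard distillation keeps them untouched); the L novel rows get random
    +/- entries amplified by gamma (magnitude gamma * std(W2) is an assumption)."""
    K, N = W2.shape
    base = np.hstack([W2, np.zeros((K, E))])
    signs = np.random.choice([-1.0, 1.0], size=(L, N + E))
    return np.vstack([base, gamma * W2.std() * signs])             # (K + L, N + E)
```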
In this paper, we address the problem of low-shot network-expansion learning.
527
scitldr
People ask questions that are far richer, more informative, and more creative than current AI systems. We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network. From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings. We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data. People can ask rich, creative questions to learn efficiently about their environment. Question asking is central to human learning yet it is a tremendous challenge for computational models. There is always an infinite set of possible questions that one can ask, leading to challenges both in representing the space of questions and in searching for the right question to ask. Machine learning has been used to address aspects of this challenge. Traditional methods have used heuristic rules designed by humans , which are usually restricted to a specific domain. Recently, neural network approaches have also been proposed, including retrieval methods which select the best question from past experience and encoder-decoder frameworks which map visual or linguistic inputs to questions (; ; ;). While effective in some settings, these approaches do not consider settings where the questions are asked about partially unobservable states. Besides, these methods are heavily data-driven, limiting the diversity of generated questions and requiring large training sets for different goals and contexts. There is still a large gap between how people and machines ask questions. Recent work has aimed to narrow this gap by taking inspiration from cognitive science. For instance, incorporates aspects of "theory of mind" in question asking by simulating potential answers to the questions, but the approach relies on imperfect agents for natural language understanding which may lead to error propagation. Related to our approach, proposed a powerful question-asking framework by modeling questions as symbolic programs, but their algorithm relies on hand-designed program features and requires expensive calculations to ask questions. We use "neural program generation" to bridge symbolic program generation and deep neural networks, bringing together some of the best qualities of both approaches. Symbolic programs provide a compositional "language of thought" for creatively synthesizing which questions to ask, allowing the model to construct new ideas based on familiar building blocks. Compared to natural language, programs are precise in their semantics, have clearer internal structure, and require a much smaller vocabulary, making them an attractive representation for question answering systems as well (; ;). However, there has been much less work using program synthesis for question asking, which requires searching through infinitely many questions (where many questions may be informative) rather than producing a single correct answer to a question. Deep neural networks allow for rapid question-synthesis using encoder-decoder modeling, eliminating the need for the expensive symbolic search and feature evaluations in. 
Together, the questions can be synthesized quickly and evaluated formally for quality (e.g., the expected information gain), which as we show can be used to train question asking systems using reinforcement learning.
Figure 1: The Battleship task (panels: ground-truth board, partly revealed board, example questions). Blue, red, and purple tiles are ships, dark gray tiles are water, and light gray tiles are hidden. The agent can see a partly revealed board, and should ask a question to seek information about the hidden board. Example questions and translated programs are shown on the right, e.g., "How long is the red ship?" → (size Red); "Is the purple ship horizontal?" → (== (orient Purple) H); "Do all three ships have the same size?" → (=== (map (λ x (size x)) (set AllShips))). We recommend viewing the figures in color.
In this paper, we develop a neural program generation model for asking questions in an information-search game similar to "Battleship" used in previous work. The model uses a convolutional encoder to represent the game state, and a Transformer decoder for generating questions. Building on prior work, the model uses a grammar-enhanced question asking framework, such that questions are represented as programs formed through derivation using a context-free grammar. Importantly, we show that the model can be trained from human demonstrations of good questions using supervised learning, along with a data augmentation procedure that leverages previous work to produce additional human-like questions for training. Our model can also be trained without such demonstrations using reinforcement learning. We evaluate the model on several aspects of human question asking, including reasoning about optimal questions in synthetic scenarios, density estimation based on free-form question asking, and creative generation of genuinely new questions. To summarize, our paper makes three main contributions: 1) we propose a neural network for modeling human question-asking behavior, 2) we propose a novel reinforcement learning framework for generating creative human-like questions by exploiting the power of programs, and 3) we evaluate different properties of our methods extensively through three different experiments. Question generation has attracted attention from the machine learning community. Early research mostly explored rule-based methods which strongly depend on human-designed rules. Recent methods for question generation adopt deep neural networks, especially using the encoder-decoder framework, and can generate questions without handcrafted rules. These methods are mostly data-driven and use pattern recognition to map inputs to questions. Researchers have worked on generating questions from different types of inputs such as knowledge base facts, pictures, and text for reading comprehension. However, aspects of human question-asking remain beyond reach, including the goal-directed and flexible qualities that people demonstrate when asking new questions. This issue is partly addressed by some recent papers which draw inspiration from cognitive science. Recent research generates questions by sampling from a candidate set based on goal-oriented metrics. This paper extends that work to overcome the limitation of the candidate set, and to generate creative, goal-oriented programs with neural networks. Our work also builds on neural network approaches to program synthesis, which have been applied to many different domains.
Those approaches often draw inspiration from computer architecture, using neural networks to simulate stacks, memory, and controllers in differentiable form. Other models incorporate Deep Reinforcement Learning (DRL) to optimize the generated programs in a goal-oriented environment, such as generating SQL queries which can correctly perform a specific database processing task, translating strings in Microsoft Excel sheets, and understanding and constructing 3D scenes and objects. Recent work has also proposed ways to incorporate explicit grammar information into the program synthesis process, for instance by designing a special module to capture the grammar information as a prior, which can be used during generation. Some recent papers encode grammar with neural networks and use DRL to explicitly encourage the generation of semantically correct programs. Our work differs from these in two aspects. First, our goal is to generate informative human-like questions in the new domain instead of simply correct programs. Second, we more deeply integrate grammar information in our framework, which directly generates programs based on the grammar.
Figure 2: (a) shows the network architecture. The board is represented as a grid of one-hot vectors and is embedded with a convolutional neural network. The board embedding and a sequence of symbols are input to a Transformer decoder to generate output vectors (details in Section 4). PE means positional embeddings, and WE means word embeddings. (b) shows the derivation steps for the program "(> (size Blue) 3)" using the CFG. Non-terminals are shown in bold, and terminals in italics. The production rules used are shown next to each arrow.
In this paper, we work with a task used in previous work for studying human information search as well as question asking. The task is based on an information-search game called "Battleship", in which a player aims to resolve the hidden layout of the game board based on the revealed information (Figure 1). There are three ships with different colors (blue, red, and purple) placed on a game board consisting of a 6 × 6 grid of tiles. Each ship can be either horizontal or vertical, and is 2, 3, or 4 tiles long. All tiles are initially turned over (light grey in Figure 1), and the player can flip one tile at a time to reveal an underlying color (either a ship color, or dark grey for water). The goal of the player is to determine the configuration of the ships (positions, sizes, orientations) in the least number of flips. In the modified version of this task studied in previous work, the player is presented with a partly revealed game board, and is required to ask a natural language question to gain information about the underlying configuration. As shown in Figure 1, the player can only see the partly revealed board, and might ask questions such as "How long is the red ship?" In this paper, we present this task to our computational models, and ask the models to generate questions about the game board. Previous work designed a powerful context-free grammar (CFG) to describe the questions in the Battleship domain. The grammar represents questions in a LISP-like program format, which consists of a set of primitives (like numbers, colors, etc.) and a set of functions over primitives (like arithmetic operators, comparison operators, and other functions related to the game board).
Further analysis shows that this grammar captures the full range of questions that people asked in an extensive dataset, mainly because the majority of the grammar consists of general functions, which makes it flexible enough. The grammar is able to generate an infinite set of other possible questions beyond the collected human questions, capturing key notions of compositionality and computability. Figure 1 provides some examples of produced programs. The full grammar is provided in Appendix C. The neural network we use includes a Convolutional Neural Network (CNN) for encoding the input board, and a Transformer decoder for estimating the symbol distribution or selecting actions in the different settings described below. The input x ∈ {0, 1} 6×6×5 is a binary representation of the 6x6 game board with five channels, one for each color, encoded as a one-hot vector in each grid location. A simple CNN maps the input x to the encoder output e ∈ R 6×6×M, where M is the length of the encoded vectors. Then a Transformer decoder takes e and a sequence of length L as input, and outputs a sequence of vectors, each of dimension N o, where N o is the output size. As shown later, the input sequence and output vectors can be interpreted differently in different settings. The model is shown in Figure 2 (a), and details are described in Appendix A. Our model is compatible with both supervised and reinforcement training. Supervised training. In the supervised setting, the goal is to model the distribution of questions present in the training set. Each output y i ∈ R No represents the distribution over symbols at position i in the program, where N o is the number of different symbols in the grammar. The model is trained with a symbol-level cross-entropy loss, and can be used to calculate the log-likelihood of a given sequence, or to generate a question symbol-by-symbol from left to right. Generation works as follows. Suppose at step t, a sequence of length t along with the encoded board is presented to the decoder. The model predicts the vector y t, which represents the probability of each symbol being chosen next. Then we sample a symbol at location t + 1 and execute the decoder again with the new sequence, until an <eos> symbol is generated or the maximum length is reached. Sequence-based RL. The framework can be adapted to generate a sequence of symbols without stepwise supervision, such that reward is provided only after the entire question is generated. Grammar-enhanced RL. Finally, the framework can be used with a novel grammar-enhanced RL training procedure. Figure 2 (b) illustrates the process of generating a program from the context-free grammar. Beginning from the start symbol "A", at each step a production rule is chosen and applied to one of the non-terminals in the current string. The choice of rule is modeled as a Markov Decision Process, and we solve it with DRL. Each state is a partially derived string passed to the decoder, and we use the first output y 1 ∈ R No to represent the probability of selecting each production rule from all possible N o rules. After the rule is applied, the new string is passed back into the decoder, repeating until only terminals are contained in the sequence. We adopt leftmost derivation here to avoid the ambiguity of parsing order, so at each step the left-most non-terminal is replaced (a minimal code sketch of this derivation loop is given below). In the first experiment, we designed three tasks to evaluate whether the model can learn simple compositional rules and reasoning strategies.
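The following sketch illustrates the grammar-enhanced derivation loop described above. The grammar fragment and the uniform rule probabilities are stand-ins; in the actual model the rule probabilities come from the Transformer decoder's first output, and the full CFG is given in Appendix C.

```python
import random

# Assumed toy grammar fragment in the spirit of the Battleship CFG, not the paper's full grammar.
GRAMMAR = {
    "A": [["B"], ["N"]],
    "B": [["(>", "N", "N", ")"], ["(==", "N", "N", ")"]],
    "N": [["(size", "C", ")"], ["3"]],
    "C": [["Blue"], ["Red"], ["Purple"]],
}
NONTERMINALS = set(GRAMMAR)

def policy(state):
    """Stand-in for the neural policy: score every rule applicable to the leftmost non-terminal."""
    symbol = next(s for s in state if s in NONTERMINALS)
    rules = GRAMMAR[symbol]
    return symbol, rules, [1.0 / len(rules)] * len(rules)   # uniform probabilities

def generate(max_steps=50):
    state = ["A"]                                            # start symbol, as in the text
    for _ in range(max_steps):
        if not any(s in NONTERMINALS for s in state):
            break
        symbol, rules, probs = policy(state)
        rule = random.choices(rules, weights=probs)[0]
        i = state.index(symbol)                              # leftmost derivation
        state = state[:i] + rule + state[i + 1:]
    return " ".join(state)

print(generate())
```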
The three tasks include counting the number of visible ship tiles, locating a missing ship tile, and generalizing both strategies to unseen scenario types. Figure 3 illustrates the three tasks we designed in this experiment by providing examples of each task. Counting task. Models must select the ship color with the least number of visible tiles on the board. Each board has a unique answer, and models respond by generating a program "(topleft (coloredTiles X))" where X is a ship color. 4000 examples are used for training, and another 1000 examples are used for testing. Missing tile task. Models must select the ship that is missing a tile and identify which tile is missing. All ships are completely revealed except one, which is missing exactly one tile. Models respond by generating "(== (color Y) X)" where X is a color and Y is a location on the board. The numbers of training and test examples are the same as in the counting task. Compositionality task. Models must combine both of the above strategies to find the missing tile of the ship with the fewest visible tiles. Outputs are produced as "(Z (coloredTiles X))" where X is a color and Z is either topleft or bottomright. Each board has a unique answer. This task further evaluates compositionality by withholding question types from training. With three values for X and two for Z, there are six possible question types, and one is picked as the "held out" type. The other five "known" question types have 800 training examples each. For the held-out question type, the number of training examples is varied from 0 to 800, to test how much data is needed for generalization. Another 200 new boards of each question type are used for evaluation. More information about the model hyperparameters and training procedures is provided in Appendix B.1. We train our model in a fully supervised fashion. Accuracy for the counting and missing tile tasks is shown in Figure 3. The full neural program generation model shows strong reasoning abilities, achieving high accuracy on both the counting and missing tile tasks. We also perform an ablation analysis of the encoder filters of the model, and provide the results in Appendix D. The results for the compositionality task are summarized in Table 1. When no training data regarding the held-out question type is provided, the model cannot generalize to situations systematically different from the training data, exactly as pointed out in previous work on the compositional skills of encoder-decoder models. However, as the amount of additional training data increases, the model quickly incorporates the new question type while maintaining high accuracy on the familiar question types. On the last row of Table 1, we compare our model with another version where the decoder is replaced by two linear transformation operations which directly classify the ship type and location (details in Appendix B.1). This model has 33.0% transfer accuracy on compositional scenarios never seen during training. This suggests that the model has the potential to generalize to unseen scenarios if the task can be decomposed into subtasks that are combined together. In this experiment, we examine whether the neural network can capture the distribution of human questions as a conditioned language model. To train the model, we need to construct a training set of many paired game boards and questions.
Instead of laboriously collecting a large number of real human questions and translating them into programs by hand, we construct the dataset by sampling from a previous computational model of human question asking. More precisely, we randomly generate a large number of game boards and sample K questions for each board. For generating the boards, we first uniformly sample the configuration of the three ships, and randomly cover an arbitrary number of tiles, with the restriction that at least one ship tile is observable. Next we randomly sample K programs for each board with importance sampling based on a previously proposed cognitive model, which models the probability of a question under a given context as a normalized energy-based distribution, where ε(·) is a parameterized energy function estimating the likelihood of a question being asked by a human (considering multiple features such as question informativeness, complexity, and answer type) and Z is a normalization constant. We also randomly generate a larger set of questions to pretrain the decoder component of the model as a "language model" over questions, enabling it to better capture the grammatical structure of possible questions. Details regarding the model hyperparameters, training procedure, and pre-training procedure are provided in Appendix B.2. We evaluate the log-likelihood of reference questions generated by our full model as well as some lesioned variants of the full model, including a model without pretraining, a model with the Transformer decoder replaced by an LSTM decoder, a model with the convolutional encoder replaced by a simple MLP encoder, and a model that only has a decoder (an unconditional language model). Though the earlier sampling-based method also works on this task, we cannot compare with it here for two reasons. One is that our dataset is constructed using that method, so its likelihood should be an upper bound in our evaluation setting. Additionally, it can only approximate the log-likelihood due to an intractable normalizing constant, and thus it is difficult to directly compare with our methods. Two different evaluation sets are used: one is sampled from the same process on new boards, and the other is a small set of questions collected from human annotators. In order to calculate the log-likelihood of human questions, we use translated versions of these questions that were used in previous work, and filter out some human questions that score poorly according to the generative model used for training the neural network (Appendix B.2). A summary of the results is shown in Table 2a. The full model performs best on both datasets, suggesting that pretraining, the Transformer decoder, and the convolutional encoder are all important components of the approach. However, we find that the model without an encoder performs reasonably well too, even out-performing the full model with an LSTM decoder on the human-produced questions. This suggests that while contextual information from the board leads to improvements, it is not the most important factor for predicting human questions. To further investigate the role of contextual information and whether or not the model can utilize it effectively, we conduct another analysis. Intuitively, if there is little uncertainty about the locations of the ships, observing the board is critical since there are fewer good questions to ask.
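As an aside, the importance-sampling step used above to build the (board, question) training pairs could be sketched as follows. The candidate pool, the toy energy function, and the exp(-energy) target density are stand-ins for the grammar proposal and the cognitive energy model, so treat this only as an outline of the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed candidate programs; in the paper, proposals come from the question grammar.
CANDIDATES = ["(size Blue)", "(size Red)", "(size Purple)",
              "(== (orient Purple) H)", "(topleft (coloredTiles Red))"]

def energy(question, board):
    # Placeholder energy: pretend shorter programs are more human-like.
    return 0.1 * len(question)

def sample_questions(board, K=3, n_proposals=200):
    proposals = rng.choice(CANDIDATES, size=n_proposals)       # uniform proposal distribution
    log_w = np.array([-energy(q, board) for q in proposals])   # assumed target proportional to exp(-energy)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return list(rng.choice(proposals, size=K, p=w))            # self-normalized importance sampling

print(sample_questions(board=None))
```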
To examine the role of this uncertainty, we divide the scenarios, based on the entropy of the hypothesis space of possible ship locations, into a low-entropy set (bottom 30%), a medium-entropy set (40% in the middle), and a high-entropy set (top 30%). We evaluate the different models on the split sets of sampled data and report the results in Table 2b. When the entropy is high, it is easier to ask a generally good question like "how long is the red ship" without information about the board, so the importance of the encoder is reduced. If entropy is low, the models with access to the board have substantially higher log-likelihood than the model without an encoder. Also, the first experiment (Section 5.1) would be impossible without an encoder. Together, this implies that our model can capture important context-sensitive characteristics of how people ask questions. In this experiment, we evaluate the reinforcement learning framework proposed in Section 4 on its ability to generate novel questions from scratch, without providing a large training set. The reward for training the reinforcement agent is calculated based on the energy value of the generated question q. We transform the energy value to a proper range for the reward by taking −ε(q)/10 and clamping it between −1 and 1. The model is optimized with the REINFORCE algorithm. A baseline for REINFORCE is established simply as the average of the rewards in a batch. In order to produce higher-quality questions, we manually tune the information-related parameters of the energy function to make it more information-seeking in this experiment. This process is described in Appendix B.2. We compare the models on their ability to generate diverse questions with high expected information gain (EIG), which is defined as the expected reduction in entropy, averaged across all possible answers d ∈ A_x to a question x: EIG(x) = E_{d∈A_x}[ I(p(h)) − I(p(h|d; x)) ], where I(·) is the Shannon entropy, and p(h) and p(h|d; x) are the prior and posterior distributions over a possible ship configuration h given question x and answer d ∈ A_x. We compare our program-based framework with a simple text-based model, which has the same architecture but is trained with supervision on the text-based question dataset collected in prior work. We also compare with the supervised program-based model from the last experiment. Finally, we implement a sequence-based reinforcement learning agent that specifies the program without direct access to the grammatical rules. For this alternative RL agent, we find it necessary to pretrain for 500 epochs with stepwise supervision. The models are evaluated on 2000 randomly sampled boards, and the results are shown in Table 3. Note that any ungrammatical questions are excluded when we calculate the number of unique questions. First, when the text-based model is evaluated on new contexts, 96.3% of the questions it generates were included in the training data. We also find that its average EIG and the ratio of questions with EIG > 0 are worse than those of the supervised model trained on programs. Some of these deficiencies are due to the very limited text-based training data, but using programs instead can help overcome these limitations. With the program-based framework, we can sample new boards and questions to create a much larger dataset with executable program representations. This self-supervised training helps to boost performance, especially when combined with grammar-enhanced RL. From the table, the grammar-enhanced RL model is able to generate informative and creative questions.
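The two quantities just defined, the clamped −ε(q)/10 reward and the expected information gain, can be computed as in the following sketch; the hypothesis space and answer likelihoods in the usage example are made up for illustration.

```python
import numpy as np

def reward_from_energy(energy):
    # Reward transform described above: -energy / 10, clamped to [-1, 1].
    return float(np.clip(-energy / 10.0, -1.0, 1.0))

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_information_gain(prior, likelihood):
    """prior: p(h) over hypotheses; likelihood[d][h] = p(answer d | h, question)."""
    prior = np.asarray(prior, dtype=float)
    eig = 0.0
    for lik_d in likelihood:                      # loop over possible answers d
        joint = np.asarray(lik_d) * prior
        p_d = joint.sum()
        if p_d == 0:
            continue
        posterior = joint / p_d
        eig += p_d * (entropy(prior) - entropy(posterior))
    return eig

# Toy example: 3 equally likely ship configurations and a yes/no question.
prior = [1 / 3, 1 / 3, 1 / 3]
likelihood = [[1.0, 0.0, 0.0],    # p(answer = yes | h)
              [0.0, 1.0, 1.0]]    # p(answer = no  | h)
print(reward_from_energy(-5.11), expected_information_gain(prior, likelihood))
```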
The grammar-enhanced RL model can be trained from scratch without examples of human questions, and produces many novel questions with high EIG. In contrast, the supervised model rarely produces new questions beyond the training set. The sequence-level RL model is also comparatively weak at generating novel questions, perhaps because it is also pre-trained on human questions. It also more frequently generates ungrammatical questions. We also provide examples in Figure 4 to show the diversity of questions generated by the grammar-enhanced model, and more in the supplementary materials. Figure 4a shows novel questions the model produces, which include clever questions such as "Where is the bottom right of all the purple and blue tiles?" or "What is the size of the blue ship minus the purple ship?", while it also sometimes generates meaningless questions such as "Is the blue ship shorter than itself?" Additional examples of generated questions are provided in Appendix B.
Examples of questions and their program translations: "Is any ship two tiles long?" → (> (++ (map (lambda x (== (size x) 2)) (set AllShips))) 0); "Are there any ships in row 1?" → (> (++ (map (lambda y (and (== (rowL y) 1) (not (== (color y) Water)))) (set AllTiles))) 0); "Is part of a ship on tile 4-6?" → (not (== (color 4-6) Water)); "What is the size of the blue ship?" → (setSize (coloredTiles Blue)); "What is the size of the purple ship?" → (size Purple); "Which column is the first part of the blue ship?" → (colL (topleft (coloredTiles Blue))); "What is the orientation of the blue ship?"
With the grammar-enhanced framework, we can also guide the model to ask different types of questions, consistent with the goal-directed nature and flexibility of human question asking. The model can be queried for certain types of questions by providing different start conditions. Instead of starting derivation from the start symbol "A", we can start derivation from an intermediate state such as "B" for Boolean questions or a more complicated "(and B B)" for the composition of two Boolean questions. In Figure 4b, we show examples where the model is asked to generate four specific types of questions: true/false questions, number questions, location-related questions, and compositional true/false questions. We see that the model can flexibly adapt to new constraints and generate meaningful questions. In Figure 4c, we compare the model-generated questions with human questions, each randomly sampled from the model outputs and the human dataset. These examples again demonstrate that our model is able to generate clever and human-like questions. However, we also find that people sometimes generate questions with quantifiers such as "any" and "all", which are operationalized in program form with lambda functions. These questions are complicated in representation and not favored by our model, showing a current limitation in our model's capacity. We introduce a neural program generation framework for the question asking task under partially unobservable settings, which is able to generate creative human-like questions from human question demonstrations by supervised learning or without demonstrations by grammar-enhanced reinforcement learning. Programs provide models with a "machine language of thought" for compositional thinking, and neural networks provide an efficient means of question generation. We demonstrate the effectiveness of our method in extensive experiments covering a range of human question-asking abilities.
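For reference, the REINFORCE update with the batch-average baseline used for the grammar-enhanced RL training above can be sketched as follows; the log-probabilities and rewards in the usage lines are made-up numbers standing in for the decoder's rule choices and the clamped energy rewards.

```python
import torch

def reinforce_loss(logprobs, rewards):
    # logprobs: sum of log-probabilities of the production rules chosen for each question.
    # rewards: clamped -energy/10 values for the corresponding questions.
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    baseline = rewards.mean()                    # simple batch-average baseline
    advantage = rewards - baseline
    return -(advantage.detach() * logprobs).mean()

# Toy usage; a real run would take these values from the decoder and the energy model.
logprobs = torch.tensor([-3.2, -4.1, -2.7], requires_grad=True)
loss = reinforce_loss(logprobs, [0.5, -0.2, 0.8])
loss.backward()
print(loss.item(), logprobs.grad)
```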
The current model has important limitations. It cannot generalize to systematically different scenarios, and it sometimes generates meaningless questions. We plan to further explore the model's compositional abilities in future work. Another promising direction is to model question asking and question answering jointly within one framework, which could guide the model to a richer sense of the question semantics. Besides, allowing the agent to iteratively ask questions and try to win the game is another interesting future direction. We would also like to use our framework in dialog systems and open-ended question asking scenarios, allowing such systems to synthesize informative and creative questions. Encoder. A simple CNN with one layer of filters is used to encode the board. Intuitively, many questions are related to specific positions, so the position information should be recoverable from the encoding. On the other hand, some features of the board are translation-invariant, such as whether a ship is blocked by another ship. In order to capture the position-sensitive information as well as the translation-invariant patterns, three convolution operations with different filter sizes (1 × 1, 3 × 3, and 5 × 5) are performed in parallel on the same input. The inputs are padded accordingly to make sure the feature maps have the same width and height. The three feature maps are then concatenated along the output-channel dimension and passed through a linear projection. Formally, the output of the convolutions, c, is obtained by applying each convolution followed by a ReLU activation and concatenating the results, where Conv_k denotes a convolution with a k × k filter, ReLU(·) means applying a ReLU activation, and [A; B] means the concatenation of matrices A and B. Then c ∈ R 6×6×3Cout is projected to the encoder output e ∈ R 6×6×M by a matrix W_eo ∈ R 3Cout×M, where C_out is the number of output channels of each convolution, and M is the length of the encoded vectors. Decoder. We use the decoder from the Transformer model. With an input sequence of length L, the decoder computes the hidden states through several stacked Decoder Attention Layers. Each layer is composed of three sub-layers: a self-attention module, an attention over the encoded board, and a fully connected feed-forward network. Residual connections are employed around each sub-layer, followed by layer normalization. After N layers of attention modules, a final output layer transforms the hidden states into the output vectors y_i ∈ R No at every position i from 1 to L, where N_o is the output size. Given the output e from the encoder and the hidden representation h_{n−1} from Decoder Attention Layer n − 1, each layer computes its hidden representation by applying, in turn, the self-attention, the encoder attention, and the feed-forward sub-layers, each wrapped with a residual connection and layer normalization, where LN(·) denotes layer normalization, FC(·) is a fully connected layer, and ATT(·) and Self-ATT(·) are multi-head attention mechanisms which compute the attention over the output of the encoder e and the attention over the input h_{n−1} itself, respectively. Both are built on Multi-ATT(·), the multi-head attention mechanism described in the Transformer paper, which is a concatenation of multiple standard attention mechanisms with inputs projected using different matrices. A multi-head attention with n heads concatenates n scaled dot-product attention operations, where Q, K, V are sets of vectors called queries, keys, and values, respectively, and d_k is the dimension of the queries and keys. After N layers, we apply a linear projection and a softmax activation to h_N to get the output vectors y_1, ..., y_L. In this experiment, we use C_out = 10 and L = 50 for the model encoder.
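A sketch of the multi-scale board encoder described above is given below; the projection width M, the ordering of ReLU and concatenation, and other minor details are assumptions, while C_out = 10 and the three filter sizes follow the text.

```python
import torch
import torch.nn as nn

class BoardEncoder(nn.Module):
    def __init__(self, in_ch=5, c_out=10, m=64):
        super().__init__()
        # Three parallel convolutions (1x1, 3x3, 5x5) with "same" padding.
        self.convs = nn.ModuleList([
            nn.Conv2d(in_ch, c_out, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.proj = nn.Linear(3 * c_out, m)      # linear projection to M-dimensional vectors

    def forward(self, x):                        # x: (batch, 5, 6, 6) one-hot board
        feats = torch.cat([torch.relu(conv(x)) for conv in self.convs], dim=1)
        feats = feats.permute(0, 2, 3, 1)        # (batch, 6, 6, 3 * c_out)
        return self.proj(feats)                  # e: (batch, 6, 6, M)

enc = BoardEncoder()
print(enc(torch.zeros(2, 5, 6, 6)).shape)        # torch.Size([2, 6, 6, 64])
```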
In the decoder, each word is embedded as a 50-dimensional vector. The decoder has 2 layers and each multi-head attention module has 4 heads. The model is trained for 500 epochs using the Adam optimizer with an initial learning rate of 0.001 and a batch size of 32. To further examine the model's ability on the compositionality task, we evaluate another version of the model which replaces the decoder with two linear transformations to directly predict y_l ∈ {topleft, bottomright} and y_c ∈ {Blue, Red, Purple}. With the hidden representation c of the encoder in equation 1, y_l and y_c are calculated by linear projections of the flattened representation, where W_l ∈ R |c|×2, W_c ∈ R |c|×3, and c̄ is the flattened vector of c. In this experiment, the model encoder has the same hyper-parameters as in the first experiment. We increase the size of the decoder by setting the number of layers to 3 and the number of heads to 6. The model is also trained for 500 epochs using the Adam optimizer with the same initial learning rate of 0.001. In order to better familiarize the model with the grammar, we also pretrain the decoder for 500 epochs on a larger set of question programs. This pretraining corpus is first uniformly sampled from the PCFG grammar defining questions; then we calculate the average energy value of each program on 10,000 boards, and keep the top 2,500 unique questions. For the evaluation set of human questions, we found that some simple questions become complicated in program form. For example, the question "How many ship pieces are there in the second column?" is translated to "(++ (map (lambda y (and (== (colL y) 2) (not (== (color y) Water)))) (set All_Tiles)))". Such complicated programs score very poorly according to the energy function, so they do not appear in the training set. As a result, the average log-likelihood is extremely skewed by these unseen questions. For a more robust evaluation, we remove the last 10% of human questions with low energy values according to the energy model. The neural model for this experiment has the same hyper-parameters as in the last experiment, and is optimized with the REINFORCE algorithm with an initial learning rate of 0.0001 and a batch size of 24. To encourage exploration, we also apply an ε-greedy strategy with ε set to 0.5 at the beginning and gradually decreased to 0.01 as training continues. This model is trained for 500 epochs; within each epoch the model passes over 10,000 different boards. From some preliminary experiments, we find that the models have a strong preference for generating similar programs of relatively low complexity when the original energy values are used as rewards. Thus, we tune two parameters of the energy model as mentioned in Section 5.3, which are the two parameters corresponding to information-seeking features (denoted as f_EIG and f_EIG=0 in the original paper). We increase these two parameters from 0.1 until the reinforcement learning models are able to generate a diverse set of questions. The sequence RL baseline, which directly generates the sequence with the decoder, is trained with the MIXER algorithm, a variant of the REINFORCE algorithm widely used in sequence generation tasks. MIXER provides a smooth transition from supervised learning to reinforcement learning. This model is pretrained for 500 epochs, and trained with RL for 300 epochs. The grammar is provided in Tables 4 and 5. Table 4: Part 1 of the grammatical rules.
Rules marked with b have a reference to the Battleship game board (e.g., during evaluation the function orient looks up the orientation of a ship on the game board), while all other rules are domain-general (i.e., can be evaluated without access to a game board).
… : True if all elements in a set of numbers are equal
B → (any setB) : True if any element in a set of booleans is True
B → (all setB) : True if all elements in a set of booleans are True
… : True if the two ships are touching (diagonal does not count)
B → (isSubset setL setL) : True if the first set of locations is a subset of the second set of locations
… : Number of True elements in a set of booleans
… : Column number of location L
N → (setSize setL) : Number of elements in a set of locations
… : The leftmost of the topmost locations in the set of locations
L → (bottomright setL) : The rightmost of the bottommost locations in the set of locations
… : All locations with color C
setL → (setDifference setL setL) : Remove the second set from the first set
setL → (union setL setL) : Combine both sets
setL → (intersection setL setL) : Elements that exist in both sets
setL → (unique setL) : Unique elements in a set
We also perform an ablation test on the neural network from the first experiment (Section 5.1). Accuracy for the counting and missing tile tasks is summarized in Table 6a for the full model and lesioned variants. The full neural program generation model shows strong reasoning abilities, achieving an accuracy of 99.80% and 95.50% for the counting and missing tile tasks, respectively. The full model is compared to weakened variants with only one filter size in the encoder, either "3x3 conv only" or "1x1 conv only", and the performance of the weakened models drops dramatically on the missing tile task. To better understand the role of different filter sizes, Table 6b breaks down the errors in the missing tile task by whether the question picks the right ship (color acc.) and whether it selects the right location (location acc.). The 3 × 3 convolution filters can accurately select the correct color, but often fail to choose the right tile. The model with 1 × 1 convolution filters has poor performance for both color and location. In the current architecture, predicting the correct location requires precise information that seems to be lost without filters of different sizes. For the experiment on estimating the distribution of human questions (Experiment 5.2), Table 7 provides a full table of the log-likelihood of different models on evaluation sets of different uncertainty levels. Here we provide more examples of questions generated by our models in the generation experiment (Experiment 5.3). Figures 5, 6 and 7 provide more examples for the same settings as shown in Figure 4 in the main text. Figure 8 shows generated examples from the text-based model.
Figure 5: Novel questions generated by the grammar-enhanced model, e.g., "What is the size of blue plus the number of the row which contains the beginning of the red ship?" → (+ (size Blue) (colL (topleft (coloredTiles Red)))); 2.46; Energy: -5.11.
We introduce a model of human question asking that combines neural networks and symbolic programs, which can learn to generate good questions with or without supervised examples.
528
scitldr
The classification of images taken in special imaging environments other than air is the first challenge in extending the applications of deep learning. We report on UW-Net (Underwater Network), a new convolutional neural network (CNN) based network for underwater image classification. In this model, we simulate the visual correlation of attention with image understanding for special environments, such as fog and underwater, by constructing an inception-attention (I-A) module. The experimental results demonstrate that the proposed UW-Net achieves an accuracy of 99.3% on underwater image classification, which is significantly better than other image classification networks, such as AlexNet, InceptionV3, ResNet and SE-ResNet. Moreover, we demonstrate that the proposed I-A module can be used to boost the performance of existing object recognition networks. By substituting the inception module with the I-A module, the Inception-ResNetV2 network achieves a 10.7% top-1 error rate and a 0% top-5 error rate on the subset of ILSVRC-2012, which further illustrates the role of attention in image classification. Underwater images and videos contain a lot of valuable information for underwater scientific research. However, the image analysis systems and classification algorithms designed for natural images cannot be directly applied to underwater images due to the complex distortions that exist in them (e.g., low contrast, blurring, non-uniform brightness, non-uniform color casting and noise), and there is, to the best of our knowledge, no model for underwater image classification. Beyond the inevitable distortions exhibited in underwater images, there are three other key problems for their classification: (1) underwater images taken in different environments vary widely; (2) salient objects such as ruins, fish and divers exist not only in the underwater environment but also in air, so the features extracted from salient objects cannot be relied on primarily in the classification of underwater images; and (3) since the classification of underwater images is only a binary classification task, the structure of the designed network should be simple to avoid over-fitting. Increasing the depth and width of a CNN can usually improve the performance of the model, but is more prone to cause over-fitting when the training dataset is limited, and needs more computational resources. To remedy this issue, the inception module was proposed, which simultaneously performs multi-scale convolution and pooling at one level of a CNN to output multi-scale features. In addition, the attention mechanism has been proposed and applied in recent deep models, which takes advantage of the fact that human vision pays attention to different parts of the image depending on the recognition task. Although these strategies play an important role in advancing the field of image classification, we find that large-scale features, such as broad image areas, play a more important role in the visual attention mechanism when people interpret underwater images, which is unlike the attention mechanism applied in natural scene image classification. In this paper, we propose an underwater image classification network, called UW-Net. The overall network structure is shown in Fig. 1. Unlike other models, the UW-Net pays more attention to such features of images by constructing the inception-attention (I-A) modules and thus achieves better performance.
Figure 1: The structure of the UW-Net. The bottom part is the output of the eighth layer in the I-A module. The red area represents a higher response of features for the underwater image classification. As shown, our I-A module attends more to the relevant regions of underwater images.
The contributions of this paper are as follows: (i) to the best of our knowledge, it is the first CNN-based model for underwater image classification; (ii) an inception-attention module is proposed, which joins the multi-dimensional inception module with the attention module to realize the multiple weighting of the output of various scales of features; (iii) this work is a first attempt to simulate the visual correlation between image understanding and attended areas through I-A modules. The rest of the paper is organized as follows: Section 2 introduces the related work. The proposed UW-Net is described in Section 3. Section 4 presents the experimental results and analysis, and we summarize the paper in Section 5. As mentioned before, there is little work focusing on underwater image classification. Thus, in this section we mainly introduce the classification models designed for natural scenes and the recent attention mechanisms which are incorporated in our network. After AlexNet won the ImageNet Large Scale Visual Recognition Competition (ILSVRC), CNNs became more and more popular in image recognition tasks. Many CNNs pursue better performance by means of stacking more convolution layers, but the number of parameters is concomitantly increased. As the depth increases, the gradients of the network may vanish or explode in the training process. The "GoogLeNet" model introduced the inception architecture, in which the 1 × 1 convolution is also applied as a dimension reduction technique and to extract more nonlinear characteristics of the features. By fusing the features extracted from multi-scale convolutions, a better image representation is obtained in the deep layers of the network. Based on the initial inception structure, multiple network structures such as Inception V3 and Inception V4 were further proposed. ResNet introduced residual modules, which proved to be effective in solving the gradient vanishing problem of deep convolutional networks. An important feature of human vision is that people usually focus more on a certain area of the whole scene while ignoring other areas, which is called the attention mechanism (AM). SENet constructs the attention mechanism over feature channels, and won the ILSVRC 2017 competition. The residual attention network (RAN) also combined the attention mechanism with residual modules, and achieves a 4.8% top-5 error rate on the ILSVRC 2012 dataset. These works are all evidence of the effectiveness of the attention mechanism in deep learning models; on the other hand, none of these studies on the attention mechanism focus on such large-scale features. Although the performance of existing image classification models has exceeded that of human beings in some classification tasks, most of the existing classification models assume that images have legible texture and uniform features. However, when light propagates through water, the absorption and scattering determined by the inherent optical properties (IOP) of the water affect the process of underwater imaging.
Not only the water body, but also dissolved organic matter and small floating particles (called sea snow), whose concentration and species vary greatly, affect underwater image quality. As depth increases in water, the colors of light disappear according to their wavelengths. Artificial lighting often results in uneven illumination, creates bright spots in the image, and makes the scattering of suspended matter worse. These challenges make the design of an effective underwater image classification algorithm difficult. Moreover, human recognition of underwater images is often based on the large-scale features of the images. In the next section, we use the inception module with multiple sizes of receptive fields to extract features and simulate the human attention mechanism by combining it with an attention module emphasizing such features of underwater images. The network structure of UW-Net is constructed based on multiple I-A modules, as shown in Fig. 2. The network can be extended with more I-A modules for other complex visual classification tasks. An underwater image is first forwarded to a convolutional layer with 7 × 7 kernels to obtain large receptive fields. An auxiliary classification branch is added after two I-A modules, and the output of the auxiliary branch is used to optimize the network as part of the loss function. In what follows, we introduce the key components of the UW-Net in detail, i.e., the I-A module, consisting of an inception module and an attention module, and the classification branch. The classical inception models are constructed from convolutional kernels of multiple sizes. The feature maps are processed by different convolution kernels in one inception module, and then merged and forwarded to the next inception module directly. However, not all the features extracted by every convolution kernel are positively related to the current image classification task. For example, the features extracted by a convolution kernel with a large size tend to describe global information, which has little effect on a fine-grained image classification task even though they can be transferred to the deeper layers, and this results in a certain waste of computational resources. Underwater images exhibit large intra-class differences. Furthermore, the positions and proportions of the relevant areas vary in underwater images, and the recognition of an underwater image is based more on the global features of the image. In addition, the quality of most underwater images is poor due to the effects of lighting absorption and scattering. In view of these characteristics of underwater images, we adopt convolution kernels with larger sizes and average pooling to reduce the impact of local features of the image on the final classification. In our experiments, we find that the best classification results can be obtained by using convolution kernel sizes of 1 × 1, 5 × 5 and 7 × 7 in the inception module.
Figure 2: The structure of the UW-Net. After two I-A modules, one branch serves as an auxiliary classification branch and one branch serves as the backbone of the network. The final model has two outputs, in which the output of the auxiliary branch is used to compute the loss function.
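A minimal sketch of an inception block with the reported kernel sizes (1 × 1, 5 × 5, 7 × 7) plus an average-pooling branch is shown below; the per-branch channel widths and the 1 × 1 projection after the pooling branch are assumptions rather than the parameter settings of Table 1.

```python
import torch
import torch.nn as nn

class UWInception(nn.Module):
    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)
        self.b7 = nn.Conv2d(in_ch, branch_ch, 7, padding=3)
        # Average-pooling branch followed by an assumed 1x1 projection.
        self.pool = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(in_ch, branch_ch, 1))

    def forward(self, x):
        branches = [self.b1(x), self.b5(x), self.b7(x), self.pool(x)]
        return torch.relu(torch.cat(branches, dim=1))   # concatenate along channels

block = UWInception(in_ch=32)
print(block(torch.zeros(1, 32, 36, 36)).shape)           # torch.Size([1, 128, 36, 36])
```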
We construct the attention module based on soft attention, as shown in Fig. 3, which consists of a trunk branch and a mask branch to simulate the recognition of underwater images by human beings. Inspired by the residual network, the trunk branch takes the output of the previous layer directly as input so that the basic features of the image can be transmitted to the deep layers of the network, and gradient vanishing and gradient explosion can be mitigated. On the other hand, a down-sampling operation is first performed in the mask branch, and up-sampling by bilinear interpolation is used at the last step to keep the feature map the same size as the input. The activation functions of the first and second convolutional layers are ReLU and Sigmoid, respectively. The adaptive weight N(x) for a point x of the original feature map P(x), in the range of [0, 1], is learned by the mask branch. The output of the attention module can then be expressed as F(x) = (1 + N(x)) · P(x). For N(x) approximating 1, F(x) will be near twice the value of the original feature P(x), which means that the features that are valid for the current classification receive more attention. On the contrary, for N(x) approximating 0, the output of the attention module will approximate the original feature P(x). In the proposed UW-Net, an auxiliary classification branch is introduced at the output of the second I-A module to reduce the risk of over-fitting. The convergence curve of the UW-Net is shown in Fig. 4. Adding the auxiliary classifier not only accelerates convergence, but also improves the accuracy on the test set. The loss function J of the UW-Net combines the classification and regularization terms, where J0 is the cross entropy between the final output of the model and the real label of the image, J1 is the cross entropy between the output of the model's auxiliary classification branch and the actual label, α is the weight attenuation coefficient of the network, and L is the L2 regularization term. The UW-Net is constructed from the two I-A modules proposed above, and the detailed blocks and parameter settings are reported in Table 1 (which includes, for example, layers as in Figure 2 with output 36 × 36 × 32, a 1×I-A module as in Figure 2 with output 36 × 36 × 128, and a 1×Inception block as in Section 3.1.1 with output 17 × 17 × 512). Before the first I-A module, the size of the image is reduced to 36 × 36, and the number of channels is increased to 32 by two convolutional layers and one pooling operation. After the I-A module, the size of the feature map is reduced and the number of channels is increased. The final prediction of the model is obtained after an average pooling (Avgpool) and a fully connected (FC) layer. In this section, a series of experiments is conducted to demonstrate the performance of the UW-Net on underwater image classification. Additionally, the effectiveness of the proposed I-A module in improving the performance of existing inception-based networks is investigated. We compare the UW-Net with several typical image classification models and inception-based networks. The models for comparison are re-trained on the same dataset without changing their structure; only the parameters are optimized. To ensure the diversity of underwater images, we collect more than 4,000 underwater images from the ImageNet dataset, the JAMSTEC dataset, an underwater rock image dataset, and online underwater images. These images are labeled as underwater images. In addition, more than 5,000 non-underwater images from ImageNet, including birds, cars, food, airplanes, cats, etc., are selected and labeled as non-underwater images. The proportion of samples used for training and testing is 70% and 30%, respectively.
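Before turning to the training details, the following is a minimal sketch of the mask-branch attention module described at the beginning of this section; the channel count, the use of max pooling for the down-sampling step, and the kernel sizes are assumptions, while the Sigmoid mask, bilinear up-sampling, and the (1 + N(x)) · P(x) combination follow the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, p):                                 # p = P(x), the trunk features
        m = F.max_pool2d(p, 2)                            # down-sample (assumed max pooling)
        m = torch.relu(self.conv1(m))                     # first conv, ReLU
        m = torch.sigmoid(self.conv2(m))                  # second conv, Sigmoid -> N(x) in [0, 1]
        n = F.interpolate(m, size=p.shape[-2:], mode="bilinear", align_corners=False)
        return (1.0 + n) * p                              # trunk branch passes p through unchanged

att = MaskAttention(128)
print(att(torch.zeros(1, 128, 36, 36)).shape)
```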
To reduce the risk of over-fitting, we augment the data by random cropping and flipping. We use a previously proposed initialization method to initialize the weights, and train the UW-Net using SGD (stochastic gradient descent) with a mini-batch size of 32. The weight decay, momentum and initial learning rate are set to 0.001, 0.9 and 0.001, respectively. The learning rate is decreased by a factor of ten at 1k and 2k iterations. Training ends at 3k iterations. The loss curve is shown in Fig. 5.
Figure 6: Visualized examples of regional contributions to the final classification. From top to bottom: the output of the fifth layer in AlexNet, the output of the thirteenth layer in VGG16, the output of the twelfth layer in GoogLeNet, the output of the seventeenth layer in ResNet-50, the output of the seventeenth layer in SE-ResNet-50 and the output of the eighth layer in the UW-Net.
The UW-Net achieves 100% and 99.3% accuracy on the training and testing datasets, respectively. We also report comparisons with AlexNet, VGG16, InceptionV3, ResNet-50 and SE-ResNet-50 on the testing dataset in Table 2, and class activation maps (CAMs) produced by these models for underwater image classification are shown in Fig. 6. A darker red color in the CAMs represents greater importance of the region to the final classification. As shown in Fig. 6, the regions attended to by the UW-Net are concentrated more on the relevant areas compared to the competing models. The data in Table 2 show that the UW-Net is superior to the competing models for the task of underwater image classification, with fewer computation units and parameters, and higher accuracy. By adopting the I-A module in the UW-Net, we achieve higher accuracy with a shallower network. At the same time, the UW-Net requires less computation than the other inception-based models. The I-A module designed in this work can be applied not only in the UW-Net, but also to other common image classification networks. To further verify the generalization ability of the I-A module in boosting the performance of related models, we embed the proposed I-A module, with max-pooling for the down-sampling in the mask branch, into several image classification models including GoogLeNet, InceptionV3, InceptionV4 and Inception-ResNetV2. One hundred categories of images are selected from the ILSVRC-2012 dataset, including ships, sharks, dogs, cocks, etc. Each category contains about 1,300 images, for a total of 130,000 images. The training and testing sets consist of 125,000 and 5,000 images, respectively. The size of the images is 299 × 299. All the networks are tested and tuned on this dataset in the same way. The experimental results are shown in Table 3. It can be seen that the top-1 error rates obtained by substituting the inception module with the I-A module are significantly decreased. Such a result indicates the generalization ability and effectiveness of the proposed I-A module, and gives further evidence that attention-weighted large-scale image features better simulate the visual understanding mechanism in image classification. A new underwater image classification network, UW-Net, is proposed in this work, wherein an inception-attention module is constructed. In this model, we simulate the visual correlation between image understanding and attended areas through I-A modules, which join the multi-dimensional inception module with the attention module to realize the multiple weighting of the output of various scales of features.
The UW-Net achieves 100% accuracy on the training set and 99.3% accuracy on the testing set, benefiting from the I-A module's refinement of the usefulness of multi-scale features. In the future, we will try to improve the performance of other underwater image analysis models by introducing the proposed I-A module.
A visual understanding mechanism for special environment
529
scitldr
Deep networks have achieved impressive results across a variety of important tasks. However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. We propose Fortified Networks, a simple transformation of existing networks, which "fortifies" the hidden layers in a deep network by identifying when the hidden states are off of the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well. Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks, and our experiments (i) demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; (ii) suggest that our improvements are not primarily due to the problem of deceptively good results caused by degraded quality in the gradient signal (the gradient masking problem); and (iii) show the advantage of doing this fortification in the hidden layers instead of the input space. We demonstrate improvements in adversarial robustness on three datasets (MNIST, Fashion MNIST, CIFAR10), across several attack parameters, in both white-box and black-box settings, and for the most widely studied attacks (FGSM, PGD, Carlini-Wagner). We show that these improvements are achieved across a wide variety of hyperparameters. The success of deep neural networks across a variety of tasks has also driven applications in domains where reliability and security are critical, including self-driving cars BID6, health care, face recognition BID25, and the detection of malware BID17. Security concerns arise when an agent using such a system could benefit from the system performing poorly. Reliability concerns come about when the distribution of input data seen during training can differ from the distribution on which the model is evaluated. Adversarial examples BID11 result from attacks on neural network models, applying small perturbations to the inputs that change the predicted class. Such perturbations can be small enough to be unnoticeable to the naked eye. It has been shown that gradient-based methods allow one to find modifications of the input that often change the predicted class BID26 BID11. More recent work demonstrated that it is possible to create modifications such that, even when captured through a camera, they change the predicted class with high probability BID7. Some of the most prominent classes of defenses against adversarial examples include feature squeezing BID29, adapted encoding of the input (Jacob BID14), and distillation-related approaches BID20. Existing defenses provide some robustness but most are not easy to deploy. In addition, many have been shown to provide the illusion of defense by lowering the quality of the gradient signal, without actually providing improved robustness BID1. Still others require training a generative model directly in the visible space, which is still difficult today even on relatively simple datasets.
Our work differs from the approaches using generative models in the input space in that we instead employ this robustification on the distribution of the learned hidden representations, which makes the identification of off-manifold examples easier. We do this by training denoising autoencoders on top of the hidden layers of the original network. We call this method Fortified Networks.
Figure 1 (excerpt): The plot on the right shows direct experimental evidence for this hypothesis: we added fortified layers with different capacities to MLPs trained on MNIST, and display the value of the total reconstruction errors for adversarial examples divided by the total reconstruction errors for clean examples. A high value indicates success at detecting adversarial examples. Our results support the central motivation for fortified networks: that off-manifold points can much more easily be detected in the hidden space (as seen by the relatively constant ratio for the autoencoder in hidden space) and are much harder to detect in the input space (as seen by this ratio rapidly falling to zero as the input-space autoencoder's capacity is reduced).
We demonstrate that Fortified Networks (i) can be generically added into an existing network; (ii) robustify the network against adversarial attacks; and (iii) provide a reliable signal of the existence of input data that do not lie on the manifold on which the network was trained. In the sections that follow, we discuss the intuition behind the fortification of hidden layers and lay out some of the method's salient properties. Furthermore, we evaluate our proposed approach on the MNIST, Fashion-MNIST, and CIFAR10 datasets against white-box and black-box attacks. The Empirical Risk Minimization Framework. Let us consider a standard classification task with an underlying data distribution D over pairs of examples x ∈ R d and corresponding labels y ∈ [k]. We also assume that we are given a suitable loss function L(θ, x, y), for instance the cross-entropy loss. As usual, θ ∈ R p is the set of model parameters. Our goal then is to find model parameters θ that minimize the risk E (x,y)∼D [L(x, y, θ)]. This expectation cannot be computed, therefore a common approach is to minimize the empirical risk 1/N Σ_D L(x, y, θ), taking into account only the examples in a given dataset D. Adversarial Attacks and Robustness. While the empirical risk minimization framework has been very successful and often leads to excellent generalization, it has the significant limitation that it doesn't guarantee robustness, and more specifically performance on examples off the data manifold. BID19 proposed an optimization view of adversarial robustness, in which the adversarial robustness of a model is defined as a min-max problem, min_θ E (x,y)∼D [max_{δ∈S} L(θ, x + δ, y)], where S denotes the set of all points within a sphere of radius ε, which is task-specific. Larger ε values correspond to stronger attacks but may be more visually apparent. Denoising Autoencoders. Denoising autoencoders (DAEs) are neural networks which take a noisy version of an input (for example, an image) and are trained to predict the noiseless version of that input. This approach has been widely used for feature learning and generative modeling in deep learning BID4. More formally, denoising autoencoders are trained to minimize a reconstruction error or the negative log-likelihood of generating the clean input. For example, with a Gaussian log-likelihood of the clean input given the corrupted input, r_θ the learned denoising function, and C a corruption function adding Gaussian noise of variance σ², the reconstruction loss is Σ_i ||x^(i) − r_θ(C(x^(i)))||².
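A minimal denoising-autoencoder sketch matching this description is given below; the two-layer architecture, the hidden width, and the noise level are illustrative choices rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))            # r_theta(.)

def dae_loss(model, x, sigma=0.3):
    x_noisy = x + sigma * torch.randn_like(x)   # corruption C(x): Gaussian noise of variance sigma^2
    return ((model(x_noisy) - x) ** 2).mean()   # squared error = Gaussian log-likelihood up to constants

model = DAE(dim=784)
loss = dae_loss(model, torch.randn(16, 784))
loss.backward()
print(loss.item())
```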
Figure 2: Diagram illustrating a one-layer fortified network. A network is evaluated with a data sample x and its corresponding adversarial example x'. The hidden units h_k and h'_k are corrupted with noise, encoded with the encoder enc., and decoded with the decoder dec. The autoencoder (denoted by the red color) is trained to reconstruct the hidden unit h_k that corresponds to the clean input. The dotted lines are the two reconstruction costs: for benign (L_rec) and adversarial (L_adv) examples.

BID0 demonstrated that with this loss function, an optimally trained denoising autoencoder's reconstruction vector is proportional to the gradient of the log-density, r_θ(x) − x ∝ ∂ log p(x)/∂x, in the limit σ → 0. This theory establishes that the reconstruction vectors from a well-trained denoising autoencoder form a vector field which points in the direction of the data manifold. However, BID0 showed that this may not hold for points which are distant from the manifold, as these points are rarely sampled during training. In practice, denoising autoencoders are trained not just with tiny noise but also with large noise, which blurs the data distribution as seen by the learner but makes the network learn a useful vector field even far from the data.

We propose the use of DAEs inserted at crucial points between layers of the original neural network in order to denoise the transformed data points which may lie away from the original data manifold. Intuitively, the method aims to regularize the hidden representations by keeping the activations on the surface of the corresponding projected data manifold, through the application of a DAE trained on the hidden representations (on the original clean data). We argue that applying the DAEs on the hidden layers, as opposed to the raw input signal, facilitates learning while providing stronger protection from adversarial attacks. As illustrated in FIG0, we hypothesize that the more abstract representations associated with deeper networks are easier to denoise because the transformed data manifolds are flatter. The flattening of data manifolds in the deeper layers of a neural network was first noted experimentally by BID5. We provide experimental support for these claims in Section 4.

Layer fortification. Our method works by substituting a hidden layer h_k with a denoised version. We feed the signal h_k through the encoder network, E_k, and decoder network, D_k, of a DAE for layer k, which yields the denoised version h_k^fort = D_k(E_k(h_k + n_k)), where n_k is white Gaussian noise of variance σ² and appropriate shape. We call the resulting layer a fortified layer and the resulting network the fortified network corresponding to the original network.

• Reconstruction loss. For a mini-batch of N clean examples {x^(i)}, the corresponding hidden layers {h_k^(i)} are fed into a DAE loss similar to the one above: L_rec = (1/N) Σ_i ||D_k(E_k(h_k^(i) + n_k)) − h_k^(i)||².

• Adversarial loss. We use some adversarial training method to produce the perturbed version of the mini-batch, {x'^(i)}, where x'^(i) is a small perturbation of x^(i) which is designed to make the network produce the wrong answer. The corresponding hidden layer h'_k^(i) (using the perturbed rather than the original input) is fed into a similar DAE loss, L_adv = (1/N) Σ_i ||D_k(E_k(h'_k^(i) + n_k)) − h_k^(i)||², where we note that the target reconstruction for denoising is the clean version of the hidden layer, without noise and without adversarial perturbation.
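A rough sketch of such a fortified layer is given below. This is our own illustration with hypothetical names such as FortifiedLayer, not the reference implementation: it corrupts the incoming activation, denoises it with a small encoder/decoder, and returns the denoised activation together with the two losses. The clean hidden state is the reconstruction target in both cases, and the DAE is trained jointly with the rest of the network, as described next.

```python
import torch
import torch.nn as nn

class FortifiedLayer(nn.Module):
    """DAE wrapped around a hidden layer h_k; targets are always the clean h_k."""
    def __init__(self, dim, hidden_dim, sigma=0.01):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden_dim), nn.LeakyReLU())
        self.dec = nn.Linear(hidden_dim, dim)
        self.sigma = sigma

    def denoise(self, h):
        n = self.sigma * torch.randn_like(h)        # white Gaussian noise n_k
        return self.dec(self.enc(h + n))            # D_k(E_k(h_k + n_k))

    def forward(self, h_clean, h_adv=None):
        h_fort = self.denoise(h_clean)              # passed on to the next layer
        l_rec = ((h_fort - h_clean) ** 2).mean()    # L_rec (clean target)
        l_adv = torch.zeros(())
        if h_adv is not None:                       # hidden state of the perturbed x'
            l_adv = ((self.denoise(h_adv) - h_clean) ** 2).mean()  # L_adv (clean target)
        return h_fort, l_rec, l_adv
```

At test time the reconstruction error from this layer doubles as the off-manifold signal discussed below, since unusually large values indicate inputs far from the training distribution.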
For training purposes, we treat the DAEs as part of the fortified network, backpropagate through them, and train all weights jointly. Aside from the original classification loss, L_c, we also include the classification loss on adversarial examples, L'_c, and we introduce a dual objective for the DAEs. To build a fortified network, we can apply this fortification process to some or all of the layers. The final objective used for training the fortified network includes the classification loss and all reconstruction and adversarial losses: L = L_c + L'_c + λ_rec Σ_k L_rec^(k) + λ_adv Σ_k L_adv^(k), where λ_rec > 0 and λ_adv > 0 tune the strength of the DAE terms. This kind of training process allows for the production of hidden representations robust to small perturbations, and in particular, to adversarial attacks.

Off-manifold signaling. The reconstruction losses act as a reliable signal for detecting off-manifold examples (cf. Section 4). This is a particularly useful property in practice: not only can we provide more robust classification results, we can also sense and flag for the analyst or system when the original example is either adversarial or from a significantly different distribution.

Motivation for when and where to use fortified layers. We have discussed advantages to placing fortified layers in the hidden states instead of the input space (with further discussion in Section 6), but the question of where exactly fortified layers need to be placed remains unanswered. Is it just the final hidden layer? Is it every hidden layer? We outline two important considerations regarding this issue. First, in the higher-level hidden layers, it is much easier for the network to identify points which are off of the manifold or close to the margin. This is demonstrated experimentally in Fig. 1. Secondly, the higher-level hidden layers may already look like points that are not adversarial due to the effect of the adversarial perturbations in the earlier layers. While we are not aware of any formal study of this phenomenon, it is clearly possible (imagine, for example, a fortified layer on the output of the softmax, which could only identify unnatural combinations of class probabilities). Given these opposing considerations, we argue for the inclusion of multiple fortified layers across the network.

We evaluated the performance of our model as a defense against adversarial attacks. We focused on two of the most popular and well-studied attacks. Firstly, we consider the Fast Gradient Sign Method (FGSM, BID11), which is popular as it only requires a single step and can still be effective against many networks. Secondly, we consider the projected gradient descent attack BID16, which is slower than FGSM as it requires many iterations, but has been shown to be a much stronger attack BID19. Additionally, we consider both white-box attacks (where the attacker knows the model) and black-box attacks (where they don't, but they have access to the training set).

Fast Gradient Sign Method. The Fast Gradient Sign Method (FGSM) BID11 is a simple one-step attack that produces ℓ∞-bounded adversaries via the gradient-based perturbation x' = x + ε · sign(∇_x L(θ, x, y)).

Projected Gradient Descent. The projected gradient descent attack BID19, sometimes referred to as FGSM^k, is a multi-step extension of the FGSM attack characterized as follows: x^(t+1) = Π_{x+S}( x^(t) + α · sign(∇_x L(θ, x^(t), y)) ), initialized with x^(0) as the clean input x, with the adversarial example x' taken as the last step in the sequence.
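Both attacks are a few lines each. The sketch below is our own simplified version (cross-entropy as the loss L, inputs assumed to lie in [0, 1], no random restarts), where the clamp onto the ε-box plays the role of the projection Π discussed next.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """x' = x + eps * sign( grad_x L(theta, x, y) )."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """x^{t+1} = Proj_{eps-ball}( x^t + alpha * sign( grad_x L(theta, x^t, y) ) )."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # projection onto the eps-box
        x_adv = x_adv.clamp(0.0, 1.0)                       # stay in the valid input range
    return x_adv.detach()
```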
Π refers to the projection operator, which in this context means projecting the adversarial example back onto the region within an ε radius of the original data point, after each step in the adversarial attack. Finally we considered the Carlini-Wagner L2 attack BID8 which consists of joint optimization of loss maximization and minimizing the distance of the adversarial example to the original example. A significant challenge with evaluating defenses against adversarial attacks is that many attacks rely upon a network's gradient. Methods which reduce the quality of this gradient, either by making it flatter or noisier can lead to methods which lower the effectiveness of gradient-based attacks, but which are not actually robust to adversarial examples BID2 BID23. This process, which has been referred to as gradient masking or gradient obfuscation, must be analyzed when studying the strength of an adversarial defense. One method for studying the extent to which an adversarial defense gives deceptively good as a of gradient masking relies on the observation that black-box attacks are a strict subset of white-box attacks, so white-box attacks should always be at least as strong as black-box attacks. If a method reports much better defense against white-box attacks, it suggests that the selected white-box attack is underpowered as a of gradient masking. Another test for gradient masking is to run an iterative search, such as projected gradient descent (PGD) with an unlimited range for a large number of iterations. If such an attack is not completely successful, it indicates that the model's gradients are not an effective method for searching for adversarial images, and that gradient masking is occurring. Still another test is to confirm that iterative attacks with small step sizes always outperform single-step attacks with larger step sizes (such as FGSM). If this is not the case, it may suggest that the iterative attack becomes stuck in regions where optimization using gradients is poor due to gradient masking. Additionally, BID1 discussed the Backward Pass Differentiable Approximation (BPDA) attack to cover cases where a defense employs a transformation which is clearly nondifferentiable or reduces the quality of the gradients. Because we pass gradients through the fortified layers in the normal training of our network, it is unlikely that the quality of these gradients is significantly deteriorated, and there isn't a reason to expect that they would be because the fortified layers are relatively shallow and use normal activation functions (i.e. no non-differentiable functions.). Additionally we ran additional experiments using the identity function version of BPDA on Fashion MNIST FGSM (ε = 0.3) with standard deviation across five trials. Our adversarial training baseline achieved an accuracy of 84.64±0.48 on this task, fortified networks with the normal attack achieved an accuracy of 89.50±0.45, and fortified networks with the BPDA version of the attack (treating the autoencoder as an identity) achieved an accuracy of 89.88±0.30, which corresponds to a weaker attack. For details about the specifics of our model architectures and hyperparameters we refer readers to the Appendix. With all experiments, we use the same attacks (with identical parameters) at training and test time to generate adversarial examples. An important point to note here is that all of the autoencoders in our fortified layers used a single hidden layer with tied weights. 
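These diagnostics translate into simple scripted sanity checks. The fragment below is only a sketch: it assumes a trained model, a test loader, and the hypothetical fgsm_attack/pgd_attack helpers from the earlier snippet, and runs the unbounded-attack and single-step-versus-iterative tests described above.

```python
# Sanity checks for gradient masking (sketch; assumes `model`, `loader`,
# and the fgsm_attack / pgd_attack helpers defined earlier).
def accuracy_under(attack, model, loader, **kw):
    correct = total = 0
    for x, y in loader:
        x_adv = attack(model, x, y, **kw)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# 1) An effectively unbounded attack should drive accuracy to ~0;
#    if it does not, gradients are a poor search direction (masking).
acc_unbounded = accuracy_under(pgd_attack, model, loader,
                               eps=1.0, alpha=0.01, steps=200)

# 2) Many small iterative steps should beat one large step;
#    the reverse ordering is another hint of gradient masking.
acc_fgsm = accuracy_under(fgsm_attack, model, loader, eps=0.3)
acc_pgd = accuracy_under(pgd_attack, model, loader, eps=0.3, alpha=0.01, steps=40)
print(acc_unbounded, acc_fgsm, acc_pgd)
```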
We also performed an analytical experiment on an RNN language model to study whether fortified networks can detect when the model is being fed outputs from its own sampling process (sampling mode), having been trained on ground-truth input sequences. To this end we train a language model on the standard Text8 dataset, which is derived from Wikipedia articles. We trained a single-layer LSTM with 1000 units at the character level, and included fortified layers between the hidden states and the output at each time step. With 50 sampling steps, the fortified layers had a reconstruction error on average 103% of the teacher-forcing reconstruction error. With 180 sampling steps, this value increased to 112%. With 300 sampling steps it increased even further, to 134%. This is clear evidence that the outputs move off of the manifold with more sampling steps, and that this is effectively measured by fortified networks.

We ran with many hyperparameters for the fortified layers to demonstrate the generality of the improvement given by fortified networks. We ran FGSM on Fashion-MNIST (ε = 0.3) while varying the amount of noise injected and the weighting on the reconstruction loss. We also varied the amount of noise and the choice of losses in the ablation experiment in Table 7. We achieved consistent improvement across a variety of settings but found the largest improvement when using both the adversarial and clean reconstruction losses and a small amount of noise. Thus, we see that a consistent improvement is achieved when the weighting on the reconstruction part of the loss and the amount of noise injected are varied over several orders of magnitude.

Using Generative Models as a Defense. The observation that adversarial examples often consist of points off of the data manifold, and that deep networks may not generalize well to these points, motivated BID12 BID24 BID18 to consider the use of generative models as a defense against adversarial attacks. Ilyas et al. FORMULA0; BID10 also showed the existence of adversarial examples which lie on the data manifold, and showed that training against adversarial examples forced to lie on the manifold is an effective defense. Our method shares a closely related motivation with these prior works, with a key difference being that we propose to consider the manifold in the space of learned representations instead of considering the manifold directly in the visible space. One motivation for this is that the learned representations have a simpler statistical structure, which makes the task of modeling this manifold and detecting unnatural points much simpler.

Table 2: CIFAR-10 PGD results with (non-ResNet) CNNs. In these experiments we used a fortified block (a single convolutional autoencoder) following each convolutional layer. Both experiments were run for 200 epochs, with all hyperparameters and architecture kept the same with the exception of the fortified layer being added. We considered different types of baselines: 'Baseline - no new layers' means we simply removed the fortified block. 'Baseline - extra layers' means that we added extra layers to match the capacity of the fortified layers, but only gave half of these extra layers activations, as the fortified block has two layers but only one activation. 'Baseline - extra activations' means that we added an activation following each layer, giving more activations in total than the Fortified Network.
Learning the distribution directly in the visible space is still very difficult (even state-of-the-art models fall short of real data on metrics like Inception Score) and requires a high-capacity model. Additionally, working in the space of learned representations allows for the use of a relatively simple generative model, in our case a small denoising autoencoder. One recent approach proposed to work around these challenges of working in the visible space by using the Deep Image Prior instead of an actual generative model. While this has the advantage of being a model that doesn't require a special training procedure (as the Deep Image Prior is a separate optimization process for each example), it may be limited in the types of adversarial attacks that it is resistant to, and it would provide no defense against adversarial attacks which are in the range of a convolutional network, which have been shown to exist BID28.

Table 4: Accuracies against white-box attacks on Fashion MNIST. For PGD we used ε = 0.1, and for FGSM we used ε = 0.1 and ε = 0.3. Compared with DefenseGAN BID24.

Table 5: Left: Accuracies against black-box MNIST attacks with adversarial training (FGSM), reporting 50/50 compared to previous works. The test error on clean examples is in parentheses. Right: We ran a fortified network on Fashion-MNIST using adversarial training with PGD for a variety of ε values, each for 5 epochs. The motivation behind this experiment, suggested by BID1, is confirming whether unbounded (ε = 1) adversarial attacks are able to succeed. A defense which succeeds primarily by masking or obfuscating the gradients would fail to bring the accuracy to zero even with an unbounded attack. As can be seen, unbounded attacks against Fortified Networks succeed when given a sufficiently large ε, which is evidence against gradient masking.

Another key difference between our work and BID24 is that both DefenseGAN and the Invert-and-Classify approach use an iterative search procedure at inference time to map observed data points onto nearby points on the range of the generator. On the other hand, our approach uses small denoising autoencoders that are used in the same way (i.e., a simple forward application) during both training and testing. The use of such an iterative procedure presents challenges for evaluation, as it is possible for gradients to vanish while doing backpropagation through such a procedure, which may lead to an overestimate of the strength of the defense due to the gradient masking problem BID22 BID1. One indicator of the gradient masking problem is black-box attacks outperforming white-box attacks, which is an indicator of under-powered attacks, as black-box attacks are a strict subset of white-box attacks. This indicator of gradient obfuscation was present in the work of BID24, where black-box attacks were generally stronger against their defense, but with our method we observe very similar defense quality against black-box and white-box attacks. BID12 BID18 both considered using an autoencoder as a pre-processing step in the input space. Interestingly, BID18 used a loss function defined in the space of the hidden states, but still used autoencoders directly in the input space.

Adversarial Hidden State Matching. BID9 demonstrate that adversarially matching the hidden layer activations of regular and adversarial examples improves robustness.
This work shared the same motivation of using the hidden states to improve robustness, but differed in that they used an adversarial objective and worked in the original hidden states instead of using a generative model (in our case, the DAE in the fortified layers). We present direct experimental comparisons with their work in section 5. BID15 proposed a method which involves matching the logit (pre-softmax outputs) values for the original samples with the logit values ing from adversarial examples. Denoising Feature Matching. BID27 proposed to train a denoising autoencoder in the hidden states of the discriminator in a generative adversarial network. The generator's parameters are then trained to make the reconstruction error of this autoencoder small. This has the effect of encouraging the generator to produce points which are easy for the model to reconstruct, which will include true data points. Both this and Fortified Networks use a learned denoising autoencoder in the hidden states of a network. A major difference is that the denoising feature matching work focused on generative adversarial networks and tried to minimize reconstruction error through a learned generator network, whereas our approach targets the adversarial examples problem. Additionally, our objective encourages the output of the DAE to denoise adversarial examples so as to point back to the hidden state of the original example, which is different from the objective in the denoising feature matching work, which encouraged reconstruction error to be low on states from samples from the generator network. Adversarial Spheres. BID10 studied the existence of adversarial examples in the task of classifying between two hollow concentric shells. Intriguingly, they prove and construct adversarial examples which lie on the data manifold (although also looked for such examples experimentally using GANs.) The existence of such on-manifold adversarial examples demonstrates that a simplified version of our model trained with only L rec could not protect against all adversarial examples. However, training with L adv encourages the fortified layers to map back from points which are not only off of the manifold, but also to map back from points which are hard to classify, allowing Fortified Networks to also potentially help with on-manifold adversarial examples as well. Protecting against adversarial examples could be of paramount importance in mission-critical applications. We have presented Fortified Networks, a simple method for the robustification of existing deep neural networks. Our method is practical, as fortifying an existing network entails introducing DAEs between the hidden layers of the network, which can be automated. Furthermore, the DAE reconstruction error at test time is a reliable signal of distribution shift, which can in examples unlike those encountered during training. High error can signify either adversarial attacks or significant domain shift; both are important cases for the analyst or system to be aware of. Moreover, fortified networks are efficient: since not every layer needs to be fortified to achieve improvements, fortified networks are an efficient way to improve robustness to adversarial examples. For example, we have shown improvements on ResNets where only two fortified layers are added, and thus the change to the computational cost is very slight. 
Finally, fortified networks are effective, as they improve on adversarial defense on three datasets (MNIST, Fashion MNIST, and CIFAR10), across a variety of attack parameters (including the most widely used ε values), across three widely studied attacks (FGSM, PGD, Carlini-Wagner L2), and in both the black-box and white-box settings. A EXPERIMENTAL SETUP All attacks used in this work were carried out using the Cleverhans BID21 ) library. A.1 WHITE-BOX ATTACKS Our convolutional models (Conv, in the tables) have 2 strided convolutional layers with 64 and 128 filters followed by an unstrided conv layer with 128 filters. We use ReLU activations between layers then followed by a single fully connected layer. The convolutional and fully-connected DAEs have a single bottleneck layer with leaky ReLU activations with some ablations presented in the table below. With white-box PGD attacks, we used only convolutional DAEs at the first and last conv layers with Gaussian noise of σ = 0.01 whereas with FGSM attacks we used a DAE only at the last fully connected layer. The weight on the reconstruction error λ rec and adversarial cost λ adv were set to 0.01 in all white-box attack experiments. We used the Adam optimizer with a learning rate of 0.001 to train all models. The table below lists a few ablations with different activation functions in the autoencoder Our black-box are based on a fully-connected substitute model (input-200-200-output), which was subsequently used to attack a fortified convolutional network. The CNN was trained for 50 epochs using adversarial training, and the predictions of the trained CNN were used to train the substitute model. 6 iterations of Jacobian data augmentation were run during training of the substitute, with λ = 0.1. The test set data holdout for the adversary was fixed to 150 examples. The learning rate was set to 0.003 and the Adam optimizer was used to train both models. TAB0: More attack steps to uncover gradient masking effects. Different epsilon values at attack time. We applied attacks of different ε values to a network trained with ε = 0.03. Results shown in tab. 9.More attack iterations. We ran attacks for more steps at test time, on a network trained on attacks of 7 steps. Results shown in tab. 10.
Better adversarial training by learning to map back to the data manifold with autoencoders in the hidden states.
530
scitldr
Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset. In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training. In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value. This is contrary to the of recent related works such as , and we identify the reason for this contradiction. In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class. We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin. The reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduces a new direction to make neural networks robust against them. Training neural networks is challenging and involves making several design choices. Among these are the architecture of the network, the training loss function, the optimization algorithm used for training, and their hyperparameters, such as the learning rate and the batch size. Most of these design choices influence the solution obtained by the training procedure and have been studied in detail BID9 BID4 BID5; BID17 BID19. Nevertheless, one choice has been mostly taken for granted when the network is trained for a classification task: the training loss function. Cross-entropy loss function is almost the sole choice for classification tasks in practice. Its prevalent use is backed theoretically by its association with the minimization of the Kullback-Leibler divergence between the empirical distribution of a dataset and the confidence of the classifier for that dataset. Given the particular success of neural networks for classification tasks BID11 BID18 BID5, there seems to be little motivation to search for alternatives for this loss function, and most of the software developed for neural networks incorporates an efficient implementation for it, thereby facilitating its use. Recently there has been a line of work analyzing the dynamics of training a linear classifier with the cross-entropy loss function BID15 b; BID7. They specified the decision boundary that the gradient descent algorithm yields on linearly separable datasets and claimed that this solution achieves the maximum margin.1 However, these claims were observed not to hold in the simple experiments we ran. For example, FIG6 displays a case where the cross-entropy minimization for a linear classifier leads to a decision boundary which attains an extremely poor margin and is nearly orthogonal to the solution given by the hard-margin support vector machine (SVM).We set out to understand this discrepancy between the claims of the previous works and our observations on the simple experiments. We can summarize our contributions as follows. Cross-entropy min. decision boundary (poor margin)Figure 1: Orange and blue points represent the data from two different classes in R 2. 
Cross-entropy minimization for a linear classifier on the given training points leads to the decision boundary shown with the solid line, which attains a very poor margin and is almost orthogonal to the solution given by the SVM.1. We analyze the minimization of the cross-entropy loss for a linear classifier by using only two training points, i.e., only one point from each of the two classes, and we show that the dynamics of the gradient descent algorithm could yield a poor decision boundary, which could be almost orthogonal to the boundary with the maximum margin.2. We identify the source of discrepancy between our observations and the claims of the recent works as the misleading abbreviation of notation in the previous works. We clarify why the solution obtained with cross-entropy minimization is different from the SVM solution.3. We show that for linearly separable datasets, if the features of the training points lie in an affine subspace, and if the cross-entropy loss is minimized by a gradient method with no regularization to train a linear classifier, the margin between the decision boundary of the classifier and the training points could be much smaller than the optimal value. We verify that when a neural network is trained with the cross-entropy loss to classify two classes from the CIFAR-10 dataset, the output of the penultimate layer of the network indeed produces points that lie on an affine subspace.4. We show that if there is no explicit and effective regularization, the weights of the last layer of a neural network could grow to infinity during training with a gradient method. Even though this has been observed in recent works as well, we are the first to point out that this divergence drives the confidence of the neural network to 100% at almost every point in the input space if the network is trained for long. In other words, the confidence depends heavily on the training duration, and its exact value might be of little significance as long as it is above 50%.5. We introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class -instead of only one point from any class. We show that the decision boundary of a linear classifier trained with differential training indeed produces the SVM solution with the maximum hard margin. We start with a simple binary classification problem. Given two points x ∈ R d and −y ∈ R d from two different classes, we can find a linear classifier by minimizing the cross-entropy loss function log(e −w x + 1) + log(e −w ỹ + 1), DISPLAYFORM0. Unless the two points x and −y are equal, the function does not attain its minimum at a finite value ofw. Consequently, if the gradient descent algorithm is used to minimize, the iterate at time k,w[k], diverges as k increases. The following theorem characterizes the growth rate ofw[k] and its direction in the limit by using a continuous-time approximation to the gradient descent algorithm. Theorem 1. Given two points x ∈ R d and −y ∈ R d, letx and −ỹ denote [x 1] and [−y 1], respectively. Without loss of generality, assume x ≤ y. If the two points are in different classes and we minimize the cross-entropy loss DISPLAYFORM1 where σ x = x 2, σ xy =x ỹ and σ y = ỹ 2.Note that first d coordinates of represent the normal vector of the decision boundary obtained by minimizing the cross-entropy loss. This vector is different from x + y, which is the direction of the maximum-margin solution given by the SVM. 
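Theorem 1 is easy to probe numerically. The NumPy sketch below (our own illustration, with an arbitrarily chosen pair of points of very different norms satisfying x·y = 1, the regime examined in the corollary that follows) runs gradient descent on the two-point cross-entropy loss and compares the resulting direction with the maximum-margin direction x + y.

```python
import numpy as np

# Two points from different classes: x and -y, with ||x|| << ||y|| and x.y = 1.
x = np.array([0.1, 0.0])
y = np.array([10.0, 3.0])          # the second-class point is -y
x_t = np.append(x, 1.0)            # augmented point x~ = [x, 1]
y_t = np.append(y, -1.0)           # augmented point y~ = [y, -1]

w = np.zeros(3)                    # w~ = [w, b]
lr = 0.05
for _ in range(100000):            # GD on log(1+e^{-w.x~}) + log(1+e^{-w.y~})
    g = -x_t / (1.0 + np.exp(w @ x_t)) - y_t / (1.0 + np.exp(w @ y_t))
    w -= lr * g

def angle(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

w_ce = w[:2]                       # normal vector found by cross-entropy minimization
w_svm = x + y                      # maximum-margin (SVM) direction for two points
print("angle between cross-entropy and SVM directions:", angle(w_ce, w_svm))
# With disparate norms the angle is far from 0 and keeps growing toward 90 degrees
# as training continues, i.e., the margin becomes very poor.
```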
In fact, the direction in could be almost orthogonal to the SVM solution in certain cases, which implies that the margin between the points and the decision boundary could be much smaller than the optimal value. Corollary 1 describes a subset of these cases. Corollary 1. Given two points x and −y in R d, let ψ denote the angle between the solution given by and the solution given by the SVM, i.e., (x + y). If x y = 1, then DISPLAYFORM2 where σ x = x 2 +1 and σ y = y 2 +1. Consequently, as x / y approaches 0 while maintaining the condition x y = 1, the angle ψ converges to π/2. Remark 1. Corollary 1 shows that if x and −y have disparate norms, the minimization of the crossentropy loss with gradient descent algorithm could lead to a direction which is almost orthogonal to the maximum-margin solution. It may seem like this problem could be avoided with preprocessing the data so as to normalize the data points. However, this approach will not be effective for neural networks: if we consider an L-layer neural network, w φ L−1 (x), and regard the first L − 1 layers, φ L−1 (·), as a feature mapping, preprocessing a dataset {x i} i∈I will not produce a normalized set of features {φ L−1 (x i)} i∈I. Note that we could not normalize {φ L−1 (x i)} i∈I directly either, since the mapping φ L−1 (·) evolves during training. Remark 2. Theorem 1 shows that the norm of w keeps growing unboundedly as the training continues. The same behavior will be observed for larger datasets in the next sections as well. Since the "confidence" of the classifier for its prediction at a point x is given by DISPLAYFORM3 this unbounded growth of w drives the confidence of the classifier to 100% at every point in the input space, except at the points on the decision boundary, if the algorithm is run for long. Given the lack of effective regularization for neural networks, a similar unbounded growth is expected to be observed in neural network training as well, which is mentioned in BID1. As a , the confidence of a neural network might be highly correlated with the training duration, and whether a neural network gives 99% or 51% confidence for a prediction might be of little importance as long as it is above 50%. In other words, regarding this confidence value as a measure of similarity between an input and the training dataset from the most-likely class should be reconsidered. In this section, we examine the binary classification of a linearly separable dataset by minimizing the cross-entropy loss function. Recently, this problem has also been studied in BID16 a; BID7. We restate an edited version of the main theorem of, followed by the reason of the edition. Theorem 2 (Adapted from Theorem 3 of). Given two sets of points {x i} i∈I and {−y j} j∈J that are linearly separable in R d, letx i and −ỹ j denote [x i 1] and [−y j 1], respectively, for all i ∈ I, j ∈ J. Then the iterate of the gradient descent algorithm,w(t), on the cross-entropy loss function DISPLAYFORM0 with a sufficiently small step size will converge in direction: DISPLAYFORM1 where w is the solution to DISPLAYFORM2 The solution given in Theorem 2 was referred in, and consequently in the other works, as the maximum-margin solution. However, due to the misleading absence of the bias term in the notation, this is incorrect. 
Given the linearly separable sets of points {x i} i∈I and {−y j} j∈J, the maximum-margin solution given by the SVM solves DISPLAYFORM3 On the other hand, the solution given by Theorem 2 corresponds to DISPLAYFORM4 Even though the sets of constraints for both problems are identical, their objective functions are different, and consequently, the solutions are different. As a , the decision boundary obtained by crossentropy minimization does not necessarily attain the maximum hard margin. In fact, as the following theorem shows, its margin could be arbitrarily worse than the maximum margin. Theorem 3. Assume that the points {x i} i∈I and {−y j} j∈J are linearly separable and lie in an affine subspace; that is, there exist a set of orthonormal vectors {r k} k∈K and a set of scalars DISPLAYFORM5 Let w, · + B = 0 denote the decision boundary obtained by minimizing the cross-entropy loss, i.e. the pair (w, B) solves DISPLAYFORM6 Then the minimization of the cross-entropy loss yields a margin smaller than or equal to DISPLAYFORM7 where γ denotes the optimal hard margin given by the SVM solution. Remark 3. Theorem 3 shows that if the training points lie in an affine subspace, the margin obtained by the cross-entropy minimization will be smaller than the optimal margin value. As the dimension of this affine subspace decreases, the cardinality of the set K increases and the term k∈K ∆ 2 k could become much larger than 1/γ 2. Therefore, as the dimension of the subspace containing the training points gets smaller compared to the dimension of the input space, cross-entropy minimization with a gradient method becomes more likely to yield a poor margin. Note that this argument also holds for classifiers of the form w φ(x) with the fixed feature mapping φ(·).The next theorem relaxes the condition of Theorem 3 and allows the training points to be near an affine subspace instead of being exactly on it. Note that the ability to compare the margin obtained by cross-entropy minimization with the optimal value is lost. Nevertheless, it highlights the fact that same set of points could be assigned a different margin by cross-entropy minimization if all of them are shifted away from the origin by the same amount in the same direction. Theorem 4. Assume that the points {x i} i∈I and {−y j} j∈J in R d are linearly separable and there exist a set of orthonormal vectors {r k} k∈K and a set of scalars {∆ k} k∈K such that DISPLAYFORM8 Let w, · + B = 0 denote the decision boundary obtained by minimizing the cross-entropy loss, i.e. the pair (w, B) solves DISPLAYFORM9 Then the minimization of the cross-entropy loss yields a margin smaller than or equal to DISPLAYFORM10 Remark 4. Both Theorem 3 and Theorem 4 consider linearly separable datasets. If the dataset is not linearly separable, BID7 predicts that the normal vector of the decision boundary, w, will have two components, one of which converges to a finite vector and the other diverges. The diverging component still has the potential to drive the decision boundary to a direction with a poor margin. In fact, the margin is expected to be small especially if the points intruding into the opposite class lie in the same subspace as the optimal normal vector for the decision boundary. 
In this work, we focus on the case of separable datasets as this case provides critical insight into the issues of state-of-the-art neural networks, given they can easily attain zero training error even on randomly generated datasets, which indicates the linear separability of the features obtained at their penultimate layers . In previous sections, we saw that the cross-entropy minimization could lead to poor margins, and the main reason for this was the appearance of the bias term in the objective function of (P2). In order to remove the effect of the bias term, consider the SVM problem (P1) and note that this problem could be equivalently written as minimize w w 2 2 subject to w, x i + y j ≥ 2 ∀i ∈ I, ∀j ∈ Jif we only care about the weight parameter w. This gives the hint that if we use the set of differences {x i + y j : i ∈ I, j ∈ J} instead of the individual sets {x i} i∈I and {−y j} j∈J, the bias term could be excluded from the problem. This was also noted in BID8 BID6 previously. Indeed, this approach allows obtaining the SVM solution with a loss function similar to the cross-entropy loss, as the following theorem shows. Theorem 5. Given two sets of points {x i} i∈I and {−y j} j∈J that are linearly separable in R d, if we solve min DISPLAYFORM0 by using the gradient descent algorithm with a sufficiently small learning rate, the direction of w converges to the direction of maximum-margin solution, i.e. DISPLAYFORM1 where w SVM is the solution of (P3).Proof. Apply Theorem 2 by replacing the sets {x i} i∈I and {−y j} j∈J with {x i + y j} i∈I,j∈J and the empty set, respectively. Then the minimization of the loss function FORMULA16 Since w SVM is the solution of (P3), we obtain w = 1 2 w SVM, and the claim of the theorem holds. Remark 5. Theorem 5 is stated for the gradient descent algorithm, but the identical statement could be made for the stochastic gradient method as well by invoking the main theorem of BID20.Minimization of the cost function yields the weight parameterŵ of the decision boundary. The bias parameter, b, could be chosen by plotting the histogram of the inner products {ŵ, x i} i∈I and {ŵ, −y j} j∈J and fixing a value forb such that DISPLAYFORM2 DISPLAYFORM3 The largest hard margin is achieved bŷ DISPLAYFORM4 However, by choosing a larger or smaller value forb, it is possible to make a tradeoff between the Type-I and Type-II errors. The cost function includes a loss defined on every pair of data points from the two classes. This cost function can be considered as the cross-entropy loss on a new dataset which contains |I| × |J| points. There are two aspects of this fact:1. When standard loss functions are used for classification tasks, we need to oversample or undersample either of the classes if the training dataset contains different number of points from different classes. This problem does not arise when we use the cost function. 2. Number of pairs in the new dataset, |I| × |J|, will usually be much larger than the original dataset, which contains |I| + |J| points. Therefore, the minimization of might appear more expensive than the minimization of the standard cross-entropy loss computationally. However, if the points in different classes are well separated and the stochastic gradient method is used to minimize, the algorithm achieves zero training error after using only a few pairs, which is formalized in Theorem 6. Further computation is needed only to improve the margin of the classifier. 
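For the linear case, differential training can be sketched in a few lines. The NumPy snippet below is our own minimal illustration, not the authors' code: it minimizes the logistic loss over the pairwise sums x_i + y_j with stochastic gradient steps, and then picks the bias from the two extreme inner products, here with the midpoint rule that gives the largest hard margin.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal([3.0, 3.0], 0.5, size=(40, 2))     # positive class, the points {x_i}
B = rng.normal([-3.0, -3.0], 0.5, size=(40, 2))   # negative class, the points {-y_j}

# All pairwise sums x_i + y_j = a_i - b_j (one "pair sample" per class pair).
pairs = (A[:, None, :] - B[None, :, :]).reshape(-1, 2)

w = np.zeros(2)
lr = 0.1
for _ in range(2000):                        # SGD on log(1 + exp(-w . (x_i + y_j)))
    d = pairs[rng.integers(len(pairs))]
    s = w @ d
    w += lr * d * np.exp(-np.logaddexp(0.0, s))   # = d * sigmoid(-s); no bias in this loss

# Choose the bias afterwards from the inner-product histogram (midpoint rule).
b = -(np.min(A @ w) + np.max(B @ w)) / 2.0

margin = min(np.min(A @ w + b), np.min(-(B @ w + b))) / np.linalg.norm(w)
print("normal direction:", w / np.linalg.norm(w), "hard margin:", margin)
```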
In addition, in our experiments to train a neural network to classify two classes from the CIFAR-10 dataset, only a few percent of |I| × |J| points were observed to be sufficient to reach a high accuracy on the training dataset. Theorem 6. Given two sets of points {x i} i∈I and {−y j} j∈J that are linearly separable in R d, assume the cost function is minimized with the stochastic gradient method. Define R x = max{x i − x i : i, i ∈ I}, R y = max{y j − y j : j, j ∈ J} and let γ denote the hard margin that would be obtained with the SVM: DISPLAYFORM5 If 2γ ≥ 5 max(R x, R y), then the stochastic gradient algorithm produces a weight parameter,ŵ, only in one iteration which satisfies the inequalities (7a)-(7b) along with the bias,b, given by. In this section, we present numerical experiments supporting our claims. Differential training. In Figure 2, we show the decision boundaries of two linear classifiers, where one of them is trained by minimizing the cross-entropy loss, and the other through differential training. Unlike the example shown in FIG6, here the data do not exactly lie in an affine subspace. In particular, one of the classes is composed of 10 samples from a normal distribution with mean and variance 25, and the other class is composed of 10 samples from a normal distribution with mean and variance 25. As can be seen from the figure, the cross-entropy minimization yields a margin that is smaller than differential training, even though when the training dataset is not low-dimensional, which is predicted by Theorem 4. Cross-entropy min. boundary Figure 2: Classification boundaries obtained using differential training and cross-entropy minimization. The margin recovered by cross-entropy minimization is worse than differential training even when the training dataset is not low-dimensional. Low-dimensionality. We empirically evaluated if the features obtained at the penultimate layer of a neural network indeed lie in a low-dimensional affine subspace. For this purpose, we trained a convolutional neural network architecture to classify horses and planes from the CIFAR-10 dataset BID10. FIG3 shows the cumulative variance explained for the features that feed into the soft-max layer as a function of the number of principle components used. We observe that the features, which are the outputs of the penultimate layer of the network, lie in a low-dimensional affine subspace, and this holds for a variety of training modalities for the network. This observation is relevant to Remark 3. The dimension of the subspace containing the training points is at most 20, which is much smaller than the dimension of the feature space, 84. Consequently, cross-entropy minimization with a gradient method is expected to yield a poor margin on these features. We compare our with related works and discuss their implications for the following subjects. Adversarial examples. State-of-the-art neural networks have been observed to misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset (; BID3 ; . Our reveal that the combination of gradient methods, cross-entropy loss function and the low-dimensionality of the training dataset (at least in some domain) has a responsibility for this problem. Note that SVM with the radial basis function was shown to be robust against adversarial examples, and this was attributed to the high nonlinearity of the radial basis function in BID3. 
Given that the SVM uses neither the cross entropy loss function nor the gradient descent algorithm for training, we argue that the robustness of SVM is no surprise -independent of its nonlinearity. Lastly, effectiveness of differential training for neural networks against adversarial examples is our ongoing work. The activations feeding into the soft-max layer could be considered as the features for a linear classifier. Plot shows the cumulative variance explained for these features as a function of the number of principle components used. Almost all the variance in the features is captured by the first 20 principle components out of 84, which shows that the input to the soft-max layer resides predominantly in a low-dimensional subspace. Low-dimensionality of the training dataset. As stated in Remark 3, as the dimension of the affine subspace containing the training dataset gets very small compared to the dimension of the input space, the training algorithm will become more likely to yield a small margin for the classifier. This observation confirms the of BID13, which showed that if the set of training data is projected onto a low-dimensional subspace before feeding into a neural network, the performance of the network against adversarial examples is improved -since projecting the inputs onto a low-dimensional domain corresponds to decreasing the dimension of the input space. Even though this method is effective, it requires the knowledge of the domain in which the training points are low-dimensional. Because this knowledge will not always be available, finding alternative training algorithms and loss functions that are suited for low-dimensional data is still an important direction for future research. Robust optimization. Using robust optimization techniques to train neural networks has been shown to be effective against adversarial examples BID12 BID0. Note that these techniques could be considered as inflating the training points by a presumed amount and training the classifier with these inflated points. Consequently, as long as the cross-entropy loss is involved, the decision boundaries of the neural network will still be in the vicinity of the inflated points. Therefore, even though the classifier is robust against the disturbances of the presumed magnitude, the margin of the classifier could still be much smaller than what it could potentially be. Differential training. We introduced differential training, which allows the feature mapping to remain trainable while ensuring a large margin between different classes of points. Therefore, this method combines the benefits of neural networks with those of support vector machines. Even though moving from 2N training points to N 2 seems prohibitive, it points out that a true classification should in fact be able to differentiate between the pairs that are hardest to differentiate, and this search will necessarily require an N 2 term. Some heuristic methods are likely to be effective, such as considering only a smaller subset of points closer to the boundary and updating this set of points as needed during training. If a neural network is trained with this procedure, the network will be forced to find features that are able to tell apart between the hardest pairs. Nonseparable data. What happens when the training data is not linearly separable is an open direction for future work. 
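The low-dimensionality check behind Figure 3 is also straightforward to reproduce. The sketch below is our own illustration: `features` stands for the penultimate-layer activations collected from a trained network (here replaced by a synthetic low-rank stand-in), and the function returns the cumulative variance explained by the principal components of the centered features.

```python
import numpy as np

def cumulative_explained_variance(features):
    """features: (num_samples, dim) array of penultimate-layer activations."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered data give the principal-component variances.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return np.cumsum(var) / var.sum()

# Synthetic rank-20 features of dimension 84, standing in for real activations.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 20)) @ rng.normal(size=(20, 84))
cev = cumulative_explained_variance(features)
print("components needed for 99% of the variance:", int(np.searchsorted(cev, 0.99)) + 1)
```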
However, as stated in Remark 4, this case is not expected to arise for the state-of-the-art networks, since they have been shown to achieve zero training error even on randomly generated datasets , which implies that the features represented by the output of their penultimate layer eventually become linearly separable. A PROOF OF THEOREM 1Theorem 1 could be proved by using Theorem 2, but we provide an independent proof here. Gradient descent algorithm with learning rate δ on the cross-entropy loss yields DISPLAYFORM0 1 + e −w x + δỹ e −w ỹ 1 + e −w ỹ.Ifw = 0, thenw(t) = p(t)x + q(t)ỹ for all t ≥ 0, wherė DISPLAYFORM1 Then we can writeα Lemma 2. If b < 0, then there exists t 0 ∈ (0, ∞) such that DISPLAYFORM2 Proof. Note that DISPLAYFORM3 which implies that DISPLAYFORM4 as long as DISPLAYFORM5 By using Lemma 2, DISPLAYFORM6 Proof. Solving the set of equations DISPLAYFORM7, DISPLAYFORM8 Proof. Note thatż ≥ a/2 andv ≥ c/2; therefore, DISPLAYFORM9 if either side exists. Remember thaṫ DISPLAYFORM10 We can compute f (w) = 2acw + bcw 2 + ab b 2 w 2 + 2abw + a 2. The function f is strictly increasing and convex for w > 0. We have DISPLAYFORM11 Therefore, when b ≥ a, the only fixed point of f over [0, ∞) is the origin, and when a > b, 0 and (a − b)/(c − b) are the only fixed points of f over [0, ∞). Figure 4 shows the curves over whichu = 0 andẇ = 0. Since lim t→∞ u = lim t→∞ w, the only points (u, w) can converge to are the fixed points of f. Remember thaṫ DISPLAYFORM12 so when a > b, the origin is unstable in the sense of Lyapunov, and (u, w) cannot converge to it. Otherwise, is the only fixed point, and it is stable. As a , DISPLAYFORM13 Figure 4: Stationary points of function f. DISPLAYFORM14 Proof. From Lemma 6, DISPLAYFORM15 Consequently, DISPLAYFORM16 which gives the same solution as Lemma 5: DISPLAYFORM17 Proof. We can obtain a lower bound for square of the denominator as DISPLAYFORM18 DISPLAYFORM19 As a , Then, we can write w as DISPLAYFORM20 Remember, by definition, w SVM = arg min w 2 s.t. w, x i + y j ≥ 2 ∀i ∈ I, ∀j ∈ J.Since the vector u also satisfies u, x i + y j = w, x i + y j ≥ 2 for all i ∈ I, j ∈ J, we have u ≥ w SVM = 1 γ. As a , the margin obtained by minimizing the cross-entropy loss is DISPLAYFORM21 If B < 0, we could consider the hyperplane w, · − B = 0 for the points {−x i} i∈I and {y j} j∈J, which would have the identical margin due to symmetry. Therefore, without loss of generality, assume B ≥ 0. As in the proof of Theorem 3, KKT conditions for the optimality of w and B requires w = i∈I µ i x i + j∈J ν j y j, B = i∈I µ i − j∈J ν j where µ i ≥ 0 and ν j ≥ 0 for all i ∈ I, j ∈ J. Note that for each k ∈ K, w, r k = i∈I µ i x i, r k − j∈J ν j −y j, r k DISPLAYFORM0 Since {r k} k∈K is an orthonormal set of vectors, DISPLAYFORM1 The follows from the fact that w −1 is an upper bound on the margin. E PROOF OF THEOREM 6In order to achieve zero training error in one iteration of the stochastic gradient algorithm, it is sufficient to have min i ∈I x i, x i + y j > max j ∈J −y j, x i + y j ∀i ∈ I, ∀j ∈ J, or equivalently, x i + y j, x i + y j > 0 ∀i, i ∈ I, ∀j, j ∈ J.By definition of the margin, there exists a vector w SVM ∈ R d with unit norm which satisfies 2γ = min i∈I,j∈J x i + y j, w SVM.Note that w SVM is orthogonal to the decision boundary given by the SVM. Then we can write every x i + y j as x i + y j = 2γw SVM + δ If we choose γ > 5 2 max(R x, R y), we have 4γ 2 − 2γ(2R x + 2R y) − (R x + R y) 2 > 0, which guarantees and completes the proof.
We show that minimizing the cross-entropy loss by using a gradient method could lead to a very poor margin if the features of the dataset lie on a low-dimensional subspace.
531
scitldr
The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. However, RNN still has a limited capacity to manipulate long-term memory. To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms. In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. Moreover, the rotational unit also serves as associative memory. We evaluate our model on synthetic memorization, question answering and language modeling tasks. RUM learns the Copying Memory task completely and improves the state-of-the-art in the Recall task. RUM’s performance in the bAbI Question Answering task is comparable to that of models with attention mechanism. We also improve the state-of-the-art to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which is to signify the applications of RUM to real-world sequential data. The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation. Recurrent neural networks are widely used in a variety of machine learning applications such as language modeling BID7 ), machine translation BID5 ) and speech recognition BID11 ). Their flexibility of taking inputs of dynamic length makes RNN particularly useful for these tasks. However, the traditional RNN models such as Long Short-Term Memory (LSTM, BID12) and Gated Recurrent Unit (GRU, BID5) exhibit some weaknesses that prevent them from achieving human level performance: 1) limited memory-they can only remember a hidden state, which usually occupies a small part of a model; 2) gradient vanishing/explosion BID4 ) during training-trained with backpropagation through time the models fail to learn long-term dependencies. Several ways to address those problems are known. One solution is to use soft and local attention mechanisms BID5 ), which is crucial for most modern applications of RNN. Nevertheless, researchers are still interested in improving basic RNN cell models to process sequential data better. Numerous works BID7; BID2 ) use associative memory to span a large memory space. For example, a practical way to implement associative memory is to set weight matrices as trainable structures that change according to input instances for training. Furthermore, the recent concept of unitary or orthogonal evolution matrices BID0; BID14 ) also provides a theoretical and empirical solution to the problem of memorizing long-term dependencies. Here, we propose a novel RNN cell that resolves simultaneously those weaknesses of basic RNN. The Rotational Unit of Memory is a modified gated model whose rotational operation acts as associative memory and is strictly an orthogonal matrix. We tested our model on several benchmarks. RUM is able to solve the synthetic Copying Memory task while traditional LSTM and GRU fail. For synthetic Recall task, RUM exhibits a stronger ability to remember sequences, hence outperforming state-of-the-art RNN models such as Fastweight RNN BID2 ) and WeiNet . 
By using RUM we achieve the state-of-the-art in the real-world Character Level Penn Treebank task. RUM also outperforms all basic RNN models in the bAbI question answering task. This performance is competitive with that of memory networks, which take advantage of attention mechanisms. Our contributions are as follows:1. We develop the concept of the Rotational Unit that combines the memorization advantage of unitary/orthogonal matrices with the dynamic structure of associative memory; 2. The Rotational Unit of Memory serves as the first phase-encoded model for Recurrent Neural Networks, which improves the state-of-the-art performance of the current frontier of models in a diverse collection of sequential task. The problem of the gradient vanishing and exploding problem is well-known to obstruct the learning of long-term dependencies BID4 ).We will give a brief mathematical motivation of the problem. Let's assume the cost function is C. In order to evaluate ∂C/∂W ij, one computes the derivative gradient using the chain rule: DISPLAYFORM0 where DISPLAYFORM1 } is the Jacobian matrix of the point-wise nonlinearity. As long as the eigenvalues of D (k) are of order unity, then if W has eigenvalues λ i 1, they will cause gradient explosion ∂C/∂h (T) → ∞, while if W has eigenvalues λ i 1, they can cause gradient vanishing, ∂C/∂h (T) → 0. Either situation hampers the efficiency of RNN.LSTM is designed to solve this problem, but gradient clipping BID22 ) is still required for training. Recently, by restraining the hidden-to-hidden matrix to be orthogonal or unitary, many models have overcome the problem of exploding and vanishing gradients. Theoretically, unitary and orthogonal matrices will keep the norm of the gradient because the absolute value of their eigenvalues equals one. Several approaches have successfully developed the applications of unitary and orthogonal matrix to recurrent neural networks. BID0 BID14 use parameterizations to form the unitary spaces. applies gradient projection onto a unitary manifold. BID28 uses penalty terms as a regularization to restrain matrices to be unitary, hence accessing long-term memorization. Only learning long-term dependencies is not sufficient for a powerful RNN. BID13 finds that the combination of unitary/orthogonal matrices with a gated mechanism improves the performance of RNN because of the benefits of a forgetting ability. BID13 also points out the optimal way of such a unitary/gated combination: the unitary/orthogonal matrix should appear before the reset gate, which can then be followed by a modReLU activation. In RUM we implement an orthogonal operation in the same place, but the construction of that matrix is completely different: instead of parameterizing the kernel, we encode a natural rotation, generated by the inputs and the hidden state. Limited memory in RNN is truly a shortage. Adding an external associative memory is a natural solution. For instance, the Neural Turing Machine BID7 ) and many other models have shown the power of using this technique. While it expands the accessible memory space, the technique significantly increases the size of the model, therefore making the process of learning so many parameters harder. Now, we will briefly describe the concept of associative memory. In basic RNN, h t = σ(W x t + Ah t−1 + b) where h t is the hidden state at time step t and x is the input data at each step. Here W and A are trainable parameters that are fixed in the model. 
A recent approach replaces A with a dynamic A t (as a function of time) so that this matrix can serve as a memory state. Thus, the memory size increases from DISPLAYFORM0, where N h is the hidden size. In particular, A t is determined by A t−1, h t−1 and x t which can be a part of a multi-layer or a Hopfiled net. By treating the RNN weights as memory determined by the current input data, a larger memory size is provided and less trainable parameters are required. This significantly increases the memorization ability of RNN. Our model also falls into this category of associative memory through its rotational design of an orthogonal A t matrix. Recently, BID23 proposed a novel neural network architecture that uses vectors instead of conventional single neurons to represent concepts in hidden states. These vectors are called capsules. Special connections are also designed to connect capsules through a process, called dynamic routing. This work shows promising performance of phase-encoded models in Convolutional Neural Networks. The Rotational Unit of Memory model, which we introduce below, serves as the first successful phase-encoded model in the RNN domain. We give a detailed comparison of these two models in section 5.3. Rotations are well-studied mathematical structures that have various fundamental applications in the theory of Lie groups (; BID9), quantum physics BID24 ), etc. In computer vision BID25 ) the position and orientation of an object form a pose, which contains valuable information about the object. A feasible way of estimating poses is through rotational matrices and quaternions BID15; BID18 ).The conventional way of representing memory in RNNs is by encoding the information in a hidden state, which is a vector of a certain finite dimension N. To the best of our knowledge, the frontier of RNN models utilizes mostly the norm of the elements of the hidden state during the learning process. Experiments and theory point, however, that representational advantages can be achieved, by using capsules as vectors in the Euclidean R N space and thereby allowing the model to manipulate the pose of these capsules BID23 ).Here, we equip the hidden state in an RNN with a pose by viewing it as a vector with position and orientation in R N. We then propose an efficient method for manipulating the pose of the orientation by the means of rotations in an N -dimensional space. Our particular parameterization for the rotation is a natural way to define a differentiable orthogonal operation within the RNN cell. For the remainder of this section we suggest ways of engineering models that incorporate rotations as units of memory. In the following discussion N x is the input size and N h is the hidden size. The operation Rotation is an efficient encoder of an orthogonal operation, which acts as a unit of memory. Rotation computes an orthogonal operator R(a, b) in R N h ×N h that represents the rotation between two non-collinear vectors a and b in the two-dimensional subspace span(a, b) of the Euclidean space R N h with distance ·. As a consequence, R can act as a kernel on a hidden state h. More formally, what we propose is a function DISPLAYFORM0 Other ways to extract an orthogonal operation from elements in the RNN cell are still possible. Some approaches are as follows: 1. Use a skew-symmetric matrix A to define the orthogonal operator e A; 2. Use a permutation operator. However, those constructions are difficult to implement and do not offer a natural intuition about encoding memory. 
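As a quick check of the first alternative construction just mentioned, the exponential of a skew-symmetric matrix is indeed orthogonal; the snippet below is a minimal illustration (the size and seed are arbitrary), not part of the RUM model itself.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M - M.T                      # skew-symmetric: A^T = -A
R = expm(A)                      # e^A is orthogonal with determinant 1
print(np.allclose(R @ R.T, np.eye(6), atol=1e-8))   # True
print(np.allclose(np.linalg.det(R), 1.0))            # True
```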
We recognize that other constructions are also feasible and potentially interesting for research. such that after ortho-normalizing a and b to DISPLAYFORM1 we encode the following matrix in DISPLAYFORM2 Figure 1 (a) demonstrates the projection to the plane span(a, b) in the brackets of equation FORMULA5. The A practical advantage of Rotation is that it is both orthogonal and differentiable. On one hand, it is a composition of differentiable sub-operations, which enables learning via backpropagation. On the other hand, it preserves the norm of the hidden state, hence it can yield more stable gradients. We were motivated to find differentiable implementations of unitary (orthogonal in particular) operations in existing toolkits for deep learning. Our is that Rotation can be implemented in various frameworks that are utilized for RNN and other deep learning architectures. Indeed, Rotation is not constrained to parameterize a unitary structure, but instead it produces an orthogonal matrix from simple components in the cell, which makes it useful for experimentation. DISPLAYFORM3 We implement Rotation together with its action on a hidden state efficiently. We do not need to compute the matrix R t before we rotate. Instead we can directly apply the RHS of equation FORMULA5 to the hidden state. Hence, the memory complexity of our algorithm is O(N b ·N h), which is determined by the RHS of. Note that we only use two trainable vectors in R N h to generate orthogonal weights in R N h ×N h, which means the model has O(N 2 h) degrees of freedom for a single unit of memory. Likewise, the time complexity is O(N b · N 2 h). Thus, Rotation is a universal operation that enables implementations suitable to any neural network model with backpropagation. We propose the Recurrent Unit of Memory as the first example of an application of Rotation to a recurrent cell. FIG0 (b) is a sketch of the connections in the cell. RUM consists of an update gate u ∈ R N h that has the same function as in GRU. Instead of a reset gate, however, the model learns a memory target variable τ ∈ R N h. RUM also learns to embed the input vector DISPLAYFORM0 Hence Rotation encodes the rotation between the embedded input and the target, which is accumulated to the associative memory unit R t ∈ R N h ×N h (originally initialized to the identity matrix). Here λ is a non-negative integer that is a hyper-parameter of the model. From here, the orthogonal R t acts on the state h to produce an evolved hidden stateh. Finally RUM obtains the new hidden state via u, just as in GRU. The RUM equations are as follows DISPLAYFORM1 σ activation of the update gate; ε t =W xh · x t +b t embedded input for Rotation; DISPLAYFORM2 rotational associative memory; DISPLAYFORM3 unbounded evolution of hidden state; DISPLAYFORM4 DISPLAYFORM5 The norm η is a scalar hyper-parameter of the RUM model. The orthogonal matrix R(ε t, τ) conceptually takes the place of a kernel acting on the hidden state in GRU. This is the most efficient place to introduce an orthogonal operation, as the Gated Orthogonal Recurrent Unit (GORU, BID13) experiments suggest. The difference with the GORU cell is that GORU parameterizes and learns the kernel as an orthogonal matrix, while RUM does not parameterize the rotation R. Instead, RUM learns τ, which together with x, determines R. The orthogonal matrix keeps the norm of the vectors, so we experiment with a ReLU activation instead of the conventional tanh in gated mechanisms. 
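Putting the pieces of this section together, the sketch below implements one RUM step in numpy, following the description in the text: an update gate, an embedded input ε_t, a memory target τ_t, the accumulated rotation R_t (initialized to the identity), a ReLU evolution of the hidden state, a GRU-style mix, and optional time normalization to norm η. The weight wiring, initialization, and the order of accumulation in R_t are assumptions made for illustration and may differ from the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rotation(a, b, eps=1e-8):
    """Plane rotation between a and b: rotates by their angle inside span(a, b)
    and acts as the identity on the orthogonal complement (a sketch)."""
    u = a / (np.linalg.norm(a) + eps)
    v = b - (u @ b) * u                                   # Gram-Schmidt step
    v /= (np.linalg.norm(v) + eps)
    c = np.clip((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps), -1.0, 1.0)
    s = np.sqrt(1.0 - c ** 2)
    B = np.stack([u, v], axis=1)                          # N x 2 basis of the plane
    return np.eye(len(a)) - B @ B.T + B @ np.array([[c, -s], [s, c]]) @ B.T

class RUMCell:
    """One RUM step as described above, for lambda in {0, 1} (a sketch)."""
    def __init__(self, nx, nh, lam=1, eta=None, rng=np.random.default_rng(0)):
        s = 1.0 / np.sqrt(nh)
        self.Wxu, self.Whu, self.bu = rng.normal(0, s, (nh, nx)), rng.normal(0, s, (nh, nh)), np.zeros(nh)
        self.Wxt, self.Wht, self.bt = rng.normal(0, s, (nh, nx)), rng.normal(0, s, (nh, nh)), np.zeros(nh)
        self.Wxh, self.bh = rng.normal(0, s, (nh, nx)), np.zeros(nh)
        self.lam, self.eta = lam, eta
        self.R = np.eye(nh)                               # rotational associative memory

    def step(self, x, h):
        u   = sigmoid(self.Wxu @ x + self.Whu @ h + self.bu)   # update gate
        tau = self.Wxt @ x + self.Wht @ h + self.bt             # memory target
        emb = self.Wxh @ x + self.bh                             # embedded input
        R_new = rotation(emb, tau)
        self.R = (self.R @ R_new) if self.lam == 1 else R_new   # accumulate phase if lambda = 1
        h_tilde = np.maximum(self.R @ h + emb, 0.0)              # ReLU evolution of hidden state
        h_next = u * h + (1.0 - u) * h_tilde                     # GRU-style mixing
        if self.eta is not None:                                  # time normalization
            h_next = self.eta * h_next / (np.linalg.norm(h_next) + 1e-8)
        return h_next

cell = RUMCell(nx=4, nh=16, lam=1, eta=1.0)
h = np.zeros(16)
for t in range(5):
    h = cell.step(np.random.randn(4), h)
print(np.linalg.norm(h))   # ~eta when time normalization is enabled
```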
Even though R is an orthogonal element of RUM, the norm of h t is not stable because of the ReLU activation. Therefore, we suggest normalizing the hidden state h t to a have norm η. We call this technique time normalization as we usually feed mini-batches to the RNN during learning that have the shape (N b, N T), where N b is the size of the batch and N T is the length of the sequence that we feed in. Time normalization happens along the sequence dimension as opposed to the batch dimension in batch normalization. Choosing appropriate η for the RUM model stabilizes learning and ensures the eigenvalues of the kernels are bounded from above. This in turn means that the smaller η is, the more we reduce the effect of exploding gradients. Finally, even though RUM uses an update gate, it is not a standard gated mechanism, as it does not have a reset gate. Instead we suggest utilizing additional memory via the target vector τ. By feeding inputs to RUM, τ adapts to encode rotations, which align the hidden states in desired locations in R N h, without changing the norm of h. We believe that the unit of memory R t gives advantage to RUM over other gated mechanisms, such as LSTM and GRU. Firstly, we test RUM's memorization capacity on the Copying Memory Task. Secondly, we signify the superiority of RUM by obtaining a state-of-the-art in the Associative Recall Task. Thirdly, we show that even without external memory, RUM achieves comparable to state-of-the-art in the bAbI Question Answering data set. Finally, we utilize RUM's rotational memory to reach 1.189 BPC in the Character Level Penn Treebank. We experiment with λ = 0 RUM and λ = 1 RUM, the latter model corresponding to tuning in the rotational associative memory. A standard way to evaluate the memory capacity of a neural network is to test its performance in the Copying Memory Task BID12, BID10 BID0 ). We follow the setup in BID14. The objective of the RNN is to remember (copy) information received T time steps earlier (see section A for details about the data).Our in this task demonstrate: 1. RUM utilizes a different representation of memory that outperforms those of LSTM and GRU; 2. RUM solves the task completely, despite its update gate, which does not allow all of the information encoded in the hidden stay to pass through. The only other gated RNN model successful at copying is GORU. FIG1 reveals that LSTM and GRU hit a predictable baseline, which is equivalent to random guessing. RUM falls bellow the baseline, and subsequently learns the task by achieving zero loss after a few thousands iterations. The spikes on the learning curves for RUM are arising from the fact that we are using a ReLU activation for RUM without gradient clipping. With the help of figure 2 we will explain how the additional hyper-parameters for RUM affect its training. We observe that when we remove the normalization (η = N/A) then RUM learns more quickly than the case of requiring a norm η = 1.0. At the same time, though, the training entails more fluctuations. Hence we believe that choosing a finite η to normalize the hidden state is an important tool for stable learning. Moreover, it is necessary for the NLP task in this paper (see section 4.4): for our character level predictions we use large hidden sizes, which if left unnormalized, can make the cross entropy loss blow up. We also observe the benefits of tuning in the associative rotational memory. Indeed, a λ = 1 RUM has a smaller hidden size, N h = 100, yet it learns much more quickly than a λ = 0 RUM. 
It is possible that the accumulation of phase via λ = 1 to enable faster long-term dependence learning than the λ = 0 case. Either way, both models overcome the vanishing/exploding gradients, and eventually learn the task completely. Another important synthetic task to test the memory ability of recurrent neural network is the Associative Recall. This task requires RNN to remember the whole sequence of the data and perform extra logic on the sequence. We follow the same setting as in BID2 and and modify the original task so that it can test for longer sequences. In detail, the RNN is fed into a sequence of characters, e.g. "a1s2d3f4g5??d". The RNN is supposed to output the character based on the "key" which is located at the end of the sequence. The RNN needs to look back into the sequence and find the "key" and then to retrieve the next character. In this example, the correct answer is "3". See section B for further details about the data. In this experiment, we compare RUM to an LSTM,, a Fast-weight RNN BID2 ) and a recent successful RNN WeiNet . All the models have the same hidden state N h = 50 for different lengths T. We use a batch size 128. The optimizer is RMSProp with a learning rate 0.001. We find that LSTM fails to learn the task, because of its lack of sufficient memory capacity. NTM and Fast-weight RNN fail longer tasks, which means they cannot learn to manipulate their memory efficiently. TAB2 gives a numerical summary of the and figure 4, in the appendix, compares graphically RUM to LSTM. Length Question answering remains one of the most important applicable tasks in NLP. Almost all stateof-the-art performance is achieved by the means of attention mechanisms. Few works have been done to improve the performance by developing stronger RNN. Here, we tested RUM on the bAbI Question Answering data set ) to demonstrate its ability to memorize and reason without any attention. In this task, we train 20 sub-tasks jointly for each model, using a 10k training sets. See section C for detailed experimental settings and on each sub-task. We compare our model with several baselines: a simple LSTM, an End-to-end Memory Network BID27 ) and a GORU. We find that RUM outperforms significantly LSTM and GORU and achieves competitive with those of MemN2N, which has an attention mechanism. We summarize the in Table 2. We emphasize that for some sub-tasks in the table, which require large memory, RUM outperforms models with attention mechanisms (MemN2N). Test Accuracy (%) LSTM ) 49 GORU BID13 ) 60 MemN2N BID27 ) 86 RUM (ours) 73.2 Table 2: Question Answering task on bAbI dataset. Test accuracy (%) on LSTM, MemN2N, GORU and RUM. RUM outperforms LSTM/GORU and is outperformed only by MemN2N, which uses an attention mechanism. The Rotational Unit of Memory is a natural architecture that can learn long-term structure in data while avoiding significant overfitting. Perhaps, the best way to demonstrate this unique property, among other RNN models, is to test RUM on real world character level NLP tasks. The corpus is a collection of articles in The Wall Street Journal BID19 ). The text is in English and its vocabulary consists of 10000 words. We split the data into train, validation and test sets according to. We train by feeding mini-batches of size N b that consist of sequences of T consecutive characters. We incorporate RUM into the state-of-the-art high-level model: Fast-Slow RNN (FS-RNN, BID21). 
The FS-RNN-k architecture consists of two hierarchical layers: one of them is a "fast" layer that connects k RNN cells F 1,..., F k in series; the other is a "slow" layer that consists of a single RNN cell S. The organization is roughly as follows: F 1 receives the input from the mini-batch and feeds its state into S; S feeds its state into F 2; the output of F k is the probability distribution of the predicted character. FORMULA5 1.24 -HyperLSTM (Ha et al. FORMULA5 1.219 14.4M NASCell (Zoph & V. Le FORMULA5 1.214 16.3M FS-LSTM-4 (Mujika et al. FORMULA5 1.193 6.5M FS-LSTM-2 (Mujika et al. FORMULA5 1.190 7.2M FS-RUM-2 (ours)1.189 11.2M TAB3: With FS-RUM-2 we achieve the state-of-the-art test on the Penn Treebank task. FS-RUM-2 generalizes better than other gated models, such as GRU and LSTM, because it learns efficient patterns for activation in its kernels. Such a skill is useful for the large Penn Treebank data set, as with its special diagonal structure, the RUM cell in FS-RUM-2 activates the hidden state effectively. We discuss this representational advantage in section 5.1. One advantage of the Rotational Unit of Memory is that it allows the model to encode information in the phase of the hidden state. In order to demonstrate the structure behind such learning, we look at the kernels that generate the target memory τ in the RUM model. FIG2 (a) is a visualization for the Recall task that demonstrates the diagonal structure of W FORMULA5 hh which generates τ (a diagonal structure is also present W hh, but it is contrasted less). One way to interpret the importance of the diagonal contrast is that each neuron in the hidden state plays an important role for learning since each element on the diagonal activates a distinct neuron. Moreover, the diagonal structure is not task specific. For example, in FIG2 (b) we observe a particular W hh for the target τ on the Penn Treebank task. The way we interpret the meaning of the diagonal structure, combined with the off-diagonal activations, is that probably they encode grammar and vocabulary, as well as the links between various components of language. It is natural to view the Rotational Unit of Memory and many other approaches using orthogonal matrices to fall into the category of phase-encoding architectures: R = R(θ), where θ is a phase information matrix. For instance, we can parameterize any orthogonal matrix according to the Efficient Unitary Neural Networks (EUNN, BID14 DISPLAYFORM0, where U 0 is a block diagonal matrix containing N/2 numbers of 2-by-2 rotations. The component θ i is an one-by-(N/2) parameter vector. Therefore, the rotational memory equation in our model can be represented as where θ t are rotational memory phase vectors at time t and φ represents the phases generated by the operation Rotation correspondingly. Note that each element of the matrix multiplication U 0 (θ i) · U 0 (φ i) only depends on one element from θ i and φ i each. This means that, to cancel out one element θ i, the model only needs to learn to express φ i as the negation of θ i. DISPLAYFORM1 As a , our RNN implementation does not require a reset gate, as in GRU or GORU, because the forgetting mechanism is automatically embedded into the representation of phase-encoding. Thus, the concept of phase-encoding is simply a special sampling on manifolds generated by the special orthogonal Lie group SO(N). Now, let N = N h be the hidden size. 
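Stepping back to the FS-RNN-k description at the start of this section, a small PyTorch sketch of the k = 2 wiring is given below (F1 reads the input and feeds the slow cell S, which feeds F2, whose state produces the prediction). The cell types, sizes, and vocabulary are illustrative assumptions; in FS-RUM-2 the slow cell S would be a RUM cell rather than an LSTM cell.

```python
import torch
import torch.nn as nn

class FSRNN2(nn.Module):
    """Sketch of the fast-slow wiring with two fast cells and one slow cell."""
    def __init__(self, in_size=49, fast=700, slow=1000, vocab=49):
        super().__init__()
        self.f1 = nn.LSTMCell(in_size, fast)
        self.s  = nn.LSTMCell(fast, slow)     # slow cell (a RUM cell in FS-RUM-2)
        self.f2 = nn.LSTMCell(slow, fast)
        self.out = nn.Linear(fast, vocab)

    def forward(self, x, state):
        (h1, c1), (hs, cs), (h2, c2) = state
        h1, c1 = self.f1(x, (h1, c1))     # fast cell 1 reads the input character
        hs, cs = self.s(h1, (hs, cs))     # slow cell reads the fast state
        h2, c2 = self.f2(hs, (h2, c2))    # fast cell 2 reads the slow state
        return self.out(h2), ((h1, c1), (hs, cs), (h2, c2))

model = FSRNN2()
batch = 4
zeros = lambda n: (torch.zeros(batch, n), torch.zeros(batch, n))
state = (zeros(700), zeros(1000), zeros(700))
logits, state = model(torch.randn(batch, 49), state)
print(logits.shape)   # (4, 49): distribution over the next character
```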
One way to extend the current RUM model is to allow for λ to be any real number in the associative memory equation DISPLAYFORM2 This will expand the representational power of the rotational unit. The difficulty is to mathematically define the raising of a matrix to a real power, which is equivalent to defining a logarithm of a matrix. Again, rotations prove to be a natural choice since they are elements of SO(N h), and their logarithms correspond to elements of the vector space of the Lie algebra so(N h), associatied to SO(N h). We want to clarify that RUM and Capsule Net are not equivalent in terms of learning representations, but they share notable spiritual similarities. A parallel between RUMs state and Capsules representation. The hidden state in our model is viewed as a vector in an Euclidean space R n -it has an orientation and a magnitude. In a similar fashion, a capsule is a vector that has an orientation and a magnitude. Both RUM and Capsule Net learn to manipulate the orientation and magnitude of their respective components. The Rotation operation and the Routing mechanism. Both mechanisms are ways of manipulating orientations and magnitudes. In the routing mechanism we start from priors (linearly generated from the input to the given layer of capsules), then generate outputs, and finally measure the dot product between the priors and the output. This dot product essentially measures the similarity between the two vectors through the cosine of the angle between them. This relative position between the two vectors is used for effective routing, so that the orientations of the capsules can be manipulated iteratively. For rotation mechanism, we start with the embedded input vector (an alternative of the priors) and then generate the target memory (an alternative of the outputs). Then we measure (encode) the rotation between the embedded input and the target memory (an alternative of taking the dot product). And finally we use that encoded rotation to change the orientation of the hidden state (the iterative process of the routing mechanism).Main Difference. The hidden state in RUM usually has a much larger dimensionality than the capsules that are used in BID23. Hence, effectively, we demonstrate how to manipulate orientations and magnitudes of a much higher dimensionality (for example, we have experimented with hidden sizes of 1000 and 2000 for language modeling). For future work, the RUM model can be applied to other higher-level RNN structures. For instance, in section 4.4 we already showed how to successfully embed RUM into FS-RNN to achieve stateof-the-art . Other examples may include Recurrent Highway Networks , HyperNetwork BID8 ) structures, etc. The fusion of RUM with such architectures could lead to more state-of-the-art in sequential tasks. We proposed a novel RNN architecture: Rotational Unit of Memory. The model takes advantage of the unitary and associative memory concepts. RUM outperforms many previous state-of-the-art models, including LSTM, GRU, GORU and NTM in synthetic benchmarks: Copying Memory and Associative Recall tasks. Additionally, RUM's performance in real-world tasks, such as question answering and language modeling, is competetive with that of advanced architectures, some of which include attention mechanisms. We claim the Rotational Unit of Memory can serve as the new benchmark model that absorbs all advantages of existing models in a scalable way. 
Indeed, the rotational operation can be applied to many other fields, not limited only to RNN, such as Convolutional and Generative Adversarial Neural Networks. The alphabet of the input consists of symbols {a i}, i ∈ {0, 1, · · ·, n − 1, n, n + 1}, the first n of which represent data for copying, and the remaining two form "blank" and "marker" symbols, respectively. In our experiment n = 8 and the data for copying is the first 10 symbols of the input. The expectation from the RNN model is to output "blank" and, after the "marker" appears in the input, to output (copy) sequentially the initial data of 10 steps. The sequences for training are randomly generated, and consist of pairs of "character" and "number" elements. We set the key to always be a "character". We fix the size of the "character" set equal to half of the length of the sequence and the size of the "number" set equal to 10. Therefore, the total category has a size of T /2 + 10 + 1. The associative memory provided by rotational operation Rotation enables RUM to solve the Associative Recall Task. The input sequences is 50. For all models N h = 50. For the training of all models we use RMSProp optimization with a learning rate of 0.001 and a decay rate of 0.9; the batch size N b is 128. We observe that it is necessary to tune in the associative memory via λ = 1 since λ = 0 RUM does not learn the task. In this task, we train 20 models jointly on each sub-task. All of them use a 10k data set, which is divided into 90% of training and 10% of validation. We first tokenize all the words in the data set and combine the story and question by simply concatenating two sequences. Different length sequences are filled with "blank" at the beginning and the end. Words in the sequence are embedded into dense vectors and then fed into RNN in a sequential manner. The RNN model outputs the answer prediction at the end of the question through a softmax layer. We use batch size of 32 for all 20 subsets. The model is trained with Adam Optimizer with a learning rate 0.001. Each subset is trained with 20 epochs and no other regularization is applied. For all RNN cells we apply layer normalization BID3 ) to the cells and to the LSTM gates and RUM's update gate and target memory, zoneout BID17 ) to the recurrent connections, and dropout BID26 ) to the FS-RNN. For training we use Adam optimization BID16 ). We apply gradient clipping with maximal norm of the gradients equal to 1.0. TAB9 lists the hyper-parameters we use for our models. We embed the inputs into a higher-dimensional space. The output of each models passes through a softmax layer; then the probabilities are evaluated by a standard cross entropy loss function. The bits-per-character (BPC) loss is simply the cross entropy with a binary logarithm. Table 7 outlines the performance of all variances of RUM models. BID21 achieve their record with FS-LSTM-2, by setting F 1,2 and S to LSTM. The authors in the same paper suggest that the "slow" cell has the function of capturing long-term dependencies from the data. Hence, it is natural to set S to be a RUM, given its memorization advantages. In particular, we experiment with FS-RUM-2, for which S is a RUM and F 1,2 are LSTM, as shown in figure 5. Additionally, we compare the validation performance of derivative models of our baseline FS-RUM-2 model in figure 6.Empirically, we discovered that a time normalization η = 1.0 works best for RUM when gradient clipping norm is also 1.0. 
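For reference, the two synthetic datasets described in this appendix can be generated with a few lines of Python; the sketch below follows the descriptions above (n = 8 data symbols plus blank and marker for copying; character/number pairs with a "??" query for recall), while the exact marker position and padding conventions are assumptions.

```python
import numpy as np
import random
import string

def copying_batch(T=100, n=8, copy_len=10, batch=32, rng=np.random.default_rng(0)):
    """Copying Memory data: symbols 0..n-1 are data, n is 'blank', n+1 is 'marker'."""
    blank, marker = n, n + 1
    data = rng.integers(0, n, size=(batch, copy_len))
    x = np.full((batch, copy_len + T + copy_len), blank, dtype=np.int64)
    y = np.full_like(x, blank)
    x[:, :copy_len] = data                  # data to memorize
    x[:, copy_len + T - 1] = marker         # signal telling the model to start copying
    y[:, copy_len + T:] = data              # target: reproduce the data at the end
    return x, y

def recall_example(T=10, seed=None):
    """One Associative Recall example such as 'a1s2d3f4g5??d' -> '3'."""
    rnd = random.Random(seed)
    chars = rnd.sample(string.ascii_lowercase, T // 2)    # character set of size T/2
    digits = [str(rnd.randrange(10)) for _ in chars]      # number set of size 10
    key = rnd.choice(chars)
    sequence = "".join(c + d for c, d in zip(chars, digits)) + "??" + key
    return sequence, digits[chars.index(key)]

x, y = copying_batch(T=20)
print(x[0]); print(y[0])
print(recall_example(T=10, seed=0))
```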
FS-RUM-2 (ours): 1.189 BPC, 11.2M parameters. Table 7: FS-RUM-2(B)+LR is the baseline FS-RUM-2 except that the learning rate equals 0.003. FS-RUM-2(B)+(S)800-1100, 900-1200, 1000-1000 and 1000-1200 are derivative models of FS-RUM-2, which are defined in figure 6, and improve the validation performance of the baseline FS-LSTM-2 model. In figure 6 we also show the 0.002 improvement to the validation BPC loss, achieved by FS-RUM-2(B)+(S)800-1100. FS-RUM-2(B)+Norm is the same as FS-RUM-2(B) except that the time normalization norm η is 1.3. FS-RUM-2(B)+(S)800-1100 is the same as FS-RUM-2(B) except that the fast cells' size is 800 and the slow cell's size is 1100. FS-RUM-2(B)+(S)1000-1000 is the same as FS-RUM-2(B) except that the fast cells' size is 1000 and the slow cell's size is 1000. FS-RUM-2(B)+(S)900-1200 is the same as FS-RUM-2(B) except that the fast cells' size is 900 and the slow cell's size is 1200.
A novel RNN model that significantly outperforms the current frontier of models on a variety of sequential tasks.
532
scitldr
While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics. One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search (MCTS)—a model-based method—with deep-Q networks (DQNs)—a model-free method. MCTS requires generating rollouts, which is computationally expensive. In this paper, we propose to simulate roll-outs, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment. Our proposed algorithm, generative adversarial tree search (GATS), simulates rollouts up to a specified depth using both a GAN- based dynamics model and a reward predictor. GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states. Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions. However, GATS fails to outperform DQNs in 4 out of 5 games. Notably, in these experiments, MCTS has only short rollouts (up to tree depth 4), while previous successes of MCTS have involved tree depth in the hundreds. We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling. The earliest and best-publicized applications of deep reinforcement learning (DRL) involve Atari games and the board game of Go , where experience is inexpensive because the environments are simulated. In such scenarios, DRL can be combined with Monte-Carlo tree search (MCTS) methods (; Kocsis & Szepesvári, 2006) for planning, where the agent executes roll-outs on the simulated environment (as far as computationally feasible) to finds suitable policies. However, for RL problems with long episodes, e.g. Go, MCTS can be very computationally expensive. In order to speed up MCTS for Go and learn an effective policy, Alpha Go employs a depth-limited MCTS with the depth in the hundreds on their Go emulator and use an estimated Q-function to query the value of leaf nodes. However, in real-world applications, such as robotics and dialogue systems , collecting samples often takes considerable time and effort. In such scenarios, the agent typically cannot access either the environment model or a corresponding simulator. Recently, generative adversarial networks (GANs) BID15 have emerged as a popular tool for synthesizing realistic-seeming data, especially for high-dimensional domains, including images and audio. Unlike previous approaches to image generation, which typically produced blurry images due to optimizing an L1 or L2 objective, GANs produces crisp images. Since theire original conception as an unsupervised method, GANs have been extended for conditional generation, e.g., generating an image conditioned on a label or the next frame in a video given a context window . Recently, the PIX2PIX approach has demonstrated impressive on a range of image-to-image transduction tasks .In this work, we propose and analyze generative adversarial tree search (GATS), a new DRL algorithm that utilizes samples from the environment to learn both a Q-function approximator, a near-term reward predictor, and a GAN-based model of the environment's dynamics (state transitions). 
Together, the dynamics model and reward predictor constitute a learned simulator on which MCTS can be performed. GATS leverages PIX2PIX GANs to learn a generative dynamics model (GDM) that efficiently learns the dynamics of the environment, producing images that agree closely with the actual observed transactions and are also visually crisp. We thoroughly study various image transduction models, arriving ultimately at a GDM that converges quickly (compared to the DQN), and appears from our evaluation to be reasonably robust to subtle distribution shifts, including some that destroy a DQN policy. We also train a reward predictor that converges quickly, achieving negligible error (over 99% accuracy). GATS bridges model-based and model-free reinforcement learning, using the learned dynamics and reward predictors to simulate roll-outs in combination with a DQN. Specifically, GATS deploys the MCTS method for planning over a bounded tree depth and uses the DQN algorithm to estimate the Q-function as a value for the leaf states .One notable aspect of the GATS algorithm is its flexibility, owing to consisting of a few modular building blocks: (i) value learning: we deployed DQN and DDQN (ii) planning: we use pure Monte Carlo sampling; (iii) a reward predictor: we used a simple 3-class classifier; (iv) dynamics model: we propose the GDM architecture. Practically, one can swap in other methods for any among these blocks and we highlight some alternatives in the related work. Thus, GATS constitutes a general framework for studying the trade-offs between model-based and model-free reinforcement learning. Theoretical analysis We analyze the components of error in the estimation of the expected return used by GATS, and further study the trade-offs in its bias and variance. Since GATS utilizes the learned Q function of DQN/DDQN in the leaf nodes of the MCTS tree, the existing errors in the Q-estimation decays exponentially as the depth of MCTS grows. We further study the bias in the Q estimate of DQN and DDQN, where we found (empirically) that GATS with even one step look-ahead or rollout (depth one), can help to reduce the negative effects of these biases. This leads to a reduction in the sample complexity of DQN and DDQN by a factor of 2 on the game Pong. Furthermore, we develop a heuristic optimism-based strategy for GATS using the GDM. The low computation cost of Pong allows us to do an extensive study of the bias-variance of Q for different model-based planning and exploration strategies. Experimental For this work, we also developed a new OpenAI gym BID11 -like interface for the latest Atari Learning Environment (ALE) , which supports different modes and difficulties for Atari games. We study the sample complexity required by GDM and RP to adapt and transfer from one domain of the game (a mode and difficulty) to another domain (another mode and difficulty). We show that GDM and RP adapt quickly to the new mode in a few numbers of samples, while the estimated Q-function requires significantly more samples to adapt. We documented and open-sourced this wrapper on the latest ALE as well as the code for GDM, RP, and the GATS algorithm. Surprising negative Despite learning environment models (GDM and RP) that converge efficiently and achieve accuracy exceeding our expectations, we are unable to improve the return of the learned policy using short rollouts (at most depth 5) on any other game besides Pong. 
The negative persisted across an extensive and costly study with many different hyper-parameter settings and learning strategies, including several different strategies to use the generated frames to train the Q-model inspired by. We put forth a hypothesis for why GATS, despite the good performance of its constituent models and its theoretical advantages, might fail in short rollout depths: in short, under this training regime, the problem may be that the Q-learner does not observe the outcomes of its mistakes. We also explain why in order to test our hypothesis, one might need to experiment with significantly longer rollouts, which may be prohibitively expensive (computationally) in this domain. Find a more detailed explanation of this hypothesis in Section 7.Consider the fact that all known successes of MCTS have involved tree depth in the hundreds. For example, BID17 shows for Atari games, when a plain MCTS is deployed, a depth of 300 with 1000 trajectories is required to learn a reasonable policy. This depth of MCTS on GDM requires massive amounts of computation beyond the scale of academic research and the scope of this paper. Considering the broader enthusiasm for both model-based RL and generative adversarial networks, we believe that this study, despite its failure to advance the leaderboard, illuminates several important considerations for future work to develop tree-search and rollout based methods to combine model-based and model-free reinforcement learning. An infinite horizon γ-discounted MDP M is a tuple X, A, T, R, P 0, γ, with state space X, action space A, and P 0, the distribution over the initial states. The transition kernel T: x, a → ∆ x, drives the model dynamics accompanied with-bounded reward of R: x, a → ∆ r, where 0 ≤ γ < 1. The agent's objective is to find a policy π:= X → A that maximizes the overall expected discounted reward η DISPLAYFORM0 γ t r t |x 0 = x, a 0 = a, we denote the expected cumulative discounted reward under policy π starting from state-action x, a. In value based RL, we aim to learn the Q-function in order to derive the optimal policy. In order to learn the Q function, we might aim to minimize square loss for any given pair of state and action (x, a), DISPLAYFORM1 In order to minimize the expression in Eq. 1, a double sampling is required to estimate the inner expectation. To avoid the cost of the double sampling, a common approach is to instead minimize the Bellman residual (; BID1 : DISPLAYFORM2 The Bellman residual is the sum of the expression in Eq. 1 and an additional variance term. DQN partially addresses this bias 1, by computing the target value with a separate function approximator, typically updated less frequently than the policy, DISPLAYFORM3 Generally, in addition to this bias, there are additional statistical biases due to limited capacity of network, optimization algorithm, model mismatch, as well as bias induced by the max operator or choice of a . In the next section, we theoretically and empirically study this bias and show how GATS can address this undesirable effect. For the generative dynamic model, we propose a generic GDM which consists of a generator G and a discriminator D, trained adversarially w.r.t. the extended conditional Wasserstein metric between two probability measures P, P G conditioned on a third probability measure P; DISPLAYFORM4 Here, z is a mean-zero unit-variance Gaussian vector random variable and · L indicates the space of all 1-Lipschitz functions. 
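The DISPLAYFORM placeholders in the paragraph above correspond to standard quantities; spelled out (as a reconstruction from the surrounding definitions, not a verbatim quote of the paper), they presumably read:

```latex
% Expected return and Q-function
\eta(\pi) = \mathbb{E}\Big[\textstyle\sum_{t\ge 0}\gamma^t r_t\Big], \qquad
Q^{\pi}(x,a) = \mathbb{E}\Big[\textstyle\sum_{t\ge 0}\gamma^t r_t \,\Big|\, x_0=x,\ a_0=a\Big].

% Squared loss (Eq. 1): the inner expectation requires double sampling
L(Q) = \mathbb{E}_{x,a}\Big[\big(Q(x,a) - r(x,a)
       - \gamma\,\mathbb{E}_{x'\sim T(\cdot|x,a)}[\max_{a'}Q(x',a')]\big)^2\Big].

% Bellman residual (Eq. 2): single-sample surrogate, adds a variance term
\widetilde{L}(Q) = \mathbb{E}_{x,a,x'}\Big[\big(Q(x,a) - r(x,a)
                   - \gamma\,\max_{a'}Q(x',a')\big)^2\Big].

% DQN target (Eq. 3): the max is taken under a slowly updated target network
y = r(x,a) + \gamma\,\max_{a'} Q_{\mathrm{target}}(x',a').

% Conditional Wasserstein objective for the GDM (Eq. 4)
W(P, P_G) = \mathbb{E}_{(x,a)\sim P''}\Big[\sup_{\|D\|_{L}\le 1}\
            \mathbb{E}_{x'\sim T(\cdot|x,a)}\big[D(x'\,|\,x,a)\big]
            - \mathbb{E}_{z}\big[D\big(G(z,x,a)\,|\,x,a\big)\big]\Big].
```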
In GDM, D solves the interior sup, while G's objective is to minimize this distance and learn the P | for all . We deploy our proposed GDM in GATS where P is the distribution over pairs of : (x, a) in the replay buffer, and P | is the distribution over the successor states: x, which is the transition kernel T (x |x, a). In the previous section, we discussed the DQN objective function, Eq. 2, which is an inherently biased estimator. In the next section, we demonstrate how big these biases can be in practice. Let · denote an estimate for a given quantity and e Q be the upper bound on estimation error in Q function 1 This bias vanishes in deterministic domains 3Under review as a conference paper at ICLR 2019 (bias+variance); |Q(x, a) − Q(x, a)| ≤ e Q, ∀x, a. For any given rollout policy π r, using GDM, RP, and estimated Q, the expected return is given by the following expression: DISPLAYFORM0 Since this expectation is estimated with given GDM,RP and the estimated Q-function, GATS efficiently estimates this expected return without any interaction with the real environment. Let ξ(π r, x) denote the same quantity under the ground truth model DISPLAYFORM1 Moreover, for the RP and GDM, where T and r are the estimated transition kernel and reward function 2 we assume ∀x, x, a ∈ X, A a r(x, a) − r(x, a) ≤ e R, and, DISPLAYFORM2 If GATS is run to estimate the Q function using DQN procedure with the estimated model of the environment, GDM and RP, the deviation in estimating ξ p (π r, x) ∀ x and π r is bounded as; DISPLAYFORM3 Proof in the Appendix A Proposition. 1 provides an insight to the contribution of each term in the error into the GATS predicted expected return ξ p (π r, x). The exponential vanishing error in Q estimation comes at the cost of variances in the model estimation. Therefore, the agent can choose H, the depth of roll-out, in such a way to minimize the estimation error through approximating the upper bound on error terms. We provide a more detailed description of generative adversarial tree search (Alg. 1). Generative Adversarial Tree Search (GATS) Alg. 1 is built upon DQN/DDQN and by re-using the experiences in the replay buffer it learns a reward model RP, model dynamics GDM, and Q-function. For planning, GATS deploys bounded-depth MCTS on the learned model (GDM and RP), instead of the real environment. It then uses the learned Q-function to estimate the maximum expected return at the leaf nodes Fig. 8. In order to learn the model dynamics, we propose a novel deep neural network, GDM, parameterized by θ GDM, demonstrating that the constituent models are sample-efficient and achieve strong predictive performance. The input to the GDM is the state (four consecutive frames) and a sequence of actions, from which GDM generates the successor frames. We train GDM by sampling mini-batches of experiences from the replay buffer. Simultaneously, we train RP, parameterized with θ RP, the same way. In our basic experiments, our exploration strategy for GATS, as with DQN, consists of the -greedy approach. We also propose a new optimism-based method of exploration for GATS. Throughout our experimental study, we observed that the approximated Wasserstein distance, the output of the discriminator, decreases for frequently-visited state-action experiences and stays high for rare experiences. Intuitively, for unfamiliar experiences, the generator is unable to fool the discriminator, so the Wasserstein distance between real and generated frames is high. 
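Before turning to the optimism bonus that this observation motivates, the planning step just described (bounded-depth search over the GDM and RP with the learned Q-function at the leaves) can be sketched as follows; the gdm, rp, and q_values interfaces are assumptions standing in for the learned networks, and pure exhaustive expansion is used in place of any MCTS sampling refinements.

```python
import numpy as np

def gats_plan(state, depth, gdm, rp, q_values, actions, gamma=0.99):
    """Depth-limited tree search over the learned model (a sketch).
    gdm(x, a) -> predicted next state, rp(x, a, x_next) -> predicted reward,
    q_values(x) -> array of Q(x, .) used to bootstrap at the leaves."""
    def value(x, d):
        if d == 0:
            return float(np.max(q_values(x)))          # leaf: learned Q-function
        returns = []
        for a in actions:
            x_next = gdm(x, a)                          # simulated transition (GDM)
            r = rp(x, a, x_next)                        # predicted reward (RP)
            returns.append(r + gamma * value(x_next, d - 1))
        return max(returns)

    scores = []
    for a in actions:
        x_next = gdm(state, a)
        scores.append(rp(state, a, x_next) + gamma * value(x_next, depth - 1))
    return actions[int(np.argmax(scores))], scores

# Toy usage with stand-in models on integer "states"
act, scores = gats_plan(0, depth=2,
                        gdm=lambda x, a: x + a,
                        rp=lambda x, a, xn: float(xn == 3),
                        q_values=lambda x: np.zeros(2),
                        actions=[1, 2])
print(act, scores)
```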
We compute the exponent of this distance and use its inverse to construct a pseudo-count , an adaptation of the idea of a state-action visitation count for continuous states and actionsÑ (x, a). In the optimism in the face of uncertainty principle , an optimistic Q-functionQ is learned for for t = to the end of episode do 5: DISPLAYFORM0 Store transition (x t, a t, r t, x t+1) and {(x i, a i, r i, x i+1)} m 0 in replay buffer 7:Sample a random minibatch of transitions (x τ, a τ, r τ, x τ +1) from replay buffer 8: DISPLAYFORM1 Update GDM, and RP end for 12: end for exploration. We deploy this notion of optimism and use the pseudo-count to compute the optimistic QQ DISPLAYFORM0 where c is the confidence scale constant. We can decouple Eq. 6 into the Q-function and confidence DISPLAYFORM1 where DISPLAYFORM2 Therefore, we learn C the same way as we learn Q by using DDQN. We add the learned C to ξ(π r, x) for our GATS planning, i.e. max π {ξ(π, x) + C(π, x)}. This encourages the agent to explore the parts of state space where the GDM is not yet accurate. Since those parts of the state space correspond to less frequently visited parts of the state space, this approach can help with better exploration compared to ε-greedy approach. We study the performance of GATS on 5 Atari games, namely Pong, Asterix, Breakout, Crazy Climber and Freeway, using the OpenAI Gym BID11. We adopt the common DQN architecture and game settings popularized by. For the GDM architecture (FIG1, we build upon the U-Net model image-to-image generator originally used in PIX2PIX . The GDM receives a state, sequence of actions, a Gaussian noise vector 3 and generates a predicted next state. 4 The RP is a simple model with 3 outputs, it receives the current state, action, and the successor state as input then outputs a class, one for each possible clipped reward {−1, 0, 1}. We train GDM and RP using prioritized weighted mini-batches of size 128 (more weight on recent samples), and update the two networks every 16 decision steps of GATS (4 times less frequently than the Q update). We deploy GATS as a bounded-depth MCTS on the learned model 5 and use the learned Q values at the leaves. Our experiments show that with significantly fewer samples, compared to DQN training, the GDM learns the environment's dynamics and generalizes well to a test set. We also show it adapts quickly even if we change the policy or the difficulty or the mode of the domain. In order to develop the current GDM, we experimented many different model architectures for the generator-discriminator, as well as different loss functions. We compare performance visually on test samples, since the L1 and L2 losses are not good metrics for learning game dynamics as demonstrated in detail in Apx. F and FIG1. We experiment with the PatchGAN discriminator (patch sizes 1, 16, and 70) and L1 loss used in PIX2PIX , finding that this architecture takes approximately 10× more training iterations to learn game dynamics for Pong than the current GDM. This is likely since the learning game dynamics such as ball position requires the entire frame for the discriminator, more than the patch-based texture loss given by PatchGAN. We also experiment with the ACVP architecture for the generator trained on L2 loss using the same hyper-parameters as specified in the original paper, and we find it also takes an order of magnitude more training iterations to learn game dynamics. 
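Returning to the optimism construction above (Eqs. 6-7), a minimal sketch of the per-step bonus derived from the critic's Wasserstein estimate is given below; the exponential form of the pseudo-count and the inverse square-root scaling are assumptions consistent with the description, and c is the confidence scale constant.

```python
import numpy as np

def pseudo_count(w_distance):
    """Pseudo-count built from the discriminator's Wasserstein estimate:
    rarely seen (x, a) pairs keep a large distance, hence a small count."""
    return np.exp(-np.asarray(w_distance))

def exploration_bonus(w_distance, c=0.1):
    # Count-based optimism bonus; C is then learned with the same (D)DQN update
    # applied to this bonus instead of the environment reward, and planning
    # maximizes xi(pi, x) + C(pi, x).
    return c / np.sqrt(pseudo_count(w_distance))

print(exploration_bonus(0.2), exploration_bonus(4.0))   # familiar vs. novel pair
```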
We hypothesize this is because ACVP is a much larger network optimized for long term predictions, and does not take advantage of skip-connections and the discriminator loss as in GDM. For the choice of the GAN loss, we first tried the original GAN-loss BID15, which is based on Jensen-Shannon distance. With this criterion, not only it is difficult to find the right parameters but also not stable enough for non-stationary domains. We did the experiments using this loss and trained for Pong while the ing model was not stable enough for RL tasks. The training loss is sometimes unstable even for a given fixed data set. Since, Wasserstein metric provides Wasserstein distance criterion and is a more general loss in GANs, we deployed W-GAN for our GDM. Because W-GAN requires the discriminator to be a bounded Lipschitz function, the authors adopt gradient clipping. Using W-GAN provided improvement but still not sufficient for RL where fast and stable convergence is required. In order to improve the learning stability, parameter robustness, and quality of frames, we also tried the follow-up work on improved-W-GAN BID16, which adds a gradient penalty into the loss of discriminator in order to satisfy the bounded Lipschitzness. Even though it made the GDM more stable than before, it was still not sufficient due to the huge instability in the loss curve. Finally, we tried spectral normalization , a recent technique that not only provides high-quality frames, but also converges quickly while the loss function stays smooth. Because of its stability, robustness to hyperparameter choices, and fast learning due spectral normalization combined with W-GAN, GDM is able to handle the change in the state distribution in RL and still preserve the frame quality. More detailed study is left to Appendix. FIG1 shows the efficiency of GDM and how accurate it can generate next 9 frames just conditioning on the previous 4 frames and the trajectory of actions. We train the GDM on a replay buffer of 100,000 frames using a 3-step loss, and evaluate its performance on 8-step roll-outs on unseen 10,000 frames. We also tried the learned Q function on both generated and real consecutive frames and observed that the relative deviation is significantly small (O(10 −2)). Therefore as DRL methods are data hungry, we can re-use the data generated by GDM to train the Q-function even more. In later we study the ways we can incorporate the generated samples by GDM and RP in order to train the Q-function, as it is done in Dyna-Q. It worth noting that, since the GDM and RP modes adapt quickly, it is critical to come up with a strategy for how to sample from the replay buffer. Our analysis suggests sampling fresher experience with higher probability, compared to old, potentially stale samples (Fig. 6). We extend our study to the case where we change the model dynamics by changing the game mode. In this case, by going from default mode to alternate mode in pong, the opponent paddle gets halved in size. We expected that in this case, where the game became easier, the DDQN agent would preserve its performance but surprisingly it gave us the most negative score possible, i.e -21 and broke. Therefore, we start fine-tuning DDQN and took 3M time steps (12M frames) to master the game again. It is worth noting that it takes DDQN 5M time steps (20M frames) to master from scratch. While DDQN appears unacceptably brittle in this scenario, GDM and RP adapt to the new model dynamics in 3k samples, which is significantly smaller (see details in F). 
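As a small illustration of the final loss configuration described above (a Wasserstein objective with a spectrally normalized critic), the PyTorch sketch below shows the loss structure on toy vectors; the real GDM discriminator is a convolutional network conditioned on frames and actions, so the architecture here is only a stand-in.

```python
import torch
import torch.nn as nn

def sn_linear(n_in, n_out):
    # Spectral normalization keeps each layer's Lipschitz constant near 1,
    # which is what the Wasserstein critic requires.
    return nn.utils.spectral_norm(nn.Linear(n_in, n_out))

class Critic(nn.Module):
    """Toy conditional Wasserstein critic with spectral normalization."""
    def __init__(self, x_dim=32, cond_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            sn_linear(x_dim + cond_dim, hidden), nn.LeakyReLU(0.2),
            sn_linear(hidden, hidden), nn.LeakyReLU(0.2),
            sn_linear(hidden, 1))

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=-1))

critic = Critic()
real_next, fake_next, cond = torch.randn(8, 32), torch.randn(8, 32), torch.randn(8, 16)
d_loss = critic(fake_next, cond).mean() - critic(real_next, cond).mean()   # critic step
g_loss = -critic(fake_next, cond).mean()                                   # generator step
print(d_loss.item(), g_loss.item())
```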
For this study, we wrote an Gym-style wrapper for a new version of ALE which supports different modes of difficulty levels. The exploration-exploitation trade-off is extensively studied in RL literature (; BID10 BID3). The regret analysis of MDPs (; BID7) is investigated, where the Optimism in the Face of Uncertainty (OFU) principle is applied to guarantee a high probability regret upper bound. For Partially Observable MDPs, OFU is known to provide a high probability regret upper bound (Azizzadenesheli et al., Given four consecutive frames of Atari games, and a sequence of eight actions, GDM generates sequences of the future frames almost identical to the real frames. First row: A sequence of real frames. Second row: a corresponding sequence of generated frames 2016a). Furthermore, more general settings like partial monitoring games are theoretically tackled BID8 and minmax regret guarantees are provided. While theoretical RL addresses variety of the trade-offs in the exploration-exploitation, this problem is still prominent in practical reinforcement learning research (; BID0 BID5 . On the empirical side, recent successes in video games has sparked a flurry of research interest. For example BID12 BID14) investigate DRL for dialogue policy learning, with addressing the efficiency of exploration. To combat the sample complexity shortcoming, designing an efficient exploration strategy in DRL has emerged as an active research topic, e.g. optimism and Thompson Sampling (; ; BID6 .Minimizing the Bellman residual using Bootstraps of the Q-function has been the core of value based DRL methods . Moreover, it has been extensively studied that minimizing the Bellman residual provides a biased estimator of the value function BID1 ). In order to mitigate this bias, DQN proposes to update the target value less frequently than the rest of the model in order to mimic the Fitted-Q update. This tweak might reduce the bias in the value estimator but significantly increases the sample complexity. On the other hand, Monte Carlo sampling strategies (; Kocsis & Szepesvári, 2006) have been proposed as efficient methods for planning, but suffer from high sample complexity in real world applications. Our provide a deeper understanding of this bias in Q and its relationship to model-based planning. Despite GANs' capabilities at generating perceptually realistic images, they are difficult to train and often unstable, especially for non-stationary tasks like RL. In recent years, there has been significant progress in developing stable learning procedures. The Wasserstein GAN (W-GAN) uses the Wasserstein metric as a notion of distance between two distributions, while requires the discriminator to be from the set of bounded Lipschitz functions. In order to satisfied this boundedness, BID16 proposed the improved W-GAN, which penalizes the discriminator's gradient although it still hard to train. Spectral normalization of discriminators has been studied recently, and has been shown empirically to converge more reliably. We leverage these advances in creating a stable learning procedure for the GDM. Recently, conditional video prediction has emerged as a growing area of research. Previous work trains large models with L2 loss to predict long future trajectories of frames given actions . The quality of the generated frames is measured by training DQN on them. However, since these models struggle to produce high frequency details and cannot produce meaningful frames in stochastic environments, due to the L2 loss. 
We implemented this work and compared it against GDM, a much smaller architecture with discriminative loss. We observe that GDM requires significantly fewer iterations to converge to perceptually unidentifiable frames. We also observed significantly lower error for GDM when a Q function is applied to generated frames from both models. Finally, learned environment models, such as those used for conditional video prediction, are leveraged in. A learned model encodes the generated trajectories into an abstract representation, which is used as an additional input to the policy model. They validate their methods on Sokoban, a small puzzle world, and show the capability of their model on multi-task learning in their miniPacman environment. does not use explicit planning and roll-out strategies. A similar approach to GATS is concurrently developed and empirically studied on Car Racing and VizDoom BID13. Further work employs transition models in order to perform rollouts in the encoded state representation , and demonstrate modest gains on Atari games (compared to DQN). A similar approach also has been studied in robotics (Wahlström et al., 2015). In contrast, we are able to learn model dynamics in the original state/pixel space. GATS synthesizes this prior work into a flexible framework for studying model-based and model-free reinforcement learning with four basic building blocks: (i) value learning (ii) planning (iii) a reward predictor, and (iv) dynamics model. This freedom in the GATS framework allows for many different variations and adaptations for a given domain and problem, and thus provides many avenues for further exploration. For instance, for value learning (i), one can use Count-based methods BID9. For planning (ii), one can use upper confidence bound tree search (UCT) (Kocsis & Szepesvári, 2006) or policy gradient methods . For the reward model (iii), if the reward has a continuous distribution, one can learn the mean reward using any regression model. Lastly, for model dynamics (iv), one can extend GDM or choose any other image generating model. Interestingly, this work can be extended to the λ-return setting, where a mix of n steps are acquired. While GATS is an appealing and flexible RL paradigm, it suffers from high computation cost due to modeling in image space and due to MCTS. Potentially, we could mitigate this overhead with parallelization or distilled policy methods BID17 by approximating with a smaller network. Discussion of negative In this section, we enumerate several hypotheses for why GATS under-performs DQN despite near-perfect modeling, and discuss several attempts to improve GATS based on these hypotheses. The following are shown in TAB1. DISPLAYFORM0 Replay Buffer: The agent's decision under GATS sometimes differs from that of the learned Q model. Therefore, it is important that we allow the Q-learner to observe the outcome of important outcomes in the generated MCTS states. To address this problem, we tried storing the samples generated in tree search and use them to further train the Q-model. We studied two scenarios: (i) using plain DQN with no generated samples and (ii) using Dyna-Q to train the Q function on the generated samples in MCTS. However, these techniques did not improve the performance of GATS.Optimizer: Since the problem is slightly different from DQN, especially in the Dyna-Q setting with generated frames, we tried a variety of different learning rates and minibatch sizes to tune the Q-learner. 
We considered variety of different ways to use the samples generated in tree search for learning in Dyna-Q. (i) Since we use the Q-function on the leaf nodes, we tried using the generated experience at the leaf nodes in the replay buffer. (ii) We randomly sampled additional generated experience from the tree to update the Q-learner. (iii) We choose the generated experience corresponding to the greedy action from the Q-learner on the generated tree by MCTS, which represents the trajectory we would have received following the greedy Q algorithm. We hypothesized that if we trained Q on its own decisions it would improve the learned Q-function. (iv) We also considered the case of following the ε-greedy policy induced by Q, rather than the greedy Q itself. (v) Finally, since following GATS or Q in a bigger shift in the later part of tree, we used generated trajectories from the greedy and ε-greedy policies and stored experience which happened in the later part of tree with higher probability. Specifically, we tried a variety of different geometric distributions to see which one was most helpful. Optimism: Optimism-based exploration strategy with the GDM. We observed that areas that of the state space that are novel to the GDM are often explored less, so the GDM has a higher absolute value of the Wasserstein distance for rarely visited state action pairs and a lower value of the Wasserstein distance for frequently seen state-action pairs. We added (i) the W -loss and also (ii) its exponent as a notion of extrinsic reward to encourage exploration. In (iii) and (iv) We did the same with a summation of different losses. Despite this extensive and costly study on GATS, we were not able to show that GATS benefits besides a limited improvement in training speed on Pong. Hypothesis on negative . From these negative , we propose a hypothesis for why tree search with short roll-outs, such as the GATS algorithm, might not boost performance even with perfect modeling with GDM and RP, despite reducing local bias. Consider the hypothetical situation described in FIG2 where a fish starts with an initialization of the Q function such that the greedy action is represented by yellow arrows. If the fish follows DQN, it reaches the shark quickly and receives a negative reward. The agent learns the down action is not a good action. Now consider the GATS algorithm with the same Q-function and 2 step look-ahead, even with a true model simulator. In this case, when the agent reaches the step above the sharks, the MCTS roll-out informs the agent that there is a negative reward of going down. Thus, the agent chooses action right, following the GATS action (red action). This holds for all following states with the red arrow in them. GATS locally avoid bad states, but does this information globally propagate?In this case, with ε-greedy exploration, the agent finally dies and gets a negative reward. This negative reward happens much later than for a DQN agent. Therefore, many more updates are required for the agent to learn that action down at the beginning of the game is not a good action. Moreover, this negative signal may even vanish due to the long update lengths. This shows how GATS roll-outs Following GATS with depth two locally prevents the goldfish from hitting the sharks but slows down learning that action down is sub-optimal due to the delay in negative signal. 
(b) Even if the goldfish uses the prediction of the future event for further learning, it might just slightly mitigate the slow-down in the learning process, but the fundamental issue is still present, and the agent suffers from a slow-down in learning. (c) For a grid world of 10 × 10, GATS with depth of 10 (GATS-10) in the highest return. Moreover, GATS with nonzero depth locally saves the agent from hitting the sharks, but in the long run it degrades the performance.can locally help to avoid (or reach) catastrophic (or good) events, but it can also slow down global understanding. As we suggested in our negative , this problem can be solved by putting the generated experience of MCTS in the replay buffer and deploying Dyna-Q. However, FIG2 illustrates the situation where the GATS action in the third row before the sharks is "right". In this case, two-step roll-outs do not see the sharks, and thus Dyna-Q does not speed learning, all while requiring significantly more computation. In practice, especially with limited memory of the replay buffer and limited capacity of the function classes, it can be difficult to tune Dyna-Q to work well. As the Goldfish and gold bucket experiment shows, deploying MCTS with Q-learning can have complex interactions. We showed in Proposition 1 that given a fixed estimated Q function, MCTS in a better worst-case error in the Q estimation. However, this does not guarantee that learning the Q function while performing MCTS will not in a worse estimated Q function. When this worse estimated Q-function is used in the leaf nodes with MCTS, it can in worse performance, as indicated in our empirical . To test this hypotheses empirically in a controlled environment, we implemented the 10x10 version of Goldfish and gold bucket environment. We tested GATS with a depth of 0 (i.e., the plain DQN), 1, 2, 4 and GATS +Dyna-Q with the depth of 1 and 2. FIG2 (c) represents the per episode return of different algorithms. Each episode has a maximum length of 100 steps unless the agent either reaches the gold bucket (the reward of +1) or hit any of the sharks (the reward of −1). At each time step, the agent also suffers a cost of 0.05 for not accomplishing the task, while the discount factor is 0.99. We train the Q-network using DQN and a mini-batch of size 64. We observe that GATS with the depth of 10 (the dimension of the grid) receives the highest return. Moreover, we observe that GATS with nonzero depth locally saves the agent to hit the sharks, since in the initial phase, GATS with non-zero depth has higher return than DQN. However, in the long run, GATS with short roll-outs (e.g. GATS-1 and GATS-2) degrades the performance, as seen in the later parts of the run. Furthermore, we see that the Dyna-Q approach also might fail in improving the performance in the naive case. We train GATS +Dyna-Q with both executed experiences and predicted ones in the oracle dynamics model. We observe that GATS +Dyna-Q does not provide much benefit over GATS. Without sophisticated sampling algorithms, GATS +Dyna-Q can cause the high-capacity network to overfit to the repeated samples. For many complex applications, like hard Atari games, GATS may require significantly longer roll-out depths with sophisticated Dyna-Q in order to perform well, which was computationally in-feasible for this study. 
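For completeness, a minimal version of the 10x10 Goldfish-and-gold-bucket environment used in this experiment can be written in a few lines; the rewards (+1 for the bucket, -1 for a shark, -0.05 per step), the 100-step limit, and the four movement actions follow the text, while the number and placement of sharks below are illustrative assumptions.

```python
import numpy as np

class GoldfishGrid:
    """Sketch of the 10x10 Goldfish-and-gold-bucket grid world described above."""
    def __init__(self, size=10, n_sharks=8, seed=0):
        rng = np.random.default_rng(seed)
        self.size = size
        self.goal = (size - 1, size - 1)
        cells = [(i, j) for i in range(size) for j in range(size)
                 if (i, j) not in [(0, 0), self.goal]]
        self.sharks = {(int(i), int(j)) for i, j in rng.permutation(cells)[:n_sharks]}
        self.reset()

    def reset(self):
        self.pos, self.t = (0, 0), 0
        return self.pos

    def step(self, action):                       # 0: up, 1: down, 2: left, 3: right
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos, self.t = (r, c), self.t + 1
        if self.pos == self.goal:
            return self.pos, 1.0, True            # reached the gold bucket
        if self.pos in self.sharks:
            return self.pos, -1.0, True           # hit a shark
        return self.pos, -0.05, self.t >= 100     # per-step cost, 100-step limit

env = GoldfishGrid()
state, done, ret = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step(np.random.randint(4))
    ret += reward
print(ret)
```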
However, the insights gained in designing near-perfect modeling algorithms and the extensive study of GATS highlight several key considerations and provide an important framework for effectively designing algorithms that combine model-based and model-free reinforcement learning. Let us restate the estimated returns with the learned model as follows: DISPLAYFORM0 Now consider the following lemma. Lemma 1 (Deviation in Q-function) Define e_Q as the uniform bound on the error in the estimation of the Q function, such that |Q(x, a) − Q̂(x, a)| ≤ e_Q, ∀x, a. Then: DISPLAYFORM1 Proof 1 For a given state x, define a_1(x) := arg max_a Q(x, a) and a_2(x) := arg max_a Q̂(x, a). Then: DISPLAYFORM2 With a similar argument, we have: DISPLAYFORM3 resulting in Lemma 1. In the remaining proof, we repeatedly apply the addition and subtraction technique to upper bound the error. To show how we apply this technique, we illustrate in detail how we derive the error terms T̂(x_1 |x, a_1) − T(x_1 |x, a_1) and r̂(x, a_1) − r(x, a_1) for the first time step. Let us restate the objective that we desire to upper bound: DISPLAYFORM4 We can thus express the third and fourth terms of Eq. 9, along with the addition and subtraction terms, as: DISPLAYFORM5 Notice that the first two terms in Eq. 10 are the same except in the first reward term, from which we derive the error term r̂(x, a_1) − r(x, a_1). We refactor the first two terms in Eq. 10 as: DISPLAYFORM6 Finally, we have the remaining last two terms of Eq. 10: DISPLAYFORM7 We repeatedly expand this remainder for the following time steps using the same steps as described above; following this procedure, we obtain the full bound. Figure 3 caption: The sequence of four consecutive decision states, and the corresponding Q-function learned by DQN at t, t + 1, t + 2, t + 3 from left to right, where the agent loses the point. At time step t, the optimal action is up, but the Q-value of going up is lower than that of the other actions. More significantly, even though the agent chooses action down and goes down, the Q value of action down at time step t is considerably far from the maximum Q value of the next state at time step t + 1. Ideally, the maximum Q value at time step t + 1 should be close to the Q value of action down, but it is very different. Moreover, in FIG4 and Table 4 we investigate the case where the agent catches the ball. The ball is going to the right and the agent needs to catch it. At time step t, the paddle is not in the direction of the ball's velocity and, as shown in Table 4, the optimal action is down. But a closer look at the estimated Q values reveals that the value of action up is unreasonably close to that of action down, even though choosing up could lead to losing the point. Lastly, we studied the existing errors in the estimation of the Q function using DQN. Table 4 caption: Action up is sub-optimal but has a high value, considerably close to that of action down, while actions down and stay have more similar values than up and down. In Table 3, if the agent could roll out even one step before making a decision, it could observe the negative consequence of action down. The positive effect of the roll-out is more significant in earlier stages of Q learning, where the Q estimation is further off. We run GATS with 1, 2, 3, and 4 steps of lookahead (GATS-1, GATS-2, GATS-3, GATS-4) and show its performance improvement over DQN in FIG5. FIG5 also shows the RP prediction accuracy. We observe that when the transition phase occurs at decision step 1M, the RP model misclassifies the positive rewards.
But the RP rapidly adapts to this shift and reduces the classification error to fewer than 2 errors per episode. As DRL methods are data hungry, we can re-use the data to efficiently learn the model dynamics. FIG1 shows how accurately the GDM can generate the next 9 frames conditioned only on the first frame and the trajectory of actions. This trajectory is generated at decision step 100k. Moreover, we extend our study to the case where we change the model dynamics by changing the game mode. In this case, by going from the default mode to the alternate mode in Pong, the opponent paddle gets halved in size. We expected that in this case, where the game became easier, the DDQN agent would preserve its performance, but surprisingly it gave us the most negative score possible, i.e., −21, and broke. Therefore, we started fine-tuning DDQN, which took 3M time steps (12M frames) to master the game again. It is worth noting that it takes DDQN 5M time steps (20M frames) to master the game from scratch. While DDQN shows a vulnerable and undesirably breakable behaviour in this scenario, the GDM and RP, thanks to their detailed design, adapt to the new model dynamics within 3k samples, which is remarkably smaller (see details in Appendix F). In addition to GATS on DQN, we also study two other sets of experiments on DDQN. Since FIG5 shows that deeper roll-outs beyond one step do not provide much additional benefit for Pong, we focus on one-step roll-outs for the next two experiments. In the first experiment, we equip GATS+DDQN with the mentioned Wasserstein-optimism approach, and compare it with DDQN and plain GATS+DDQN, which both use ε-greedy based approaches for exploration. In Fig. 6 (left), we observe that this optimism heuristic is helpful for better exploration. In the second experiment, we investigate the effect of prioritizing training samples for the GDM, where fresher samples are more likely to be chosen; we do this in all experiments reported in Fig. 6 (left). We study the case where the input samples to the GDM are instead chosen uniformly at random from the replay buffer in Fig. 6 (right). In this case, GATS learns a better policy faster at the beginning of the game, but the performance stays behind DDQN, due to the shift in the state distribution. It is worth mentioning that for optimism-based exploration there is no ε-greedy, which is why it gets close to the maximum score of 21. We tested DDQN and GATS-DDQN with ε = 0, and they also perform close to 21. We further extend the study of the GDM to more games and observed the same robust behaviour as on Pong. We also tried to apply GATS to more games, but were not able to extend it, mainly due to its high computation cost. We tried different strategies of storing samples generated by MCTS, e.g., randomly generated experience, the trajectory followed by Q on the tree, storing just the leaf nodes, or the max leaf, as well as a variety of different distributions, e.g., geometric distributions, but again, due to the high cost of hyperparameter tuning, we were not able to find a setting in which GATS works for other games. Figure 6 caption: left: The optimism approach for GATS improves the sample complexity and learns a better policy faster. right: Sampling the replay buffer uniformly at random to train the GDM makes the GDM slow to adapt to novel parts of the state space. We include the results for GATS with 1-step look-ahead (GATS-1) and compare its performance to DDQN as an example of the negative results we obtained with short roll-outs with the GATS algorithm.
While applying the same hyperparameters we tuned for Pong results in performance only slightly above a random policy on Asterix, we re-did the hyperparameter tuning specifically for this game, and FIG6 shows the best performance we achieved. This illustrates the challenges of learning strong global policies with short roll-outs even with near-perfect modeling. Figure 8 caption: Roll-out of depth two starting from state x_t. Here the x̂'s are the states generated by the GDM. Q(x̂, a(x̂)) denotes the predicted value of state x̂ when choosing the greedy action a_Q(x̂) := arg max_{a′∈A} Q(x̂, a′). Figure 9 caption: Training the GAN and Q_θ′ using the longer trajectory of experiences. The GDM model consists of seven convolution and seven deconvolution layers. Each convolution layer is followed by a Batch Normalization layer and a leaky ReLU activation with a negative slope of −0.2. Each deconvolution layer is followed by a Batch Normalization layer and a ReLU activation instead of a leaky ReLU. The encoder part of the network uses channel dimensions of 32, 32, 64, 128, 256, 512, 512 and kernel sizes of 4, 4, 4, 4, 2, 2, 2; the reverse is true for the decoder part. We concatenate the bottleneck and the next 5 deconvolution layers with a random Gaussian noise of dimension 100, the action sequence, and also the corresponding layer in the encoder; the last layer of the decoder is not concatenated (FIG1). For the discriminator, instead of plain convolution, we use SN-convolution, which ensures the Lipschitz constant of the discriminator is below 1. The discriminator consists of four SN-convolution layers, each followed by a Batch Normalization layer and a leaky ReLU activation with a negative slope of −0.2. The number of channels increases as 64, 128, 256, 16 with kernel sizes of 8, 4, 4, 3, followed by two fully connected layers of size 400 and 18 whose inputs are concatenated with the action sequence. The output is a single number without any non-linearity. The action sequence uses a one-hot encoding representation. We train the generator using the Adam optimizer with a weight decay of 0.001, a learning rate of 0.0001, and beta1, beta2 = 0.5, 0.999. For the discriminator, we use the SGD optimizer with a smaller learning rate of 0.00001, a momentum of 0.9, and a weight decay of 0.1.
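To make the training setup above concrete, the following is a rough, self-contained sketch of one GDM training iteration in PyTorch. It uses tiny fully connected stand-ins for the convolutional generator and SN-convolution critic described above (so all shapes and layer sizes are purely illustrative), the optimizer settings quoted in the previous paragraph, and Wasserstein-style losses with the 10*L1 + 90*L2 reconstruction terms spelled out in the next paragraph; it is not the code used for the reported experiments.

```python
import torch
import torch.nn as nn

FRAME = 64 * 64          # stand-in flattened frame size (illustrative)
N_ACTIONS, NOISE = 6, 100

class TinyGenerator(nn.Module):
    """Stand-in for the 7-conv / 7-deconv GDM generator described above."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4 * FRAME + N_ACTIONS + NOISE, 512),
                                 nn.ReLU(), nn.Linear(512, FRAME), nn.Tanh())
    def forward(self, frames, action, z):
        return self.net(torch.cat([frames, action, z], dim=1))

class TinyCritic(nn.Module):
    """Stand-in for the SN-convolution Wasserstein critic (scalar output, no non-linearity);
    the real critic enforces a Lipschitz constraint via spectral normalization."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(5 * FRAME + N_ACTIONS, 400),
                                 nn.LeakyReLU(0.2), nn.Linear(400, 1))
    def forward(self, frames, action, next_frame):
        return self.net(torch.cat([frames, action, next_frame], dim=1))

generator, critic = TinyGenerator(), TinyCritic()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999), weight_decay=1e-3)
d_opt = torch.optim.SGD(critic.parameters(), lr=1e-5, momentum=0.9, weight_decay=0.1)

def gdm_step(frames, action, next_frame):
    """One update on a replay tuple (f1..f4, a4, f5): frames is the flattened 4-frame context,
    action is a one-hot vector, next_frame is the target frame f5 in [-1, 1]."""
    z = torch.randn(frames.size(0), NOISE)   # 100-dim Gaussian noise, as described above
    fake = generator(frames, action, z)

    # Critic update: raise scores on real next frames, lower them on generated ones.
    d_loss = critic(frames, action, fake.detach()).mean() - critic(frames, action, next_frame).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: fool the critic, plus the 10*L1 + 90*L2 reconstruction terms.
    adv = -critic(frames, action, fake).mean()
    rec = 10 * (fake - next_frame).abs().mean() + 90 * ((fake - next_frame) ** 2).mean()
    g_loss = adv + rec
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```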
Given that we use the Wasserstein metric for GDM training, the following are the generator and discriminator gradient updates: for a given set of 5 frames and an action sampled from the replay buffer, (f_1, f_2, f_3, f_4, a_4, f_5), and a random Gaussian vector z: Discriminator update: DISPLAYFORM3 Generator update: DISPLAYFORM4 where θ_GDM = {θ_G, θ_D} are the generator and discriminator parameters. In order to improve the quality of the generated frames, it is common to also add a combination of further losses that capture different frequency aspects of the frames (Isola et al.). Therefore, we also add a 10 * L1 + 90 * L2 loss to the GAN loss in order to improve the training process. It is worth noting that these losses are defined on frames with pixel values in [−1, 1]; therefore they are small, but still able to help speed up the learning. In order to be able to roll out for longer and preserve the GDM quality, we also train the generator using self-generated samples, i.e., given the sequence (f_1, f_2, f_3, f_4, a_4, f_5, a_5, f_6, a_6, f_7, a_7, f_8), we also train the generator and discriminator on samples the generator produces conditioned on its own generated samples, up to a depth of three. This allows us to roll out for a longer horizon of more than 10 steps and still preserve the GDM accuracy. Q function on generated frames. Ideally, if the GDM model were perfect at generating frames, i.e., the space of generated frames were, pixel by pixel, the same as that of the real frames, then for the leaf nodes x_H we could use max_a Q(x_H, a; θ), learned by the DQN model on real frames, in order to assign values to the leaf nodes. But in practice, instead of x_H, we have access to x̂_H, a generated state that is perceptually similar to x_H (FIG1), but from the perspective of Q_θ they might not be similar over the course of training of Q_θ. In order to compensate for this error, we train another Q-network, parameterized by θ′, to provide a Q-value similar to that of Q_θ for generated frames. To train Q_θ′, we minimize the L2 norm between Q_θ′ and Q_θ for a given GDM-generated sample state and trajectory. For this minimization, we use Adam with a learning rate of 0.0001, no weight decay, and beta1, beta2 = 0.5, 0.999. We experimented with weight decay and adding an L1 loss, but we find these optimizations degrade the performance of the network. We tracked the differences Q_θ(x) − Q_θ′(x̂) and Q_θ(x) − Q_θ(x̂) and observed that both of these quantities are negligible. We ran GATS without Q_θ′, with just Q_θ, and observed only slightly worse performance. F GDM DOMAIN ADAPTATION. We evaluate the GDM's ability to perform domain adaptation using the environment mode and difficulty settings in the latest Arcade Learning Environment. We first fully train the GDM and DDQN on Pong with Difficulty 0 and Mode 0. We then sample 10,000 frames for training the GDM on Pong with Difficulty 1 and Mode 1, which has a smaller paddle and different game dynamics. We also collect 10,000 additional frames for testing the GDM. We train the GDM using transferred weights and reinitialized weights on the new environment samples and observe the L1 and L2 loss on training and test samples over approximately 3,000 training iterations, and we observe that they decrease together without significant over-fitting in FIG1. To qualitatively evaluate these frames, we plot the next-frame predictions of four test images in FIG1.
We observe that training the GDM from scratch converges to a similarly low L1 and L2 loss quickly, but it fails to capture the game dynamics of the ball. This indicates that the L1 and L2 losses are poor measurements of a model's ability to capture game dynamics. The GDM is very efficient at transfer. It quickly learns the new model dynamics and is able to generalize to new test states with an order of magnitude fewer samples than the Q-learner. Finally, we evaluate the ability of the GDM to generate different future trajectories from an initial state. We sample an initial test state and random action sequences of length 5. We then unroll the GDM for 5 steps from the initial state. We visualize the different roll-out trajectories in Figs. 12-13. Figure 13 caption: Eight 5-step roll-outs of the GDM on the Asterix domain, generated by sampling an initial state with 8 different 5-action-length sequences.
Surprising negative results on Model Based + Model Free deep RL
533
scitldr
We present a novel architecture of GAN for a disentangled representation learning. The new model architecture is inspired by Information Bottleneck (IB) theory thereby named IB-GAN. IB-GAN objective is similar to that of InfoGAN but has a crucial difference; a capacity regularization for mutual information is adopted, thanks to which the generator of IB-GAN can harness a latent representation in disentangled and interpretable manner. To facilitate the optimization of IB-GAN in practice, a new variational upper-bound is derived. With experiments on CelebA, 3DChairs, and dSprites datasets, we demonstrate that the visual quality of samples generated by IB-GAN is often better than those by β-VAEs. Moreover, IB-GAN achieves much higher disentanglement metrics score than β-VAEs or InfoGAN on the dSprites dataset. Learning good representations for data is one of the essential topics in machine learning community. Although any strict definition for it may not exist, the consensus about the useful properties of good representations has been discussed throughout many studies BID9; BID10. A disentanglement, one of those useful properties of representation, is often described as a statistical independence or factorization; each independent factor is expected to be semantically well aligned with the human intuition on the data generative factor (e.g. a chair-type from azimuth on Chairs dataset BID6, or age from azimuth on CelebA dataset ). The learned representation distilling each important factors of data into a single independent direction is hard to be done but highly valuable for many other downstream tasks (; b; .Many models have been proposed for disentangled representation learning (; ; ; ; BID13 . Despite their impressive , they either require knowledge of ground-truth generative factors or weak-supervision (e.g. domain knowledge or partial labels). In contrast, among many unsupervised approaches BID14;;; BID15 ), yet the two most successful approaches for the independent factor learning are β-VAE BID20 and InfoGAN BID12. BID20 demonstrate that encouraging the KL-divergence term of Variational autoencoder (VAE) objective by multiplying a constant β > 1 induces a high-quality disentanglement of latent factors. As follow-up research, BID10 provide a theoretical justification of the disentangling effect of β-VAE in the context of Information Bottleneck theory BID25 BID24 BID12 propose another fully unsupervised approach based on Generative Adversarial Network (GAN) BID18. He achieves the goal by enforcing the generator to learn disentangled representations through increasing the mutual information (MI) between the generated samples and the latent representations. Although InfoGAN can learn to disentangle representations for relatively simple datasets (e.g. MNIST, 3D Chairs), it struggles to do so on more complicated datasets such as CelebA. Moreover, the disentangling performance of the learned representations from InfoGAN is known as not good as the performance of the β-VAE and its variant models BID20; BID11.Stimulated by the success of β-VAE models BID10 BID20; BID11 BID17 with the Information Bottleneck theory BID5 BID0 ) in disentangled representations learning task, we hypothesize that the weakness of InfoGAN in the representation learning may originate from that it can only maximize the mutual information but lacks any constraining mechanisms. In other words, InfoGAN misses the term upper-bounding the mutual information from the perspective of IB theory. 
We present a novel unsupervised model named IB-GAN (Information Bottleneck GAN) for learning disentangled representations based on IB theory. We propose a new architecture of GANs from IB theory so that the training objective involves an information capacity constraint that InfoGAN lacks but β-VAE has. We also derive a new variational approximation algorithm to optimize IB-GAN objective in practice. Thanks to the information regularizer, the generator can use the latent representations in a manner that is both more interpretable and disentangled than InfoGANThe contributions of this work are summarized as follows:1. IB-GAN is a new GAN-based model for fully unsupervised learning of disentangled representations. To the best of our knowledge, there is no other unsupervised GAN-based model for this sake except the InfoGAN's variants BID20 ).2. Our work is the first attempt to utilize the IB theory into the GAN-based deep generative model. IB-GAN can be seen as an extension to the InfoGAN, supplementing an information constraining regularizer that InfoGAN misses.3. IB-GAN surpasses state-of-the-art disentanglement scores of BID20 BID16 on dSprites dataset . The quality of generated samples by IB-GAN on 3D Chairs BID6 and CelebA is also much realistic compared to that of the existing β-VAE variants of the same task. We remind some s: IB principle in section 2.1 and the connection between β-VAE and IB theory BID10 in section 2.2. Lastly, InfoGAN BID12 ) is briefly reviewed in section 2.3. Let the input variable X and the target variable Y distributed according to some joint data distribution p(x, y). The goal of the IB BID25 BID5 is to obtain a compressive representation Z from the input variable X, while maintaining the predictive information about the target variable Y as much as possible. The objective for the IB is DISPLAYFORM0 where I(·, ·) denotes MI and β ≥ 0 is a Lagrange multiplier. The goal is to obtain the optimal representation encoder q φ (z|x) that balances the trade-off between the maximization and minimization of both MI terms. Hence, the IB objective in Eq. provides a natural means for good representations by enforcing the representation Z to ignore irrelevant information from the input and simultaneously to be predictive about the target, which can act as a minimal sufficient statistic of X for predicting Y BID25 BID5.A growing body of studies BID5 BID1 BID13 supports that the learned representations adapting the IB objective tend to be highly efficient and distilled in terms of its code length ). As a consequence, the learned representation is more generalizable and robust to adversarial attack BID5, disentangled BID10 and invariant to nuance factors BID0. Moreover, the IB framework prevents weight over-fitting BID1 BID27, and can be used to visualize high dimensional embedding in a low dimensional latent space . β-VAE BID20 ) is one of the state-of-the-art unsupervised disentangled representation learning models. The key idea of β-VAE is to multiply a constant β ≥ 1 to the KL-divergence term of the original VAE's objective : DISPLAYFORM0 where the encoder q φ (z|x) is generally known as the variational approximation to the intractable p(z|x), p(z) is a prior for the latent representation and p θ (x|z) is the decoder in the VAE context. Recently, a notable connection between β-VAE and the IB theory has been discovered in BID5. Eq. can be derived from the variational approximation to the IB objective Eq.. 
To clarify this connection, see the variational upper and lower bound of the MI: DISPLAYFORM1 The MI in Eq. subscribed with q: DISPLAYFORM2, is called as the representational MI 1. Given that computing marginal q φ (z) is intractable, we can use any prior p(z) to substitute for q φ (z), forming the variational upper-bound in Eq. FORMULA2 2. Likewise, we can use any decoder model p θ (x|z) to approximate q φ (x|z) = q φ (z|x)p(x)/q φ (z) of the MI, forming the variational lower-bound in Eq.. If the target variable Y in Eq. FORMULA0 is replaced with X, the task is to reconstruct (auto-encode) data from the representation Z. The variational lower-bound of Eq. FORMULA0 obtained by leveraging the upper and lower bound of MI in Eq. corresponds to Eq. FORMULA1 3.The disentanglement-promoting behavior of the β-VAE based on IB theory is discussed in BID10. Constraining the MI (or minimizing KL-divergence in practice) forces the encoder to learn representation containing only strongly relevant information to the data reconstruction, while ignoring other unnecessary (or less-necessary) features. The encoder becomes reluctant to use more channels (or dimensions) of the latent vector to lower the MI constraining cost. Hence, the most distinctive and principle features of data are grouped and aligned along with each independent dimension of the representation space. Generative Adversarial Networks (GAN) BID18 ) establish a min-max adversarial game between two neural networks, a generator G and a discriminator D. The discriminator D aims to distinguish well between real sample x ∼ p(x) and synthetic sample created by the G(z) with a random noise z ∼ p(z), while the generator G is trained to produce a realistic sample that is indistinguishable from the true sample. The adversarial game is formulated as follow: DISPLAYFORM0 Under an optimal discriminator D *, Eq. theoretically involves with the Jensen-Shannon divergence between the synthetic and the true sample distribution: JS(G(z)||p(x)). However, Eq. does not have any specific guidance on how G utilizes a mapping from z to x. That is, the variation of z in any independent dimension often yields entangled effects on a generated sample x. On the other hand, InfoGAN BID12 ) is capable of learning disentangled representations. InfoGAN introduces an additional latent code c and encourages it to describe the semantic features of the data. To do so, the training objective of InfoGAN accommodates a mutual information maximization term between the latent code c and the generated sample x = G(z, c): DISPLAYFORM1 where I(·, ·) denote MI and λ is a weight coefficient. To optimize Eq., the variational lower bound of MI is also exploited similar to that of the IM algorithm BID8. 1 We distinguish it from the generative in the next section. 2 The variational inference relies on the positivity of the KL divergence: BID25 BID28. DISPLAYFORM2 3 A constant data entropy term DISPLAYFORM3 is ignored for brevity. We introduce IB-GAN for disentangled representation learning approach in section 3.1, and propose a practical variational approximation for IB-GAN model in section 3.2. Finally, we discuss some distinctive characteristics of the IB-GAN in-depth in section 3.3. Although InfoGAN BID12 ) is a fully unsupervised GAN-based approach for learning disentangled representations, its disentanglement performance is, constantly reported, lower than β-VAE and its variants BID20; BID11. 
we hypothesis the weakness of InfoGAN in independent factor learning may originate from the absence of information constraint or any compression mechanism for the representation. Hence, our motivation is straightforward; we adopt the IB principle to the objective of InfoGAN, presenting Information Bottleneck GAN (IB-GAN). IB-GAN not only maximizes the MI term as the original InfoGAN does, but also constrains the maximization of MI simultaneously as DISPLAYFORM0 DISPLAYFORM1 where I L (·, ·) and I U (·, ·) denote the lower and upper bound of generative MI 4 respectively. The parameters λ and β are the weight coefficients of the GAN loss and the upper-bound of MI, respectively. More details on these parameters are discussed in section 3.3. One important change 5 in Eq. compared to the InfoGAN objective is regularizing the upper bound of MI with β, analogously to that of β-VAE and IB theory. For the optimization of IB-GAN, we here define the tractable variational lower and upper bound of the MI in Eq. using the similar derivation in BID12 BID2. For notational consistency, we use p θ (x|z) to denote the generator G(z). Then, the variational lowerbound I L (z, G(z)) of the generative MI in Eq. becomes DISPLAYFORM0 Since the generator marginal p θ (x) is difficult to calculate, a reconstructor model q φ (z|x) is introduced to approximate the quantity p θ (z|x) = p θ (x|z)p(z)/p θ (x) in Eq.. The lower-bound holds thanks to positivity of KL-divergence. Intuitively, by improving the reconstruction of an input code z from a generated sample x = G(z), we can maximize the lower-bound of MI between the generator and the code z. In contrast to the lower-bound, obtaining a practical variational upper-bound of the generative MI is not trivial. If we follow the same approach in BID5, the upper-bound I U (z, G(z)) of the generative MI becomes DISPLAYFORM1 where d(x) is a variational approximation to the generator marginal p θ (x) = z p(x|z)p(z). However, one critical problem of this approach is, in practice, it is difficult to choose or correctly identify the proper approximation model for d(x). Algorithm 1 IB-GAN training algorithm Input: batch size B, hyperparameters λ, β, and learning rates DISPLAYFORM2 DISPLAYFORM3 In theory, we can choose any model for d(x) (e.g. Gaussian), yet any improper choice of d(x) may severely downgrade the quality of synthesized samples from the generator p θ (x|z) since the upperbound I U (z, G(z)) in Eq. FORMULA11 is eventually identical to the KL(p θ (x|z)||d(x)). Moreover, although we express G(z) as p θ (x|z) for notional convenience, the probabilistic modeling of generator G will lose the merit of GAN: the likelihood-free (or implicit) modeling assumption. For this reason, we develop another formulation of the variational upper-bound on the MI term, based on the studies of deep-learning architecture and IB theory BID24 BID0. We define an additional stochastic model e ψ (r|z) that takes a noise input vector z and produces an intermediate stochastic representation r. In other words, we let x = G(r(z)) instead of x = G(z), then we can express the generator as p θ (x|z) = r p θ (x|r)e ψ (r|z). Consequently, a practical upper-bound I U (z, R(z)) of the generative MI can be obtained as: DISPLAYFORM4 DISPLAYFORM5 The first inequality in Eq. holds thanks to the Markov property BID24: if any generative process follows Z → R → X, then I(Z, X) ≤ I(Z, R). The inequality in Eq. FORMULA0 holds from the positivity of KL divergence. 
Thus, any prior m(r) can be utilized for substituting the marginal e ψ (r) without affecting the generated samples directly; therefore, this can bypass the difficulty of choosing the prior d(x) in Eq..Finally, from the variational lower-bound of the MI in Eq. and the newly introduced upper-bound in Eq., the lower-bound of IB-GAN objective in Eq. can be written as: max DISPLAYFORM6 DISPLAYFORM7 In other words, the intermediate representation r and the KL(e ψ (r|z)||m(r)) with β in Eq. FORMULA0 are leveraged to constrain the amount of shared information between the generator G(z) and input z. Eq. FORMULA0 is optimized by alternatively maximizing the generator G = p θ (x|r), the representation encoder e ψ (r|z), the variational reconstructor q φ (z|x) and the discriminator D. The IB-GAN architecture is presented in FIG0 (a), and overall training procedure is described in Algorithm 1. Connection to rate-distortion theory. Information Bottleneck theory is a generalization of the rate-distortion theory BID25 authors, 2019), in which the rate R is the code length per data sample to be transmitted through a noisy channel, and the distortion D represents the approximation error of reconstructing the input from the source code authors, 2019; ). The goal of RD-theory is minimizing D without exceeding a certain level of rate R, can be formulated as min R,D D + βR, where β ∈ [0, ∞] decides a theoretical achievable optimal frontier in the auto-encoding limit.Likewise, z and r in IB-GAN can be treated as an input and the encoding of the input, respectively. The distortion D is minimized by optimizing the variational reconstructor q φ (z|x(r)) to predict the input z from its encoding r, that is equivalent to maximizing I L (z, G(z)). The minimization of rate R is related minimizing the KL(e ψ (r|z)||m(r)) which measures the in-efficiency (or excess rate) of the representation encoder e ψ (r|z) in terms of how much it deviates from the prior m(r).Disentanglement-promoting behavior. The disentanglement-promoting behavior of β-VAE is encouraged by the variational upper-bound of MI term (i.e. KL(q(z|x)||p(z))). Since p(z) is often a factored Gaussian distribution, the KL-divergence term is decomposed into the form containing a total correlation term (; ; BID11 BID17 BID10, which essentially enforces the encoder to output statistically factored representations (; BID11 . Nevertheless, in IB-GAN, a noise input z is fed into the representation encoder e ψ (r|z) instead of the image x. Therefore, the disentangling mechanism of IB-GAN must be different from those of β-VAEs. From the formulation of the Eq., we could obtain another important insight: the GAN loss in IB-GAN can be seen as the secondary capacity regularizer over the noisy channel since the discriminator of GAN is the JS-divergence (or the reverse KL-divergence) between the generator and the empirical data distribution p(x) in its optimal BID18 BID22. Hence, λ controls the information compression level of z in the its encoding x = G(r(z)) 6. In other words, the GAN loss in IB-GAN is a second rate constraint in addition to the first rate constraint KL(e ψ (r|z)||m(r)) in the context of the rate-distortion theorem. Therefore, we describe the disentanglement-promoting behavior of IB-GAN regarding the ratedistortion theorem. Here, the goal is to deliver the input source z through the noisy channel using the coding r and x. We want to use compact encoding schemes for r and x. 
The efficient encoding scheme for r is defined by minimizing KL(e ψ (r|z)||m(r)) with the factored Gaussian prior m(r), which promotes statistical independence of r. The efficient encoding scheme for x is defined by minimizing the divergence between G(z) and the data distribution p(x) via the discriminator; this promotes the encoding x being a realistic image. Maximizing I L (z, G(z)) in IB-GAN indirectly maximizes I(r, G(r)) since I(z, G(z)) ≤ I(r, G(r)). In other words, maximizing the lower-bound of MI increases the statistical dependency between the coding r and G(r), while these encodings need to be efficient in terms of their rate. Therefore, a single independent change in r must be coordinated with the variation of an independent image factor. How to choose hyperparameters. Although setting any positive values for λ and β is possible, we set β ∈ and fix λ = 1. We observe that, in most cases, I U (r, R(z)) collapses to 0 when β > 0.75 in the experiments with dSprites. Although λ is another interesting hyperparameter that can control the rate of x (i.e., the divergence of G(z) from p(x)), we aim to support the usefulness of IB-GAN in disentangled representation learning tasks, and thus we focus on the effect of β ∈ [0, 1.2] on I U (r, R(z)) while fixing λ = 1. More discussion on the hyperparameter setting is given in the Appendix. TAB0 caption: Disentanglement metric scores (; BID16). Our model's scores are obtained from 32 random seeds, with a peak score of (0.91, 0.78). The baseline scores except InfoGAN are taken from BID17. We use DCGAN with batch normalization as our base model for the generator and the discriminator. We let the reconstructor share the same frontend features with the discriminator for efficient use of parameters, as in InfoGAN BID12. Also, an MLP-based representation encoder is used before the generator. We train the model using the RMSProp BID23 optimizer with a momentum of 0.9. The minibatch size is 64 in all experiments. Lastly, we constrain true and synthetic images to be normalized to [−1, 1]. Almost identical architectural configurations for the generator, discriminator, reconstructor, and representation encoder are used in all experiments, except that the numbers of parameters are changed depending on the dataset. We defer more details on the models and experimental settings to the Appendix. Although it is not easy to evaluate the disentanglement of representations, some quantitative metrics BID20; BID11 BID16 have been proposed based on synthetic datasets providing ground-truth generative factors, such as dSprites or teapots BID16. We verify our approach with two different metrics (; BID16) on the dSprites dataset, since this setting is tested with many other state-of-the-art baselines in BID17, including the standard VAE, β-VAE BID20, TC-VAE BID11, and HFVAE BID17. In the experiments, we adopt the instance noise technique BID21, since the dSprites images are too simple for the generator of a GAN to learn. That is, the intensity distribution of synthetic images is unnaturally narrow (i.e.), making the overlapping probability with generated images very low, so the generator may barely learn from the discriminator. Hence, by adding instance noise ∼ N(0, σ_instance * I) to both true and generated inputs, we can significantly improve the training stability of GAN models. This may be the reason for the inconsistency between the previous experiments (; BID11) on InfoGAN and ours.
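As a small illustration of the instance noise technique just described, the sketch below adds Gaussian noise of the same scale to both real and generated batches and anneals its standard deviation linearly over training (the concrete schedule is given in the next paragraph); the function and constant names are assumptions for illustration only.

```python
import torch

TOTAL_ITERS = 150_000  # illustrative; the text anneals sigma linearly from 1 to 0 over training

def add_instance_noise(real_batch, fake_batch, it, total_iters=TOTAL_ITERS):
    """Add same-scale Gaussian instance noise to real and generated images
    (pixel range [-1, 1]) so that their supports overlap early in training."""
    sigma = max(0.0, 1.0 - it / total_iters)        # linear anneal from 1.0 to 0
    noisy_real = real_batch + sigma * torch.randn_like(real_batch)
    noisy_fake = fake_batch + sigma * torch.randn_like(fake_batch)
    return noisy_real, noisy_fake, sigma
```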
For the technical detail, we anneal σ instance linearly from 1 to 0 during training for InfoGAN and IB-GAN. FIG0 shows the variations of KL(e(r i |z)||m(r i)) for 10-dimensional r (i.e. i = 1, . . ., 10) over training iterations on dSprites dataset . The sum of these values is the upper-bound of MI. We observe that all factors of variations are capped by different values. Similar behavior is exhibited in β-VAE BID10. During training, the encoder e ψ (r|z) is slowly adapted to capture the independent factors of dSprites dataset as the lower-bound of MI increases. We present the visual inspection of the latent traversal BID20 with the learned IB-GAN model in FIG2. The IB-GAN successfully learns 5 out of 5 ground truth factors from dSprites dataset, including positions of Y and X, scales, rotations, and shapes, which aligns with the caps on KL scores in FIG0. More and discussion about the convergence and the effects of β will is in Appendix B. TAB0 shows the quantitative in terms of the two disentanglement metric scores (; BID16 . IB-GAN outperforms other baselines (; BID20 BID11 BID17 . Interestingly, in our experiments, InfoGAN attains comparable scores to those of other VAE-based models. On the Eastwood's randomforest metric, InfoGAN slightly outperforms other baselines as well, which is consistent with the of BID16 . Following BID12 BID20 BID11), we evaluate the qualitative of IB-GAN by inspecting latent traversals. As shown in FIG4, the IB-GAN discovers various human attributes such as azimuth, hair color and smiling face expression. In addition, generated images of the IB-GAN are sharp and realistic than the of β-VAE and its variants BID20; BID11. We also show our qualitative on 3D Chairs dataset in FIG4. IB-GAN successfully disentangles scales, leg types and azimuth of chairs. These attributes are hardly captured in the original InfoGAN BID12 BID20; BID11, demonstrating the effectiveness of our approach. The proposed IB-GAN is a novel unsupervised GAN-based model for learning disentangled representation. We made a crucial modification on the InfoGAN's objective inspired by the IB theory and β-VAE; specifically, we developed an information capacity constraining term between the generator and the latent representation. We also derived a new variational approximation technique for optimizing IB-GAN. Our experimental showed that IB-GAN achieved the state-of-the-art performance on disentangled representation learning. The qualitatively generated samples of IB-GAN often had better quality than those of β-VAE on CelebA and 3D Chairs. IB-GAN attained higher quantitative scores than β-VAE and InfoGAN with disentanglement metrics on dSprites dataset. There are many possible directions for future work. First, our model can be naturally extended to adapt a discrete latent representation, as discussed in section 3.3. Second, many extensions of β-VAE have been actively proposed such as BID10; BID11 BID17, most of which are complementary for the IB-GAN objective. Further exploration toward this direction could be another interesting next topic. Reconstruction of input noise z. The ing architecture of IB-GAN is partly analogous to that of β-VAE since both are derived from the IB theory. However, β-VAE often generates blurry output images due to the large β > 1 (; BID11 BID17 since setting β > 1 typically increases the distortion . Recently, demonstrates the possibility of achieving small distortion with the minimum rate by adopting a complex auto-regressive decoder in β-VAE and by setting β < 1. 
However, their experiment is performed on relatively small dataset (e.g. MNIST, Omniglot).In contrast, IB-GAN may not suffer from this shortcoming since the generator in IB-GAN learns to generate image by minimizing the rate. Moreover, it does not rely on any probabilistic modeling assumption of the decoder unlike VAEs and can inherit all merits of InfoGANs (e.g. producing images of good quality by an implicit decoder, and an adaptation of categorical distribution). One downside of our model would be the introduction of additional capacity control parameter λ. Although, we fixed λ = 1 in all of our experiment, which could also affect the convergence or the generalization ability of the generator. Further investigation on this subject could be an interesting future work. Behaviors of IB-GAN according to β. If β is too large such that the KL-divergence term is almost zero, then there would be no difference between the samples from the representation encoder e ψ (r|z) and the distortion prior m(r). Then, both representation r and generated data x contain no information about z at all, ing in that the signal from the reconstructor is meaningless to the generator. In this case, the IB-GAN reduces to a vanilla GAN with an input r ∼ p(r).Maximization of variational lower-bound. Maximizing the variational lower-bound of generative MI has been employed in IM algorithm BID2 and InfoGAN BID12. Recently, offer the lower-bound of MI, named GILBO, as a data independent measure for the complexity of the learned representations for trained generative models. They discover the optimal lower-bound of the generative MI correlates well with the common image quality metrics of generative models (e.g. or). In this work, we discover a new way of upper-bounding the generative MI based on the causal relationship of deep learning architecture, and show the effectiveness of the upper-bound by measures the disentanglement of learned representation. Implementation of IB-GAN. Since the representation encoder e ψ (r|z) is stochastic, reparametrization trick is needed to backpropagate gradient signals for training the encoder model. The representation r can be embedded along with an extra discrete code c ∼ p(c) before getting into the generator (i.e. G(r, c)), and accordingly the reconstructor network becomes q(r, c|x) to predict the discrete code c as well. In this way, it is straightforward to introduce a discrete representation into IB-GAN, which is not an easy task in β-VAE based models. Theoretically, we can choose the any number for the dimension of r and z. However, The disentangled representation of IB-GAN is learned via the representation encoder e ψ (r|z). To obtain the representation r back from the real data x, we first sample z using the learned reconstructor q φ (z|x), and input it to the representation encoder e ψ (r|z). Therefore, we typically choose a smaller r dimension than that of z. For more details on the architecture of IB-GAN, please refer Appendix. E.Related Work. Many extensions of β-VAE BID20 have been proposed. BID10 modify β-VAE's objective such that the KL term is minimized to a specific target constant C instead of scaling the term using β. and BID11 demonstrate using the ELBO surgery that minimizing the KL-divergence enforces factorization of the marginal encoder, and thus promotes the independence of learned representation. However, a high value of β can decrease the MI term too much, and thus often leads to worse reconstruction fidelity compared to the standard VAE. 
Hence, they introduce a total correlation BID26 based regularization to overcome the reconstruction and disentanglement trade-off. These approaches could be complementary to IB-GAN, since the objective of IB-GAN also involves with the KL term. This exploration could be an interesting future work. One of the most important hyperparameter in the IB-GAN objective of Eq. FORMULA0 is β that controls the ratio of the lower-bound I L (z, R(z)) and the upper-bound I U (z, R(z)). Hence, the optimal balance point between the lower and upper bound term is affected by the β. Each panel in FIG6 shows the variational lower and upper-bound of MI along with independent KL(e(r i |z)||m(r i)) for each r i (i = 1, · · ·, 10) over the 150K training iterations. As shown in FIG6, if β = 0, the upper-bound of MI in Eq. FORMULA0 is ignored and the constraining effect on the representation r disappears. Hence, the lower-bound of MI can quickly increases up to its natural upper-bound 7 similar to the MI lower-bound in InfoGAN. With β = 1 of FIG6, the upper-bound of MI drops down to almost zero and so does the lower-bound. Hence, the representation r is independent of z (i.e. dose not contain any information about z) and IB-GAN reduces to vanilla GAN.When β is set properly as in FIG6, both lower and upper-bound of MI increase smoothly and the representation encoder e ψ (r|z) is slowly adapted to capture the distinctive factors of the dataset, where independent KL-divergence KL(e(r i |z)||m(r i)) increases one by one by capturing each disentangled attribute. Note that the sum of individual KL scores is the upper-bound of MI I U (z, R(z)) = i KL(e(r i |z)||m(r i)). We observe that all factors of variations are capped by different values; this behavior is reported as a key element of the disentangled representation learning in β-VAE BID10. FORMULA0 ) vs β. FIG7 and 5b illustrates the expected converged value of upper and lower MI bounds over the different β. Overall, the upper MI bound tends to decrease exponentially as β increases, consequently the lower MI bound decreases as well. DISPLAYFORM0 Specifically, β = 0, the upper-bound MI term disappears in the IB-GAN Eq.. Hence, the representation encoding r can diverge from the prior distribution m(r) without any restrictions, ing in a high value of upper MI bound. Interestingly, the gap between the upper and lower bound is also reduced as the β parameter increases as we can see in Figure. 5b. Lastly, FIG7 shows the effect of β on the disentanglement scores. The optimal disentanglement score was achieved when the β is around in a range of [0.1, 0.35], and optimal disentanglement score 0.91 is obtained when β = 0.212 supporting the fact that IB-GAN could control the disentanglement of the learned representation with the upper-bound of generative MI and the varying β. Following BID12 BID20 BID11 ), we evaluate the qualitative of IB-GAN by inspecting latent traversals. As shown in Figure7, the IB-GAN discovers various human attributes such as (a) azimuth, (b) color, (c) hair color, (d) skin color, (e) smile, and (f) gender. All of the features in Figure7 and Figure3(a) are captured by one best model with the parameter of β = 0.2838, λ = 1. These attributes are hardly captured in the original InfoGAN BID12 BID20; BID11, demonstrating the usefulness of the upper-bound of generative MI in IB-GAN. In addition, generated images of the IB-GAN are often sharp and realistic than the of β-VAE and its variants BID20; BID11. 
Datasets. dSprites (BID20): 737,280 binary 64 × 64 images of 2D shapes with 5 ground-truth factors; the factors consist of 3 shapes, 6 scales, 40 orientations, and 32 positions each for X and Y. 3D Chairs (BID6): 86,366 gray-scale 64 × 64 images of 1,393 chair CAD models with 31 azimuth angles and 2 elevation angles. CelebA: 202,599 RGB 64 × 64 × 3 images of celebrity faces consisting of 10,177 identities, 5 landmark locations, and 40 binary attributes; we use the cropped version of the dataset.
TAB2 describes the details of the hyperparameter settings used in our experiments:
- dSprites: RMSProp (momentum=0.9), nc=1, ngf=32, ndf=16, z dim=64, r dim=10, λ=1, β=0.2; instance noise σ_instance annealed linearly from 1.0 to 0 over 1.3e5 iterations; LR G/E/Q 5e-5, D 1e-6; 1.5e5 iterations.
- 3D Chairs: LR G/E/Q 5e-5, D 5e-6; 2e5 iterations.
- CelebA: RMSProp (momentum=0.9), nc=3, ngf=64, ndf=64, z dim=500, r dim=15, λ=1, β=0.2838; LR G/E/Q 5e-5, D 5e-7; 2.5e5 iterations.
We summarize some implementation details of the models in our experiments on the dSprites, 3D Chairs, and CelebA datasets. TAB3 shows the base architectures of IB-GAN for the generator, discriminator, and encoder, while TAB4 shows those of InfoGAN. TAB2 also presents the hyperparameter settings that we use for the models in all experiments. Generator architecture (TAB3): DISPLAYFORM0
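Putting the objective and the settings above together, the following is a rough sketch of one alternating IB-GAN update (cf. Algorithm 1): the representation encoder e_ψ(r|z), generator, and reconstructor are trained to maximize the variational MI lower bound while paying the β-weighted KL (rate) cost, and the discriminator is trained adversarially. The tiny fully connected modules, the non-saturating GAN loss, and the unit-variance Gaussian reconstructor are assumptions made only to keep the sketch self-contained; they are not the architectures in TAB3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, R_DIM, X_DIM = 64, 10, 64 * 64   # illustrative sizes (cf. the dSprites settings above)
LAMBDA, BETA = 1.0, 0.2

class Encoder(nn.Module):
    """e_psi(r|z): a Gaussian with reparameterisation; KL is taken to the factored prior m(r) = N(0, I)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(Z_DIM, 2 * R_DIM)
    def forward(self, z):
        mu, logvar = self.net(z).chunk(2, dim=1)
        r = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(1).mean()
        return r, kl

gen = nn.Sequential(nn.Linear(R_DIM, 256), nn.ReLU(), nn.Linear(256, X_DIM), nn.Tanh())
rec = nn.Sequential(nn.Linear(X_DIM, 256), nn.ReLU(), nn.Linear(256, Z_DIM))  # mean of q_phi(z|x)
disc = nn.Sequential(nn.Linear(X_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
enc = Encoder()

opt_d = torch.optim.RMSprop(disc.parameters(), lr=1e-6, momentum=0.9)
opt_g = torch.optim.RMSprop(list(gen.parameters()) + list(enc.parameters()) + list(rec.parameters()),
                            lr=5e-5, momentum=0.9)

def ibgan_step(x_real):
    z = torch.randn(x_real.size(0), Z_DIM)
    r, kl = enc(z)
    x_fake = gen(r)
    ones = torch.ones(x_real.size(0), 1)
    zeros = torch.zeros(x_real.size(0), 1)

    # Discriminator update (non-saturating GAN loss is an assumption here).
    d_loss = F.binary_cross_entropy_with_logits(disc(x_real), ones) + \
             F.binary_cross_entropy_with_logits(disc(x_fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator / encoder / reconstructor update:
    # lambda * GAN term  -  MI lower bound (reconstruct z from x)  +  beta * KL rate term.
    gan = F.binary_cross_entropy_with_logits(disc(x_fake), ones)
    mi_lower = -((rec(x_fake) - z) ** 2).sum(1).mean()  # log q_phi(z|x) up to constants, unit-variance Gaussian
    g_loss = LAMBDA * gan - mi_lower + BETA * kl
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```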
Inspired by Information Bottleneck theory, we propose a new architecture of GAN for a disentangled representation learning
534
scitldr
We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, ing in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training. Generative Adversarial Networks (GANs; BID10 provide a powerful method for general-purpose generative modeling of datasets. Given examples from some distribution, a GAN attempts to learn a generator function, which maps from some fixed noise distribution to samples that attempt to mimic a reference or target distribution. The generator is trained to trick a discriminator, or critic, which tries to distinguish between generated and target samples. This alternative to standard maximum likelihood approaches for training generative models has brought about a rush of interest over the past several years. Likelihoods do not necessarily correspond well to sample quality BID13, and GAN-type objectives focus much more on producing plausible samples, as illustrated particularly directly by . This class of models has recently led to many impressive examples of image generation (e.g. a; ;).GANs are, however, notoriously tricky to train . This might be understood in terms of the discriminator class. BID10 showed that, when the discriminator is trained to optimality among a rich enough function class, the generator network attempts to minimize the Jensen-Shannon divergence between the generator and target distributions. This has been extended to general f -divergences by. According to BID1, however, it is likely that both the GAN and reference probability measures are supported on manifolds within a larger space, as occurs for the set of images in the space of possible pixel values. These manifolds might not intersect at all, or at best might intersect on sets of measure zero. In this case, the Jensen-Shannon divergence is constant, and the KL and reverse-KL divergences are infinite, meaning that they provide no useful gradient for the generator to follow. This helps to explain some of the instability of GAN training. The lack of sensitivity to distance, meaning that nearby but non-overlapping regions of high probability mass are not considered similar, is a long-recognized problem for KL divergence-based discrepancy measures (e.g. , Section 4.2). It is natural to address this problem using Integral Probability Metrics (IPMs; Müller, 1997): these measure the distance between probability measures via the largest discrepancy in expectation over a class of "well behaved" witness functions. Thus, IPMs are able to signal proximity in the probability mass of the generator and reference distributions. 
(Section 2 describes this framework in more detail.) BID1 proposed to use the Wasserstein distance between distributions as the discriminator, which is an integral probability metric constructed from the witness class of 1-Lipschitz functions. To implement the Wasserstein critic, Arjovsky et al. originally proposed weight clipping of the discriminator network, to enforce k-Lipschitz smoothness. improved on this by directly constraining the gradient of the discriminator network at points between the generator and reference samples. This new Wasserstein GAN implementation, called WGAN-GP, is more stable and easier to train. A second integral probability metric used in GAN variants is the maximum mean discrepancy (MMD), for which the witness function class is a unit ball in a reproducing kernel Hilbert space (RKHS). Generative adversarial models based on minimizing the MMD were first considered by and. These works optimized a generator to minimize the MMD with a fixed kernel, either using a generic kernel on image pixels or by modeling autoencoder representations instead of images directly. BID9 instead minimized the statistical power of an MMD-based test with a fixed kernel. Such approaches struggle with complex natural images, where pixel distances are of little value, and fixed representations can easily be tricked, as in the adversarial examples of BID10.Adversarial training of the MMD loss is thus an obvious choice to advance these methods. Here the kernel MMD is defined on the output of a convolutional network, which is trained adversarially. Recent notable work has made use of the IPM representation of the MMD to employ the same witness function regularization strategies as BID1 and , effectively corresponding to an additional constraint on the MMD function class. Without such constraints, the convolutional features are unstable and difficult to train BID9. Li et al. (2017b) essentially used the weight clipping strategy of Arjovsky et al., with additional constraints to encourage the kernel distribution embeddings to be injective. 1 In light of the observations by Gulrajani et al., however, we use a gradient constraint on the MMD witness function in the present work (see Sections 2.1 and 2.2).2's method, the Cramér GAN, also used the gradient constraint strategy of Gulrajani et al. in their discriminator network. As we discuss in Section 2.3, the Cramér GAN discriminator is related to the energy distance, which is an instance of the MMD , and which can therefore use a gradient constraint on the witness function. Note, however, that there are important differences between the Cramér GAN critic and the energy distance, which make it more akin to the optimization of a scoring rule: we provide further details in Appendix A. Weight clipping and gradient constraints are not the only approaches possible: variance features and constraints can work, as can other optimization strategies (; a).Given that both the Wasserstein distance and the MMD are integral probability metrics, it is of interest to consider how they differ when used in GAN training. showed that optimizing the empirical Wasserstein distance can lead to biased gradients for the generator, and gave an explicit example where optimizing with these biased gradients leads the optimizer to incorrect parameter values, even in expectation. They then claim that the energy distance does not suffer from these problems. As our main theoretical contribution, we substantially clarify the bias situation in Section 3. 
First, we show (Theorem 1) that the natural maximum mean discrepancy estimator, including the estimator of energy distance, has unbiased gradients when used "on top" of a fixed deep network representation. The generator gradients obtained from a trained representation, however, will be biased relative to the desired gradients of the optimal critic based on infinitely many samples. This situation is exactly analogous to WGANs: the generator's gradients with a fixed critic are unbiased, but gradients from a learned critic are biased with respect to the supremum over critics. MMD GANs, though, do have some advantages over Wasserstein GANs. Certainly we would not expect the MMD on its own to perform well on raw image data, since these data lie on a low dimensional manifold embedded in a higher dimensional pixel space. Once the images are mapped through appropriately trained convolutional layers, however, they can follow a much simpler distribution with broader support across the mapped domain: a phenomenon also observed in autoencoders . In this setting, the MMD with characteristic kernels BID4 shows strong discriminative performance between distributions. To achieve comparable performance, a WGAN without the advantage of a kernel on the transformed space requires many more convolutional filters in the critic. In our experiments (Section 5), we find that MMD GANs achieve the same generator performance as WGAN-GPs with smaller discriminator networks, ing in GANs with fewer parameters and computationally faster training. Thus, the MMD GAN discriminator can be understood as a hybrid model that plays to the strengths of both the initial convolutional mappings and the kernel layer that sits on top. We begin with a review of the MMD and relate it to the loss functions used by other GAN variants. Through its interpretation as an integral probability metric, we show that the gradient penalty of applies to the MMD GAN. We consider a random variable X with probability measure P, which we associate with the generator, and a second random variable Y with probability measure Q, which we associate with the reference sample that we wish to learn. Our goal is to measure the distance from P to Q using samples drawn independently from each distribution. The maximum mean discrepancy is a metric on probability measures BID6, which falls within the family of integral probability metrics (Müller, 1997); this family includes the Wasserstein and Kolmogorov metrics, but not for instance the KL or χ 2 divergences. Integral probability metrics make use of a class of witness functions to distinguish between P and Q, choosing the function with the largest discrepancy in expectation over P, Q, DISPLAYFORM0 The particular witness function class F determines the probability metric. 3 For example, the Wasserstein-1 metric is defined using the 1-Lipschitz functions, the total variation by functions with absolute value bounded by 1, and the Kolmogorov metric using the functions of bounded variation 1. For more on this family of distances, see e.g. BID3.In this work, our witness function class F will be the unit ball in a reproducing kernel Hilbert space H, with positive definite kernel k(x, x). The key aspect of a reproducing kernel Hilbert space is the reproducing property: for all f ∈ H, f (x) = f, k(x, ·) H. We define the mean embedding of the probability measure P as the element µ P ∈ H such that E P f (X) = f, µ P H; it is given by µ P = E X∼P k(·, X). 
The maximum mean discrepancy (MMD) is defined as the IPM with F the unit ball in H, MMD(P, Q; H) = sup DISPLAYFORM0 The witness function f * that attains the supremum has a straightforward expression (, Section 2.3), DISPLAYFORM1 DISPLAYFORM2 and an unbiased estimator of the squared MMD is (, Lemma 6) DISPLAYFORM3 When the kernel is characteristic BID4 BID5, the embedding µ P is injective (i.e., associated uniquely with P). Perhaps the best-known characteristic kernel is the exponentiated quadratic kernel, also known as the Gaussian RBF kernel, DISPLAYFORM4 Both the kernel and its derivatives decay exponentially, however, causing significant problems in high dimensions, and especially when used in gradient-based representation learning. The rational quadratic kernel DISPLAYFORM5 with α > 0 corresponds to a scaled mixture of exponentiated quadratic kernels, with a Gamma(α, 1) prior on the inverse lengthscale (, Section 4.2). This kernel will be the mainstay of our experiments, as its tail behaviour is much superior to that of the exponentiated quadratic kernel; it is also characteristic. The MMD has been a popular choice for the role of a critic in a GAN. This idea was proposed simultaneously by and , with numerous recent follow-up works BID9; b; ). As a key strategy in these recent works, the MMD of is not computed directly on the samples; rather, the samples first pass through a mapping function h, generally a convolutional network. Note that we can think of this either as the MMD with kernel k on features h(x), or simply as the MMD with kernel κ(x, y) = k(h(x), h(y)). The challenge is to learn the features h so as to maximize the MMD, without causing the critic to collapse to a trivial answer early in training. Bearing in mind that the MMD is an integral probability metric, strategies developed for training the Wasserstein GAN critic can be directly adopted for training the MMD critic. Li et al. (2017b) employed the weight clipping approach of BID1, though they motivated it using different considerations. found a number of issues with weight clipping, however: it oversimplifies the loss functions given standard architectures, the gradient decays exponentially as we move up the network, and it seems to require the use of slower optimizers such as RMSProp rather than standard approaches such as Adam .It thus seems preferable to adopt Gulrajani et al.'s proposal of regularising the critic witness by constraining its gradient norm to be nearly 1 along randomly chosen convex combinations of generator and reference points, αx i + (1 − α)y j for α ∼ Uniform. This was motivated by the observation that the Wasserstein witness satisfies this property (their Lemma 1), but perhaps its main benefit is one of regularization: if the critic function becomes too flat anywhere between the samples, the generator cannot easily follow its gradient. We will thus follow this approach, as did , whose model we describe next. 5 By doing so, we implicitly change the definition of the distance being approximated; we leave study of the differences to future work. By analogy, give some basic properties for the distance used by. and Bellemare et al. (2017, Section 4) proposed to use the energy distance as the critic in an adversarial network. The energy distance BID12 ) is a measure of divergence between two probability measures, defined as Many other GAN variants fall into the framework of IPMs (e.g. ; ;). 
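Because the displayed estimators and kernel definitions above appear only as placeholders in this extraction, here is a small NumPy sketch of the unbiased MMD² estimator with a mixture of rational-quadratic kernels, together with the energy distance (the MMD with the distance-induced kernel mentioned above). Function names are ours; the α values follow the mixture used later in the experiments.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rq_kernel(X, Y, alphas=(0.2, 0.5, 1.0, 2.0, 5.0)):
    """Mixture of rational-quadratic kernels: sum_a (1 + ||x - y||^2 / (2a))^(-a)."""
    sq = cdist(X, Y, metric="sqeuclidean")
    return sum((1.0 + sq / (2.0 * a)) ** (-a) for a in alphas)

def mmd2_unbiased(X, Y, kernel=rq_kernel):
    """Unbiased estimator of MMD^2 between samples X of shape (m, d) and Y of shape (n, d)."""
    m, n = len(X), len(Y)
    kxx, kyy, kxy = kernel(X, X), kernel(Y, Y), kernel(X, Y)
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))     # within-X terms, diagonal removed
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))   # within-Y terms, diagonal removed
            - 2.0 * kxy.mean())

def energy_distance(X, Y):
    """D_e(P, Q) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||, with unbiased within-sample means."""
    m, n = len(X), len(Y)
    return (2.0 * cdist(X, Y).mean()
            - cdist(X, X).sum() / (m * (m - 1))
            - cdist(Y, Y).sum() / (n * (n - 1)))
```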
Notably, although BID10 motivated GANs as estimating the Jensen-Shannon divergence, they can also be viewed as minimizing the IPM defined by the classifier family , thus motivating applying the gradient penalty to original GANs . in particular study properties of these distances. The issue of biased gradients in GANs was brought to prominence by Bellemare et al. (2017, Section 3), who showed bias in the gradients of the empirical Wasserstein distance for finite sample sizes, and demonstrated cases where this bias could lead to serious problems in stochastic gradient descent, even in expectation. They then claimed that the energy distance used in the Cramér GAN critic does not suffer from these problems. We will now both formalize and clarify these . First, Bellemare et al.'s proof that the gradient of the energy distance is unbiased was incomplete: the essential step in the reasoning, the exchange in the order of the expectations and the derivatives, is simply assumed. 8 We show that one can exchange the order of expectations and derivatives, under very mild assumptions about the distributions in question, the form of the network, and the kernel: Theorem 1. Let G ψ: Z → X and h θ: X → R d be deep networks, with parameters ψ ∈ R m ψ and θ ∈ R m θ, of the form defined in Appendix C.1 and satisfying Assumptions C and D (in Appendix C.2). This includes almost all feedforward networks used in practice, in particular covering convolutions, max pooling, and ReLU activations. Let P be a distribution on X such that E[X 2] exists, and likewise Z a distribution on Z such that E[Z 2] exists. P and Z need not have densities. DISPLAYFORM0 be a kernel function satisfying the growth assumption Assumption E for some α ∈. All kernels considered in this paper satisfy this assumption; see the discussion after Corollary 3.For µ-almost all (ψ, θ) ∈ R m ψ +m θ, where µ is the Lebesgue measure, the function DISPLAYFORM1 is differentiable at (ψ, θ), and moreover DISPLAYFORM2 Thus for µ-almost all (ψ, θ), DISPLAYFORM3 This is shown in Appendix C, specifically as Corollary 3 to Theorem 5, which is a quite general about interchanging expectations and derivatives of functions of deep networks. The proof is more complex than a typical proof that derivatives and integrals can be exchanged, due to the non-differentiability of ReLU-like functions used in deep networks. But this unbiasedness is not the whole story. In WGANs, the generator attempts to minimize the loss function DISPLAYFORM4 based on an estimateŴ(X, Y): first critic parameters θ are estimated on a "training set" X tr, Y tr, i.e. all points seen in the optimization process thus far, and then the distance is estimated on the remaining "test set" X te, Y te, i.e. the current minibatch, as DISPLAYFORM5 (After the first pass through the training set, these two sets will not be quite independent, but for large datasets they should be approximately so.) Theorem 2 (in Appendix B.2) shows that this estimator W is biased; Appendix B.4 further gives an explicit numerical example. This almost certainly implies that ∇ ψŴ is biased as well, by Theorem 4 (Appendix B.3). 9 Yet, for fixed θ, Corollary 1 shows that the estimator has unbiased gradients; it is only the procedure which first selects a θ based on training samples and then evaluates which is a biased estimator of.The situation with MMD GANs, including energy distance-based GANs, is exactly analogous. We have: for almost all particular critic representations h θ, the estimator of MMD 2 is unbiased. 
But the population divergence the generator attempts to minimize is actually DISPLAYFORM6 a distance previously studied by BID2 as well as Li et al. (2017b). An MMD GAN's effective estimator ofη is also biased by Theorem 2 (see particularly Appendix B.5); by Theorem 4, its gradients are also almost certainly biased. In both cases, the bias vanishes as the selection of θ becomes better; in particular, no bias is introduced by the use of a fixed (and potentially small) minibatch size, but rather by the optimization procedure for θ and the total number of samples seen in training the discriminator. Yet there is at least some sense in which MMD GANs might be considered "less biased" than WGANs. Optimizing the generator parameters of a WGAN while holding the critic parameters fixed is not sensible: consider, for example, P a point mass at 0 ∈ R and Q a point mass at q ∈ R. If q > 0, an optimal θ might correspond to the witness function f (t) = t; if we hold this witness function f fixed, the optimal q is at −∞, rather than at the correct value of 0. But if we hold an MMD GAN's critic fixed and optimize the generator, we obtain the GMMN model . Here, because the witness function still adapts to the observed pair of distributions, the correct distribution P = Q will always be optimal. Bad solutions might also seem to be optimal, but they can never seem arbitrarily better. Thus unbiased gradients of MMD 2 u might somehow be more meaningful to the optimization process than unbiased gradients of; exploring and formalizing this intuition is an intriguing area for future work. One challenge in comparing GAN models, as we will do in the next section, is that quantitative comparisons are difficult. Some insight can be gained by visually examining samples, but we also consider the following approaches to evaluate GAN methods. Inception score This metric, proposed by , is based on the classification output p(y | x) of the Inception model BID11. Defined as exp (E x KL(p(y | x) p(y))), it is highest when each image's predictive distribution has low entropy, but the marginal predictive distribution p(y) = E x p(y | x) has high entropy. This score correlates somewhat with human judgement of sample quality on natural images, but it has some issues, especially when applied to domains which do not represent a variety of the types of classes in ImageNet. In particular, it knows nothing about the desired distribution for the model. The Fréchet Inception Distance, proposed by , avoids some of the problems of Inception by measuring the similarity of the samples' representations in the Inception architecture (at the pool3 layer, of dimension 2048) to those of samples from the target distribution. The FID fits a Gaussian distribution to the hidden activations for each distribution and then computes the Fréchet distance, also known as the Wasserstein-2 distance, between those Gaussians. Heusel et al.show that unlike the Inception score, the FID worsens monotonically as various types of artifacts are added to CelebA images -though in our Appendix E we found the Inception score to be more mono- tonic than did Heusel et al., so this property may not be very robust to small changes in evaluation methods. Note also that the estimator of FID is biased; 10 we will discuss this issue shortly. KID We propose a metric similar to the FID, the Kernel Inception Distance, to be the squared MMD between Inception representations. 
We use a polynomial kernel, k(x, y) = DISPLAYFORM0 where d is the representation dimension, to avoid correlations with the objective of MMD GANs as well as to avoid tuning any kernel parameters. 11 This can also be viewed as an MMD directly on input images with the kernel K(x, y) = k(φ(x), φ(y)), with φ the function mapping images to Inception representations. Compared to the FID, the KID has several advantages. First, it does not assume a parametric form for the distribution of activations. This is particularly sensible since the representations have ReLU activations, and therefore are not only never negative, but do not even have a density: about 2% of components in Inception representations are typically exactly zero. With the cubic kernel we use here, the KID compares skewness as well as the mean and variance. Also, unlike the FID, the KID has a simple unbiased estimator.12 It also shares the behavior of the FID as artifacts are added to images (Appendix E). FIG2 demonstrates the empirical bias of the FID and the unbiasedness of the KID by comparing the CIFAR-10 train and test sets. The KID (FIG2) converges quickly to its presumed true value of 0; even for very small n, simple Monte Carlo estimates of the variance provide a reasonable measure of uncertainty. By contrast, the FID estimate (FIG2) does not behave so nicely: at n = 2 000, when the KID estimator is essentially always 0, the FID estimator is still quite large. Even at n = 10 000, the full size of the CIFAR test set, the FID still seems to be decreasing from its estimate of about 8.1 towards zero, showing the strong persistence of bias. This highlights that FID scores can only be compared to one another with the same value of n. Yet even for the same value of n, there is no particular reason to think that the bias in the FID estimator will be the same when comparing different pairs of distributions. In Appendix D, we demonstrate two situations where F ID(P 1, Q) < F ID(P 2, Q), but for insufficent numbers of samples the estimator usually gives the other ordering. This can happen even where all distributions in question are one-dimensional Gaussians, as Appendix D.1 shows analytically. Appendix D.2 also empirically demonstrates this on distributions more like the ones used for FID in practice, giving a 10 This is easily seen when the true FID is 0: here the estimator may be positive, but can never be negative. Note also that in fact no unbiased estimator of the FID exists; see Appendix D.3.11 k is the default polynomial kernel in scikit-learn . 12 Because the computation of the MMD estimator scales like O(n 2 d), we recommend using a relatively small n and averaging over several estimates; this is closely related to the block estimator of BID16. The FID estimator, for comparison, takes time O(nd 2 + d 3), and is substantially slower for d = 2048.simple example with d = 2048 where even estimating with n = 50 000 samples reliably gives the wrong ordering between the models. Moreover, Monte Carlo estimates of the variance are extremely small even when the estimate is very far from its asymptote, so it is difficult to judge the reliability of an estimate, and practitioners may be misled by the very low variance into thinking that they have obtained the true value. Thus comparing FID estimates bears substantial risks. KID estimates, by contrast, are unbiased and asymptotically normal. 
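For concreteness, the sketch below gives minimal NumPy versions of the two scores just discussed, written from the descriptions above rather than from the released evaluation code: the Inception score operates on the network's class probabilities, and the KID is the unbiased MMD² estimator with the cubic polynomial kernel, averaged over blocks sampled without replacement as recommended above.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (n, num_classes) softmax outputs p(y|x) for n generated images."""
    marginal = probs.mean(axis=0, keepdims=True)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

def polynomial_kernel(X, Y):
    """Cubic polynomial kernel k(x, y) = (<x, y> / d + 1)^3 on Inception features."""
    return (X @ Y.T / X.shape[1] + 1.0) ** 3

def kid(feats_gen, feats_real, n_blocks=100, block_size=1000, seed=0):
    """Unbiased MMD^2 with the polynomial kernel, averaged over random blocks."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_blocks):
        x = feats_gen[rng.choice(len(feats_gen), block_size, replace=False)]
        y = feats_real[rng.choice(len(feats_real), block_size, replace=False)]
        kxx, kyy, kxy = polynomial_kernel(x, x), polynomial_kernel(y, y), polynomial_kernel(x, y)
        m = block_size
        estimates.append((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
                         + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
                         - 2.0 * kxy.mean())
    return float(np.mean(estimates)), float(np.std(estimates))
```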
For models on MNIST, we replace the Inception featurization with features from a LeNet-like convolutional classifier 13 , but otherwise compute the scores in the same way. We also considered the diagnostic test of Arora & Zhang FORMULA0, which estimates the approximate number of "distinct" images produced by a GAN. The amount of subjectivity in what constitutes a duplicate image, however, makes it hard to reliably compare models based on this diagnostic. Comparisons likely need to be performed both with a certain notion of duplication in mind and by a user who does not know which models are being compared, to avoid subconscious biases; we leave further exploration of this intriguing procedure to future work. In supervised deep learning, it is common practice to dynamically reduce the learning rate of an optimizer when it has stopped improving the metric on a validation set. So far, this does not seem to be common in GAN-type models, so that learning rate schedules must be tuned by hand. We propose instead using an adaptive scheme, based on comparing the KID score for samples from a previous iteration to that from the current iteration. To avoid setting an explicit threshold on the change in the numerical value of the score, we use a p-value obtained from the relative similarity test of. If the test does not indicate that our current model is closer to the validation set than the model from a certain number of iterations ago at a given significance level, we mark it as a failure; when a given number of failures occur in a row, we decrease the learning rate. Bounliphone et al.'s test is for the hypothesis MMD(P 1, Q) < MMD(P 2, Q), and since the KID can be viewed as an MMD on image inputs, we can apply it directly. We compare the quality of samples generated by MMD GAN using various kernels with samples obtained by WGAN-GP and Cramér GAN on four standard benchmark datasets: the MNIST dataset of 28 × 28 handwritten digits 15, the CIFAR-10 dataset of 32 × 32 photos , the LSUN dataset of bedroom pictures resized to 64 × 64 BID14, and the CelebA dataset of celebrity face images resized and cropped to 160 × 160 .For most experiments, except for those with the CelebA dataset, we used the DCGAN architecture for both generator and critic. For MMD losses, we used only 16 top-layer neurons in the critic; more did not seem to improve performance, except for the distance kernel for which 256 neurons in the top layer was advantageous. advised to use at least 256-dimensional critic output, this enabled exact comparison between Cramér GAN and energy distance MMD, which are directly related (Section 2.3). For the generator we used the standard number of convolutional filters (64 in the second-to-last layer); for the critic, we compared networks with 16 and 64 filters in the first convolutional layer. 13 github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py 14 We use the slight corrections to the asymptotic distribution of the MMD estimator given by BID9 in this test.15 yann.lecun.com/exdb/mnist/ 16 In the DCGAN architecture the number of filers doubles in each consecutive layer, so an f -filter critic has f, 2f, 4f and 8f convolutional filters in layers 1-4, respectively. For the higher-resolution model for the CelebA dataset, we used a 5-layer DCGAN critic and a 10-layer ResNet generator 17, with 64 convolutional filters in the last/first layer. This allows us to compare the performance of MMD GANs with a more complex architecture. 
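The adaptive learning-rate rule described above (Section 4.1) reduces to a small piece of bookkeeping around the three-sample test; the sketch below is our own reading of it, with the relative-similarity test itself left as a callable supplied by the caller (we do not reproduce Bounliphone et al.'s test here), and the significance level is a placeholder of ours.

```python
def kid_lr_schedule(improvement_p_value, lr0=1e-4, patience=3, alpha=0.05):
    """Yield the learning rate to use for each evaluation period.

    improvement_p_value: callable running the relative-similarity test and returning
    the p-value for "the current model is closer to the validation set than the model
    from a fixed number of iterations ago".
    alpha: significance level (not stated explicitly in the text; 0.05 is our placeholder).
    """
    lr, failures = lr0, 0
    while True:
        yield lr
        if improvement_p_value() > alpha:   # cannot conclude the model improved: a failure
            failures += 1
            if failures >= patience:
                lr, failures = lr * 0.5, 0  # halve the learning rate, reset the counter
        else:
            failures = 0
```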
Models with smaller critics run considerably faster: on our systems, the 16-filter DCGAN networks typically ran at about twice the speed of the 64-filter ones. Note that the critic size is far more important to training runtime than the generator size: we update the critic 5 times for each generator step, and moreover the critic network is run on two batches each time we use it, one from P and one from Q. Given the same architecture, all models considered here run at about the same speed. We evaluate several MMD GAN kernel functions in our experiments. 18 The simplest is the linear kernel: k dot (x, y) = x, y, whose MMD corresponds to the distance between means (this is somewhat similar to the feature matching idea of). We also use the exponentiated quadratic and rational quadratic functions, with mixtures of lengthscales, DISPLAYFORM0 where Σ = {2, 5, 10, 20, 40, 80}, A = {.2, .5, 1, 2, 5}. For the latter, however, we found it advantageous to add a linear kernel to the mixture, ing in the mixed RQ-dot kernel k rq * = k rq + k dot. Lastly we use the distance-induced kernel k dist ρ1,0 of, using the Euclidean distance ρ 1 so that the MMD is the energy distance. 19 We also considered Cramér GANs, with the surrogate critic, and WGAN-GPs. Each model was trained with a batch size of 64, and 5 discriminator updates per generator update. For CIFAR-10, LSUN and CelebA we trained for 150 000 generator updates, while for MNIST we used 50 000. The initial learning rate was set to 10 −4 and followed the adaptive scheme described in Section 4.1, with KID compared between the current model and the model 20 000 generator steps earlier (5 000 for MNIST), every 2 000 steps (500 for MNIST). After 3 consecutive failures to improve, the learning rate was halved. This approach allowed us to avoid manually picking a different learning rate for each of the considered models. We scaled the gradient penalty by 1, instead of the 10 recommended by and; we found this to usually work slightly better with MMD models. With the distance kernel, however, we scale the penalty by 10 to allow direct comparison with Cramér GAN.Quantitative scores are estimated based on 25 000 generator samples (100 000 for MNIST), and compared to 25 000 dataset elements (for LSUN and CelebA) or the standard test set (10 000 images held out from training for MNIST and CIFAR-10). Inception and FID scores were computed using 10 bootstrap resamplings of the given images; the KID score was estimated based on 100 repetitions of sampling 1 000 elements without replacement. Code for our models is available at github.com/mbinkowski/MMD-GAN.MNIST All of the models achieved good , measured both visually and in quantitative scores; full are in Appendix F. FIG3, however, shows the evolution of our quantitative criteria throughout the training process for several models. This shows that the linear kernel dot and rbf kernel rbf are clearly worse than the other models at the beginning of the training process, but both improve eventually. rbf, however, never fully catches up with the other models. There is also some evidence that dist, and perhaps WGAN-GP, converge more slowly than rq and Cramér GAN. Given their otherwise similar properties, we thus recommend the use of rq kernels over rbf in MMD GANs and limit experiments for other datasets to rq and dist kernels. Full are shown in Appendix F. Small-critic MMD GAN models approximately match large-critic WGAN-GP models, at substantially reduced computational cost. 
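To make the training procedure concrete, here is a simplified PyTorch sketch of one critic-loss evaluation for an MMD GAN with a gradient penalty on the empirical witness function at interpolates of real and generated samples. This is our reconstruction from the description above, not the code released at github.com/mbinkowski/MMD-GAN; the penalty weight of 1 follows the setting reported for the rq kernels.

```python
import torch

def rq_kernel(a, b, alphas=(0.2, 0.5, 1.0, 2.0, 5.0)):
    sq = torch.cdist(a, b) ** 2
    return sum((1.0 + sq / (2.0 * al)) ** (-al) for al in alphas)

def mmd2(hx, hy):
    m, n = hx.size(0), hy.size(0)
    kxx, kyy, kxy = rq_kernel(hx, hx), rq_kernel(hy, hy), rq_kernel(hx, hy)
    return ((kxx.sum() - kxx.diagonal().sum()) / (m * (m - 1))
            + (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
            - 2.0 * kxy.mean())

def witness(ht, hx, hy):
    """Empirical MMD witness evaluated at critic features ht, up to normalization."""
    return rq_kernel(ht, hx).mean(dim=1) - rq_kernel(ht, hy).mean(dim=1)

def critic_loss(h, x_real, x_fake, gp_weight=1.0):
    """Loss minimized by the critic representation h: negative MMD^2 plus the witness penalty."""
    hx, hy = h(x_real), h(x_fake)
    loss = -mmd2(hx, hy)
    alpha = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_hat = (alpha * x_real + (1.0 - alpha) * x_fake).detach().requires_grad_(True)
    f_hat = witness(h(x_hat), hx, hy).sum()
    grads, = torch.autograd.grad(f_hat, x_hat, create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(dim=1)
    return loss + gp_weight * ((grad_norm - 1.0) ** 2).mean()
```

The generator step would then minimize mmd2(h(x_real), h(G(z))) with the critic parameters held fixed, typically with several critic updates per generator update as described above.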
17 As in , we use a linear layer, 4 residual blocks and one convolutional layer. 18 Because these higher-resolution experiments were slower to run, for CelebA we trained MMD GAN with only one type of kernel. 19 We also found it helpful to add an activation penalty to the critic representation network in certain MMD models. Otherwise the representations h θ sometimes chose very large values, which for most kernels does not change the theoretical loss (defined only in terms of distances) but leads to floating-point precision issues. We use a combined L 2 penalty on activations across all critic layers, with a factor of 1 for rq * and 0.0001 for dist. TAB0 presents scores for models trained on the LSUN Bedrooms dataset; samples from most of these models are shown in Figure 3. Comparing the models' Inception scores with the one achieved by the test set makes clear that this measure is not meaningful for this datasetnot surprisingly, given the drastic difference in domain from ImageNet class labels. In terms of KID and FID, MMD GANs outperform Cramér and WGAN-GP for each critic size. Although with the smaller critic are worse than with the large one for each considered model, small-critic MMD GANs still produce reasonably good samples, which certainly is not the case for WGAN-GP. Although a small-critic Cramér GAN produces relatively good samples, the separate objects in these pictures often seem less sharp than the MMD rq* samples. With a large critic, both Cramér GAN and MMD rq* give good quality samples, many of which are hardly distinguishable from the test set by eye. CelebA Scores for the CelebA dataset are shown in TAB1; MMD GAN with rq* kernel outperforms both WGAN-GP and Cramér GAN in KID and FID. Samples in Figure 4 show that for each of the models there are many visually pleasing pictures among the generated ones, yet unrealistic images are more common for WGAN-GP and Cramér. These illustrate the benefits of using the MMD on deep convolutional feaures as a GAN critic. In this hybrid system, the initial convolutional layers map the generator and reference image distributions to a simpler representation, which is well suited to comparison via the MMD. The MMD in turn employs an infinite dimensional feature space to compare the outputs of these convolutional layers. By comparison, WGAN-GP requires a larger discriminator network to achieve similar performance. It is interesting to consider the question of kernel choice: the distance kernel and RQ kernel are both characteristic BID4, and neither suffers from the fast decay of the exponentiated quadratic kernel, yet the RQ kernel performs slightly better in our experiments. The relative merits of different kernel families for GAN training will be an interesting topic for further study. It is not immediately obvious how to interpret the surrogate loss. An insight comes from considering the score function associated with the energy distance, which we now briefly review . A scoring rule is a function S(P, y), which is the loss incurred when a forecaster makes prediction P, and the event y is observed. The expected score is the expectation under Q of the score, DISPLAYFORM0 If a score is proper, then the expected score obtained when P = Q is greater or equal than the expected score for P = Q, S(Q, Q) ≥ S(P, Q).A strictly proper scoring rule shows an equality only when P and Q agree. We can define a divergence measure based on this score, DISPLAYFORM1 Bearing in mind the definition of the divergence, it is easy to see (, eq. 
22) that the energy distance arises from the score function DISPLAYFORM2 The interpretation is straightforward: the score of a reference sample y is determined by comparing its average distance to a generator sample with the average distance among independent generator samples, E P ρ(X, X). If we take an expectation over Y ∼ Q, we recover the scoring rule optimized by the DISCO Nets algorithm (, Section 3.3).As discussed earlier, the Cramér GAN critic does not use the energy distance directly on the samples, but first maps the samples through a function h, for instance a convolutional network; this should be chosen to maximize the discriminative performance of the critic. Writing this mapping as h, we break the energy distance down as D e (P, Q) = S(Q, Q) − S(P, Q), where DISPLAYFORM3 and DISPLAYFORM4 When training the discriminator, the goal is to maximize the divergence by learning h, and so both FORMULA0 and FORMULA0 change: in other words, divergence maximization is not possible without two independent samples Y, Y from the reference distribution Q.An alternative objective in light of the score interpretation, however, is to simply optimize the average score. In other words, we would find features h that make the average distance from generator to reference samples much larger than the average distance between pairs of generator samples. We no longer control the term encoding the "variability" due to Q, E Q ρ(h(Y), h(Y)), which might therefore explode: for instance, h might cause h(Y) to disperse broadly, and far from the support of P, assuming sufficient flexibility to keep E P ρ(h(X), h(X)) under control. We can mitigate this by controlling the expected norm E Q ρ(h(Y), 0), which has the advantage of only requiring a single sample to compute. For example, we could maximize DISPLAYFORM5 This resembles the Cramér GAN critic, but the generator-to-generator distance is scaled differently, and there is an additional term: E P ρ(h(X), 0) is being maximized in, which is more difficult to interpret. An argument has been made (in personal communication with Bellemare et al.) that this last term is required if the function f c in is to be a witness of an integral probability metric, although the asymmetry of this witness in P vs Q needs to be analyzed further. We will now show that all estimators of IPM-like distances and their gradients are biased. Appendix B.1 defines a slight generalization of IPMs, used to analyze MMD GANs in the same framework as WGANs, and a class of estimators that are a natural model for the estimator used in GAN models. Appendix B.2 both shows that not only are this form of estimators invariably biased in nontrivial cases, and moreover no unbiased estimator can possibly exist; Appendix B.3 then demonstrates that any estimator with non-constant bias yields a biased gradient estimator. Appendices B.4 and B.5 demonstrate specific examples of this bias for the Wasserstein and maximized-MMD distances. We will first define a slight generalization of IPMs: we will use this added generality to help analyze MMD GANs in Appendix B.5.Definition 1 (Generalized IPM). Let X be some domain, with M a class of probability measures on X. 20 Let F be some parameter set, and J: DISPLAYFORM0 For example, if F is a class of functions f: X → R, and J is given by DISPLAYFORM1 then we obtain integral probability metrics. Given samples X ∼ P m and Y ∼ Q n, letP denote the empirical distribution of X (an equal mixture of point masses at each X i ∈ X), and similarlyQ for Y. 
Then we have a simple estimator of which is unbiased for fixed f: DISPLAYFORM2 Definition 2 (Data-splitting estimator of a generalized IPM). Consider the distance, with objective J and parameter class F. Suppose we observe iid samples X ∼ P m, Y ∼ Q n, for any two distributions P, Q. A data-splitting estimator is a functionD(X, Y) which first randomly splits the sample X into X tr, X te and Y into Y tr, Y te, somehow selects a critic functionf X tr,Y tr ∈ F independently of X te, Y te, and then returns a of the form DISPLAYFORM3 These estimators are defined by three components: the choice of relative sizes of the train-test split, the selection procedure forf X tr,Y tr, and the estimatorĴ. The most obvious selection procedure iŝ DISPLAYFORM4 though of course one could use regularization or other techniques to select a different f ∈ F, and in practice one will use an approximate optimizer. used an estimator of exactly this form in a two-sample testing setting. As noted in Section 3, this training/test split is a reasonable match for the GAN training process. As we optimize a WGAN-type model, we compute the loss (or its gradients) on a minibatch, while the current parameters of the critic are based only on data seen in previous iterations. We can view the current minibatch as X te, Y te, all previously-seen data as X tr, Y tr, and the current critic function asf X tr,Y tr. Thus, at least in the first pass over the training set, WGAN-type approaches exactly fit the data-splitting form of Definition 2; in later passes, the difference from this setup should be relatively small unless the model is substantially overfitting. We first show, in Theorem 2, that data-splitting estimators are biased downwards. Although this provides substantial intuition about the situation in GANs, it leaves open the question of whether some other unbiased estimator might exist; Theorem 3 shows that this is not the case. Theorem 2. Consider a data-splitting estimator (Definition 2) of the generalized IPM D (Definition 1) based on an unbiased estimatorĴ of J: for any fixed f ∈ F, DISPLAYFORM0 Then either the selection procedure is almost surely perfect, DISPLAYFORM1 or else the estimator has a downward bias: DISPLAYFORM2 Proof. Since X tr, Y tr are independent of X te, Y te, DISPLAYFORM3 Define the suboptimality off X tr,Y tr as DISPLAYFORM4. Note that ε ≥ 0, since D(P, Q) = sup f ∈F J(f, P, Q) and so for any f ∈ F we have J(f, P, Q) ≤ D(P, Q).Thus, either Pr(ε = 0) = 1, in which case holds, or else E[ε] > 0, giving.Theorem 2 makes clear that asf X tr,Y tr converges to its optimum, the bias ofD should vanish (as in , Theorem 3). Moreover, in the GAN setting the minibatch size only directly determines X te, Y te, which do not contribute to this bias; bias is due rather to the training procedure and the number of samples seen through the training process. As long asf X tr,Y tr is not optimal, however, the estimator will remain biased. Many estimators of IPMs do not actually perform this data splitting procedure, instead estimating D(P, Q) with the distance between empirical distributions D(P,Q). The standard biased estimator of the MMD (, Equation 5), the IPM estimators of BID6, and the empirical Wasserstein estimator studied by are all of this form. These estimators, as well as any other conceivable estimator, are also biased: Theorem 3. Let P be a class of distributions such that {(1 − α)P 0 + αP 1: 0 ≤ α ≤ 1} ⊆ P, where P 0 = P 1 are two fixed distributions. Let D be an IPM. 
There does not exist any estimator of D which is unbiased on P.Proof. We use a technique inspired by. Suppose there is an unbiased estimatorD(X, Y) of D: for some finite m and n, if X = {X 1, . . ., DISPLAYFORM5 Fix P 0, P 1, and Q ∈ P, and consider the function DISPLAYFORM6 Thus R(α) is a polynomial in α of degree at most m. DISPLAYFORM7 where used our general assumption about IPMs that if f ∈ F, we also have −f ∈ F. But R(α) is not a polynomial with any finite degree. Thus no such unbiased estimatorD exists. Note that the proof of Theorem 3 does not readily extend to generalized IPMs, and so does not tell us whether an unbiased estimator of the MMD GAN objective can exist. Also, attempting to apply the same argument to squared IPMs would give the square of, which is a quadratic function in α. Thus tells us that although no unbiased estimator for a squared IPM can exist with only m = 1 sample point, one can exist for m ≥ 2, as indeed does for the squared MMD. We will now show that biased estimators, except for estimators with a constant bias, must also have biased gradients. Assume that, as in the GAN setting, Q is given by a generator network G ψ with parameter ψ and inputs Z ∼ Z, so that Y = G ψ (Z) ∼ Q ψ. The generalized IPM of FORMULA0 is now a function of ψ, which we will denote as DISPLAYFORM0 Consider an estimatorD(ψ) of D(ψ). Theorem 4 shows that whenD(ψ) and D(ψ) are differentiable, the gradient ∇ ψD (ψ) is an unbiased estimator for ∇ ψ D(ψ) only if the bias ofD(ψ) doesn't depend on ψ. This is exceedingly unlikely to happen for the biased estimatorD(W) defined in Theorem 2, and indeed Theorem 3 shows cannot happen for any IPM estimator. Theorem 4. Let D: Ψ → R be a function on a parameter space Ψ ⊆ R d, with a random estimator D: Ψ → R which is almost surely differentiable. Suppose thatD has unbiased gradients: DISPLAYFORM1 Then, for each connected component of Ψ, DISPLAYFORM2 where the constant can vary only across distinct connected components. Proof. Let ψ 1 and ψ 2 be an arbitrary pair of parameter values in Ψ, connected by some smooth path r: → Ψ with r = ψ 1, r = ψ 2. For example, if Ψ is convex, then paths of the form r(t) = tψ 1 + (1 − t)ψ 2 are sufficient. Using Fubini's theorem and standard about path integrals, we have that DISPLAYFORM3 + const for all ψ in the same connected component of Ψ. Theorems 2 and 3 hold for the original WGANs, whose critic functions are exactly L-Lipschitz, considering F as the set of L-Lipschitz functions so that D F is L times the Wasserstein distance. They also hold for either WGANs or WGAN-GPs with F the actual set of functions attainable by the critic architecture, so that D is the "neural network distance" of or the "adversarial divergence" of.It should be obvious that for nontrivial distributions P and Q and reasonable selection criteria for f X tr,Y tr, does not hold, and thus FORMULA2 does (so that the estimate is biased downwards). Theorem 3 also shows this is the case on reasonable families of input distributions, and moreover that the bias is not constant, so that gradients are biased by Theorem 4.Example For an explicit demonstration, consider the Wasserstein case, F the set of 1-Lipschitz functions, with P = N and Q = N. Here D F (P, Q) = 1; the only critic functions f ∈ F which achieve this are f (t) = t + C for C ∈ R.If we observe only one training pair X tr ∼ P and Y tr ∼ Q, when X tr > Y tr, f 1 (t) = t is a maximizer of, leading to the expected estimate J(f 1, P, Q) = 1. 
But with probability Φ −1/ √ 2 ≈ 0.24 it happens that X tr < Y tr. In such cases, FORMULA2 could give e.g. f −1 (t) = −t, giving the expected response J(f −1, P, Q) = −1; the overall expected estimate of the estimator using this critic selection procedure is then DISPLAYFORM0 The only way to achieve ED F (X, Y) = 1 would be a "stubborn" selection procedure which chooses f 1 + C no matter the given inputs. This would have the correct output ED F (X, Y) = 1 for this (P, Q) pair. Applying this same procedure to P = N (−1, 1) and Q = N, however, would then give ED F (X, Y) = −1, when it should also be 1. Recall the distance η(P, Q) = sup θ MMD 2 (h θ (P), h θ (Q)) defined by. MMD GANs can be viewed as estimating η according to the scheme of Theorem 2, with F the set of possible parameters θ, J(θ, DISPLAYFORM0). Clearly our optimization scheme for θ does not almost surely yield perfect answers, and so again we have DISPLAYFORM1 As m tr, n tr → ∞, as for Wasserstein it should be the case thatη → η. This is shown for certain kernels, along with the rate of convergence, by Sriperumbudur et al. (2009a, Section 4).It should also be clear that in nontrivial situations, this bias is not constant, and hence gradients are biased by Theorem 4.Example For a particular demonstration, consider DISPLAYFORM2 with h θ: R 2 → R given by h θ (x) = θ T x, θ = 1, so that h θ chooses a one-dimensional projection of the two-dimensional data. Then use the linear kernel k dot, so that the MMD is simply the difference in means between projected features: DISPLAYFORM3, and DISPLAYFORM4 Clearly η(P, Q) = 1, which is obtained by θ ∈ {(−1, 0),}; any other valid θ will yield a strictly smaller value of MMD 2 (h θ (P), h θ (Q)).The MMD GAN estimator of η, if the optimum is achieved, useŝ FIG2,}; thus by Theorem 2, Eη(X, Y) < η(P, Q) = 1. (A numerical simulation for the former gives a value around 0.6 when m tr = n tr = 2.) DISPLAYFORM5 We now proceed to prove Theorem 1 as a corollary to the Theorem 5, our main about exchanging gradients and expectations of deep networks. Exchanging the gradient and the expectation can often be guaranteed using a standard in measure theory (see Proposition 1), as a corollary of the Dominated Convergence theorem (Proposition 2). This , however, requires the property Proposition 1.(ii): for almost all inputs X, the mapping is differentiable on the entirety of a neighborhood around θ. This order of quantifiers is important: it allows the use of the mean value theorem to control the average rate of change of the function, and the then follows from Proposition 2.For a neural network with the ReLU activation function, however, this assumption doesn't hold in general. For instance, if θ = (θ 1, θ 2) ∈ R 2 with θ 2 = 0 and X ∈ R, one can consider this very simple function: h θ (X) = max(0, θ 1 + θ 2 X). For any fixed value of θ, the function h θ (X) is differentiable in θ for all X in R except for X θ = −θ 1 /θ 2. However, if we consider a ball of possible θ values B(θ, r), the function is not differentiable on the set {−θ 1 /θ 2 ∈ R | θ ∈ B(θ, r)}, which can have positive measure for many possible distributions for X.In Theorem 5, we provide a proof that derivatives and expectations can be exchanged for all parameter values outside of a "bad set" Θ P, without relying on Proposition 1.(ii). This can be done using Lemma 1, which takes advantage of the particular structure of neural networks to control the average rate of change without using the mean value theorem. 
Dominated convergence (Proposition 2) can then be applied directly. We also show in Proposition 3 that the set Θ P, of parameter values where Theorem 5 might not hold, has zero Lebesgue measure. This relies on the standard Fubini theorem (, Theorem 14.16) and Lemma 4, which ensures that the network θ → h θ (X) is differentiable for almost all parameter values θ when X is fixed. Although Lemma 4 might at first sight seem obvious, it requires some technical considerations in topology and differential geometry. Proposition 1 (Differentiation Lemma (e.g. , Theorem 6.28) ). Let V be a nontrivial open set in R m and let P be a probability distribution on R d. Define a map h: R d × V → R n with the following properties: DISPLAYFORM0 (iii) There exists a P-integrable function g: DISPLAYFORM1. Proposition 2 (Dominated Convergence Theorem (e.g. , Corollary 6.26) ). Let P be a probability distribution on R d and f a measurable function. Let (f n) n∈N be a sequence of of integrable functions such that for P-almost all X ∈ R d, f n (X) → f (X) as n goes to ∞. Assume that there is a dominating function g: f n (X) ≤ g(X) for P-almost all X ∈ R d for all n ∈ N, and E P [g(X)] < ∞. Then f is P-integrable, and E P [f n (X)] → E P [f (X)] as n goes to ∞. We would like to consider general feed-forward networks with a directed acyclic computation graph G. Here, G consists of L + 1 nodes, with a root node i = 0 and a leaf node i = L. We denote by π(i) the set of parent nodes of i. The nodes are sorted according to a topological order: if j is a parent node of i, then j < i. Each node i for i > 0 computes a function f i, which outputs a vector in R di based on its input in R d π(i), the concatenation of the outputs of each layer in π(i). DISPLAYFORM0 We define the feed-forward network that factorizes according the graph G and with functions f i recursively: DISPLAYFORM1 where h π(i) is the concatenation of the vectors h j for j ∈ π(i). The functions f i can be of two types:• Affine transform (Linear Module): DISPLAYFORM2 is a known linear operator on the weights W i, which can account for convolutions and similar linear operations. We will sometimes use Y to denote the augmented vector Y 1, which accounts for bias terms.• Non-linear: These f i have no learnable weights. f i can potentially be non-differentiable, such as max pooling, ReLU, and so on. Some conditions on f i will be required (see Assumption D); the usual functions used in practice satisfy these conditions. Denote by C the set of nodes i such that f i is non-linear. θ is the concatenation of parameters of all linear modules: DISPLAYFORM3.., L}. Call the total number of parameters m = i∈C c m i, so that θ ∈ R m. The feature vector of the network corresponds to the output of the last node L and will be denoted DISPLAYFORM4 The subscript θ stands for the parameters of the network. We will sometimes use h θ (X) to denote explicit dependence on X, or omit it when X is fixed. Also define a "top-level function" to be applied to DISPLAYFORM5 This function might simply be K (U) = U, as in Corollaries 1 and 2. But it also allows us to represent the kernel function of an MMD GAN in Corollary 3: here we take X to be the two inputs to the kernel stacked together, apply the network to each of the two inputs with the same parameters in parallel, and then compute the kernel value between the two representations with K. K will have different smoothness assumptions than the preceding layers (Assumption B). 
We will need the following assumptions at various points, where α ≥ 1: DISPLAYFORM0 B The function K is continuously differentiable, and satisfies the following growth conditions where C 0 and C 1 are constants: DISPLAYFORM1, each real analytic on R d π(i), which agree with f i on the closure of a set D DISPLAYFORM2 ∀Y ∈D k i. These sets D k i are disjoint, and cover the whole input space: DISPLAYFORM3 DISPLAYFORM4 Another example is when f i computes max-pooling on two inputs. In that case we have K i = 2, and each domain D k i corresponds to a half plane (see FIG6). Each domain is defined by one inequality DISPLAYFORM5 DISPLAYFORM6 When f i is analytic on the whole space, DISPLAYFORM7, which can be defined by a single function (S i,k = 1) of G i,1,1 (Y) = 1. This case corresponds to most of the differentiable functions used in deep learning, such as the softmax, sigmoid, hyperbolic tangent, and batch normalization functions. Other activation functions, such as the ELU , are piecewise-analytic and also satisfy Assumptions C and D. We first state the main , which implies Theorem 1 via Corollaries 1 to 3. The proof depends on various intermediate which will be established afterwards. ] is differentiable at θ 0, and DISPLAYFORM0 where µ is the Lebesgue measure. Proof. Let θ 0 be such that the function θ → h θ (X) is differentiable at θ 0 for P-almost all X. By Proposition 3, this is the case for µ-almost all θ 0 in R m.Consider a sequence (θ n) n∈N that converges to θ 0; there is then an R > 0 such that θ n − θ 0 < R for all n ∈ N. Letting X be in R d, Lemma 2 gives that DISPLAYFORM1 It also follows that: DISPLAYFORM2 converges point-wise to 0 and is bounded by the integrable function 2F (X). Therefore by the dominated convergence theorem (Proposition 2) it follows that DISPLAYFORM3 Finally we define the sequence DISPLAYFORM4 which is upper-bounded by E P [M n (X)] and therefore converges to 0. By the sequential characterization of limits in Lemma 3, it follows that E P [K (h θ (X))] is differentiable at θ 0, and its differential is given by DISPLAYFORM5 These corollaries of Theorem 5 apply it to specific GAN architectures. Here we use the distribution Z to represent the noise distribution. Corollary 1 (WGANs). Let P and Z be two distributions, on X and Z respectively, each satisfying Assumption A for α = 1. Let G ψ: Z → X be a generator network and D θ: X → R a critic network, each satisfying Assumptions C and D. Then, for µ-almost all (θ, ψ), we have that DISPLAYFORM6 Proof. By linearity, we only need the following two : DISPLAYFORM7 The first follows immediately from Theorem 5, using the function K (U) = U (which clearly satisfies Assumption B for α = 1). The latter does as well by considering that the augmented network DISPLAYFORM8 still satisifes the conditions of Theorem 5. Corollary 2 (Original GANs). Let P and Z be two distributions, on X and Z respectively, each satisfying Assumption A for α = 1. Let G ψ: Z → X be a generator network, and D θ: X → R a discriminator network, each satisfying Assumptions C and D. Further assume that the output of D is almost surely bounded: there is some γ > 0 such that for µ-almost all (θ, ψ), Pr DISPLAYFORM0 Then we have the following: Proof. The log function is real analytic and (1/γ)-Lipschitz on (γ, 1 − γ). 
The claim therefore follows from Theorem 5, using the networks log DISPLAYFORM1 DISPLAYFORM2 The following assumption about a kernel k implies Assumption B when used as a top-level function K: E Suppose k is a kernel such that there are constants C 0, C 1 where DISPLAYFORM3 Corollary 3 (MMD GANs). Let P and Z be two distributions, on X and Z respectively, each satisfying Assumption A for some α ≥ 1. Let k be a kernel satisfying Assumption E. Let G ψ: Z → X be a generator network and D θ: X → R a critic representation network each satisfying Assumptions C and D. Then DISPLAYFORM4 Proof. Consider the following augmented networks: DISPLAYFORM5 has inputs distributed as P × Z, which satisfies Assumption A with the same α as P and Z, and h satisfies Assumptions C and D. The same is true of h and h. Moreover, the function DISPLAYFORM6 satisfies Assumption B. Thus Theorem 5 applies to each of h, h, and h. Considering the form of MMD 2 u, the follows by linearity and the fact that MMD Each of the kernels considered in this paper satisfies Assumption E with α at most 2:• k dot (x, y) = x, y works with α = 2, C 0 = 1, C 1 = 1.• k rbf σ of works with α = 2, C 0 = 1, DISPLAYFORM7 • k rq α of works with α = 2, C 0 = 1, C 1 = √ 2.• k dist ρ β,0 of, using ρ β (x, y) = x − y β with 1 ≤ β ≤ 2, works with α = β, C 0 = 3, C 1 = 4β. Since the existence of a moment implies the existence of all lower-order moments by Jensen's inequality, this finalizes the proof of Theorem 1. DISPLAYFORM8 with: DISPLAYFORM9 When i is not a linear layer, then by Assumption C f i is M -Lipschitz. Thus we can directly get the needed functions by recursion: DISPLAYFORM10 Lemma 2. Let R be a positive constant and θ ∈ R m. Under Assumptions A to C, the following hold for all θ ∈ B(θ, R) and all X in R d: DISPLAYFORM11 Proof. We will first prove the following inequality: DISPLAYFORM12 Let t be in and define the function f by DISPLAYFORM13 Then f = K (V) and f = K (U). Moreover, f is differentiable and its derivative is given by: DISPLAYFORM14 Using Assumption B one has that: DISPLAYFORM15 The follows using the mean value theorem. Now choosing U = h θ (X) and V = h θ (X) one gets the following: DISPLAYFORM16 Under Assumption C, it follows by Lemma 1 that: DISPLAYFORM17 The functions a, b, α, β defined in Lemma 1 are continuous, and hence all bounded on the ball B(θ, R); choose D > 0 to be a bound on all of these functions. It follows after some algebra that DISPLAYFORM18 α is concave on t ≥ 0, and so we have that DISPLAYFORM19 via Jensen's inequality and Assumption A. We also have E [1 + X] < ∞ by the same assumption. Thus F (X) is integrable. Lemma 3. Let f: R m → R be a real valued function and g a vector in R m such that: DISPLAYFORM20 for all sequences (θ n) n∈N converging towards θ 0 with θ n = θ 0. Then f is differentiable at θ 0, and its differential is g. Recall the definition of a differential: g is the differential of f at θ 0 if DISPLAYFORM0 The directly follows from the sequential characterization of limits. The last required for the proof of Theorem 5 is Proposition 3. We will first need some additional notation. For a given node i, we will use the following sets of indices to denote "paths" through the network's computational graph: DISPLAYFORM0 Note that ∂i ⊆ ¬i, and that ¬i = ∂i ∪ ¬π(i).If a(i) is the set of ancestors of node i, we define a backward trajectory starting from node i as an element q of the form: DISPLAYFORM1 where k j are integers in [K j]. We call T (i) the set of such trajectories for node i. 
For p ∈ P of the form p = (i, k, s), the set of parameters for which we lie on the boundary of p is DISPLAYFORM2 We also denote by ∂S p the boundary of the set S p. If Q is a subset of P, we use the following notation for convenience: DISPLAYFORM3 For a given θ 0 ∈ R m, the set of input vectors X ∈ R d such that h θ0 is not differentiable is DISPLAYFORM4 Consider a random variable X in the input space R d, following the distribution P. For a given distribution P, we introduce the following set of "critical" parameters: DISPLAYFORM5 This is the set of parameters θ where the network is not differentiable for a non-negligible set of datasets X.Finally, for a given X ∈ R d, set of parameters for which the network is not differentiable is DISPLAYFORM6 We are now ready to state and prove the remaining . Proposition 3. Under Assumption D, the set Θ P has 0 Lebesgue measure for any distribution P.Proof. Consider the following two sets: DISPLAYFORM7 By virtue of Theorem I in BID15 , it follows that the set of nondifferentiability of continuous functions is measurable. It is easy to see then, that D and Q are also measurable sets since the network is continuous. Note that we have the inclusion D ⊆ Q. We endow the two sets with the product measure ν:= µ × P, where µ is the Lebesgue measure. Therefore ν(D) ≤ ν(Q). On one hand, Fubini's theorem tells us: DISPLAYFORM8 By Lemma 4, we have that µ(Θ X) = 0; therefore ν(Q) = 0 and hence ν(D) = 0. On the other hand, we use again Fubini's theorem for ν(D) to write: DISPLAYFORM9 For all θ ∈ Θ P, we have P(N (θ)) > 0 by definition. Thus ν(D) = 0 implies that µ(Θ P) = 0. Lemma 4. Under Assumption D, for any X in R d, the set Θ X has 0 Lebesgue measure: µ(Θ X) = 0.Proof. We first show that Θ X ⊆ ∂S P, which was defined by.Let θ 0 be in Θ X. By Assumption D, it follows that θ 0 ∈ S P. Assume for the sake of contradiction that θ 0 / ∈ ∂S P. Then applying Lemma 5 to the output layer, i = L, implies that there is a real analytic function f (θ) which agrees with h θ on all θ ∈ B(θ 0, η) for some η > 0. Therefore the network is differentiable at θ 0, contradicting the fact that θ 0 ∈ Θ X. Thus Θ X ⊆ ∂S P.Lemma 6 then establishes that µ(∂S P) = 0, and hence µ(Θ X) = 0.Lemma 5. Let i be a node in the graph. Under Assumption D, if θ ∈ R m \ ∂S ¬i, then there exist η > 0 and a trajectory q ∈ T (i) such that h i θ = f q (θ) for all θ in the ball B(θ, η). Here f q is the real analytic function on R m defined with the same structure as h θ, but replacing each nonlinear f j with the analytic function f kj j for (j, k j) ∈ q. Proof. We proceed by recursion on the nodes of the network. If i = 0, we trivially have h 0 θ = X, which is real analytic on R m. Assume the for ¬π(i) and let θ ∈ R m \ ∂S ¬i. In particular θ ∈ R m \ ∂S ¬π(p). By the recursion assumption, we get: DISPLAYFORM0 with f q real analytic in R m.If θ / ∈ S ∂i, then there is some sufficiently small η > 0 such that B(θ, η) does not intersect S ∂i.Therefore, by Assumption D, there is some DISPLAYFORM1 ) for all θ ∈ B(θ, η), where f k i is one of the real analytic functions defining f i. By FORMULA2 we then have DISPLAYFORM2 Otherwise, θ ∈ S ∂i. Then, noting that by assumption θ / ∈ ∂S ∂i, it follows that for small enough η > 0, we have B(θ, η) ⊆ S ∂i. Denote by A the set of index triples p ∈ ∂i such that θ ∈ S p; A is nonempty since θ ∈ S ∂i. Therefore θ ∈ p∈A S p, and θ / ∈ p∈A c S p. We will show that for η small enough, B(θ, η) ⊆ p∈A S p. 
Assume for the sake of contradiction that there exists a sequence of (parameter, index-triple) pairs (θ n, p n) such that p n ∈ A c, θ n ∈ S pn, and θ n → θ. p n is drawn from a finite set and thus has a constant subsequence, so we can assume without loss of generality that p n = p 0 for some p 0 ∈ A c. Since S p0 is a closed set by continuity of the network and G p0, it follows that θ ∈ S p0 by taking the limit. This contradicts the fact that θ / ∈ p∈A c S p. Hence, for η small enough, DISPLAYFORM3, where ⊕ denotes concatenation, it finally follows that h i θ = f q0 (θ) for all θ in B(θ, min(η, η)), and f q0 is the real analytic function on R m as described. DISPLAYFORM4 Proof. We will proceed by recursion. For i = 0 we trivially have ∂S ¬0 = ∅, thus µ(∂S ¬0) = 0. Thus assume that µ(∂S ¬π(i) ) = 0.For s = (p, q), the pair of an index triple p ∈ ∂i and a trajectory q ∈ T (i), define the set DISPLAYFORM5 where f q is the real analytic function defined in Lemma 5 which locally agrees with h DISPLAYFORM6 We will now prove that for any θ in ∂S ∂i \ ∂S ¬π(i), there exists s ∈ ∂i × T (i) such that θ ∈ M s and µ(M s) = 0. We proceed by contradiction. DISPLAYFORM7 Moreover, since θ ∈ ∂S ∂i, there exists p ∈ ∂i such that G p (h π(i) θ ) = 0. This means that for s = (p, q), we have θ ∈ M s. If µ(M s) > 0, then by Lemma 7 M s = R m, hence we would have B(θ, η) ⊆ M s. By it would then follow that B(θ, η) ⊆ S ∂i. This contradicts the fact that θ is in ∂S ∂i, and hence µ(M s) = 0.We have shown that ∂S ∂i \ ∂S ¬π(i) ⊆ s∈A M s, where the sets M s have zero Lebesgue measure and A ⊆ P × L j=0 T (j) is finite. This implies: DISPLAYFORM8 Using the recursion assumption µ(∂S ¬π(i) ) = 0, one concludes that µ(∂S ¬i) = 0. Hence for the last node L, recalling that ¬L = P one gets µ(∂S P) = 0.Lemma 7. Let θ → F (θ): R m → R be a real analytic function on R m and define the set: DISPLAYFORM9 Then either µ(M) = 0 or F is identically zero. Proof. This is shown e.g. as Proposition 0 of. We now further study the bias behavior of the FID estimator mentioned in Section 4.We will refer to the Fréchet Inception Distance between two distributions, letting µ P denote the mean of a distribution P and Σ P its covariance matrix, as DISPLAYFORM0. This is motivated because it coincides with the Fréchet (Wasserstein-2) distance between normal distributions. Although the Inception coding layers to which the FID is applied are not normally distributed, the FID remains a well-defined pseudometric between arbitrary distributions whose first two moments exist. The usual estimator of the FID based on samples {X i} m i=1 ∼ P m and {Y j} n j=1 ∼ P n is the plug-in estimator. First, estimate the mean and covariance with the standard estimators: DISPLAYFORM1 LettingP X be a distribution matching these moments, e.g. N μ X,Σ X, the estimator is given by DISPLAYFORM2 In Appendices D.1 and D.2, we exhibit two examples where FID(P 1, Q) < FID(P 2, Q), but the estimator FID(P 1, Q) is usually greater than FID(P 2, Q) with an equal number of samples m from P 1 and P 2, for a reasonable number of samples. (As m → ∞, of course, the estimator is consistent, and so the order will eventually be correct.) We assume here an infinite number of samples n from Q for simplicity; this reversal of ordering is even easier to obtain when n = m. 
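The plug-in estimator just described is straightforward to write down; the sketch below also includes a tiny check, our own rather than the paper's exact construction, that already hints at the bias: for two samples from the same Gaussian the true FID is 0, yet the estimate is clearly positive at moderate sample sizes.

```python
import numpy as np
from scipy import linalg

def fid(feats_x, feats_y):
    """Plug-in FID: Frechet distance between Gaussians fitted to the two feature samples."""
    mu_x, mu_y = feats_x.mean(axis=0), feats_y.mean(axis=0)
    cov_x = np.cov(feats_x, rowvar=False)
    cov_y = np.cov(feats_y, rowvar=False)
    covmean = linalg.sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):        # numerical noise can introduce a tiny imaginary part
        covmean = covmean.real
    return float(np.sum((mu_x - mu_y) ** 2)
                 + np.trace(cov_x + cov_y - 2.0 * covmean))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 64                              # small d so the demo is fast; Inception features have d = 2048
    for n in (200, 1000, 5000):
        x, y = rng.standard_normal((n, d)), rng.standard_normal((n, d))
        print(n, round(fid(x, y), 3))   # true value is 0, but the estimate shrinks only slowly with n
```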
It is also trivial to achieve when the number of samples from P 1 and P 2 differ, as demonstrated by FIG2.Note that Appendices D.1 and D.2 only apply to this plug-in estimator of the FID; it remains conceivable that there would be some other estimator for the FID which is unbiased. Appendix D.3 shows that this is not the case: there is no unbiased estimator of the FID. We will first show that the estimator can behave poorly even with very simple distributions. When P = N (µ P, Σ P) and Q = N (µ Q, Σ Q), it is well-known that DISPLAYFORM0 where W is the Wishart distribution. Then we have DISPLAYFORM1 The remaining term E Tr Σ XΣY 1 2 is more difficult to evaluate, because we must consider the correlations across dimensions of the two estimators. But if the distributions in question are one-dimensional, denoting Σ P = σ.Thus the expected estimator for one-dimensional normals becomes DISPLAYFORM2 Now, consider the particular case DISPLAYFORM3 where the inequality follows because DISPLAYFORM4 The example of Appendix D.1, though indicative in that the estimator can behave poorly even with very simple distributions, is somewhat removed from the situations in which we actually apply the FID. Thus we now empirically consider a more realistic setup. First, as noted previously, the hidden codes of an Inception coding network are not well-modeled by a normal distribution. They are, however, reasonably good fits to a censored normal distribution ReLU(X), where X ∼ N (µ, Σ) and ReLU(X) i = max(0, X i). Using of , it is straightforward to derive the mean and variance of ReLU(X) BID8, and hence to find the population value of FID(ReLU(X), ReLU(Y)).Let d = 2048, matching the Inception coding layer, and consider DISPLAYFORM0 T, with C a d × d matrix whose entries are chosen iid standard normal. For one particular random draw of C, we found that FID(P 1, Q) ≈ 1123.0 > 1114.8 ≈ FID(P 2, Q). Yet with m = 50 000 samples, FID(P 1, Q) ≈ 1133.7 (sd 0.2) < 1136.2 (sd 0.5) ≈ FID(P 2, Q). The variance in each estimate was small enough that of 100 evaluations, the largest FID(P 1, Q) estimate was less than the smallest FID(P 2, Q) estimate. At m = 100 000 samples, however, the ordering of the estimates was correct in each of 100 trials, with FID(P 1, Q) ≈ 1128.0 (sd 0.1) and FID(P 2, Q) ≈ 1126.4 (sd 0.4). This behavior was similar for other random draws of C.This example thus gives a case where, for the dimension and sample sizes at which we actually apply the FID and for somewhat-realistic distributions, comparing two models based on their FID estimates will not only not reliably give the right ordering -with relatively close true values and high dimensions, this is not too surprising -but, more distressingly, will reliably give the wrong answer, with misleadingly small variance. This emphasizes that unbiased estimators, like the natural KID estimator, are important for model comparison. We can also show, using the reasoning of that we also employed in Theorem 3, that there is no estimator of the FID which is unbiased for all distributions. Fix a target distribution Q, and define the quantity F (P) = FID(P, Q). Also fix two distributions P 0 = P 1. Suppose there exists some estimatorF (X) based on a sample of size n for which DISPLAYFORM0 This function R(α) is therefore a polynomial in α of degree at most n. But let's consider the following one-dimensional case: DISPLAYFORM1 The mean and variance of (1 − α)P 0 + αP 1 can be written as DISPLAYFORM2 Note that (µ α − µ) 2 + σ 2 α + σ 2 is a quadratic function of α. 
However, σ_α is polynomial in α only in the trivial case where P_0 = P_1. Thus R(α) is not a polynomial when P_0 ≠ P_1, and so no estimator of the FID to an arbitrary fixed normal distribution Q can be unbiased on any class of distributions which includes two-component Gaussian mixtures. By the same trivial extension of this argument as in Theorem 3, there is also no unbiased estimator in the two-sample setting, where Q is also unknown. Unfortunately, this type of analysis can tell us nothing about whether there exists an estimator which is unbiased on normal distributions. Given that the distributions used for the FID in practice are clearly not normal, however, a practical unbiased estimator of the FID is impossible. We replicate here the experiments of Heusel et al.'s Appendix 1, which examines the behavior of the Inception and FID scores as images are increasingly "disturbed," and additionally consider the KID. As the "disturbance level" α is increased, images are altered more from the reference distribution. FIG2 shows the FID, KID, and negative (for comparability) Inception score for both CelebA (left) and CIFAR-10 (right); each score is scaled so that it can be plotted on one axis, with minimal and maximal values shown in the legend. Note that Heusel et al. compared means and variances computed on 50 000 random disturbed CelebA images to those computed on the full 200 000-image dataset; we instead use the standard train-test split, computing the disturbances on the 160 000-element training set and comparing to the 20 000-element test set. In this (very slightly) different setting, we find the Inception score to be monotonic with increasing noise on more of the disturbance types than Heusel et al. did. We also found similar behavior on the CIFAR-10 dataset, again comparing the noised training set (size 50 000) to the test set (size 10 000). This perhaps means that the claimed non-monotonicity of the Inception score is quite sensitive to the exact experimental setting; further investigation into this phenomenon would be intriguing for future work. MNIST: After training for 50 000 generator iterations, all variants achieved reasonable results (TAB4). Among MMD models, only the distance kernel saw an improvement with more neurons in the top layer. Examining samples during training, we observed that rbf more frequently produces extremely "blurry" outputs, which can persist for a substantial amount of time before eventually resolving. This makes sense, given the very fast gradient decay of the rbf kernel: when generator samples are extremely far away from the reference samples, slight improvements yield very little reward for the generator, and so bad samples can stay bad for a long time. Scores for various models trained on CIFAR-10 are shown in TAB5. The scores for rq with a small critic network approximately match those of WGAN-GP with a large critic network, at substantially reduced computational cost. With a small critic, WGAN-GP, Cramér GAN and the distance kernel all performed very poorly. Samples from these models are presented in FIG2.
FIG2 caption: Samples from the models listed in TAB4. Rational-quadratic and Gaussian kernels retain sample quality despite reduced discriminator complexity. Each of these models generates good-quality samples with the standard DCGAN discriminator (critic size 64).
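For contrast, the unbiased KID estimator mentioned above is straightforward: it is the standard unbiased MMD^2 estimate under the cubic polynomial kernel k(x, y) = (x·y/d + 1)^3. The sketch below omits the block averaging used in practice and assumes two equally preprocessed sets of Inception codes.

    import numpy as np

    def kid(x, y):
        """Unbiased MMD^2 with k(a, b) = (a.b/d + 1)^3 between code sets x (m, d) and y (n, d)."""
        d = x.shape[1]
        m, n = len(x), len(y)
        kxx = (x @ x.T / d + 1.0) ** 3
        kyy = (y @ y.T / d + 1.0) ** 3
        kxy = (x @ y.T / d + 1.0) ** 3
        # drop the diagonal terms of the within-set kernel matrices to keep the estimate unbiased
        return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
                + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
                - 2.0 * kxy.mean())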
We explain the bias situation with MMD GANs, show that MMD GANs work with smaller critic networks than WGAN-GPs, and propose a new GAN evaluation metric.
535
scitldr
We extend the Consensus Network framework to Transductive Consensus Network (TCN), a semi-supervised multi-modal classification framework, and identify its two mechanisms: consensus and classification. By putting forward three variants as ablation studies, we show both mechanisms should be functioning together. Overall, TCNs outperform or align with the best benchmark algorithms when only 20 to 200 labeled data points are available. align with those of benchmark algorithms (semi-supervised or supervised, multi-modal or uni-modal) 23 on Bank Marketing and DementiaBank datasets, when 20-200 labeled data points are available. We first briefly review the CN framework BID0 for supervised, multi-view classification. Consider a There are M interpreter networks I m (m = 1, .., M), each compressing one modality of features into a representation, which we call consensus interpretation vector. A discriminator D tries to distinguish the origin of each latent representation. A classifier C makes predictions based on all representations. DISPLAYFORM0 The training is done by iteratively optimizing two targets: DISPLAYFORM1 DISPLAYFORM2 Note that empirically, an additional noise modality v 0 ∼ N (µ 1..M, σ In this paper we extend CN to TCN. Formally, the input data include those labeled, DISPLAYFORM0, and unlabeled, {x (i) } (x (i) ∈ X U ). In the semi-supervised learning setting, there 38 can be a lot more unlabeled data points than labeled: |X U | |X L |, where the whole dataset is DISPLAYFORM1 Here each data point x contains feature values from multiple modalities (i.e., 'views'), and the 41 interpreter networks I m (m = 1, .., M), discriminator D and classifier C are set up identical to CN 42 as well. Different from CN, the classification loss is defined on only those labeled data, while the 43 discriminator loss is defined across both labeled and unlabeled data: DISPLAYFORM0 TCNs function in two mechanisms: The consensus mechanism compresses each data sample into "consensus interpretations", and the classifier mechanism tries to make these interpretations meaning-ful. To perform ablation studies on these mechanisms, we test the following three variants. whole dataset, we extract the consensus interpretations of those labeled data samples to train an SVM. TCN-svm lets the consensus mechanism to function alone, ing in almost trivial classifiers. DISPLAYFORM0, the optimization target in TCN-AE can be expressed as: DISPLAYFORM1 L C, and max DISPLAYFORM2 and min DISPLAYFORM3 As shown in FIG6, TCN-AE has inferior performances than TCN. Reconstruction in an autoen-58 coder style counteracts the consensus mechanisms, and should not be used with CN models. We run experiments on two classification datasets, Table 1: In BM, the three modalities correspond to basic information, statistical data, and employment. In DB, the three modalities correspond to acoustic, syntactic-semantic, and lexical. The Bank Marketing dataset is from the UCI machine learning repository. used for predicting 108 whether the customer will subscribe a term deposit in a bank marketing campaign via telephone. There are originally 4,640 positive samples (subscribe) and 36,548 negative ones (did not subscribe). Since consensus network models do not work well on imbalanced datasets, we randomly sample 111 5,000 negative samples to create an (almost) balanced dataset. We also convert the categorical raw Honore's statistics, word length, cosine distances between words in sentences, etc. 
not change while the training classification loss is higher than log 2 ≈ 0.693. We check once more when training stops: if the training classification loss is still higher than log 2, the model is re-initialized with a new random seed and training is restarted. Empirically this re-initialization happens no more than once per ten runs, but the underlying cause needs to be examined further. To measure how similar the consensus interpretations are, we compute a relative JS divergence between each pair of interpretation vectors v_m and v_n, where v_{m,j} and v_{n,j} are the j-th components of v_m and v_n respectively. In total, for each data sample, one such divergence is calculated per pair of modalities. We average the negative of these divergences to get the similarity of the interpretations. Note that the "similarity" is defined such that its maximum possible value is 0 (where there is no JS divergence between any pair of the interpretation vectors), and it has no theoretical lower bound.
Figure 4: Examples of similarity plots against the number of steps taken, for DementiaBank using 80 labeled samples ("DB80", blue) and Bank Marketing using 20 labeled samples ("BM20", green). The y-axes are scaled to (-0.035, 0) except for TCN-AE, where the relative JS divergences "explode". Note that training stops when losses converge (as detailed in §4.2), so the trials may stop at different steps.
In the embedding visualization, the three colors represent three modalities. At step 2, the interpretations are distributed randomly. At step 110, they become mixed evenly. The most interesting embedding happens at step 30, when the interpretations of the three modalities form three 'drumstick' shapes. With the highest visual symmetry, this configuration of interpretations also has the highest similarity of the three.
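The two optimization targets of the TCN framework described above (a modality discriminator trained on all samples, and a classifier trained on the labeled samples only) can be sketched roughly as below. Layer sizes, the use of plain linear layers, and the alternating update schedule are placeholders rather than the configuration used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    M, d_in, d_rep, n_cls, B = 3, 32, 16, 2, 8            # illustrative sizes
    interpreters = nn.ModuleList([nn.Linear(d_in, d_rep) for _ in range(M)])
    discriminator = nn.Linear(d_rep, M)                   # guesses which modality produced a vector
    classifier = nn.Linear(M * d_rep, n_cls)              # predicts the label from all interpretations

    def tcn_losses(x_modalities, y=None):
        reps = [I(x) for I, x in zip(interpreters, x_modalities)]
        # discriminator loss L_D: defined on labeled *and* unlabeled samples
        d_logits = torch.cat([discriminator(r) for r in reps])
        d_targets = torch.arange(M).repeat_interleave(x_modalities[0].shape[0])
        loss_D = F.cross_entropy(d_logits, d_targets)
        # classification loss L_C: defined on labeled samples only
        loss_C = F.cross_entropy(classifier(torch.cat(reps, dim=1)), y) if y is not None else None
        return loss_D, loss_C

    # one labeled toy batch; training alternates between updating D to reduce loss_D and updating
    # the interpreters/classifier to reduce loss_C while increasing loss_D (the consensus step)
    xs = [torch.randn(B, d_in) for _ in range(M)]
    loss_D, loss_C = tcn_losses(xs, torch.randint(0, n_cls, (B,)))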
A semi-supervised multi-modal classification framework, TCN, that outperforms various benchmarks.
536
scitldr
Separating mixed distributions is a long standing challenge for machine learning and signal processing. Applications include: single-channel multi-speaker separation (cocktail party problem), singing voice separation and separating reflections from images. Most current methods either rely on making strong assumptions on the source distributions (e.g. sparsity, low rank, repetitiveness) or rely on having training samples of each source in the mixture. In this work, we tackle the scenario of extracting an unobserved distribution additively mixed with a signal from an observed (arbitrary) distribution. We introduce a new method: Neural Egg Separation - an iterative method that learns to separate the known distribution from progressively finer estimates of the unknown distribution. In some settings, Neural Egg Separation is initialization sensitive, we therefore introduce GLO Masking which ensures a good initialization. Extensive experiments show that our method outperforms current methods that use the same level of supervision and often achieves similar performance to full supervision. Humans are remarkably good at separating data coming from a mixture of distributions, e.g. hearing a person speaking in a crowded cocktail party. Artificial intelligence, on the the hand, is far less adept at separating mixed signals. This is an important ability as signals in nature are typically mixed, e.g. speakers are often mixed with other speakers or environmental sounds, objects in images are typically seen along other objects as well as the . Understanding mixed signals is harder than understanding pure sources, making source separation an important research topic. Mixed signal separation appears in many scenarios corresponding to different degrees of supervision. Most previous work focused on the following settings:Full supervision: The learner has access to a training set including samples of mixed signals {y i} ∈ Y as well as the ground truth sources of the same signals {b i} ∈ B and {x i} ∈ X (such that y i = x i + b i). Having such strong supervision is very potent, allowing the learner to directly learn a mapping from the mixed signal y i to its sources (x i, b i). Obtaining such strong supervision is typically unrealistic, as it requires manual separation of mixed signals. Consider for example a musical performance, humans are often able to separate out the different sounds of the individual instruments, despite never having heard them play in isolation. The fully supervised setting does not allow the clean extraction of signals that cannot be observed in isolation e.g. music of a street performer, car engine noises or reflections in shop windows. The learner has access to a training set containing samples from the mixed signal {y i} ∈ Y as well as samples from all source distributions {b j} ∈ B and {x k} ∈ X. The learner however does not have access to paired sets of the mixed and unmixed signal ground truth (that is for any given y i in the training set, b i and x i are unknown). This supervision setting is more realistic than the fully supervised case, and occurs when each of the source distributions can be sampled in its pure form (e.g. we can record a violin and piano separately in a studio and can thus obtain unmixed samples of each of their distributions). It is typically solved by learning to separate synthetic mixtures b j + x k of randomly sampled b j and x k.No supervision: The learner only has access to training samples of the mixed signal Y but not to sources B and X. 
Although this settings puts the least requirements on the training dataset, it is a hard problem and can be poorly specified in the absence of strong assumptions and priors. It is generally necessary to make strong assumptions on the properties of the component signals (e.g. smoothness, low rank, periodicity) in order to make progress in separation. This unfortunately severely limits the applicability of such methods. In this work we concentrate on the semi-supervised setting: unmixing of signals in the case where the mixture Y consists of a signal coming from an unobserved distribution X and another signal from an observed distribution B (i.e. the learner has access to a training set of clean samples such that {b j} ∈ B along with different mixed samples {y i} ∈ Y). One possible way of obtaining such supervision, is to label every signal sample by a label, indicating if the sample comes only from the observed distribution B or if it is a mixture of both distributions B + X. The task is to learn a parametric function able to separate the mixed signal y i ∈ Y into sources x i ∈ X and b i ∈ B s.t. y i = b i + x i. Such supervision is much more generally available than full supervision, while the separation problem becomes much simpler than when fully unsupervised. We introduce a novel method: Neural Egg Separation (NES) -consisting of i) iterative estimation of samples from the unobserved distribution X ii) synthesis of mixed signals from known samples of B and estimated samples of X iii) training of separation regressor to separate the mixed signal. Iterative refinement of the estimated samples of X significantly increases the accuracy of the learned masking function. As an iterative technique, NES can be initialization sensitive. We therefore introduce another method -GLO Masking (GLOM) -to provide NES with a strong initialization. Our method trains two deep generators end-to-end using GLO to model the observed and unobserved sources (B and X). NES is very effective when X and B are uncorrelated, whereas initialization by GLOM is most important when X and B are strongly correlated such as e.g. separation of musical instruments. Initialization by GLOM was found to be much more effective than by adversarial methods. Experiments are conducted across multiple domains (image, music, voice) validating the effectiveness of our method, and its superiority over current methods that use the same level of supervision. Our semi-supervised method is often competitive with the fully supervised baseline. It makes few assumptions on the nature of the component signals and requires lightweight supervision. Source separation: Separation of mixed signals has been extensively researched. In this work, we focus on single channel separation. Unsupervised (blind) single-channel methods include: ICA BID3 and RPCA BID10. These methods attempt to use coarse priors about the signals such as low rank, sparsity or non-gaussianity. HMM can be used as a temporal prior for longer clips BID25, however here we do not assume long clips. Supervised source separation has also been extensively researched, classic techniques often used learned dictionaries for each source e.g. NMF BID30. Recently, neural network-based gained popularity, usually learning a regression between the mixed and unmixed signals either directly BID11 or by regressing the mask BID29 BID32. 
Some methods were devised to exploit the temporal nature of long audio signal by using RNNs BID19, in this work we concentrate on separation of short audio clips and consider such line of works as orthogonal. One related direction is Generative Adversarial Source Separation BID27 BID28 ) that uses adversarial training to match the unmixed source distributions. This is needed to deal with correlated sources for which learning a regressor on synthetic mixtures is less effective. We present an Adversarial Masking (AM) method that tackles the semi-supervised rather than the fully supervised scenario and overcomes mixture collapse issues not present in the fully supervised case. We found that non-adversarial methods perform better for the initialization task. The most related set of works is semi-supervised audio source separation BID26 BID0, which like our work attempt to separate mixtures Y given only samples from the distribution of one source B. Typically NMF or PLCA (which is a similar algorithm with a probabilistic formulation) are used. We show experimentally that our method significantly outperforms NMF.Disentanglement: Similarly to source separation, disentanglement also deals with separation in terms of creating a disentangled representation of a source signal, however its aim is to uncover latent factors of variation in the signal such as style and content or shape and color e.g. BID4; BID7. Differently from disentanglement, our task is separating signals rather than the latent representation. Generative Models: Generative models learn the distribution of a signal directly. Classical approaches include: SVD for general signals and NMF BID17 ) for non-negative signals. Recently several deep learning approaches dominated generative modeling including: GAN BID6, VAE BID15 and GLO BID1. Adversarial training (for GANs) is rather tricky and often leads to mode-collapse. GLO is non-adversarial and allows for direct latent optimization for each source making it more suitable than VAE and GAN. In this section we present our method for separating a mixture of sources of known and unknown distributions. We denote the mixture samples y i, the samples with the observed distribution b i and the samples from the unobserved distribution x i. Our objective is to learn a parametric function T , such that b i = T (y i).Full Supervision: In the fully supervised setting (where pairs of y i and b i are available) this task reduces to a standard supervised regression problem, in which a parametric function T (typically a deep neural network) is used to directly optimize: DISPLAYFORM0 Where typically is the Euclidean or the L 1 loss. In this work we use = L 1 .Mixed-unmixed pairs are usually unavailable, but in some cases it is possible to obtain a training set which includes unrelated samples x j and b k e.g. BID29 BID32. Methods typically randomly sample x j and b k sources and synthetically create mixtures y jk = x j + b k. The synthetic pairs (b k, y jk) can then be used to optimize Eq. 1. Note that in cases where X and B are correlated (e.g. vocals and instrumental accompaniment which are temporally dependent), random synthetic mixtures of x and b might not be representative of y and fail to generalize on real mixtures. Semi-Supervision: In many scenarios, clean samples of both mixture components are not available. Consider for example a street musical performance. Although crowd noises without street performers can be easily observed, street music without crowd noises are much harder to come by. 
In this case therefore samples from the distribution of crowd noise B are available, whereas the samples from the distribution of the music X are unobserved. Samples from the distribution of the mixed signal Y i.e. the crowd noise mixed with the musical performance are also available. The example above illustrates a class of problems for which the distribution of the mixture and a single source are available, but the distribution of another source is unknown. In such cases, it is not possible to optimize Eq. 1 directly due to the unavailability of pairs of b and y. Neural Egg Separation: Fully-supervised optimization (as in Eq. 1) is very effective when pairs of b i and y i are available. We present a novel algorithm, which iteratively solves the semi-supervised task as a sequence of supervised problems without any clean training examples of X. We name the method Neural Egg Separation (NES), as it is akin to the technique commonly used for separating egg whites and yolks. The core idea of our method is that although no clean samples from X are given, it is still possible to learn to separate mixtures of observed samples b j from distribution B combined with some estimates of the unobserved distribution samplesx i. Synthetic mixtures are created by randomly sampling an approximate samplex i from the unobserved distribution and combining with training sample b j: DISPLAYFORM1 thereby creating pairs (ỹ ij, b j) for supervised training. Note that the distribution of synthetic mixturesỹ ij might be different from the real mixture sample distribution y j, but the assumption (which is empirically validated) is that it will eventually converge to the correct distribution. During each iteration of NES, a neural separation function T is trained on the created pairs by optimizing the following term: DISPLAYFORM2 At the end of each iteration, the separation function T can be used to approximately separate the training mixture samples y i into their sources: DISPLAYFORM3 The refined X domain estimatesx i are used for creating synthetic pairs for finetuning T in the next iteration (as in Eq. 3).The above method relies on having an estimate of the unobserved distribution samples as input to the first iteration. One simple scheme is to initialize the estimates of the unobserved distribution samples in the first iteration asx i = c · y i, where c is a constant fraction (typically 0.5). Although this initialization is very naive, we show that it achieves very competitive performance in cases where the sources are independent. More advanced initializations will be discussed below. At test time, separation is simply carried out by a single application of the trained separation function T (exactly as in Eq. 4). Mixture samples {y i}, Observed source samples {b j} Result: Separation function T Initialize synthetic unobservable samples withx i ← c · y i or using AM or GLOM; Initialize T with random weights; DISPLAYFORM0 Optimize separation function for P epochs: DISPLAYFORM1 Update estimates of unobserved distribution samples: DISPLAYFORM2 Algorithm 1: NES Algorithm Our full algorithm is described in Alg. 1. For optimization, we use SGD using ADAM update with a learning rate of 0.01. In total we perform N = 10 iterations, each consisting of optimization of T and estimation ofx i, P = 25 epochs are used for each optimization of Eq. 3.GLO Masking: NES is very powerful in practice despite its apparent simplicity. There are some cases for which it can be improved upon. 
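A compact sketch of the NES iteration just described (Algorithm 1): mix the current X estimates with real B samples, retrain the separator on the synthetic pairs, and refine the estimates. The plain least-squares separator here is only a stand-in for the paper's neural masking network, and the constants mirror the text (N = 10 outer iterations, c = 0.5 initialization).

    import numpy as np

    def fit_linear_separator(y_syn, b, warm_start=None):
        """Stand-in for the neural masking network: least-squares map from mixtures to the B source."""
        W, *_ = np.linalg.lstsq(y_syn, b, rcond=None)
        return lambda y: y @ W

    def nes(y_mix, b_clean, train_separator=fit_linear_separator, n_iters=10, c=0.5, seed=0):
        rng = np.random.default_rng(seed)
        x_hat = c * y_mix                              # naive constant-mask initialization (or AM/GLOM)
        T = None
        for _ in range(n_iters):
            b = b_clean[rng.integers(len(b_clean), size=len(y_mix))]
            y_syn = x_hat + b                          # synthetic mixtures of estimated X and real B
            T = train_separator(y_syn, b, warm_start=T)   # Eq. 3: regress synthetic mixtures onto B
            x_hat = y_mix - T(y_mix)                   # Eq. 4: refined estimates of the unobserved X
        return T                                       # at test time, b_hat = T(y) and x_hat = y - T(y)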
As with other synthetic mixture methods, it does not take into account correlation between X and B e.g. vocals and instrumental tracks are highly related, whereas randomly sampling pairs of vocals and instrumental tracks is likely to synthesize mixtures quite different from Y. Another issue is finding a good initialization-this tends to affect performance more strongly when X and B are dependent. We present our method GLO Masking (GLOM), which separates the mixture by a distributional constraint enforced via GLO generative modeling of the source signals. GLO BID1 ) learns a generator G, which takes a latent code z b and attempts to reconstruct an image or a spectrogram: b = G(z b). In training, GLO learns end-to-end both the parameters of the generator G as well as a latent code z b for every training sample b. It trains per-sample latent codes by direct gradient descent over the values of z b (similar to word embeddings), rather than by a feedforward encoder used by autoencoders (e.g. z b = E(b)). This makes it particularly suitable for our scenario. Let us define the set of latent codes: DISPLAYFORM3 The optimization is therefore: DISPLAYFORM4 We propose GLO Masking, which jointly trains generators: G B for B and G X for X such that their sum in mixture samples y = G B (z DISPLAYFORM5 We use the supervision of the observed source B to train G B , while the mixture Y contributes residuals that supervise the training of G X . We also jointly train the latent codes for all training images: z b ∈ Z for all b ∈ B, and z B y ∈ Z B, z X y ∈ Z X for all y ∈ Y. The optimization problem is: DISPLAYFORM6 As GLO is able to overfit arbitrary distributions, it was found that constraining each latent code vector z to lie within the unit ball z · z ≤ 1 is required for generalization. Eq. 6 can either be optimized end-to-end, or the left-hand term can be optimized first to yield Z, G B , then the right-hand term is optimized to yield Z B, Z X, G X . Both optimization procedures yield similar performance (but separate training does not require setting λ). Once G B and G X are trained, for a new mixture sample we infer its latent codes: DISPLAYFORM7 Our estimate for the sources is then: DISPLAYFORM8 Masking Function: In separation problems, we can exploit the special properties of the task e.g. that the mixed signal y i is the sum of two positive signals x i and b i. Instead of synthesizing the new sample, we can instead simply learn a separation mask m, specifying the fraction of the signal which comes from B. The attractive feature of the mask is always being in the range (in the case of positive additive mixtures of signals). Even a constant mask will preserve all signal gradients (at the cost of introducing spurious gradients too). Mathematically this can be written as: DISPLAYFORM9 For NES (and baseline AM described below), we implement the mapping function T (y i) using the product of the masking function y i · m(y i). In practice we find that learning a masking function yields much better than synthesizing the signal directly (in line with other works e.g. BID29 ; BID5).GLOM models each source separately and is therefore unable to learn the mask directly. Instead we refine its estimate by computing an effective mask from the element-wise ratio of estimated sources: DISPLAYFORM10 Initializing Neural Egg Separation by GLOM: Due to the iterative nature of NES, it can be improved by a good initialization. 
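A rough sketch of the GLO Masking objective described above: per-sample latent codes and both generators are optimized jointly, the codes are projected back onto the unit ball, and the final mask is the element-wise ratio of the two reconstructed sources (Eq. 10). The generator architectures, sizes, and toy data here are illustrative, not the configuration from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_b, N_y, d_z, d_sig = 256, 256, 16, 64               # illustrative sizes (flattened toy signals)
    def make_gen():
        return nn.Sequential(nn.Linear(d_z, 128), nn.ReLU(), nn.Linear(128, d_sig), nn.Softplus())
    G_B, G_X = make_gen(), make_gen()
    z_b  = nn.Parameter(0.1 * torch.randn(N_b, d_z))      # codes for the clean B samples
    z_yb = nn.Parameter(0.1 * torch.randn(N_y, d_z))      # B-part codes of the mixtures
    z_yx = nn.Parameter(0.1 * torch.randn(N_y, d_z))      # X-part codes of the mixtures
    b_data, y_data = torch.rand(N_b, d_sig), torch.rand(N_y, d_sig)   # toy stand-ins

    opt = torch.optim.Adam(list(G_B.parameters()) + list(G_X.parameters()) + [z_b, z_yb, z_yx], lr=1e-3)
    for step in range(1000):
        opt.zero_grad()
        loss = F.l1_loss(G_B(z_b), b_data) + F.l1_loss(G_B(z_yb) + G_X(z_yx), y_data)
        loss.backward()
        opt.step()
        with torch.no_grad():                              # keep each latent code inside the unit ball
            for z in (z_b, z_yb, z_yx):
                z.data /= z.data.norm(dim=1, keepdim=True).clamp(min=1.0)

    mask = G_B(z_yb) / (G_B(z_yb) + G_X(z_yx) + 1e-8)      # GLOM mask, Eq. 10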
We therefore devise the following method: i) Train GLOM on the training set and infer the mask for each mixture. This is operated on images or mel-scale spectrograms at 64 × 64 resolutions ii) For audio: upsample the mask to the resolution of the highresolution linear spectrogram and compute an estimate of the X source linear spectrogram on the training set iii) Run NES on the observed B spectrograms and estimated X spectrograms. We find experimentally that this initialization scheme improves NES to the point of being competitive with fully-supervised training in most settings. To evaluate the performance of our method, we conducted experiments on distributions taken from multiple real-world domains: images, speech and music, in cases where the two signals are correlated and uncorrelated. We evaluated our method against 3 baseline methods:Constant Mask (Const): This baseline uses the original mixture as the estimate. This baseline method, proposed by BID26, first trains a set of l bases on the observed distribution samples B by Sparse Adversarial Masking (AM): As an additional contribution, we introduce a new semi-supervised method based on adversarial training, to improve over the shallow NMF baseline. AM trains a masking function m so that after masking, the training mixtures are indistinguishable from the distribution of source B under an adversarial discriminator D. The loss functions (using LS-GAN BID18) are given by: DISPLAYFORM0 Differently from CycleGAN BID34 and DiscoGAN BID14, AM is not bidirectional and cannot use cycle constraints. We have found that adding magnitude prior L 1 (m(y), 1) improves performance and helps prevent collapse. To partially alleviate mode collapse, we use Spectral Norm on the discriminator. We evaluated our proposed methods:GLO Masking (GLOM): GLO Masking on mel-spectrograms or images at 64 × 64 resolution. The NES method detailed in Sec. 3. Initializing X estimates using a constant (0.5) mask over Y training samples. Initializing NES with the X estimates obtained by GLO Masking. To upper bound the performance of our method, we also compute a fully supervised baseline, for which paired data of b i ∈ B, x i ∈ X and y i ∈ Y are available. We train a masking function with the same architecture as used by all other regression methods to directly regress synthetic mixtures to unmixed sources. This method uses more supervision than our method and is an upper bound. More implementation details can be found in appendix A. In this section we evaluate the effectiveness of our method on image mixtures. We conduct experiments both on the simpler MNIST dataset and more complex Shoes and Handbags datasets. To evaluate the quality of our method on image separation, we design the following experimental protocol. We split the MNIST dataset BID16 into two classes, the first consisting of the digits 0-4 and the second consisting of the digits 5-9. We conduct experiments where one source has an observed distribution B while the other source has an unobserved distribution X. We use 12k B training images as the B training set, while for each of the other 12k B training images, we randomly sample a X image and additively combine the images to create the Y training set. We evaluate the performance of our method on 5000 Y images similarly created from the test set of X and B. The experiment was repeated for both directions i.e. 0-4 being B while 5-9 in X, as well as 0-4 being X while 5-9 in B.In Tab. 1, we report our on this task. 
For each experiment, the top row presents the (PSNR and SSIM) on the X test set. Due to the simplicity of the dataset, NMF achieved reasonable performance on this dataset. GLOM achieves better SSIM but worse PSNR than NMF while AM performed 1-2dB better. NES achieves much stronger performance than all other methods, achieving about 1dB worse than the fully supervised performance. Initializing NES with the masks obtained by GLOM, in similar performance to the fully-supervised upper bound. FT from AM (numbers for finetuning from AM were omitted from the tables for clarity, as they were inferior to finetuning from GLOM in all experiments) achieved similar performance (24.0/0.95 and 23.8/0.95) to FT from GLOM. In order to evaluate our method on more realistic images, we evaluate on separating mixtures consisting of pairs of images sampled from the Handbags BID33 and Shoes BID31 datasets, which are commonly used for evaluation of conditional image generation methods. To create each Y mixture image, we randomly sample a shoe image from the Shoes dataset and a handbag image from the Handbags dataset and sum them. For the observed distribution, we sample another 5000 different images from a single dataset. We evaluate our method both for cases when the X class is Shoes and when it is Handbags. From the in Tab. 1, we can observe that NMF failed to preserve fine details, penalizing its performance metrics. GLOM (which used a VGG perceptual loss) performed much better, due to greater expressiveness. AM performance was similar to GLOM on this task, as the perceptual loss and stability of training of non-adversarial models helped GLOM greatly. NES performed much better than all other methods, even when initialized from a constant mask. Finetuning from GLOM, helped NES achieve stronger performance, nearly identical to the fully-supervised upper bound. It performed better than finetuning from AM (not shown in table) which achieved 22.5/0.85 and 22.7/0.86. Similar can be drawn from the qualitative comparison in the figure above. Separating environmental noise from speech is a long standing problem in signal processing. Although supervision for both human speech and natural noises can generally be obtained, we use this task as a benchmark to evaluate our method's performance on audio signals where X and B are not dependent. This benchmark is a proxy for tasks for which a clean training set of X sounds cannot be obtained e.g. for animal sounds in the wild, sounds training without animal noises can easily be obtained, but clean sounds made by the animal with no sounds are unlikely to be available. We obtain clean speech segments from the Oxford-BBC Lip Reading in the Wild (LRW) Dataset BID2, and resample the audio to 16 kHz. Audio segments from ESC-50 BID21, a dataset of environmental audio recordings organized into 50 semantic classes, are used as additive noise. Noisy speech clips are created synthetically by first splitting clean speech into clips with duration of 0.5 seconds, and adding a random noise clip, such that the ing SNR is zero. We then compute a mel-scale spectrogram with 64 bins, using STFT with window size of 25 ms, hop length of 10 ms, and FFT size of 512, ing in an input audio feature of 64 × 64 scalars. Finally, power-law compression is performed with p = 0.3, i.e. A 0.3, where A is the input audio feature. From the in Tab. 2, we can observe that GLOM, performed better than Semi-Supervised NMF by about 1dB better. AM training, performed about 2dB better than GLOM. 
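The audio feature extraction described in this section (16 kHz audio, 25 ms windows, 10 ms hop, FFT size 512, 64 mel bins, power-law compression with p = 0.3) can be sketched with librosa as below; the file path and clip handling are illustrative.

    import librosa

    audio, sr = librosa.load("clip.wav", sr=16000)         # illustrative path; the paper uses 0.5 s clips
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr,
        n_fft=512,
        win_length=int(0.025 * sr),                        # 25 ms -> 400 samples
        hop_length=int(0.010 * sr),                        # 10 ms -> 160 samples
        n_mels=64,
    )
    features = mel ** 0.3                                  # power-law compression A^0.3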
Due to the independence between the sources in this task, NES performed very well, even when trained from a constant mask initialization. Performance was less than 1dB lower than the fully supervised (while not requiring any clean speech samples). In this setting due to the strong performance of NES, initializing NES with the speech estimates obtained by GLOM (or AM), did not yield improved performance. Separating vocal music into singing voice and instrumental music as well as instrumental music and drums has been a standard task for the signal processing community. Here our objective is to understand the behavior of our method in settings where X and B are dependent (which makes synthesis by addition of random X and B training samples a less accurate approximation).For this task we use the MUSDB18 Dataset BID24, which, for each music track, comprises separate signal streams of the mixture, drums, bass, the rest of the accompaniment, and the vocals. We convert the audio tracks to mono, resample to 20480 Hz, and then follow the procedure detailed in Sec. 4.2 to obtain input audio features. From the in Tab. 3, we can observe that NMF was the worst performer in this setting (as its simple bases do not generalize well between songs). GLOM was able to do much better than NMF and was even competitive with NES on Vocal-Instrumental separation. Due to the dependence between the two sources and low SNR, initialization proved important for NES. Constant initialization NES performed similarly to AM and GLOM. Finetuning NES from GLOM masks performed much better than all other methods and was competitive with the supervised baseline. GLOM was much better than AM initialization (not shown in table) that achieved 0.9 and 2.9. GLO vs. Adversarial Masking: GLO Masking as a stand alone technique usually performed worse than Adversarial Masking. On the other hand, finetuning from GLO masks was far better than finetuning from adversarial masks. We speculate that mode collapse, inherent in adversarial training, makes the adversarial masks a lower bound on the X source distribution. GLOM can in models that are too loose (i.e. that also encode samples outside of X). But as an initialization for NES finetuning, it is better to have a model that is too loose than a model which is too tight. Supervision Protocol: Supervision is important for source separation. Completely blind source separation is not well specified and simply using general signal statistics is generally unlikely to yield competitive . Obtaining full supervision by providing a labeled mask for training mixtures is unrealistic but even synthetic supervision in the form of a large training set of clean samples from each source distribution might be unavailable as some sounds are never observed on their own (e.g. sounds of car wheels). Our setting significantly reduces the required supervision to specifying if a certain sound sample contains or does not contain the unobserved source. Such supervision can be quite easily and inexpensively provided. For further sample efficiency increases, we hypothesize that it would be possible to label only a limited set of examples as containing the target sound and not, and to use this seed dataset to finetune a deep sound classifier to extract more examples from an unlabeled dataset. We leave this investigation to future work. To showcase the generality of our method, we chose not to encode task specific constraints. 
In practical applications of our method however we believe that using signalspecific constraints can increase performance. Examples of such constraints include: repetitiveness of music BID23, sparsity of singing voice, smoothness of natural images. Non-Adversarial Alternatives: The good performance of GLOM vs. AM on the vocals separation task, suggests that non-adversarial generative methods may be superior to adversarial methods for separation. This has also been observed in other mapping tasks e.g. the improved performance of NAM BID8 over DCGAN. A perfect signal separation function is a stable global minimum of NES as i) the synthetic mixtures are equal to real mixtures ii) real mixtures are perfectly separated. In all NES experiments (with constant, AM or GLOM initialization), NES converged after no more than 10 iterations, typically to different local minima. It is empirically evident that NES is not guaranteed to converge to a global minimum (although it converges to good local minima). We defer formal convergence analysis of NES to future work. In this paper we proposed a novel method-Neural Egg Separation-for separating mixtures of observed and unobserved distributions. We showed that careful initialization using GLO Masking improves in challenging cases. Our method achieves much better performance than other methods and was usually competitive with full-supervision. GLOM and AM use the same generator and discriminator architectures respectively for audio as they do for images. They operate on mel-scale spectrogram at 64 × 64 resolution. Masking Network: The generator for AM operates on 64 × 64 mel-scale audio spectrograms. It consists of 3 convolutional and 3 deconvolutional layers with stride 2 and no pooling. Outputs of convolutional layers are normalized with BatchNorm and rectified with ReLU activation, except for the last layer where sigmoid is used. In addition to the LSGAN loss, an additional magnitude loss is used, with relative weight of λ = 1.NES and the supervised method operate on full linear spectrogram of dimensions 257 × 64, without compression. They use the same DiscoGAN architecture, which contains two additional convolutional and deconvolutional layers. In this section we describe our implementation of the NMF semi-supervised source separation baseline BID26. NMF trains a decomposition: B = W Z where W are the weights and Z = [z 1, ..., z N] are the per sample latent codes. Both W and Z are non-negative. Regularization is important for the performance of the method. We follow BID9 BID12 and use L 1 regularization to ensure sparsity of the weights. The optimization problem therefore becomes: We present a qualitative analysis of the of GLOM and NES. To understand the quality of generations of GLO and the effect of the masking function, we present in Fig.2 the of the GLO generations given different mixtures from the Speech dataset. We also show the after the masking operation described in Eq. 10. It can be observed that GLO captures the general features of the sources, but is not able to exactly capture fine detail. The masking operation in GLOM helps it recover more fine-grained details, and in much cleaner separations. DISPLAYFORM0 We also show in Fig.2 the evolution of NES as a function of iteration for the same examples. NES(k) denotes the of NES after k iterations. It can be seen the NES converges quite quickly, and improve further with increasing iterations. 
In FIG1, we can observe the performance of NES on the Speech dataset in terms of SDR as a function of iteration. The are in line with the qualitative examples presented before, NES converges quickly but makes further gains with increasing iterations.
An iterative neural method for extracting signals that are only observed mixed with other signals
537
scitldr
In health, machine learning is increasingly common, yet neural network embedding (representation) learning is arguably under-utilized for physiological signals. This inadequacy stands out in stark contrast to more traditional computer science domains, such as computer vision (CV), and natural language processing (NLP). For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models (i.e., representation models), which map patient data to an output embedding. Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction by estimating feature attributions in the "stacked" models (i.e., feature embedding model followed by prediction model). PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferal of neural networks to create physiological signal embeddings. 2) We present a tractable method to obtain feature attributions through stacked models. We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models. 3) PHASE was extensively tested in a cross-hospital setting including publicly available data. In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use. Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings and offer recommendations when transference is ineffective. Representation learning (i.e., learning embeddings) BID14 has been applied to medical images and clinical text (; BID16 BID13) but has been under-explored for time series physiological signals in electronic health records. This paper introduces the PHASE (PHysiologicAl Signal Embeddings) framework to learn embeddings of physiological signals FIG1 ), which can be used for various prediction tasks FIG1, and has been extensively tested in terms of its transferability using data from multiple hospitals (FIG1). In addition, this paper introduces an interpretability method to compute per-sample feature attributions of the original features (i.e., not embeddings) for a prediction in a tricky "stacked" model situation (i.e., embedding model followed by prediction model) (FIG1).Based on computer vision (CV) and natural language processing (NLP), exemplars of representation learning, physiological signals are well suited to embeddings. In particular, CV and NLP share two notable traits with physiological signals. The first is consistency. For CV, the domain has consistent features: edges, colors, and other visual attributes. For NLP, the domain is a particular language with semantic relationships consistent across bodies of text. For sequential signals, physiological patterns are arguably consistent across individuals. The second attribute is complexity. Across these three domains, each particular domain is sufficiently complex such that learning embeddings is non-trivial. 
Together, consistency and complexity suggest that for a particular domain, every research group independently spends a significant amount of time learning embeddings that may ultimately be quite similar.
Figure 1: The PHASE framework, which consists of embedding learning, prediction, interpretation, and transference. The checkered patterns denote that a model is being trained in the corresponding stage, whereas solid colors denote fixed weights/models. The red side of the LSTM denotes the hidden layer we will use to generate embeddings. In (c), the size of the black circles on the left represents the feature attributions being assigned to the original input features. The signals and the outputs of the LSTMs are vectors. Multiple connections into a single XGB model are simply concatenated. More details on the experimental setup can be found in Sections 4.1 and 6.1.
In order to avoid this negative externality, NLP and CV have made great progress on standardizing their embeddings; in health, physiological signals are a natural next step. Furthermore, physiological signals have unique properties that make them arguably better suited to representation learning than traditional CV and NLP applications. First, physiological signals are typically generated in the health domain, which is constrained by patient privacy concerns. These concerns make sharing data between hospitals next to impossible; however, sharing models between hospitals is intuitively safer and generally accepted. Second, a key component to successful transfer learning is a community of researchers that work on related problems. According to one survey, there were at least fifty-three research publications using deep learning methods for physiological signals in the past ten years. Additionally, we discuss particular examples of neural networks for physiological signals in Section 2.2. These varied applications of neural networks imply that there is a large community of machine learning research scientists working on physiological signals, a community that could one day work collaboratively to help patients by sharing models. Although embedding learning has many aforementioned advantages, it makes interpretation more difficult. Naive applications of existing interpretation methods do not work for models trained using learned embeddings, because they will assign attributions to the embeddings. Feature attributions assigned to embeddings will be meaningless, because the embeddings do not map to any particular input feature. Instead, each embedding is a complicated, potentially non-linear combination of the original raw physiological signals. In a health domain, the inability to meaningfully interpret a model is unsatisfactory. Healthcare providers and patients alike generally want to know the reasoning behind predictions/diagnoses. Interpretability can enhance scientific discovery as well as provide credibility to predictive models. In order to provide a principled methodology for mapping embedding attributions back into physiological signal attributions, we provide a proof that justifies PHASE's Shapley value framework in Section 3.3. This framework generalizes across arbitrary stacked models and currently encompasses neural network models (e.g., linear models, neural networks) and tree-based models (e.g., gradient boosting machines and random forests). In the following sections, we discuss previous related work (Section 2) and describe the PHASE framework (Section 3).
In Section 4, we first evaluate how well our neural network embeddings make accurate predictions (Section 4.2.1). Second, we evaluate whether transferring these embedding learners still enables accurate predictions across three different hospitals separated by location and across hospital departments (Section 4.2.2). Lastly, we present a visualization of our methodology for providing Shapley value feature attributions through stacked models in Section 4.2.3. Representation learning (embedding learning) in health is growing more popular. One particularly natural subdomain is medical image analysis, e.g., mammography analysis, kidney detection in ultrasound images, optical coherence tomography image analysis, diagnosing pneumonia using chest X-ray images, lung pattern analysis, otitis media image analysis, and more BID1 BID16 BID6 BID8; ). Outside of image analysis, additional examples of transfer learning in the medical domain include BID13, Wiens et al., Brisimi et al. (2018, Choi et al., Choi et al., and Che et al. (2016 . Even within physiological signals, some examples of embedding learning are beginning to sprout up, including , who utilize kNNs to perform transfer learning for brain-computer interaction. Comparatively, PHASE transfers neural networks as embedding functions learned in an partially supervised manner, where the embeddings provide a basis for training a model on any prediction task (as opposed to being tied to the prediction they were trained on). We denote partially supervised networks to be networks trained with prediction tasks related to the final downstream prediction. To our knowledge, our work is the first to transfer deep neural networks for embedding sequential physiological signals, embeddings which are not tied to a particular prediction problem. use acoustic signals to detect anomalies in heart sound. Based on this substantive community of research scientists working on physiological signals, there is a clear opportunity to unify independent research by appropriately using partially supervised feature embedding learning. In the vein of embedding learning, BID14 applied autoencoders to blood volume pulse and skin conductance measured from 36 people playing a video game and used the encodings to predict affective state. In their paper, the sample size is fairly small and reflects that their primary objective was to perform feature extraction and feature selection. In contrast, PHASE evaluates transferring embedding learners (i.e., feature extractors) across multiple hospitals TAB5.2.3 FORECASTING FOR OPERATING ROOM DATA proposed an approach, namely Prescience, which achieved state-of-the-art hypoxemia predictions using operating room data, the same data we used to evaluate PHASE. Prescience utilizes gradient boosting machines (GBM) applied to features extracted by using traditional time series feature extraction methods -exponential moving average/variance embeddings. Prescience compares prediction models including gradient boosting machines, linear lasso, linear SVM, and a parzen window with the objective of forecasting low blood oxygen in the future. Ultimately, find the highest performing method to be GBM trees. With samples drawn from the same data set, PHASE seeks to substitute the feature extraction used in Prescience with a deep learning approach, which ed in a better average precision compared to Prescience without the clinical text features (4 is Prescience and 12 is PHASE in FIG3). 
Interpretability of models via local feature attributions has been addressed in numerous independent pieces of work (; ; . For our evaluation of interpretability, we choose to focus on Shapley values introduced by Lloyd Shapley, originally in the context of game theory BID18 . identify Shapley values as the only additive feature attribution method that satisfies the properties of local accuracy, missingness, and consistency. For PHASE, our pipeline includes multiple models -GBM trees and LSTM networks. Methods exist for obtaining Shapley values for GBM trees (Tree SHAP) and for neural networks (Deep LIFT/Deep SHAP);. However, the default version of these methodologies do not beget a theoretically justified approach for propagating attributions through multiple models. In fact, to the authors' knowledge, a tractable method for obtaining local feature attributions for mixes of neural networks and trees does not exist. In this paper, we utilize versions of Tree SHAP and Deep LIFT that create single reference attributions that can be composed to address stacked models. At the end of obtaining many attributions, the average of these attributions approximates the Shapley value attributions (more details in Section 3.3). Taking inspiration from sinusoidal waveforms, we name our methodology PHASE. In the PHASE framework, the first step is to learn neural network embeddings for physiological signals (FIG1). The second step is to predict outcomes based on the learned feature embedding (as in FIG1), potentially across multiple hospitals (as in FIG1). Finally, the last step is to interpret the prediction by estimating feature attributions through the models trained in the first two steps (as in FIG1). PHASE uses LSTM networks to learn feature embeddings from time series physiological data. LSTMs are a popular variant on recurrent neural networks introduced by BID3. They have the capacity to model long term dependencies, while avoiding the vanishing gradient problem BID15. Details on the model architecture may be found in Section 6.2.For PHASE, we first train a univariate LSTM on each physiological signal P, predicting the minimum of P in the future five minutes FIG1 ). Note that we choose the minimum of the next five minutes because we care about forecasting adverse outcomes. Then, we obtain hidden embeddings of the original physiological signals by passing them through to the hidden layer (the red layer in FIG1). These embeddings are unsupervised in the sense that training them simply requires the same feature (albeit at different time steps). Yet, they are supervised in that we specify our interest to be forecasting adverse outcomes constituted by low signal values. We choose to focus on the minimum, because adverse outcomes in physiological signals are often tied to too-low signals. We find that the completely unsupervised alternative of an LSTM autoencoder is significantly less performant than the LSTM trained to predict the minimum of the next five minutes (8 is autoencoder and 12 is PHASE in FIG3).One reason behind having univariate neural networks is for transference. By using univariate networks, the input to the final prediction model may be any set of physiological signals with existing embedding learners. This is especially useful because hospital departments have substantial variation in the signals they may choose to collect for features. 
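One univariate PHASE embedding learner can be sketched as below: an LSTM reads a window of a single signal, is trained to predict the minimum of that signal over the next five minutes, and its final hidden state is then reused as the signal's embedding. The architecture, window length, and loss here are illustrative placeholders for the configuration detailed in Section 6.2.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MinForecaster(nn.Module):
        def __init__(self, hidden_dim=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)

        def embed(self, x):                      # x: (batch, time, 1), one physiological signal
            _, (h, _) = self.lstm(x)
            return h[-1]                         # final hidden state = the signal's embedding

        def forward(self, x):
            return self.head(self.embed(x)).squeeze(-1)

    model = MinForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 60, 1)                   # toy batch: a past window of one signal
    target = torch.randn(32)                     # stand-in for min(signal) over the next 5 minutes
    loss = F.mse_loss(model(x), target)
    loss.backward()
    opt.step()
    emb = model.embed(x).detach().numpy()        # one such embedding per signal is later fed to XGB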
Another reason for univariate networks is that data in a single hospital is often collected at different points in time, or new measurement devices may be introduced to data collection systems. For traditional pipelines, it may be necessary to re-train entire machine learning pipelines when new features are introduced. With univariate networks, the flexibility would mean pre-existing embedding learners would not necessarily need to be re-trained. PHASE can use any prediction model. In this paper, we focus on gradient boosting machine trees because the Prescience method found that they outperform several other models in the operating room data. Gradient boosting machines were introduced by. This technique creates an ensemble of weak prediction models in order to perform classification/regression tasks in an iterative fashion. In particular, we utilize XGBoost, a popular implementation of gradient boosting machines that uses additive regression trees . XGBoost often dominates in Kaggle, a platform for predictive modeling competitions. In particular, seventeen out of twenty nine challenge winning solutions used XGBoost in 2015 . For PHASE, we postulate that utilizing embeddings of time series signals provides stronger features for the ultimate prediction with XGB (as visualized in FIG1 . Details on the model architecture may be found in Section 6.2. PHASE addresses an inherent challenge in the interpretation of an embedding model (or feature representation model). Estimating feature attributions is a common way to make a prediction interpretable. At a high level, the goal is to explain how much each feature matters for a model's prediction. However, this goal is only meaningful if the model being explained uses features with a natural human interpretation. For example, if we interpret PHASE's GBM model, which takes the embeddings as input and outputs a prediction, our feature attributions will be assigned to the embeddings, which are not meaningful to doctors or patients. The answer is to extend the prediction model (here, a GBM) by combining the feature embedding model (here, a LSTM network), which makes a "stacked" model (FIG1). Since the original features in the embedding stage are meaningful, one solution is to utilize a model agnostic feature attribution method over the "stacked" model. For our attributions, we aim to provide Shapley values, for which the exact model agnostic computation has an exponential computational complexity DISPLAYFORM0 where N is the sample size and M is the number of features) BID18. In response, one might want to use a model-specific method of computing approximate Shapley values to gain speed by knowing the model. However, to the authors' knowledge, there was previously no model-specific method to estimate Shapley values for a stack comprised of LSTMs and a GBM (or even local feature attributions for that matter).Single reference Shapley values: Our new method for estimating Shapley values for the aforementioned stacked model (i.e., LSTMs and GBM), requires adaptations on two existing feature attributions methods. First is Deep SHAP, a variant on Deep LIFT -a feature attribution method for neural networks ). Deep SHAP differs from Deep LIFT in that it can find attributions for single references. Both methods can be written as modifications to a traditional backward pass through a neural network BID0. Since the computational complexity of a backward pass is the same as a forward pass through the network, we can consider this cost "low". 
The second method we utilize is "Independent Tree SHAP". This method is a variation on normal Tree SHAP, but it can be computed for single references. Independent Tree SHAP has a computational complexity of O(M LT), where L is the maximum number of leaves in any given tree and T is the number of trees in the GBM. "Stacked" model Shapley values: Combining these two methods amounts to treating the "stacked" model (FIG1) as a larger neural network and applying Deep SHAP to pass back attributions as gradients at each layer BID0. However, at the GBM layer we obtain the appropriate gradients by dividing the Independent Tree SHAP Shapley values by the difference between the sample and the references. According to Theorem 1, we can then average over these single reference attributions for an approximation to the Shapley values. Generalizability: Note that Theorem 1 also implies that for any arbitrary set of models in a stack, if single reference Shapley values are obtainable for each model, the Shapley values for the entire stack can be obtained. Because the single reference Shapley value methods are known for neural networks and for trees, any "stacked" model composed of these two methods can be explained. Worth noting is that many embedding/prediction models can be represented as neural networks, making our framework to attribute "stacked" models fairly general. DISPLAYFORM1 where D is the data distribution, F is the set of all features, and f is our model. Rewriting the sum over all permutations of F, rather than over all combinations, the weighting term becomes one: DISPLAYFORM2 where the last step depends on independence between the permutations, independence between the conditional and non-conditional sets, and the data generating mechanism. We first describe our data sets, evaluation metric, model architectures, and the of comparisons between PHASE and alternative approaches in various testing scenarios (Section 4.2). More details on the model architecture and the models used for each experiment can be found in the Appendix. Data Description Hospital 0/1 data was collected via the Anesthesia Information Management System (AIMS), which records all data measured in the operating room during surgery. Both medical centers are within the same city (within 10 miles of each other). Hospital P is a sub-sampled version of the publicly available MIMIC data set from PhysioNet, which contains data obtained from an intensive care unit in Boston, Massachusetts BID5. Hospital P data was collected several thousands of miles from the medical centers associated with hospital 0/1 data. Some details about these hospitals are in Table 1, with more in Appendix: Section 6.1. Table 1: Statistics of the different data sources. Hospital P is a public data set (PhysioNet). Additional details about hospital 0/1 data in Figures 5, 6, and 7 in Appendix: Section 6.1 The hospital 0/1 data includes static information (height, weight, age, sex, procedure codes), as well as real-time measurements of thirty-five physiological signals (e.g., SaO 2, FiO 2, ETCO 2, etc.) sampled minute by minute. Although the hospital P data contains several physiological signals sampled at a high frequency, we solely use a minute by minute SaO 2 signal for our experiments. Any missing values in the data are imputed by the mean and each feature is standardized to have unit mean and variance. One important note is that although hospitals 0 and 1 are spatially close, one is an academic medical center and one is a trauma center. 
More details on the patients from these hospitals can be found in Section 6.1.Evaluation Methodology PHASE and alternative approaches are evaluated based on real-time prediction tasks, for example, whether a certain condition will occur in the next 5 minutes (details in Sections 4.2 and 6.1). Our evaluation metric for prediction performance in binary classification is area under the precision-recall curve, otherwise known as average precision (AP). Rather than ROC curves, PR curves often better highlight imbalanced labels. Precision is defined as tp tp+f p and recall is tp tp+f n, where tp is true positives, f p false positives, and f n false negatives. The area under the curve provides a summary statistic that balances both precision and recall. In this section, we compare different embeddings of physiological signals as discussed in TAB2 (where Min h represents PHASE). Based on these comparisons, we see the overall performance of predicting three adverse clinical events in Section 4.2.1 as well as a discussion of how well the embedding learners transfer between hospitals in Section 4.2.2. Lastly, in Section 4.2.3, we depict the attributions from our new model stacking method for obtaining Shapley values. In the operating room, there are a few physiological signals that stand out as indicators of adverse outcomes. Three of these signals include SaO 2 (blood oxygen), ETCO 2 (end tidal CO2), and NIBPM (noninvasive blood pressure measurement), which are linked to our three adverse outcomes: hypoxemia, hypocapnia, and hypotension, respectively. Forecasting these outcomes is particularly important because deviations from the norm could spell disaster; ). More details on labels in Section 6.1. Hidden layer embedding from an LSTM trained to predict the minimum of the current signal five minutes into the future on hospital h's data. In this section, we aim to investigate the performance gains PHASE embeddings offer over other embeddings. FIG3 shows the performance of XGB with different representations of the same signals across three prediction tasks. In terms of pre-training for this experiment, there is none for the Raw and EMA embeddings. However for Min h and Auto h, we have trained fifteen univariate LSTM networks for both objectives across both hospitals (a total of sixty networks, not including fine-tuned networks). We fix these same LSTM networks to generate hidden embeddings of the original signals across the three final prediction tasks of hypoxemia, hypocapnia, and hypotension. In terms of performance, the average precision points with all signals (in blue) are almost always significantly above their associated average precision points using only a single signal (in red). This suggests the outcomes derived from forecasting a single signal are complex and benefit from having access to more signals. Most importantly, with the exception of the fine tuned model, the Min h (PHASE) models (10 and 12) consistently outperform all other models in FIG3 by a significant margin. The fact that LSTM autoencoders fail to grasp meaningful representations of the data in comparison to partially supervised LSTMs offers an insight; on our data set, the key to performant embeddings is to have closeness between the LSTM's prediction task and the downstream prediction task. In this case, effective embeddings for forecasting adverse outcomes related to too-low signals, required that the embeddings themselves were related to low signals in the future. 
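As a concrete illustration of this closeness between the upstream and downstream tasks, the regression target used to pre-train the Min embeddings and the hypoxemia label used downstream can be derived from the same SaO2 series. The sketch below follows the five-minute horizon and the <= 92 threshold given in the label definitions of Section 6.1; the function names and the simplified handling of ignored time points are ours.

```python
import numpy as np

HYPOXEMIA_THRESHOLD = 92   # SaO2 threshold used for hypoxemia
HORIZON = 5                # minutes looked ahead by both tasks

def min_regression_target(sao2, t):
    """Pre-training target for the Min LSTM: min of SaO2 over the next five minutes."""
    return float(np.min(sao2[t + 1:t + 1 + HORIZON]))

def hypoxemia_label(sao2, t):
    """Downstream label: 1 if SaO2 dips to <= 92 within the next five minutes.

    Time points that are already hypoxemic are excluded (None), mirroring the
    labeling rules in the appendix; further exclusions for missing data are
    omitted here for brevity.
    """
    if sao2[t] <= HYPOXEMIA_THRESHOLD:
        return None
    return int(min_regression_target(sao2, t) <= HYPOXEMIA_THRESHOLD)
```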
In this section, we have two aims: the first aim is to evaluate how well our embedding models transfer. The second aim is to explore methods to repurpose them when there is a large amount of domain shift between hospitals. First, we can look back to FIG3. The feature embeddings learned in a source hospital that differs to the target hospital performs significantly better than the EMA (Prescience -) and Raw embeddings (2 and 4) and generally on par with a matching source and target hospital. This is promising, because it suggests that the domain shift between hospitals 0 and 1 does not prevent physiological signal embeddings from transferring well. Although these hospitals do have similar patient distributions, this transference is better than TAB14 might be expected, given that one hospital is an academic medical center and the other is a trauma center. As further evidence of their differences, we report the top ten diagnoses from each hospital in Appendix: Section 6.1 and find no overlap apart from CALCULUS OF KIDNEY between hospitals 0 and 1.Next, in Figure 3, we can see that the Min P embeddings created from the publicly available PhysioNet data (2 p) are worse representations of SaO 2 in comparison to the embeddings trained with the target hospital's data. This implies that the domain shift between hospital P and the OR hospitals is too large -ing in learned Min embeddings (2 P) that are not as useful for prediction. Next, the improvement of the fine tuned models (4 P) over Min P embeddings, suggests the following insight: that fine tuning serves to recover the performance lost due to transference across distributionally disparate hospitals. Then, observing the improvement of Hypox P embeddings (6 P) over Min P embeddings, conveys a natural insight: that closeness between LSTM prediction tasks and the downstream prediction task is beneficial in the face of transference. This naturally extends the observation from Section 4.2.1: that closeness between LSTM prediction tasks and the downstream prediction task is beneficial in the face of performance. These two observations suggest two approaches to transferring across substantially different hospital data sets: 1.) train LSTMs with very specific prediction tasks that match the downstream prediction and 2.) fine tune LSTM networks. In this section, our aim is to evaluate the efficacy of our interpretability method for stacked models. To do so, we separate interpretability evaluation approaches into two categories: qualitative and quantitative. Qualitative evaluations are important to ensure human insight into interpretability methods. However, our primary goal is to ensure that our novel method to obtain local feature attributions for stacked models is correct in the sense that the attributions estimate Shapley values. One qualitative evaluation of feature attributions is who demonstrate that local feature attributions improve the performance of practicing anesthesiologists in forecasting hypoxemia. Our quantitative validation is a standard ablation/perturbation test, in a similar fashion to other interpretability evaluations: BID2, BID4, BID0, and. The test consists of the following. For a single sample, we sort the input features according to their attributions, and iteratively impute each feature by the mean of the last two minutes. In order to ensure our interpretations generalize, we evaluate on the test set. Additionally, we use the top 1000 positive samples sorted by the predicted probability of hypoxemia (true positives). 
Then we evaluate the mean predicted probability across all samples, which will start high (for true positives) and monotonically decrease as we impute features, leading to an overall decrease in the average probability. Good interpretability methods should result in an initial steepness, because the most "important" hypoxemic pathology is imputed first. Figure 4: Ablation test on the top 1000 positive labels, sorted by the probability prediction of the final model. We "remove" the features (by imputing the mean of the last two minutes) according to Shapley values or a random ordering and then predict the probability of hypoxemia on the entirety of our test set. We obtain Shapley values for both models with a fixed set of 100 samples. The model stack used for the interpretability evaluation (right) is in FIG3. More details about the setup for this experiment are in Section 6.5. Deep SHAP was originally proposed for traditional deep learning models; we extend it to support a mixture of model types in PHASE. As such, our primary aim is to evaluate against the pre-existing Deep SHAP methodology. We compare LSTM embeddings fed into XGB (PHASE) against LSTM embeddings fed into an MLP, because the original Deep SHAP supports only traditional neural network architectures (see FIG1 in the Appendix for more details on the model setups). In Figure 4, we verify that ordering feature imputation by our interpretability method, which combines Deep SHAP with Independent Tree SHAP (LSTM→XGB), does lead to an initial steepness of the predicted probability of hypoxemia in a similar fashion to Deep SHAP alone (LSTM→MLP), with both methods outperforming a random ordering. In fact, it appears that for Figure 4 (right), imputing according to our attributions for LSTM→XGB has a potentially more destructive effect on performance over the early number of imputed features. Lastly, in Appendix Section 6.5.1, we show attributions from both methods in an informal visual evaluation to confirm concordance. This paper presents PHASE, a new approach to machine learning with physiological signals based on transferring embedding learners. PHASE has potentially far-reaching impacts, because neural networks inherently create an embedding before the final output layer. As discussed in Section 2.2, there is a large body of research independently working on neural networks for physiological signals. PHASE offers a potential method of collaboration by analyzing partially supervised univariate networks as semi-private ways to share meaningful signals without sharing data sets. In this section we offer several insights into the transference of univariate LSTM embedding functions. First, closeness of upstream (LSTM) and downstream prediction tasks is indeed important for both predictive performance and transference. For performance, we found that predicting the minimum of the future five minutes was sufficient for the LSTMs to generate good embeddings. For transference, predicting the minimum of the next five minutes was sufficient to transfer across similar domains (operating room data from an academic medical center and a trauma center) when predicting hypoxemia. However, when attempting to utilize a representation from Hospital P, we found that the difference between operating rooms and intensive care units was likely too large to provide good predictions. Two solutions to this include fine tuning the Min LSTM models, as well as acknowledging the large amount of domain shift and training specific LSTM embedding models with a particular downstream prediction in mind.
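Returning to the ablation test of Section 4.2.3, a minimal sketch of how the imputation curve could be computed is given below. The sorting, two-minute-mean imputation, and averaging over samples follow the procedure described there, while the sklearn-style predict_proba interface and all names are assumptions of ours rather than the authors' implementation.

```python
import numpy as np

def ablation_curve(model, X, attributions, window=2):
    """Mean predicted probability as features are imputed in attribution order.

    X            : (n_samples, n_features) array of recent signal values.
    attributions : same-shaped array of per-feature Shapley values.
    model        : classifier exposing an sklearn-style predict_proba.
    """
    X = X.copy()
    order = np.argsort(-attributions, axis=1)   # most positively attributed first
    curve = [model.predict_proba(X)[:, 1].mean()]
    for rank in range(X.shape[1]):
        for i in range(X.shape[0]):
            j = order[i, rank]
            if j > 0:   # impute with the mean of the preceding `window` minutes
                X[i, j] = X[i, max(0, j - window):j].mean()
        curve.append(model.predict_proba(X)[:, 1].mean())
    return np.array(curve)
```

A curve with a steep initial drop, relative to a random feature ordering, is the behavior the ablation test in Figure 4 looks for.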
Last but not least, this paper introduced a way to obtain feature attributions for stacked models of neural networks and trees. By showing that Shapley values may be computed as the mean over single reference Shapley values, this model stacking framework generalizes to all models for which single reference Shapley values can be obtained, which was quantitatively verified in Section 4.2.3.We intend to release code pertinent to training the LSTM models, obtaining embeddings, predicting with XGB models, and model stacking feature attributions -submitted as a pull request to the SHAP github (https://github.com/slundberg/shap). Additionally, we intend to release our embedding models, which we primarily recommend for use in forecasting "hypo" predictions. In the direction of future work, it is important to carefully consider representation learning in health -particularly in light of model inversion attacks as discussed in. To this end, future work in making precise statements about the privacy of models deserves attention, for which one potential avenue may be differential privacy . Other important areas to explore include extending these to higher sampling frequencies. Our data was sampled once per minute, but higher resolution data may beget different neural network architectures. Lastly, further work may include quantifying the relationship between domain shifts in hospitals and PHASE and determining other relevant prediction tasks for which embeddings can be applied (e.g., "hyper" predictions, doctor action prediction, etc. Labels For hypoxemia, a particular time point t is labelled to be one if the minimum of the next five minutes is hypoxemic (min(SaO t+1:t+6 2) ≤ 92). All points where the current time step is currently hypoxemic are ignored (SaO t 2 ≤ 92). Additionally we ignore time points where the past ten minutes were all missing or the future five minutes were all missing. Hypocapnia and hypotension are only labelled for hospitals 0 and 1. Additionally, we have stricter label conditions. We labeled the current time point t to be one if (min(S t−10:t) > T ) and the minimum of the next five minutes is "hypo" (min(S t+1:t+5) ≤ T ). We labeled the current time point t to be zero if (min(S t−10:t) > T ) and the minimum of the next ten minutes is not "hypo" (min(S t+1:t+10) > T ). All other time points were not considered. For hypocapnia, the threshold T = 34 and the signal S is ETCO 2. For hypotension the threshold is T = 59 and the signal S is NIBPM. Additionally we ignore time points where the past ten minutes were all missing or the future five minutes were all missing. As a , we have different sample sizes for different prediction tasks (reported in TAB7). For Min predictions, the label is the value of min(S t+1:t+5), points without signal for in the future five minutes are ignored. For Auto predictions, the label is all the time points: S t−59:t. The sample sizes for Min and Auto are the same and are reported in Table 3. Table 3: Sample sizes for the Min and Auto predictions for training the LSTM autoencoders. For the autoencoders we utilize the same data, without looking at the labels. We only utilize the 15 features above the line in both hospitals (Figure 5) for training our models., implemented in the Keras library with a Tensorflow back-end. We train our networks with either regression (Auto and Min embeddings) or classification (Hypox) objectives. For regression, we optimize using Adam with an MSE loss function. 
For classification, we optimize using RMSProp with a binary cross-entropy loss function (additionally, we upsample to maintain balanced batches during training). Our model architectures consist of two hidden layers, each with 200 LSTM cells, with dense connections between all layers. We found that important steps in training LSTM networks for our data are to impute missing values by the training mean, standardize the data, and randomize the sample ordering prior to training (allowing us to sample data points in order without replacement). To prevent overfitting, we utilized dropouts between layers as well as recurrent dropouts for the LSTM nodes. Using a learning rate of 0.001 gave us the best final results. The LSTM models were run to convergence (until their validation accuracy did not improve for five rounds of batch stochastic gradient descent). In order to train these models, we utilize three GPUs (GeForce GTX 1080 Ti graphics cards). We train GBM trees in Python using XGBoost, an open source library for gradient boosting trees. XGBoost works well in practice in part due to its ease of use and flexibility. Imputing and standardizing are unnecessary because GBM trees are based on splits in the training data, implying that scale does not matter and that missing data is informative as is. We found that a learning rate of 0.02 for hypoxemia (0.1 for hypotension and hypocapnia), a max tree depth of 6, a subsampling rate of 0.5, and a logistic objective gave us good performance. All XGB models were run until their validation accuracy was non-improving for five rounds of adding estimators (trees). In order to train these models, we utilize 72 CPUs (Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz).
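The XGBoost configuration reported above can be written down compactly as follows; the hyperparameters are the ones stated for hypoxemia, while the early-stopping metric (log-loss on the validation set) and the surrounding data handling are stand-ins of ours for the validation-accuracy criterion described in the text.

```python
import xgboost as xgb

def train_hypoxemia_gbm(X_train, y_train, X_valid, y_valid):
    """GBM trees on the concatenated LSTM embeddings (hypoxemia settings)."""
    params = {
        "objective": "binary:logistic",
        "eta": 0.02,        # 0.1 was used for hypotension and hypocapnia
        "max_depth": 6,
        "subsample": 0.5,
        "eval_metric": "logloss",
    }
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)
    booster = xgb.train(params, dtrain, num_boost_round=10000,
                        evals=[(dvalid, "valid")],
                        early_stopping_rounds=5)   # stop after 5 non-improving rounds
    return booster
```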
FIG3 (model setup): Apart from the fine-tuned LSTMs, which were trained identically but were initialized with the non-target hospital's corresponding LSTM, all models follow the training described in Section 6.2. This figure is for hypoxemia, but hypocapnia and hypotension parallel this setup. Not counting fine-tuned LSTMs, there are a total of 60 LSTMs: 30 Auto models for hospitals 0/1 and 30 Min models for hospitals 0/1. LSTM/XGB architecture and hyperparameters are consistent across models and can be found in Section 6.2. The signals and the outputs of the LSTMs are vectors. Multiple connections into a single model are simply concatenated. All LSTMs consist of two layers each with 200 LSTM cells, trained in identical manners, as described in Section 6.2. For XGB, the training is detailed in Section 6.2 as well. The univariate predictions made in FIG3 are similarly obtained, but only utilize the single feature used to obtain the final prediction. Here, "Hypoxemia" means: "Is min(SaO2 (t+1,···,t+5)) ≤ 92?". Adjusted p-values for these comparisons are reported based on one hundred bootstraps of the test set, with adjusted pairwise comparisons via ANOVA with Tukey's HSD test; 0.0 denotes a p-value less than 1e−14. Since we are most concerned with models 10, 12, and 14, we only report pairs that include these models for the sake of brevity.

Leave One Signal Out Test: In FIG12, we create a simulated setting - when predicting each event, we excluded the corresponding physiological signal from our features. For example, we assumed that SaO2 is not recorded when predicting hypoxemia. Under this setting, we must rely on the remaining signals to predict hypoxemia. This setting is a more unsupervised evaluation in the sense that our outcome is not derived from a signal we create an embedding for. As our results show (FIG12), PHASE's outperformance is consistent for hypocapnia and hypotension. For hypoxemia, all representations perform poorly because predicting hypoxemia heavily relies on SaO2, leaving little signal for the remaining features. This is likely due in part to the low base rates of hypoxemia: 1.06% in hospital 0 and 2.32% in hospital 1. Investigating further, we found that the log loss of the Min embeddings was lower than other embeddings on the validation set, but not on the test set. This overfitting further suggests that there was little signal to be captured, causing simpler embeddings like EMA to be favored. Figure 3 (model setup): Showcasing what models are being used for the transference experiment in Figure 3. LSTM/XGB architecture and hyperparameters are consistent across models and can be found in Section 6.2. The signals and the outputs of the LSTMs are vectors. Multiple connections into a single XGB model are simply concatenated. All LSTMs consist of two layers each with 200 LSTM cells, trained in identical manners, as described in Section 6.2. For XGB, the training is detailed in Section 6.2 as well. The univariate predictions made in Figure 3 are similarly obtained, but only utilize the single feature used to obtain the final prediction. Here, "Hypoxemia" means: "Is min(SaO2 (t+1,···,t+5)) ≤ 92?". [The numeric tables of adjusted p-values for these comparisons, computed as described above, are omitted here.]

6.5 INTERPRETABILITY FIG1: Model setup for Figure 4. Showcasing what models are being used to evaluate interpretability. LSTM/XGB architecture and hyperparameters are consistent across models and can be found in Section 6.2. The signals and the outputs of the LSTMs are vectors. Multiple connections into a single XGB model are simply concatenated. Here, "Hypoxemia" means: "Is min(SaO2 (t+1,···,t+5)) ≤ 92?". Of special note, the MLP is trained identically to the Hypox models. The architecture is a single layer with 100 nodes with a relu activation connected densely into a sigmoid output node. The MLP is trained until convergence by upsampling the number of positive samples to match the negative samples for each batch. The attributions for LSTM→MLP are computed via Deep SHAP and the attributions for LSTM→XGB are computed via Deep SHAP combined with Independent Tree SHAP (our novel method). Both methods use a fixed set of 100 randomly sampled points from the test set. In FIG1, we present the local feature attributions for two different model stacks across a random set of examples.
We can see that the models generally appear to agree on their predictions, although there are occasional disagreements -which are likely due to the fact that these attributions are for two different models. Being able to observe these trends is useful to understanding models and to achieving credibility. As our primary aim is to ensure that our model stacking local feature attributions agree with feature attributions on neural networks, we also provide attributions for true positives and true negatives in settings where the models agree. DISPLAYFORM0 In FIG1, we present local feature attributions for two different model stacks across true positive examples. Looking at these true positive examples we can see two consistent trends: high variability and a low absolute value of blood oxygen. Looking at the attributions we can discover that the dips in blood oxygen -minute to minute variability was important in both model stacks. Additionally, the closer the time point is to the actual prediction, the more important it is. In FIG1, we present the local feature attributions for two different model stacks across true negative examples. Here we can see that variability and dips make a much smaller relative impact in the model predictions. Instead, the most important factor in determining hypoxemia is the high value of SaO 2 closest to the final prediction. Furthermore, the feature attributions reveal interesting trends in the attributions for the MLP model, where there appears to be a consistent trend even though the samples look fairly different. FIG1: Randomly sampled feature attributions. Local feature attribution plots for two stacked models: 1. LSTM→MLP and 2. LSTM→XGB. Here we present fifteen randomly sampled hypoxemia examples. Figure 13: True negative feature attributions. Local feature attribution plots for two stacked models: 1. LSTM→MLP and 2. LSTM→XGB. Here we present the nine "least probable" positively labelled hypoxemia examples. In order to obtain this set, we took the intersection of the top 1000 negatively labelled examples from both models to get a set of 97 samples and randomly sample nine samples. FIG1: True positive feature attributions. Local feature attribution plots for two stacked models: 1. LSTM→MLP and 2. LSTM→XGB. Here we present the nine "most probable" positively labelled hypoxemia examples. In order to obtain this set, we took the intersection of the top 100 positively labelled examples from both models to get a set of 40 samples and randomly sample nine samples.
Physiological signal embeddings for prediction performance and hospital transference with a general Shapley value interpretability method for stacked models.
We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients. Since the dictionary and coefficients, parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex. This was a major challenge until recently, when provable algorithms for dictionary learning were proposed. Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients. Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients. This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest. To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately. Our algorithm, NOODL, is also scalable and amenable for large scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations. Finally, we corroborate these theoretical via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques. Sparse models avoid overfitting by favoring simple yet highly expressive representations. Since signals of interest may not be inherently sparse, expressing them as a sparse linear combination of a few columns of a dictionary is used to exploit the sparsity properties. Of specific interest are overcomplete dictionaries, since they provide a flexible way of capturing the richness of a dataset, while yielding sparse representations that are robust to noise; see BID13;;. In practice however, these dictionaries may not be known, warranting a need to learn such representations -known as dictionary learning (DL) or sparse coding BID14. Formally, this entails learning an a priori unknown dictionary A ∈ R n×m and sparse coefficients x * (j) ∈ R m from data samples y (j) ∈ R n generated as DISPLAYFORM0 This particular model can also be viewed as an extension of the low-rank model BID15. Here, instead of sharing a low-dimensional structure, each data vector can now reside in a separate low-dimensional subspace. Therefore, together the data matrix admits a union-of-subspace model. As a of this additional flexibility, DL finds applications in a wide range of signal processing and machine learning tasks, such as denoising , image inpainting BID12, clustering and classification (; BID16 BID17 BID18 2019b; a), and analysis of deep learning primitives (; BID0 ; see also , and references therein. Notwithstanding the non-convexity of the associated optimization problems (since both factors are unknown), alternating minimization-based dictionary learning techniques have enjoyed significant success in practice. Popular heuristics include regularized least squares-based BID14 BID8 BID12 BID9 BID7, and greedy approaches such as the method of optimal directions (MOD) and k-SVD . 
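For reference, the regularized least-squares formulation (P1) referred to above, whose display did not survive extraction, typically takes the following form; this is a standard statement reconstructed from the surrounding text rather than a quotation of the paper.

\[
\min_{A,\;\{x_{(j)}\}} \;\; \sum_{j=1}^{p} \Big( \tfrac{1}{2}\,\big\|y_{(j)} - A\,x_{(j)}\big\|_2^2 \;+\; S\big(x_{(j)}\big) \Big), \tag{P1}
\]
where \(S(\cdot)\) is a sparsity-promoting penalty (for instance, \(S(x)=\lambda\|x\|_1\)).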
However, dictionary learning, and matrix factorization models in general, are difficult to analyze in theory; see also BID10.To this end, motivated from a string of recent theoretical works BID1 BID4 ), provable algorithms for DL have been proposed recently to explain the success of aforementioned alternating minimization-based algorithms (; ; BID20 . However, these works exclusively focus on guarantees for dictionary recovery. On the other hand, for applications of DL in tasks such as classification and clusteringwhich rely on coefficient recovery -it is crucial to have guarantees on coefficients recovery as well. Contrary to conventional prescription, a sparse approximation step after recovery of the dictionary does not help; since any error in the dictionary -which leads to an error-in-variables (EIV) model for the dictionary -degrades our ability to even recover the support of the coefficients . Further, when this error is non-negligible, the existing guarantee recovery of the sparse coefficients only in 2 -norm sense . As a , there is a need for scalable dictionary learning techniques with guaranteed recovery of both factors. In this work, we present a simple online DL algorithm motivated from the following regularized least squares-based problem, where S(·) is a nonlinear function that promotes sparsity. S(x (j) ).Although our algorithm does not optimize this objective, it leverages the fact that the problem (P1) is convex w.r.t A, given the sparse coefficients {x (j) }. Following this, we recover the dictionary by choosing an appropriate gradient descent-based strategy . To recover the coefficients, we develop an iterative hard thresholding (IHT)-based update step BID3 ), and show that -given an appropriate initial estimate of the dictionary and a mini-batch of p data samples at each iteration t of the online algorithmalternating between this IHT-based update for coefficients, and a gradient descent-based step for the dictionary leads to geometric convergence to the true factors, i.e., x (j) →x * (j) and A i →A * i as t→∞. In addition to achieving exact recovery of both factors, our algorithm -Neurally plausible alternating Optimization-based Online Dictionary Learning (NOODL) -has linear convergence properties. Furthermore, it is scalable, and involves simple operations, making it an attractive choice for practical DL applications. Our major contributions are summarized as follows:• Provable coefficient recovery: To the best of our knowledge, this is the first on exact recovery of the sparse coefficients {x * (j) }, including their support recovery, for the DL problem. The proposed IHT-based strategy to update coefficient under the EIV model, is of independent interest for recovery of the sparse coefficients via IHT, which is challenging even when the dictionary is known; see also and BID11.• Unbiased estimation of factors and linear convergence: The recovery guarantees on the coefficients also helps us to get rid of the bias incurred by the prior-art in dictionary estimation. Furthermore, our technique geometrically converges to the true factors.• Online nature and neural implementation: The online nature of algorithm, makes it suitable for machine learning applications with streaming data. In addition, the separability of the coefficient update allows for distributed implementations in neural architectures (only involves simple linear and non-linear operations) to solve large-scale problems. 
To showcase this, we also present a prototype neural implementation of NOODL.In addition, we also verify these theoretical properties of NOODL through experimental evaluations on synthetic data, and compare its performance with state-of-the-art provable DL techniques. With the success of the alternating minimization-based techniques in practice, a push to study the DL problem began when BID1 showed that for m = n, the solution pair (A *, X *) lies at a local minima of the following non-convex optimization program, where X = [x, x,..., x (p) ] and Y = [y, y,..., y (p) ], with high probability over the randomness of the coefficients, min O * √ n µ log(n) DISPLAYFORM0 The algorithms discussed above implicitly assume that the coefficients can be recovered, after dictionary recovery, via some sparse approximation technique. However, as alluded to earlier, the guarantees for coefficient recovery -when the dictionary is known approximately -may be limited to some 2 norm bounds . This means that, the ing coefficient estimates may not even be sparse. Therefore, for practical applications, there is a need for efficient online algorithms with guarantees, which serves as the primary motivation for our work. We now detail the specifics of our algorithm -NOODL, outlined in Algorithm 1. NOODL recovers both the dictionary and the coefficients exactly given an appropriate initial estimate A of the dictionary. Specifically, it requires A to be-close to A * for 0 = O * (1/ log(n)), where (, κ)-closeness is defined as follows. This implies that, the initial dictionary estimate needs to be column-wise, and in spectral norm sense, close to A *, which can be achieved via certain initialization algorithms, such as those presented in. Given an integer n, we denote [n] = {1, 2, . . ., n}. The bold upper-case and lower-case letters are used to denote matrices M and vectors v, respectively. Mi, M (i,:), Mij, and vi (and v(i) ) denote the i-th column, i-th row, (i, j) element of a matrix, and i-th element of a vector, respectively. The superscript (·) (n) denotes the n-th iterate, while the subscript (·) (n) is reserved for the n-th data sample. Given a matrix M, we use M and M F as the spectral norm and Frobenius norm. Given a vector v, we use v, v 0, and v 1 to denote the 2 norm, 0 (number of non-zero entries), and 1 norm, respectively. We also use standard notations O(·), Ω(·) (O(·), Ω(·)) to indicate the asymptotic behavior (ignoring logarithmic factors). Further, we use g(n) = O * (f (n)) to indicate that g(n) ≤ Lf (n) for a small enough constant L, which is independent of n. We use c(·) for constants parameterized by the quantities in (·). Tτ (z):= z · 1 |z|≥τ denotes the hardthresholding operator, where "1" is the indicator function. We use supp(·) for the support (the set of non-zero elements) and sign(·) for the element-wise sign. DISPLAYFORM0 Form empirical gradient estimate: DISPLAYFORM1 Take a gradient descent step: DISPLAYFORM2 Normalize: DISPLAYFORM3 Due to the streaming nature of the incoming data, NOODL takes a mini-batch of p data samples at the t-th iteration of the algorithm, as shown in Algorithm 1. It then proceeds by alternating between two update stages: coefficient estimation ("Predict") and dictionary update ("Learn") as follows. Predict Stage: For a general data sample y = A * x *, the algorithm begins by forming an initial coefficient estimate x based on a hard thresholding (HT) step as shown in, where T τ (z):= z · 1 |z|≥τ for a vector z. 
Given this initial estimate x, the algorithm iterates over R = Ω(log(1/δ R))IHT-based steps to achieve a target tolerance of δ R, such that DISPLAYFORM4 x is the learning rate, and τ (r) is the threshold at the r-th iterate of the IHT. In practice, these can be fixed to some constants for all iterations; see A.6 for details. Finally at the end of this stage, we have estimate DISPLAYFORM5 Learn Stage: Using this estimate of the coefficients, we update the dictionary at t-th iteration A (t) by an approximate gradient descent step, using the empirical gradient estimate and the learning rate η A = Θ(m/k); see also A.5. Finally, we normalize the columns of the dictionary and continue to the next batch. The running time of each step t of NOODL is therefore O(mnp log(1/δ R)).For a target tolerance of T and δ T, such that A DISPLAYFORM6 NOODL uses an initial HT step and an approximate gradient descent-based strategy as in. Following which, our IHT-based coefficient update step yields an estimate of the coefficients at each iteration of the online algorithm. Coupled with the guaranteed progress made on the dictionary, this also removes the bias in dictionary estimation. Further, the simultaneous recovery of both factors also avoids an often expensive post-processing step for recovery of the coefficients. We start by introducing a few important definitions. First, as discussed in the previous section we require that the initial estimate A of the dictionary is-close to A *. In fact, we require this closeness property to hold at each subsequent iteration t, which is a key ingredient in our analysis. This initialization achieves two goals. First, the σ(i)A π(i) − A Definition 3. A matrix A ∈ R n×m with unit-norm columns is µ-incoherent if for all i = j the inner-product between the columns of the matrix follow DISPLAYFORM0 The incoherence parameter measures the degree of closeness of the dictionary elements. Smaller values (i.e., close to 0) of µ are preferred, since they indicate that the dictionary elements do not resemble each other. This helps us to effectively tell dictionary elements apart (; Candès and). We assume that µ = O(log(n)) . Next, we assume that the coefficients are drawn from a distribution class D defined as follows. Definition 4 (Distribution class D). The coefficient vector x * belongs to an unknown distribution D, where the support S = supp(x *) is at most of size k, DISPLAYFORM1 i |i ∈ S] = 1, and when i ∈ S, |x * i | ≥ C for some constant C ≤ 1. In addition, the non-zero entries are sub-Gaussian and pairwise independent conditioned on the support. The randomness of the coefficient is necessary for our finite sample analysis of the convergence. Here, there are two sources of randomness. The first is the randomness of the support, where the non-zero elements are assumed to pair-wise independent. The second is the value an element in the support takes, which is assumed to be zero mean with variance one, and bounded in magnitude. Similar conditions are also required for support recovery of sparse coefficients, even when the dictionary is known . Note that, although we only consider the case |x * i | ≥ C for ease of discussion, analogous may hold more generally for x * i s drawn from a distribution with sufficiently (exponentially) small probability of taking values in [−C, C].Recall that, given the coefficients, we recover the dictionary by making progress on the least squares objective (P1) (ignoring the term penalizing S(·)). 
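As a concrete illustration of the Predict/Learn alternation just described, the following NumPy sketch implements one outer iteration of Algorithm 1. The constant step size and threshold stand in for the iteration-dependent choices analyzed below, and the function is a sketch under those assumptions rather than the authors' implementation.

```python
import numpy as np

def noodl_iteration(A, Y, k, R=50, eta_x=0.2, tau=0.1):
    """One outer iteration of NOODL (Predict + Learn stages), as a sketch.

    A : (n, m) current dictionary estimate with (approximately) unit-norm columns.
    Y : (n, p) mini-batch of fresh samples, Y ~= A* X*.
    k : assumed sparsity level, used only to set the dictionary step size
        eta_A = Theta(m/k) (assumption A.5); the 0.2 constant is illustrative.
    The constants eta_x and tau follow the values used in the experiments,
    standing in for the iteration-dependent choices of assumption A.6.
    """
    n, m = A.shape
    p = Y.shape[1]
    eta_A = 0.2 * m / k

    # Predict stage: hard-thresholded initialization, then R IHT-type refinements.
    X = A.T @ Y
    X = X * (np.abs(X) >= tau)
    for _ in range(R):
        X = X + eta_x * (A.T @ (Y - A @ X))
        X = X * (np.abs(X) >= tau)

    # Learn stage: approximate gradient step, then column renormalization.
    grad = (A @ X - Y) @ np.sign(X).T / p
    A = A - eta_A * grad
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    return A, X
```

Running this repeatedly on fresh mini-batches, starting from a dictionary estimate satisfying the closeness condition, mirrors the online procedure whose convergence is analyzed next.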
Note that, our algorithm is based on finding an appropriate direction to ensure descent based on the geometry of the objective. To this end, we adopt a gradient descent-based strategy for dictionary update. However, since the coefficients are not exactly known, this in an approximate gradient descent-based approach, where the empirical gradient estimate is formed as. In our analysis, we establish the conditions under which both the empirical gradient vector (corresponding to each dictionary element) and the gradient matrix concentrate around their means. To ensure progress at each iterate t, we show that the expected gradient vector is (Ω(k/m), Ω(m/k), 0)-correlated with the descent direction, defined as follows. DISPLAYFORM2 This can be viewed as a local descent condition which leads to the true dictionary columns; see also Candès et al., and. In convex optimization literature, this condition is implied by the 2ρ − -strong convexity, and 1/2ρ + -smoothness of the objective. We show that for NOODL, ζ t = 0, which facilitates linear convergence to A * without incurring any bias. Overall our specific model assumptions for the analysis can be formalized as: DISPLAYFORM3 The coefficients are drawn from the distribution class D, as per Def. 4; A.3 The sparsity k satisfies k = O * (√ n/µ log(n)); A.4 A is-close to A * as per Def. 1, and 0 = O * (1/ log(n)); A.5 The step-size for dictionary update satisfies η A = Θ(m/k);A.6 The step-size and threshold for coefficient estimation satisfies η (r) x < c 1 (t, µ, n, k) = Ω(k/ √ n) < 1 and τ (r) = c 2 (t, µ, k, n) = Ω(k 2 /n) for small constants c 1 and c 2.We are now ready to state our main . A summary of the notation followed by a details of the analysis is provided in Appendix A and Appendix B, respectively. Theorem 1 (Main Result). Suppose that assumptions A.1-A.6 hold, and Algorithm 1 is provided with p = Ω(mk 2) new samples generated according to model at each iteration t. Then, with DISPLAYFORM4 alg, given R = Ω(log(n)), the coefficient estimate x (t) i at t-th iteration has the correct signed-support and satisfies (x DISPLAYFORM5 Furthermore, for some 0 < ω < 1/2, the estimate A (t) at (t)-th iteration satisfies DISPLAYFORM6 2, for all t = 1, 2,..... Our main establishes that when the model satisfies A.1∼A.3, the errors corresponding to the dictionary and coefficients geometrically decrease to the true model parameters, given appropriate dictionary initialization and learning parameters (step sizes and threshold); see A.4∼A.6. In other words, to attain a target tolerance of T and δ T, where DISPLAYFORM7 R is the target decay tolerance for the IHT steps. An appropriate number of IHT steps, R, remove the dependence of final coefficient error (per outer iteration) on the initial x. , this dependence in fact in an irreducible error, which is the source of bias in dictionary estimation. As a , since (for NOODL) the error in the coefficients only depends on the error in the dictionary, it can be made arbitrarily small, at a geometric rate, by the choice of T, δ T, and δ R. Also, note that, NOODL can tolerate i.i.d. noise, as long as the noise variance is controlled to enable the concentration to hold; we consider the noiseless case here for ease of discussion, which is already highly involved. Intuitively, Theorem 1 highlights the symbiotic relationship between the two factors. It shows that, to make progress on one, it is imperative to make progress on the other. 
The primary condition that allows us to make progress on both factors is the signed-support recovery (Def. 2). However, the introduction of IHT step adds complexity in the analysis of both the dictionary and coefficients. To analyze the coefficients, in addition to deriving conditions on the parameters to preserve the correct signed-support, we analyze the recursive IHT update step, and decompose the noise term into a component that depends on the error in the dictionary, and the other that depends on the initial coefficient estimate. For the dictionary update, we analyze the interactions between elements of the coefficient vector (introduces by the IHT-based update step) and show that the gradient vector for the dictionary update is (Ω(k/m), Ω(m/k), 0)-correlated with the descent direction. In the end, this leads to exact recovery of the coefficients and removal of bias in the dictionary estimation. Note that our analysis pipeline is standard for the convergence analysis for iterative algorithms. However, the introduction of the IHT-based strategy for coefficient update makes the analysis highly involved as compared to existing , e.g., the simple HT-based coefficient estimate in.NOODL has an overall running time of O(mnp log(1/δ R) max(log(1/ T), log(√ k/δ T)) to achieve target tolerances T and δ T, with a total sample complexity of p·T = Ω(mk 2). Thus to remove bias, the IHT-based coefficient update introduces a factor of log(1/δ R) in the computational complexity as compared to (has a total sample complexity of p · T = Ω(mk)), and also does not have the exponential running time and sample complexity as; see TAB0. The neural plausibility of our algorithm implies that it can be implemented as a neural network. This is because, NOODL employs simple linear and non-linear operations (such as inner-product and hard-thresholding) and the coefficient updates are separable across data samples, as shown in of Algorithm 1. To this end, we present a neural implementation of our algorithm in FIG2, which showcases the applicability of NOODL in large-scale distributed learning tasks, motivated from the implementations described in BID14 and .The neural architecture shown in FIG2 has three layers -input layer, weighted residual evaluation layer, and the output layer. The input to the network is a data and step-size pair (y (j), η x ) to each input node. Given an input, the second layer evaluates the weighted residuals as shown in FIG2. Finally, the output layer neurons evaluate the IHT iterates x (r+1) (j). We illustrate the operation of this architecture using the timing diagram in FIG2. The main stages of operation are as follows. Output: DISPLAYFORM0 The timing sequence of the neural implementation. Initial Hard Thresholding Phase: The coefficients initialized to zero, and an input (y (j), 1) is provided to the input layer at a time instant = 0, which communicates these to the second layer. Therefore, the residual at the output of the weighted residual evaluation layer evaluates to y (j) at = 1. Next, at = 2, this residual is communicated to the output layer, which in evaluation of the initialization x (j) as per. This iterate is communicated to the second layer for the next residual evaluation. Also, at this time, the input layer is injected with (y (j), η x ) to set the step size parameter η x for the IHT phase, as shown in FIG2 Iterative Hard Thresholding (IHT) Phase: Beginning = 3, the timing sequence enters the IHT phase. 
Here, the output layer neurons communicate the iterates x (r+1) (j) to the second layer for evaluation of subsequent iterates as shown in FIG2. The process then continues till the time instance = 2R + 1, for R = Ω(log(1/δ R)) to generate the final coefficient estimate x DISPLAYFORM0 for the current batch of data. At this time, the input layer is again injected with (y (j), 1) to prepare the network for residual sharing and gradient evaluation for dictionary update. The procedure now enters the dictionary update phase, denoted as "Hebbian Learning" in the timing sequence. In this phase, each output layer neuron communicates the final coefficient estimate x (t) (j) = x (R) (j) to the second layer, which evaluates the residual for one last time (with η x = 1), and shares it across all second layer neurons ("Hebbian learning"). This allows each second layer neuron to evaluate the empirical gradient estimate, which is used to update the current dictionary estimate (stored as weights) via an approximate gradient descent step. This completes one outer iteration of Algorithm 1, and the process continues for T iterations to achieve target tolerances T and δ T, with each step receiving a new mini-batch of data. We now analyze the convergence properties and sample complexity of NOODL via experimental evaluations 2. The experimental data generation set-up, additional , including analysis of computational time, are shown in Appendix E. We compare the performance of our algorithm NOODL with the current state-of-the-art alternating optimization-based online algorithms presented in , and the popular algorithm presented in BID12 (denoted as Mairal '09). First of these, Arora15(''biased''), is a simple neurally plausible method which incurs a bias and has a sample complexity of Ω(mk). The other, referred to as Arora15(''unbiased''), incurs no bias as per , but the sample complexity were not established. FIG4, (b-i), (c-i), and (d-i) show the performance of the aforementioned methods for k = 10, 20, 50, and 100, respectively. Here, for all experiments we set η x = 0.2 and τ = 0.1. We terminate NOODL when the error in dictionary is less than 10 −10. Also, for coefficient update, we terminate when change in the iterates is below 10 −12. For k = 10, 20 and k = 50, FIG4: Comparative analysis of convergence properties. Panels (a-i), (b-i), (c-i), and (d-i) show the convergence of NOODL, Arora15(''biased''), Arora15(''unbiased'') and Mairal'09, for different sparsity levels for n = 1000, m = 1500 and p = 5000. Since NOODL also recovers the coefficients, we show the corresponding recovery of the dictionary, coefficients, and overall fit in panels (a-ii), (b-ii), (c-ii), and (d-ii), respectively. Further, panels (e-i) and (e-ii) show the phase transition in samples p (per iteration) with the size of the dictionary m averaged across 10 Monte Carlo simulations for the two factors. Here, n = 100, k = 3, ηx = 0.2, τ = 0.1, 0 = 2/ log(n), ηA is chosen as per A.5. A trial is considered successful if the relative Frobenius error incurred by A and X is below 5 × 10 −7 after 50 iterations. we note that Arora15(''biased'') and Arora15(''unbiased'') incur significant bias, while NOODL converges to A * linearly. NOODL also converges for significantly higher choices of sparsity k, i.e., for k = 100 as shown in panel (d), beyond k = O(√ n), indicating a potential for improving this bound. Further, we observe that Mairal'09 exhibits significantly slow convergence as compared to NOODL. 
Also, in panels (a-ii), (b-ii), (c-ii) and (d-ii) we show the corresponding performance of NOODL in terms of the error in the overall fit (Y − AX F / Y F), and the error in the coefficients and the dictionary, in terms of relative Frobenius error metric discussed above. We observe that the error in dictionary and coefficients drops linearly as indicated by our main . We consider the online DL setting in this work. We note that, empirically NOODL works for the batch setting also. However, analysis for this case will require more sophisticated concentration , which can address the ing dependence between iterations of the algorithm. In addition, our experiments indicate that NOODL works beyond the sparsity ranges prescribed by our theoretical . Arguably, the bounds on sparsity can potentially be improved by moving away from the incoherence-based analysis. We also note that in our experiments, NOODL converges even when initialized outside the prescribed initialization region, albeit it achieves the linear rate once it satisfies the closeness condition A.4. These potential directions may significantly impact the analysis and development of provable algorithms for other factorization problems as well. We leave these research directions, and a precise analysis under the noisy setting, for future explorations. We present NOODL, to the best of our knowledge, the first neurally plausible provable online algorithm for exact recovery of both factors of the dictionary learning (DL) model. NOODL alternates between: (a) an iterative hard thresholding (IHT)-based step for coefficient recovery, and (b) a gradient descent-based update for the dictionary, ing in a simple and scalable algorithm, suitable for large-scale distributed implementations. We show that once initialized appropriately, the sequence of estimates produced by NOODL converge linearly to the true dictionary and coefficients without incurring any bias in the estimation. Complementary to our theoretical and numerical , we also design an implementation of NOODL in a neural architecture for use in practical applications. In essence, the analysis of this inherently non-convex problem impacts other matrix and tensor factorization tasks arising in signal processing, collaborative filtering, and machine learning. We summarizes the definitions of some frequently used symbols in our analysis in TAB1. In addition, we use D (v) as a diagonal matrix with elements of a vector v on the diagonal. Given a matrix M, we use M −i to denote a ing matrix without i-th column. Also note that, since we show that A (t)i − A * i ≤ t contracts in every step, therefore we fix t, 0 = O * (1/ log(n)) in our analysis. i-th column of the dictionary estimate at the t-th iterate. DISPLAYFORM0 Upper-bound on column-wise error at the t-th iterate. DISPLAYFORM1 Incoherence between the columns of A (t); See Claim 1. DISPLAYFORM2 Inner-product between the error and the dictionary element. DISPLAYFORM3 j on the diagonal for j ∈ S. i-th element the coefficient estimate at the r-th IHT iterate. DISPLAYFORM0 Decay parameter for coefficients. DISPLAYFORM1 Error in non-zero elements of the coefficient vector. DISPLAYFORM2 j with probability at least (1 − δ DISPLAYFORM3 1 F x * is the indicator function corresponding to the event that sign(x *) = sign(x), denoted by F x *, and similarly for the complement F x * B PROOF OF THEOREM 1We now prove our main . The detailed proofs of intermediate lemmas and claims are organized in Appendix C and Appendix D, respectively. 
Furthermore, the standard concentration are stated in Appendix F for completeness. Also, see TAB2 for a map of dependence between the . Given an-close estimate of the dictionary, the main property that allows us to make progress on the dictionary is the recovery of the correct sign and support of the coefficients. Therefore, we first show that the initial coefficient estimate recovers the correct signed-support in Step I.A. Now, the IHT-based coefficient update step also needs to preserve the correct signed-support. This is to ensure that the approximate gradient descent-based update for the dictionary makes progress. Therefore, in Step I.B, we derive the conditions under which the signed-support recovery condition is preserved by the IHT update. To get a handle on the coefficients, in Step II.A, we derive an upper-bound on the error incurred by each non-zero element of the estimated coefficient vector, i.e., | x i − x * i | for i ∈ S for a general coefficient vector x *, and show that this error only depends on t (the column-wise error in the dictionary) given enough IHT iterations R as per the chosen decay parameter δ R. In addition, for analysis of the dictionary update, we develop an expression for the estimated coefficient vector inStep II.B.We then use the coefficient estimate to show that the gradient vector satisfies the local descent condition (Def. 5). This ensures that the gradient makes progress after taking the gradient descent-based step. To begin, we first develop an expression for the expected gradient vector (corresponding to each dictionary element) in Step III.A. Here, we use the closeness property Def 1 of the dictionary estimate. Further, since we use an empirical estimate, we show that the empirical gradient vector concentrates around its mean in Step III.B. Now using Lemma 15, we have that descent along this direction makes progress. Step IV.A and Step IV.B, we show that the updated dictionary estimate maintains the closeness property Def 1. This sets the stage for the next dictionary update iteration. As a , our main establishes the conditions under which any t-th iteration succeeds. Our main is as follows. Theorem 1 (Main Result) Suppose that assumptions A.1-A.6 hold, and Algorithm 1 is provided with p = Ω(mk 2) new samples generated according to model at each iteration t. Then, with DISPLAYFORM0 i at t-th iteration has the correct signed-support and satisfies DISPLAYFORM1 Furthermore, for some 0 < ω < 1/2, the estimate A (t) at (t)-th iteration satisfies DISPLAYFORM2 alg is some small constant, where δ DISPLAYFORM3 As a first step, we ensure that our coefficient estimate has the correct signed-support (Def. 2). To this end, we first show that the initialization has the correct signed-support, and then show that the iterative hard-thresholding (IHT)-based update step preserves the correct signed-support for a suitable choice of parameters.• Step I.A: Showing that the initial coefficient estimate has the correct signed-supportGiven an-close estimate A of A *, we first show that for a general sample y the initialization step recovers the correct signed-support with probability at least (1−δ DISPLAYFORM0). This is encapsulated by the following lemma. Lemma 1 (Signed-support recovery by coefficient initialization step). 
Suppose A (t) DISPLAYFORM1, and t = O * (1/ log(m)), with probability at least (1 − δ (t)T ) for each random sample y = A * x *: DISPLAYFORM2 ).Note that this only requires the dictionary to be column-wise close to the true dictionary, and works for less stringent conditions on the initial dictionary estimate, i.e., requires DISPLAYFORM3; see also .• Step I.B: The iterative IHT-type updates preserve the correct signed support-Next, we show that the IHT-type coefficient update step preserves the correct signed-support for an appropriate choice of step-size parameter η (r)x and threshold τ (r). The choice of these parameters arises from the analysis of the IHT-based update step. Specifically, we show that at each iterate r, the step-size η (r)x should be chosen to ensure that the component corresponding to the true coefficient value is greater than the "interference" introduced by other non-zero coefficient elements. Then, if the threshold is chosen to reject this "noise", each iteration of the IHT-based update step preserves the correct signed-support. Lemma 2 (IHT update step preserves the correct signed-support). Suppose DISPLAYFORM4 T ), each iterate of the IHT-based coefficient update step shown in has the correct signed-support, if for a constant c DISPLAYFORM5 the step size is chosen as η DISPLAYFORM6 1, and the threshold τ (r) is chosen as DISPLAYFORM7 for some constants c 1 and c 2. Here, DISPLAYFORM8 ),and δ DISPLAYFORM9 ).Note that, although we have a dependence on the iterate r in choice of η (r)x and τ (r), these can be set to some constants independent of r. In practice, this dependence allows for greater flexibility in the choice of these parameters. We now derive an upper-bound on the error incurred by each non-zero coefficient element. Further, we derive an expression for the coefficient estimate at the t-th round of the online algorithm x (t):= x (R); we use x instead of x (t) for simplicity.• Step II.A: Derive a bound on the error incurred by the coefficient estimate-Since Lemma 2 ensures that x has the correct signed-support, we now focus on the error incurred by each coefficient element on the support by analyzing x. To this end, we carefully analyze the effect of the recursive update, to decompose the error incurred by each element on the support into two components -one that depends on the initial coefficient estimate xand other that depends on the error in the dictionary. We show that the effect of the component that depends on the initial coefficient estimate diminishes by a factor of (1 − η x + η x µt √ n) at each iteration r. Therefore, for a decay parameter δ R, we can choose the number of IHT iterations R, to make this component arbitrarily small. Therefore, the error in the coefficients only depends on the per column error in the dictionary, formalized by the following . Lemma 3 (Upper-bound on the error in coefficient estimation). With probability at least (1 − δ DISPLAYFORM0 T) the error incurred by each element i 1 ∈ supp(x *) of the coefficient estimate is upper-bounded as DISPLAYFORM1 ), and µ t is the incoherence between the columns of A (t); see Claim 1.This allows us to show that if the column-wise error in the dictionary decreases at each iteration t, then the corresponding estimates of the coefficients also improve.• Step II.B: Developing an expression for the coefficient estimate-Next, we derive the expression for the coefficient estimate in the following lemma. This expression is used to analyze the dictionary update. 
Lemma 4 (Expression for the coefficient estimate at the end of R-th IHT iteration). With probability at least (1 − δ DISPLAYFORM2 β) the i 1 -th element of the coefficient estimate, for each i 1 ∈ supp(x *), is given by DISPLAYFORM3 Here, ϑ DISPLAYFORM4 ) and δ DISPLAYFORM5 ).We again observe that the error in the coefficient estimate depends on the error in the dictionary via λ Given the coefficient estimate we now show that the choice of the gradient as shown in makes progress at each step. To this end, we analyze the gradient vector corresponding to each dictionary element to see if it satisfies the local descent condition of Def. 5. Our analysis of the gradient is motivated from. However, as opposed to the simple HT-based coefficient update step used by , our IHT-based coefficient estimate adds to significant overhead in terms of analysis. Notwithstanding the complexity of the analysis, we show that this allows us to remove the bias in the gradient estimate. To this end, we first develop an expression for each expected gradient vector, show that the empirical gradient estimate concentrates around its mean, and finally show that the empirical gradient vector is (Ω(k/m), Ω(m/k), 0)-correlated with the descent direction, i.e. has no bias.• Step III.A: Develop an expression for the expected gradient vector corresponding to each dictionary element-The expression for the expected gradient vector g DISPLAYFORM0 j corresponding to j-th dictionary element is given by the following lemma. Lemma 5 (Expression for the expected gradient vector). Suppose that A (t) is (t, 2)-near to A *. Then, the dictionary update step in Algorithm 1 amounts to the following for the j-th dictionary element DISPLAYFORM1 j is given by g DISPLAYFORM2 • Step III.B: Show that the empirical gradient vector concentrates around its expectation-Since we only have access to the empirical gradient vectors, we show that these concentrate around their expected value via the following lemma. Lemma 6 (Concentration of the empirical gradient vector). Given p = Ω(mk 2) samples, the empirical gradient vector estimate corresponding to the i-th dictionary element, g (t)i concentrates around its expectation, i.e., g DISPLAYFORM3 • Step III.C: Show that the empirical gradient vector is correlated with the descent directionNext, in the following lemma we show that the empirical gradient vector g DISPLAYFORM4 j is correlated with the descent direction. This is the main which enables the progress in the dictionary (and coefficients) at each iteration t. Lemma 7 (Empirical gradient vector is correlated with the descent direction). Suppose DISPLAYFORM5, and for any t ∈ [T], DISPLAYFORM6 This ensures for at any t ∈ [T], the gradient descent-based updates made via FORMULA4 gets the columns of the dictionary estimate closer to the true dictionary, i.e., t+1 ≤ t. Moreover, this step requires closeness between the dictionary estimate A (t) and A *, in the spectral norm-sense, as per Def 1. As discussed above, the closeness property (Def 1) is crucial to show that the gradient vector is correlated with the descent direction. Therefore, we now ensure that the updated dictionary A (t+1) maintains this closeness property. Lemma 7 already ensures that t+1 ≤ t. As a , we show that A (t+1) maintains closeness in the spectral norm-sense as required by our algorithm, i.e., that it is still (t+1, 2)-close to the true dictionary. 
Also, since we use the gradient matrix in this analysis, we show that the empirical gradient matrix concentrates around its mean.• Step IV.A: The empirical gradient matrix concentrates around its expectation: We first show that the empirical gradient matrix concentrates as formalized by the following lemma. Lemma 8 (Concentration of the empirical gradient matrix). With probability at least DISPLAYFORM0 Step IV.B: The "closeness" property is maintained after the updates made using the empirical gradient estimate: Next, the following lemma shows that the updated dictionary Amaintains the closeness property. Lemma 9 (A (t+1) maintains closeness). Suppose A (t) is (t, 2) near to A * with t = O * (1/ log(n)), and number of samples used in step t is p = Ω(mk 2), then with probability DISPLAYFORM1 Proof of Theorem 1. From Lemma 7 we have that with probability at least (1 − δ DISPLAYFORM0 j is (Ω(k/m), Ω(m/k), 0)-correlated with A * j. Further, Lemma 9 ensures that each iterate maintains the closeness property. Now, applying Lemma 15 we have that, for η A ≤ Θ(m/k), with probability at least (1 − δ DISPLAYFORM1 0 . where for 0 < ω < 1/2 with ω = Ω(k/m)η A. That is, the updates converge geometrically to A *. Further, from Lemma 3, we have that the on the error incurred by the coefficients. Here, Published as a conference paper at ICLR 2019 DISPLAYFORM0 ). That is, the updates converge geometrically to A *. Further, from Lemma 3, we have that the error in the coefficients only depends on the error in the dictionary, which leads us to our on the error incurred by the coefficients. This completes the proof of our main . We present the proofs of the Lemmas used to establish our main . Also, see TAB2 for a map of dependence between the , and Appendix D for proofs of intermediate . Claim 8Claim 9Lemma 7Lemma 8Claim 10Lemma 9Theorem 1Proof of Lemma 1. Let y ∈ R n be general sample generated as y = A * x *, where x * ∈ R m is a sparse random vector with support S = supp(x *) distributed according to D.4.The initial decoding step at the t-th iteration (shown in Algorithm 1) involves evaluating the innerproduct between the estimate of the dictionary A (t), and y. The i-th element of the ing vector can be written as DISPLAYFORM0 where DISPLAYFORM1 2 ≤ t and DISPLAYFORM2, otherwise. Now, we focus on the w i and show that it is small. By the definition of w i we have DISPLAYFORM3 Here, since var(x *) = 1, w i is a zero-mean random variable with variance DISPLAYFORM4 Now, each term in this sum can be bounded as, DISPLAYFORM5 Therefore, we have the following as per our assumptions on µ and k, DISPLAYFORM6 using Gershgorin Circle Theorem . Therefore, we have DISPLAYFORM7 Finally, we have that DISPLAYFORM8 Now, we apply the Chernoff bound for sub-Gaussian random variables w i (shown in Lemma 12) to conclude that DISPLAYFORM9 ).Further, w i corresponding to each m should follow this bound, applying union bound we conclude that DISPLAYFORM10 T.Proof of Lemma 2. Consider the (r + 1)-th iterate x (r+1) for the t-th dictionary iterate, where DISPLAYFORM11 ≤ t for all i ∈ [1, m] evaluated as the following by the update step described in Algorithm 1, DISPLAYFORM12 where ηx < 1 is the learning rate or the step-size parameter. 
Now, using Lemma 1 we know that x has the correct signed-support with probability at least (1 − δ DISPLAYFORM13 can be written as DISPLAYFORM14 we can write the (r + 1)-th iterate of the coefficient update step using as DISPLAYFORM15 Further, the j-th entry of this vector is given by DISPLAYFORM16 We now develop an expression for the j-th element of each of the term in FORMULA79 as follows. First, we can write the first term as DISPLAYFORM17 Next, the second term in can be expressed as DISPLAYFORM18 Finally, we have the following expression for the third term, DISPLAYFORM19 Now using our definition of λ DISPLAYFORM20 2, combining all the for, and using the fact that since A (t) is close to A *, vectors A (t) j − A * j and A * j enclose an obtuse angle, we have the following for the j-th entry of the (r + 1)-th iterate, x (r+1) is given by DISPLAYFORM21 Here ξ (r+1) j is defined as DISPLAYFORM22, we can write ξ DISPLAYFORM23 where β (t) j is defined as DISPLAYFORM24 Note that β (t) j does not change for each iteration r of the coefficient update step. Further, by Claim 2 we show that |β DISPLAYFORM25 where ξ DISPLAYFORM26. Further, using Claim 1, DISPLAYFORM27 since x (r−1) − x * 1 = O(k). Therefore, for the (r + 1)-th iteration, we choose the threshold to be DISPLAYFORM28 and the step-size by setting the "noise" component of FORMULA84 to be smaller than the "signal" part, specifically, half the signal component, i.e., DISPLAYFORM29 Also, since we choose the threshold as τ (r):= η DISPLAYFORM30 max, where x min = C/2, we have the following for the (r + 1)-th iteration, DISPLAYFORM31 Therefore, for this step we choose η DISPLAYFORM32 Therefore, η DISPLAYFORM33 can be chosen as DISPLAYFORM34. In addition, if we set all η (r) DISPLAYFORM35 Further, since we initialize with the hard-thresholding step, the entries in |x | ≥ C/2. Here, we define ξ FORMULA0, we have DISPLAYFORM36 From Claim 2 we have that |β (t) i1 | ≤ t β with probability at least (1 − δ (t) β ). Further, using Claim 1, and letting C DISPLAYFORM37 Rearranging the expression for (r + 1)-th update, and using we have the following upperbound DISPLAYFORM38. i1, where we define q= (1 − η DISPLAYFORM0 Here, α DISPLAYFORM1 is defined as DISPLAYFORM2 Our aim now will be to express C DISPLAYFORM3.., i k, and all η x = η x. Then, using Claim 3 we have the following expression for C DISPLAYFORM4 Here, DISPLAYFORM5 Next from Claim 4 we have that with probability at DISPLAYFORM6, and using the on sum of geometric series, we have DISPLAYFORM0 i1 is upper-bounded as DISPLAYFORM0 Further, since k = O(√ n/µ log(n)), kc x < 1, therefore, we have DISPLAYFORM1 with probability at least (1 − δ DISPLAYFORM2 i1 δ R 0 for an appropriately large R. Therefore, the error in each non-zero coefficient is DISPLAYFORM3 with probability at least (1 − δ (t) β ). i1 as defined in, and recursively substituting for x (r) i1 we have DISPLAYFORM0 where we set all η r x to be η x. Further, on defining DISPLAYFORM1 where γ DISPLAYFORM2 Note that γ (R) i1 can be made appropriately small by choice of R. Further, by Claim 5 we have DISPLAYFORM3 with probability at least (1 − δ DISPLAYFORM4 Proof of Lemma 5. From Lemma 4 we have that for each j ∈ S, DISPLAYFORM5 with probability at least (1 − δ DISPLAYFORM6 and let 1 F x * denote the indicator function corresponding to this event. As we show in Lemma 2, this event occurs with probability at least (1 − δ DISPLAYFORM7 T). 
Using this, we can write the expected gradient vector corresponding to the j-th sample as 1 DISPLAYFORM8 Here, γ:= E[(A (t) x − y)sign(x * j)1 F x * ] is small and depends on δ (t)T and δ β, which in turn drops with t. Therefore, γ diminishes with t. Further, since 1 F x * + 1 F x * = 1, and Pr[DISPLAYFORM0 T), is very large, DISPLAYFORM1 Therefore, we can write g DISPLAYFORM2 S ] can be made very small by choice of R, we absorb this term in γ. Therefore, DISPLAYFORM3 Writing the expectation by sub-conditioning on the support, DISPLAYFORM4 where we have used the fact that E x * S [sign(x * j)] = 0 and introduced DISPLAYFORM5 Published as a conference paper at ICLR 2019 DISPLAYFORM0 Further, by Claim 7 we have that DISPLAYFORM1 This completes the proof. Proof of Lemma 6. Let W = {j : i ∈ supp(x * (j) )} and then we have that DISPLAYFORM2 where x (j) (i) denotes the i-th element of the coefficient estimate corresponding to the (j)-th sample. Here, for = |W | the summation DISPLAYFORM3 has the same distribution as Σ j=1 z j, where each z j belongs to a distribution as DISPLAYFORM4 Let w j = z j − E[z], we will now apply the vector Bernstein shown in Lemma 11. For this, we require bounds on two parameters for these -L:= w j and σ 2:= Σ j E[w j 2]. Note that, since the quantity of interest is a function of x * i, which are sub-Gaussian, they are only bounded almost surely. To this end, we will employ Lemma 14 (Lemma 45 in ) to get a handle on the concentration. Bound on the norm w: This bound is evaluated in Claim 8, which states that with probability at least (1 − δ DISPLAYFORM5 Bound on variance parameter E[ w 2]: Using Claim 9, we have DISPLAYFORM6. Therefore, the bound on the variance parameter σ 2 is given by DISPLAYFORM7 From Claim 2 we have that with probability at least (1 − δ DISPLAYFORM8 Applying vector Bernstein inequality shown in Lemma 11 and using Lemma 14 (Lemma 45 in ), choosing = Ω(k 3), we conclude DISPLAYFORM9 with probability at least (1 − δ DISPLAYFORM10 Finally, substituting in we have DISPLAYFORM11 with probability at least (1 − δ DISPLAYFORM12 Proof of Lemma 7. Since we only have access to the empirical estimate of the gradient g (t)i, we will show that this estimate is correlated with (A (t) j − A * j ). To this end, first from Lemma 6 we have that the empirical gradient vector concentrates around its mean, specifically, DISPLAYFORM13 with probability at least (1 − δ DISPLAYFORM14 HW). From Lemma 5, we have the following expression for the expected gradient vector DISPLAYFORM15 Then, g (t)i can be written as DISPLAYFORM16 where DISPLAYFORM17 Using the definition of v as shown in FORMULA2 we have DISPLAYFORM18 Further, using Claim 7 DISPLAYFORM0 Now, since A (t) − A * ≤ 2 A * (the closeness property (Def.1) is maintained at every step using Lemma 9), and further since A * = O(m/n), we have that DISPLAYFORM1 Therefore, we have DISPLAYFORM2 Here, we use the fact that γ drops with decreasing t as argued in Lemma 5. Next, using, we have DISPLAYFORM3 we have that, DISPLAYFORM4 Substituting for v, this implies that g (t) DISPLAYFORM5 Further, we also have the following lower-bound DISPLAYFORM6 Published as a conference paper at ICLR 2019Here, we use the fact that R.H.S. can be minimized only if v is directed opposite to the direction of A (t) j − A * j. Now, we show that this gradient is (ρ, 1/100ρ, 0) correlated, DISPLAYFORM0 Therefore, for this choice of k, i.e. k = O(√ n), there is no bias in dictionary estimation in comparison to. 
This gain can be attributed to estimating the coefficients simultaneously with the dictionary. Further, since we choose 4ρ = p j q j, we have that ρ = Θ(k/m), as a ρ + = 1/100ρ = Ω(m/k). Applying Lemma 15 we have DISPLAYFORM1 Proof of Lemma 8. Here, we will prove that g (t) defined as DISPLAYFORM2 concentrates around its mean. Notice that each summand (DISPLAYFORM3) is a random matrix of the form (y − A (t) x)sign(x). Also, we have g (t) defined as DISPLAYFORM4 To bound DISPLAYFORM5, we are interested in p j=1 W j, where each matrix W j is given by DISPLAYFORM6 Noting that E[W j] = 0, we will employ the matrix Bernstein (Lemma 10) to bound g (t) − g (t). To this end, we will bound W j and the variance proxy DISPLAYFORM7 Now, since each x (j) has k non-zeros, sign(x (j) ) sign(x (j) ) = k, and using Claim 10, with proba- DISPLAYFORM8 Similarly, expanding E[W j W j], and using the fact that DISPLAYFORM9 ] is positive semi-definite. Now, using Claim 8 and the fact that entries of E [(sign( x (j) )sign(x (j) ) ] are q i on the diagonal and zero elsewhere, where DISPLAYFORM10 mp ). Now, we are ready to apply the matrix Bernstein . Since, m = O(n) the variance statistic comes out to be O(DISPLAYFORM11, then as long as we choose p = Ω(mk 2) (using the bound on t β), with probability at least (1 − δ DISPLAYFORM12 Proof of Lemma 9. This lemma ensures that the dictionary iterates maintain the closeness property (Def.1) and satisfies the prerequisites for Lemma 7.The update step for the i-th dictionary element at the s + 1 iteration can be written as DISPLAYFORM13 Here, g (t) i is given by the following as per Lemma 5 with probability at least (1 − δ DISPLAYFORM14 i in the dictionary update step, DISPLAYFORM0 Therefore, the update step for the dictionary (matrix) can be written as DISPLAYFORM1 where, DISPLAYFORM2 i ) and V = A (t) Q, with the matrix Q given by, DISPLAYFORM3, and using the following intermediate shown in Claim 7, DISPLAYFORM4 Therefore, DISPLAYFORM5 We will now proceed to bound each term in. Starting with (A (t) − A * )diag(1 − η A p i q i), and using the fact that p i = O, q i = O(k/m), and DISPLAYFORM6 Using the derived above, and the the derived in Lemma 8 which states that with probability at least (1 − δ DISPLAYFORM7 Proof of Claim 1. We start by looking at the incoherence between the columns of A *, for j = i, DISPLAYFORM0 Claim 2 (Bound on β (t) j: the noise component in coefficient estimate that depends on t ). With DISPLAYFORM1 ).Proof of Claim 2. We have the following definition for β (t) j from, DISPLAYFORM2 Here, since x * i are independent sub-Gaussian random variables, β (t) j is a sub-Gaussian random variable with the variance parameter evaluated as shown below DISPLAYFORM3 Pr[|β DISPLAYFORM0 Now, we need this for each β (t) j for j ∈ supp(x *), union bounding over k coefficients DISPLAYFORM1 ).Claim 3 (Error in coefficient estimation for a general iterate (r + 1)). The error in a general iterate r of the coefficient estimation is upper-bounded as DISPLAYFORM2 Proof of Claim 3. From we have the following expression for C (r+1) i1 DISPLAYFORM3 Our aim will be to recursively substitute for C DISPLAYFORM4 as a function of C 0 max. To this end, we start by analyzing the iterates C DISPLAYFORM5 i1, and so on to develop an expression for C (r+1) i1 as follows. 
Published as a conference paper at ICLR 2019 DISPLAYFORM0 i1 is given by DISPLAYFORM1 Further, we know from we have DISPLAYFORM2 Therefore, since DISPLAYFORM3 Expression for Ci1 -Next, we writing Ci1, DISPLAYFORM4 i2.Here, using we have the following expression for C DISPLAYFORM5 Substituting for Ci2 in the expression for Ci1, and rearranging the terms in the expression for Ci1, we have DISPLAYFORM6 Expression for C DISPLAYFORM7 Substituting for Ci2 from, Ci2 from, Ci2 using, and rearranging, DISPLAYFORM8 i5.Notice that the terms have a binomial series like form. To reveal this structure, let each α DISPLAYFORM9 max for j = i 1, i 2,..., i k. Therefore, we have DISPLAYFORM10 Further upper-bounding the expression, we have DISPLAYFORM11 Therefore, DISPLAYFORM12 -With this, we are ready to write the general term, DISPLAYFORM13 Claim 4 (An intermediate for bounding the error in coefficient calculations). With prob- DISPLAYFORM14 Proof of Claim 4. Using, the quantity α DISPLAYFORM15 Published as a conference paper at ICLR 2019 Therefore, we are interested in DISPLAYFORM0 Consider the first term which depends on C DISPLAYFORM1, we have DISPLAYFORM2 where δ R is a small constant, and a parameter which determines the number of iterations R required for the coefficient update step. Now, coming back to the quantity of interest DISPLAYFORM3 Now, using sum of geometric series , we have that DISPLAYFORM4, and DISPLAYFORM5. Therefore, with probability at least (1 − δ DISPLAYFORM6 2 and |β (t)i | = t β with probability at least (1 − δ (t) β ) using Claim 2.Claim 5 (Bound on the noise term in the estimation of a coefficient element in the support). With probability (1 − δ DISPLAYFORM7 i1 is defined as DISPLAYFORM8 i1 is as defined in FORMULA0, DISPLAYFORM9 Therefore, we have the following expression for ϑ DISPLAYFORM10 Published as a conference paper at ICLR 2019 DISPLAYFORM0 i1 can be upper-bounded as DISPLAYFORM1 Since from Claim 6 we have DISPLAYFORM2 Further, since 1 − (1 − η x) r−1 ≤ 1, we have that DISPLAYFORM3 Therefore, DISPLAYFORM4 i1. i | = t β with probability at least (1 − δ DISPLAYFORM0 β) for the t-th iterate, and k = O * (√ n µ log(n) ), therefore kc x < 1, we have that DISPLAYFORM1 with probability at least (1 − δ DISPLAYFORM2, we have DISPLAYFORM3 Proof of Claim 6. Here, from Claim 3 we have that for any i 1, DISPLAYFORM4 is given by DISPLAYFORM5 Further, the term of interest C (r−1) i2(1 − η x) R−r can be upper-bounded by DISPLAYFORM6 From the definition of α can be written as DISPLAYFORM7 Therefore, we have FORMULA0, where η DISPLAYFORM8 DISPLAYFORM9 Therefore, DISPLAYFORM10.Therefore, DISPLAYFORM11 Therefore, combining all the we have that, for a constant DISPLAYFORM12 Claim 7 (Bound on the noise term in expected gradient vector estimate). DISPLAYFORM13 Proof of Claim 7. DISPLAYFORM14 From FORMULA43 we have the following definition for ϑ DISPLAYFORM15 where β (t) j is defined as the following DISPLAYFORM16 S is a vector with each element as defined in. Therefore, the elements of the vector DISPLAYFORM17 Consider the general term of interest DISPLAYFORM18 Further, since DISPLAYFORM19 we have that DISPLAYFORM20 Further, for DISPLAYFORM21 In addition, for s =i ♠ s we have that DISPLAYFORM22 Therefore, using the for ♣ and DISPLAYFORM23 and for i = j we have DISPLAYFORM24 Here, from Claim 6, for c x = DISPLAYFORM25 Further, due to our assumptions on sparsity, kc x ≤ 1; in addition by Claim 2, and with probability at least (1 − δ DISPLAYFORM26 with probability at least (1 − δ (t) β ). 
Combining from, and substituting for the terms in using the analysis above, DISPLAYFORM27 Note that since γ DISPLAYFORM28 i )) can be made small by choice of R. Also, since Pr[i, j ∈ S] = q i,j, we have DISPLAYFORM29 Claim 8 (An intermediate for concentration ). With probability (1 − δ DISPLAYFORM30 Proof of Claim 8. First, using Lemma 4 we have DISPLAYFORM31 i1 . Therefore, the vector x S, for S ∈ supp(x *) can be written as DISPLAYFORM32 where x has the correct signed-support with probability at least (1 − δ T) using Lemma 2. Using this , we can write y − A (t) x as DISPLAYFORM33 With x * S being independent and sub-Gaussian, using Lemma 13, which is a based on the Hanson-Wright BID2 ) for sub-Gaussian random variables, and since A (t) DISPLAYFORM34 we have that with probability at least (1 − δ DISPLAYFORM35). DISPLAYFORM36 Consider the ♠ term. Using Claim 5, each ϑ (R) j is bounded by O(t β). with probability at least DISPLAYFORM37 Finally, combining all the and using the fact that A * DISPLAYFORM38 Claim 9 (Bound on variance parameter for concentration of gradient vector). DISPLAYFORM39 Proof of Claim 9. For the variance E[z 2], we focus on the following, DISPLAYFORM40 Here, x S is given by DISPLAYFORM41 We will now consider each term in separately. We start with ♥. Since x * S s are conditionally independent of S, E[x * S x * S] = I. Therefore, we can simplify this expression as DISPLAYFORM42. Rearranging the terms we have the following for ♥, DISPLAYFORM43 Therefore, ♥ can be upper-bounded as DISPLAYFORM44 Next, since (1 − λ (t) j ) ≤ 1, we have the following bound for ♥ 2 DISPLAYFORM45 Further, ♥ 3 can be upper-bounded by using bounds for ♥ 1 and ♥ 2. Combining the of upperbounding ♥ 1, ♥ 2, and ♥ 3 we have the following for DISPLAYFORM46 S. Therefore we have DISPLAYFORM47 S. where 1 m×m denotes an m × m matrix of ones. Now, we turn to DISPLAYFORM0 ] in, which can be simplified as DISPLAYFORM1 ] in which can also be bounded similarly as DISPLAYFORM2 Therefore, we have the following for ♣ in DISPLAYFORM3 Consider ♠ in. DISPLAYFORM4 S |S], and using the analysis similar to that shown in 7, we have that elements of M ∈ R k×k are given by DISPLAYFORM5 We have the following, DISPLAYFORM6 m 2 ), and 1 m×m = m, DISPLAYFORM7 Therefore, DISPLAYFORM8 Similarly, ♥ in is also bounded as ♠. Next, we consider ♦ in. In this case, letting DISPLAYFORM9 where N ∈ R k×k is a matrix whose each entry N i,j ≤ |ϑ DISPLAYFORM10. with probability at least (1 − δ DISPLAYFORM11 Again, using the on |ϑ DISPLAYFORM12 Combining all the for ♣, ♠, ♥ and ♦, we have, DISPLAYFORM13 We now present some additional to highlight the features of NOODL. Specifically, we compare the performance of NOODL (for both dictionary and coefficient recovery) with the state-of-theart provable techniques for DL presented in (when the coefficients are recovered via a sparse approximation step after DL) 3. We also compare the performance of NOODL with the popular online DL algorithm in BID12, denoted by Mairal'09. Here, the authors show that alternating between a 1 -based sparse approximation and dictionary update based on block co-ordinate descent converges to a stationary point, as compared to the true factors in case of NOODL.Data Generation: We generate a (n = 1000) × (m = 1500) matrix, with entries drawn from N, and normalize its columns to form the ground-truth dictionary A *. 
Next, we perturb A * with random Gaussian noise, such that the unit-norm columns of the ing matrix, A are 2/ log(n) away from A *, in 2 -norm sense, i.e., 0 = 2/ log(n); this satisfies the initialization assumptions in A.4. At each iteration, we generate p = 5000 samples Y ∈ R 1000×5000 as Y = A * X *, where X * ∈ R m×p has at most k = 10, 20, 50, and 100, entries per column, drawn from the Radamacher distribution. We report the in terms of relative Frobenius error for all the experiments, i.e., for a recovered matrix M, we report M − M * F / M * F. To form the coefficient estimate for Mairal'09 via Lasso we use the FISTA algorithm by searching across 10 values of the regularization parameter at each iteration. Note that, although our phase transition analysis for NOODL shows that p = m suffices, we use p = 5000 in our convergence analysis for a fair comparison with related techniques. TAB3 summarizes the of the convergence analysis shown in FIG4. Here, we compare the dictionary and coefficient recovery performance of NOODL with other techniques. For Arora15(''biased'') and Arora15(''unbiased''), we report the error in recovered coefficients after the HT step (X HT) and the best error via sparse approximation using Lasso 4 , denoted as X Lasso, by scanning over 50 values of regularization parameter. For Mairal'09 at each iteration of the algorithm we scan across 10 values 5 of the regularization parameter, to recover the best coefficient estimate using Lasso (via FISTA), denoted as X Lasso. We observe that NOODL exhibits significantly superior performance across the board. Also, we observe that using sparse approximation after dictionary recovery, when the dictionary suffers from a bias, leads to poor coefficient recovery 6, as is the case with Arora15(''biased''), Arora15(''unbiased''), and Mairal'09. This highlights the applicability of our approach in real-world machine learning tasks where coefficient recovery is of interest. In fact, it is a testament to the fact that, even in cases where dictionary recovery is the primary goal, making progress on the coefficients is also important for dictionary recovery. In addition, the coefficient estimation step is also online in case of NOODL, while for the stateof-the-art provable techniques (which only recover the dictionary and incur bias in estimation) need additional sparse approximation step for coefficient recovery. Moreover, these sparse approximation techniques (such as Lasso) are expensive to use in practice, and need significant tuning. In addition to these convergence , we also report the computational time taken by each of these algorithms in TAB3. The shown here were compiled using 5 cores and 200GB RAM of Intel Xeon E5 − 2670 Sandy Bridge and Haswell E5-2680v3 processors. The primary takeaway is that although NOODL takes marginally more time per iteration as compared to other methods when accounting for just one Lasso update step for the coefficients, it (a) is in fact faster per iteration since it does not involve any computationally expensive tuning procedure to scan across regularization parameters; owing to its geometric convergence property (b) achieves orders of magnitude superior error at convergence, and as a , (c) overall takes significantly less time to reach such a solution. 
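For concreteness, the synthetic setup described above (a ground-truth dictionary with unit-norm Gaussian columns, a perturbed initialization whose columns sit at distance roughly 2/log(n), and k-sparse Rademacher coefficients) can be reproduced along the following lines. The exact perturbation mechanism is an assumption on our part, since the text only fixes the target column-wise distance.

```python
import numpy as np

def generate_dl_data(n=1000, m=1500, p=5000, k=10, seed=0):
    """Synthetic data in the spirit of the experiments above; how the perturbation
    is drawn is an assumption (only the target column-wise distance is stated)."""
    rng = np.random.default_rng(seed)
    A_star = rng.standard_normal((n, m))
    A_star /= np.linalg.norm(A_star, axis=0, keepdims=True)      # unit-norm columns
    X_star = np.zeros((m, p))                                    # k-sparse Rademacher coefficients
    for j in range(p):
        support = rng.choice(m, size=k, replace=False)
        X_star[support, j] = rng.choice([-1.0, 1.0], size=k)
    Y = A_star @ X_star
    eps0 = 2.0 / np.log(n)                                       # target column-wise distance
    E = rng.standard_normal((n, m))
    E /= np.linalg.norm(E, axis=0, keepdims=True)
    A0 = A_star + eps0 * E
    A0 /= np.linalg.norm(A0, axis=0, keepdims=True)              # keep unit-norm columns
    return A_star, X_star, Y, A0

def rel_frob_error(M_hat, M_star):
    """Relative Frobenius error metric used in the tables above."""
    return np.linalg.norm(M_hat - M_star) / np.linalg.norm(M_star)
```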
Further, NOODL's computation time can be further reduced via implementations using the neural architecture illustrated in Section 4.Note that since the coefficient estimates using just the HT step at every step may not yield a usable for Arora15(''unbiased'') and Arora15(''biased'') as shown in TAB3, in practice, one has to employ an additional 1 -based sparse recovery step. Therefore, for a fair comparison, we account for running sparse recovery step(s) using Lasso (via the Fast Iterative ShrinkageThresholding Algorithm (FISTA) ) at every iteration of the algorithms Arora15(''biased'') and Arora15(''unbiased'').For our technique, we report the average computation time taken per iteration. However, for the rest of the techniques, the coefficient recovery using Lasso (via FISTA) involves a search over various values of the regularization parameters (10 values for this current exposition). As a , we analyze the computation time per iteration via two metrics. First of these is the average computation time taken per iteration by accounting for the average time take per Lasso update (denoted as "Accounting for one Lasso update"), and the second is the average time taken per iteration to scan over all values of the regularization parameter (denoted as "Overall Lasso search"). 4 We use the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) , which is among the most efficient algorithms for solving the 1-regularized problems. Note that, in our experiments we fix the step-size for FISTA as 1/L, where L is the estimate of the Lipschitz constant (since A is not known exactly).5 Note that, although scanning across 50 values of the regularization parameter for this case would have led to better coefficient estimates and dictionary recovery, we choose 10 values for this case since it is very expensive to scan across 50 of regularization parameter at each step. This also highlights why Mairal'09 may be prohibitive for large scale applications.6 When the dictionary is not known exactly, the guarantees may exist on coefficient recovery only in terms of closeness in 2-norm sense, due to the error-in-variables (EIV) model for the dictionary . , we scan across 50 values of the regularization parameter for coefficient estimation using Lasso after learning the dictionary (A), and report the optimal estimation error for the coefficients (XLasso), while for Mairal'09, at each step the coefficients estimate is chosen by scanning across 10 values of the regularization parameters. For k = 100, the algorithms of Arora et al. FORMULA0 Avg. As shown in TAB3, in comparison to NOODL the techniques described in still incur a large error at convergence, while the popular online DL algorithm of BID12 exhibits very slow convergence rate. Combined with the convergence shown in FIG4, we observe that due to NOODL's superior convergence properties, it is overall faster and also geometrically converges to the true factors. This again highlights the applicability of NOODL in practical applications, while guaranteeing convergence to the true factors. Definition 6 (sub-Gaussian Random variable). Let x ∼ subGaussian(σ 2). Then, for any t > 0, it holds that Pr[|x| > t] ≤ 2 exp t 2 2σ 2. Lemma 10 (Matrix Bernstein ). Consider a finite sequence W k ∈ R n×m of independent, random, centered matrices with dimension n. Assume that each random matrix satisfies E[W k] = 0 and W k ≤ R almost surely. Then, for all t ≥ 0, DISPLAYFORM0 σ 2 +Rt/3, where σ 2:= max{DISPLAYFORM1 Furthermore, E[ k W k] ≤ 2σ 2 log(n + m) + 1 3 R log(n + m).
We present a provable algorithm for exactly recovering both factors of the dictionary learning model.
539
scitldr
Real-world Relation Extraction (RE) tasks are challenging to deal with, either due to limited training data or class imbalance issues. In this work, we present Data Augmented Relation Extraction (DARE), a simple method to augment training data by properly finetuning GPT2 to generate examples for specific relation types. The generated training data is then used in combination with the gold dataset to train a BERT-based RE classifier. In a series of experiments we show the advantages of our method, which leads in improvements of up to 11 F1 score points compared to a strong baseline. Also, DARE achieves new state-of-the-art in three widely used biomedical RE datasets surpassing the previous best by 4.7 F1 points on average. Relation Extraction (RE) is the task of identifying semantic relations from text, for given entity mentions in it. This task, along with Named Entity Recognition, has become increasingly important recently due to the advent of knowledge graphs and their applications. In this work, we focus on supervised RE (; ; ;), where relation types come from a set of predefined categories, as opposed to Open Information Extraction approaches that represent relations among entities using their surface forms . RE is inherently linked to Natural Language Understanding in the sense that a successful RE model should manage to capture adequately well language structure and meaning. So, almost inevitably, the latest advances in language modelling with Transformer-based architectures (a; ; b) have been quickly employed to also deal with RE tasks (; ; ;). These recent works have mainly leveraged the discriminative power of BERT-based models to improve upon the state-of-the-art. In this work we take a step further and try to assess whether the text generating capabilities of another language model, GPT-2 (b), can be applied to augment training data and deal with class imbalance and small-sized training sets successfully. Specifically, given a RE task we finetune a pretrained GPT-2 model per each relation type and then use the ing finetuned models to generate new training samples. We then combine the generated data with the gold dataset and finetune a pretrained BERT model on the ing dataset to perform RE. We conduct extensive experiments, studying different configurations for our approach and compare DARE against two strong baselines and the stateof-the-art on three well established biomedical RE benchmark datasets. The show that our approach yields significant improvements against the rest of the approaches. To the best of our knowledge, this is the first attempt to augment training data with GPT-2 for RE. In Table 1 we show some generated examples with GPT-2 models finetuned on the datasets that are used in the experiments (refer to Section 4). In the following, we provide a brief overview of related works in Section 2, we then describe our approach in Section 3, followed by our experimental (Section 4) and the (Section 5). Relation Extraction is usually modelled as a text classification task, therefore most approaches to deal with class imbalance or limited data in RE follow the ones from text classification. A number of approaches have been followed in the literature in order to tackle these challenges. One direction is to deal with imbalance at the classifier level, by penalizing misclassification errors differently for each class, depending on the class frequency or by explicitly adjusting prior class probabilities . 
Another popular direction relies on either undersampling the majority class(es) or oversampling the minority one(s), transforming the training data with the aim to balance it. One of the simplest approaches, random majority undersampling, simply removes a random portion of examples from majority classes such that per class training examples are roughly equal . An improved version of the previous method, balanced bagging , employs bagging of classifiers that have been trained with random majority undersampling. Oversampling approaches for textual data have been somehow limited as opposed to those for image data (; ; ;), since text semantics depend inherently on the exact order or structure of word tokens. A simple approach is to replace words or phrases with their synonyms . have employed topic models to generate additional training examples by sampling from the topic-word and document-topic distributions. have proposed a data augmentation framework that employs transformation operations provided by domain experts, such as a word swap, to learn a sequence generation model. have used both a template-based method as well as an LSTM-based approach to generate new samples for visual question answering. A similar method to our approach has been proposed by Sun et al. (2019a) who presented a framework to deal successfully with catastrophic forgetting in language lifelong learning (LLL). Specifically and given a set of tasks in the framework of LLL, they finetune GPT-2 to learn to solve a task and generate training samples at the same time for that task. At the beginning of training a new task, the model generates some pseudo samples of previous tasks to train alongside the data of the new task, therefore avoiding catastrophic forgetting. Our work falls into the oversampling techniques for text, but our focus is RE. Importantly, we do not need any domain expertise, templates, synonym thesaurus or training a model from scratch, which makes our approach easily adaptable to any domain, with relatively low requirements in resources. In this section we present briefly the GPT-2 model and then introduce in detail our approach. GPT-2 (b) is a successor of the GPT language model (a). Both models are deep neural network architectures using the Transformer , pre-trained on vast amounts of textual data. Both models are pre-trained with a standard language modelling objective, that is to predict the next word token given k previously seen word tokens. This is achieved by maximizing the following likelihood: where Θ are the neural network parameters. The authors have gradually provided publicly four different flavours of GPT-2, with 124M, 355M, 774M and 1558M parameters respectively. In our experiments we use the second largest model (774M), since it seems to represent a good compromise between accuracy and hardware requirements 1. where Θ are the parameters of the model. In this work we employ a RE classifier based on a pretrained BERT language model. This classifier follows the same principle followed by , using a special token (CLS) for classification. The only modification is that we mask entity mentions with generic entity types, i.e., $EN-TITY A$ or $ENTITY B$. It should be noted that the method that we introduce here is not classifier specific, so any other classifier can be used instead. To generate new training data, we split the D dataset into c subsets where each D c subset contains only examples from relation type c. 
Subsequently, we finetune GPT-2 on each D c for five epochs and then prompt each ing finetuned model to generate new sentences, filtering out sentences that do not contain the special entity masks or that are too small (less than 8 tokens). The generated sequences are combined for all relation types into a dataset Dsynth. Subsequently, we build an ensemble of RE classifiers, each of them being finetuned on a subset 1 https://openai.com/blog/gpt-2-1-5b-release/ of Dsynth and the whole D, such that the perrelation type generated instances are equal to the number of gold instances for that relation, multiplied by ratio, i.e., |Dsynth c | = |D c | * r. In our experiments we have set r = 1.0 (refer to Section 4.6 for a short study of its influence). Algorithm 1 illustrates our method. We would like to note that in early experiments, we also experimented with finetuning over the whole D, by adding a special token to the beginning of each sentence that encoded the relation type, e.g., <0>: or <1>:. Then during generation, we would prompt the model with the different special tokens and let it generate a training instance from the respective relation type. This approach though did not prove effective leading to worse than just using gold data, primarily because frequent classes "influenced" more GPT-2 and the model was generating many incorrectly labeled samples. In this section we present the empirical evaluation of our method. We first describe the experimental setup, the datasets used, the baselines against which we evaluate DARE and subsequently present the experiments and report the relevant . In all experiments we used the second largest GPT-2 model (774M parameters). All experiments were carried out on a machine equipped with a GPU V100-16GB. For the implementation, We have used HuggingFace's Transformers library . To finetune GPT-2 we employed Adam as the optimizer, a sequence length of 128, a batch size of 4 with gradient accumulation over 2 batches (being equivalent to a batch size of 8) and a learning rate of 3e − 5. In all datasets and for all relation types we finetuned for 5 epochs. For generation we used a temperature of 1.0, fixed the top-k parameter to 5 and generated sequences of up to 100 word tokens. An extensive search for the above optimal hyperparameter values is left to future work. Since all of our datasets are from the biomedical domain, we found out empirically (see Section 4.4 for the relevant experiment) that it was beneficial to first finetune a GPT-2 model on 500k PubMed abstracts, followed by a second round of finetuning per dataset, per relation type. As a RE classifier we have used in all cases a pre-trained BERT model (the large uncased model) which we finetuned on either the gold or the gold+generated datasets. We used the AdamW optimizer , a sequence length of 128, a batch size of 32 and a learning rate of 2e − 5, We finetuned for 5 epochs, keeping the best model with respect to the validation set loss. Also, we used a softmax layer to output predictions and we assigned a relation type to each instance si as follows: where c ∈ L and 0 < t < 1 is a threshold that maximizes the micro-F score on the validation set. For DARE, in all experiments we train an ensemble of twenty classifiers, where each classifier has been trained on the full gold set and a sub-sample of the generated data. In this way, we manage to alleviate the effect of potential noisy generated instances. 
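The data side of the procedure described above (filtering the generated sentences and assembling one training set per ensemble member) can be sketched as follows. The list-based data layout and helper names are our own, the filter requires both entity masks to be present, and the actual GPT-2 and BERT finetuning calls are left out.

```python
import random

ENTITY_MASKS = ("$ENTITY A$", "$ENTITY B$")

def keep_generated(sentence, min_tokens=8):
    """Filter described above: both entity masks present and at least 8 tokens."""
    return all(m in sentence for m in ENTITY_MASKS) and len(sentence.split()) >= min_tokens

def build_ensemble_training_sets(gold_by_rel, synth_by_rel, ratio=1.0, n_members=20, seed=0):
    """gold_by_rel / synth_by_rel: dict mapping relation type -> list of sentences.
    Each ensemble member gets the full gold set plus, per relation c, a random
    subsample of the filtered generated data of size ratio * |D_c|."""
    rng = random.Random(seed)
    gold_all = [ex for exs in gold_by_rel.values() for ex in exs]
    synth_ok = {c: [ex for ex in exs if keep_generated(ex)] for c, exs in synth_by_rel.items()}
    members = []
    for _ in range(n_members):
        train = list(gold_all)
        for c, gold_c in gold_by_rel.items():
            pool = synth_ok.get(c, [])
            n_extra = min(int(ratio * len(gold_c)), len(pool))
            train += rng.sample(pool, n_extra)
        rng.shuffle(train)
        members.append(train)
    return members
```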
To evaluate DARE, we employ three RE datasets from the biomedical domain, their statistics being provided in Table 2. The BioCreative V CDR corpus contains chemical-disease relations. The dataset is a binary classification task with one relation type, chemical induces disease, and annotations are at the document level, having already been split into train, development and test splits. For simplicity, we followed the work of and considered only intra-sentence relations. We have included the dataset in our GitHub repository to ease replication. In the following, we dub this dataset as CDR. The DDIExtraction 2013 corpus contains MedLine abstracts and DrugBank documents describing drug-drug interactions. The dataset has four relation types and annotations are at the sentence level. The dataset is provided with a train and test split for both MedLine and DrugBank instances. Following previous works, we concatenated the two training sets into one. Also, we randomly sampled 10% as a development set. In the following this dataset will be referred to as DDI2013. The BioCreative VI-ChemProt corpus covers chemical-protein interactions, containing five relation types, the vast majority of them being at the sentence level. The dataset comes with a train-development-test split. In the following we will refer to it as ChemProt. The above datasets suffer both from class imbalance and limited number of positives, for example the rarest relation type in DDI2013 has only 153 instances in the training set, while the respective one in ChemProt has only 173 data points. Therefore, we consider two baselines that are suited for such scenarios, the balanced bagging approach and the class weighting method, both described in Section 2. Both baselines have as a base classifier the one described in Section 4.1. Also, in both cases we consider an ensemble of ten models 2. Finally, for the class weighting approach we set each class's weight as with min being the rarest class. Since all our datasets come from the biomedical domain, we hypothesized that a first round of finetuning GPT-2 on in-domain data could be beneficial as opposed to directly employing the vanilla GPT-2 model. We designed a short experiment using the CDR dataset to test this hypothesis. To clarify, any of the two models would be then finetuned per relation type to come up with the final GPT-2 models that would generate the new training examples. Table 3 illustrates the of this experiment. As we expect, this first round of finetuning proves significantly favourable. We note that when inspecting the generated examples from the vanilla GPT-2 model, there was often the case that generated sentences contained a peculiar mix of news stories with the compound-disease relations. In this experiment we would like to see what is the effect of our method when dealing with great imbalance, i.e., datasets with very few positive samples. To that end, we consider the CDR dataset and sample different numbers of positive examples from the dataset (50, 250, 500, 1000 and all positives) and combine them with all the negative instances. The ing five datasets are used to train either a balanced bagging ensemble or DARE. In Figure 1, we show the , averaging across five different runs. In all cases our approach has a steady, significant advantage over the balanced bagging baseline, their difference reaching up to 11 F1 score points when only few positives (≤ 250) are available. As we add more samples, the differences start to smooth out as expected. 
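For reference, the balanced bagging baseline used in this comparison (each bag keeps all positives and an equally sized random draw of negatives, with one classifier trained per bag) reduces to a few lines; the ten-bag default mirrors the ensemble size mentioned above, and the classifier training itself is omitted.

```python
import random

def balanced_bags(positives, negatives, n_bags=10, seed=0):
    """Random majority undersampling per bag (binary setting, e.g. CDR):
    every bag keeps all positives plus a fresh, equally sized random subset of negatives."""
    rng = random.Random(seed)
    bags = []
    for _ in range(n_bags):
        bag = positives + rng.sample(negatives, min(len(positives), len(negatives)))
        rng.shuffle(bag)
        bags.append(bag)
    return bags
```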
The results in Figure 1 clearly illustrate that DARE can boost the predictive power of a classifier when dealing with few positive samples, by cheaply generating training data of arbitrary sizes. Our next experiment focuses on studying the effect of different sizes of generated data on DARE's performance. As explained, our method relies on finetuning GPT-2 to generate examples for each relation type that will, ideally, come from the same distribution as the ones from the gold training data. Nevertheless, we should expect that this procedure will not be perfect and will also generate noisy samples. As mentioned previously, we try to alleviate this effect by training an ensemble of classifiers, each trained on the whole gold set and a part of the generated dataset. An important question that arises, therefore, is how to determine the optimal ratio of generated examples to include in each classifier. If too few, the improvements will be insignificant; if too many, we risk having the model influenced by the noise. In order to get an empirical insight into the above question, we design a short experiment using the CDR dataset for different sizes of generated data. As the gold set, we consider a random subset of 1,000 positive examples and all negatives, to make the effect of class imbalance more prominent. In Figure 2 we show the results for five different generated data sizes. Interestingly, adding more data does not necessarily boost classifier performance, since the noisy patterns in the generated data seem to influence the classifier more than those in the gold data. In the following, we choose a ratio of 1, adding for each relation type a number of generated instances equal to the number of gold instances. It should be noted that we are not limited in the total amount of generated data that we can use, since we can finetune an arbitrary number of classifiers on combinations of the gold data and subsets of the generated data. Taking into account the previous observations, we proceed to compare DARE against the SOTA and the two previously described baselines. Table 4 reports the results. For the multi-class datasets we report the micro-F score in order to make our results comparable with previous works. Also, in Appendix A, in Tables 5 and 6, we report the per-class results for DARE against the state-of-the-art and the class weighting baseline for the two multi-class datasets, in order to ease comparison with past or future works. Comparing DARE against the state-of-the-art, we observe a steady advantage of our method across all datasets, ranging from 3 to 8 F1 points. These results are somewhat expected, since we employ BERT-large as our base classifier, which has proven substantially better than convolutional (CNN) or recurrent neural networks (RNN) across a variety of tasks. Among the previous state-of-the-art systems, one has used BioBERT, which is based on a BERT base cased model, while we use BERT large uncased; another uses ensembles of SVM, CNN and RNN models; and for DDI2013, Sun et al. (2019b) have used hybrid CNN-RNN models. When looking at the results for the baselines, we notice that they perform roughly on par. DARE is better by 2 to 5 F1 points than the baselines, an improvement that is smaller than that against the state-of-the-art, but still statistically significant in all cases. Overall, and in accordance with the results of the experiment in Section 4.5, we observe that DARE manages to leverage the GPT-2 automatically generated data to steadily improve upon the state-of-the-art and two competitive baselines. We have presented DARE, a novel method to augment training data in Relation Extraction.
Given a gold RE dataset, our approach proceeds by finetuning a pre-trained GPT-2 model per relation type and then uses the finetuned models to generate new training data. We combine sampled subsets of the synthetic data with the gold dataset to finetune an ensemble of RE classifiers that are based on BERT. In a series of experiments we show empirically that our method is particularly suited to deal with class imbalance or limited data settings, recording improvements of up to 11 F1 score points over two strong baselines. We also report new state-of-the-art performance on three biomedical RE benchmarks. Our work can be extended with minor modifications to other Natural Language Understanding tasks, a direction that we would like to address in future work.
Data Augmented Relation Extraction with GPT-2
540
scitldr
This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN). The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs. To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form. Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis. As this makes the inversion of the information matrix intractable - an operation that is required for full Bayesian analysis, we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme. We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods. Whenever machine learning methods are used for safety-critical applications such as medical image analysis or autonomous driving, it is crucial to provide a precise estimation of the failure probability of the learned predictor. Therefore, most of the current learning approaches return distributions rather than single, most-likely predictions. For example, DNNs trained for classification usually use the softmax function to provide a distribution over predicted class labels. Unfortunately, this method tends to severely underestimate the true failure probability, leading to overconfident predictions . The main reason for this is that neural networks are typically trained with a principle of maximum likelihood, neglecting their epistemic or model uncertainty with the point estimates. A widely known work by shows that this can be mitigated by using dropout at test time. This so-called Monte-Carlo dropout (MC-dropout) has the advantage that it is relatively easy to use and therefore very popular in practice. However, MC-dropout also has significant drawbacks. First, it requires a specific stochastic regularization during training. This limits its use on already well trained architectures, because current networks are often trained with other regularization techniques such as batch normalization. Moreover, it uses a Bernoulli distribution to represent the complex model uncertainty, which in return, leads to an underestimation of the predictive uncertainty. Several strong alternatives exist without these drawbacks. Variational inference; ) and expectation propagation are such examples. Yet, these methods use a diagonal covariance matrix which limits their applicability as the model parameters are often highly correlated. Building upon these,;;; Ritter et al. (2018a) show that the correlations between the parameters can also be computed efficiently by decomposing the covariance matrix of MND into Kronecker products of smaller matrices. However, not all matrices can be Kronecker decomposed and thus, these simplifications usually induce crude approximations . As the dimensionality of statistical manifolds are prohibitively too large in DNNs, more expressive, efficient but still easy to use ways of representing such high dimensional distributions are required. To tackle this challenge, we propose to represent the model uncertainty in sparse information form of MND. 
As a first step, we devise a new Laplace Approximation (LA) for DNNs, in which we improve the state-of-the-art Kronecker factored approximations of the Hessian by correcting the diagonal variance in parameter space. We show that these can be computed efficiently, and that the information matrix of the ing parameter posterior is more accurate in terms of the Frobenius norm. In this way the model uncertainty is approximated in information form of the MND. counts [-] Figure 1: Main idea. (a) Covariance matrix Σ for DNNs is intractable to infer, store and sample (an example taken from our MNIST experiments). (b) Our main insight is that the spectrum (eigenvalues) of information matrix (inverse of covariance) tend to be sparse. (c) Exploiting this insight a Laplace Approximation scheme is devised which applies a spectral sparsification (LRA) while keeping the diagonals exact. With this formulation, the complexity becomes tractable for sampling while producing more accurate estimates. Here, the diagonal elements (nodes in graphical interpretation) corresponds to information content in a parameter whereas the corrections (links) are the off-diagonals. As this in intractable inverse operation for sampling, we further propose a novel low-rank representation of the ing Kronecker factorization, which paves the way to applications on large network structures trained on realistically sized data sets. To realize such sparsification, we propose a novel algorithm that enables a low-rank approximation of the Kronecker factored eigenvalue decomposition, and we demonstrate an associated sampling computations. Our experiments demonstrate that our approach is effective in providing more accurate uncertainty estimates and calibration on considered benchmark data sets. A detailed theoretical analysis is also provided for further insights. We summarize our main contributions below. • A novel Laplace Approximation scheme with a diagonal correction to the eigenvalue rescaled approximations of the Hessian, as a practical inference tool (section 2.2). • A novel low-rank representation of Kronecker factored eigendecomposition that preserves Kronecker structure (section 2.3). This in a sparse information form of MND. • A novel algorithm to enable a low rank approximation (LRA) for the given representation of MND (algorithm 1) and derivation of a memory-wise tractable sampler (section B.2). • Both theoretical (section C) and experimental (section 4) showing the applicability of our approach. In our experiments, we showcase the state-of-the-art performance within the class of Bayesian Neural Networks that are scalable and training-free. To our knowledge we explore a sparse information form to represent the model uncertainty of DNNs for the first time. Figure 1 depicts our main idea which we provide more rigorous formulation next. We model a neural network as a parameterized function f θ: R N 1 → R N l where θ ∈ R N θ are the weights and N θ = N 1 + · · · + N l. This function f θ is in fact a concatenation of l layers, where each layer i ∈ {1, ..., l} computes h i = W i a i−1 and a i = φ(h i−1). Here, φ is a nonlinear function, a i are activations, h i linear pre-activations, and W i are weight matrices. The bias terms are absorbed into W i by appending 1 to each a i. Thus, θ = vec(W 1) where vec is the operator that stacks the columns of a matrix to a vector. Let g i = δh i, the gradient of h i w.r.t θ. Using LA the posterior is approximated with a Gaussian. 
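For reference, the Laplace posterior described in words above can be written compactly as below. This only restates the text: H is the Hessian of the log-likelihood at the MAP estimate and τ is the Gaussian prior precision.

```latex
% Laplace approximation of the parameter posterior (restating the prose above):
p(\theta \mid \mathcal{D}) \;\approx\;
\mathcal{N}\!\left(\theta;\; \theta_{\mathrm{MAP}},\; \left(\mathbf{H} + \tau \mathbf{I}\right)^{-1}\right).
```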
The mean is then given by the MAP estimate θ MAP and the covariance by (H + τI) −1, the inverse of the Hessian of the negative log-likelihood plus the precision τ of an assumed Gaussian prior. Using loss functions such as MSE or cross entropy and piece-wise linear activations a i, this Hessian can be approximated by the Fisher information matrix (IM), which is what we estimate in the following. Figure 2: Sparse Information Matrix. We perform a low rank approximation on the Kronecker factored eigendecomposition that preserves the Kronecker structure in the eigenvectors for two reasons: (a) reducing (U A ⊗ U G) 1:L directly is memory-wise infeasible, and (b) the sampling scheme then only involves matrix multiplications of the smaller matrices U A 1:a and U G 1:g. Notations and indexing rules are also depicted. The resulting approximation of the IM has the form (U A ⊗ U G)Λ(U A ⊗ U G) T + D, where Λ contains the rescaled eigenvalues, D is a diagonal correction term that keeps the diagonal of the IM exact, and V = U A ⊗ U G ∈ R mn×mn is a Kronecker product with elements v i,j (see definition 1 below). It follows from the properties of the Kronecker product that the row indices satisfy i = m(α − 1) + γ. The derivation is shown in section B. Note that in this given form, the Kronecker products are never directly evaluated, but the diagonal matrix D can be computed recursively, making it computationally feasible. Definition 1: For U A ∈ R n×n and U G ∈ R m×m, the Kronecker product V = U A ⊗ U G ∈ R mn×mn is given by v i,j = U A α,β U G γ,ζ, with the indices i = m(α − 1) + γ and j = m(β − 1) + ζ. Here, the indices of the matrices U A and U G are α ∈ {1, · · ·, n}, β ∈ {1, · · ·, n}, γ ∈ {1, · · ·, m} and ζ ∈ {1, · · ·, m}. Unfortunately, in the current form, it involves a matrix inversion of size N by N when sampling. For some layers in modern architectures, this is not feasible. This problem is tackled next. Sampling from the posterior is crucial. For example, an important use-case of the parameter posterior is estimating the predictive uncertainty for test data (x *, y *) by a full Bayesian analysis with K mc samples (equation 7). This approximation step is the so-called Monte Carlo integration . However, directly sampling from equation 6 is non-trivial, as explained in an example below. Example 1: Consider the architecture from figure 1, where the covariance matrix Σ 3 ∈ R N 3 ×N 3 with N 3 = 3211264. With equation 6, the sampling requires O(N 3^3) complexity (the cost of inversion and of finding a symmetrical factor) and obviously, this operation is computationally infeasible. Consequently, we next describe a sparse formulation of equation 6 that ensures tractability. To tackle this challenge, we propose the low rank form in equation 8 as a first step. Here, Λ 1:L ∈ R L×L, U A 1:a ∈ R m×a and U G 1:g ∈ R n×g denote the low rank form of the corresponding eigenvalues and eigenvectors (depicted in figure 2). Naturally, it follows that L = ag, N = mn and furthermore, the preserved rank L corresponds to preserving the top K and additional J eigenvalues (resulting in L ≥ K, L = ag = K + J). Figure 3: Illustration of algorithm 1. A low rank approximation on the Kronecker factored eigendecomposition that preserves the Kronecker structure in the eigenvectors constitutes steps 1 to 5. Note the difference to preserving the top L eigenvalues and corresponding eigenvectors for LRA. In our case, this results in the intractable (U A ⊗ U G) 1:L, which defies the purpose. Therefore, as seen in equation 8, the Kronecker structure in the eigenvectors as (U A 1:a ⊗ U G 1:g) is preserved. Consequently, due to the Kronecker product operation, preserving the top K eigenvalues results in L = K + J eigenvalues. Example 2: Let the matrix E be decomposed as E = U 1:6 Λ 1:6 U T 1:6 ∈ R 6×6 with the eigenvalues sorted in descending order. In this toy example, the LRA with the top 3 eigenvalues results in E 1:3 = U 1:3 Λ 1:3 U T 1:3 ∈ R 6×6 (see the notation above).
Instead, consider now the matrix E kron = (U A 1:3 ⊗U G 1:2)Λ 1:6 (U A 1:3 ⊗U G 1:2) T ∈ R 6×6. Again, say we want to preserve top 3 the eigenvalues Λ 1:3 and corresponding eigenvectors (U A 1:3 ⊗U G 1:2) 1:3, However, as, preserving the eigenvectors with the Kronecker structure in having to store U A 1:2 = u A 1 u A 2 and U G 1:2 = u G 1 u G 2. Consequently, additional eigenvalue Λ 4 has to be saved in order to fulfill the definition of a Kronecker product E kron 1: T ∈ R 6×6. In summary, preserving top K eigenvalues in other J eigenvalues, which ensures the memory-wise tractability when performing LRA on large matrices. Then, how do we compute a low rank approximation that preserves Kronecker structures in eigenvectors? For this computation we propose algorithm 1 as an algorithmic contribution (also illustrated in figure 3). Let us start with a definition on indexing rules of Kronecker factored diagonal matrices. Definition 2: For diagonal matrices S A ∈ R n×n and S G ∈ R m×m, the Kronecker product of Λ = S A ⊗ S G ∈ R mn×mn is given by Λ i = s αβ s γζ, where the indices i = m(β − 1) + ζ with β ∈ {1, · · ·, m} and ζ ∈ {1, · · ·, n}. Then, given i and m, β = int(i m) + 1 and given β, m, and i, ζ = i − m(β − 1). Here, int(·) is an operator that maps its input to lower number integer. Notations in algorithm 1 are also depicted in figure 2. Now we explain with a toy example below. Example 3: For explaining algorithm 1, the toy example can be revisited. Firstly, as we preserve top 3 eigenvalues, i ∈ {1, 2, 3} which are indices of eigenvalues Λ 1:3 (line 1). Then, using line 2, β ∈ {1, 2} and ζ ∈ {1, 2} can be computed using definition 2. This relation holds as Λ is computed from S A ⊗ S G, and thus, U A and U G are their corresponding eigenvectors respectively. In line 3, we keep U A 1:2 and U G 1:2 using β and ζ. Again, in order to fulfill the Kronecker product operation, we use line 4 to find the eigenvalues j ∈ {1, 2, 3, 4}, and then preserve Λ 1:4. As explained, this has ed in saving top 3 and additional 1 eigenvalues. Algorithm 1 provides the generalization of this and even if eigendecomposition does not come with a descending order, the same logic trivially applies. The incorporation of prior or regularization terms also follows without any additional approximation. Sampling: A key benefit of the proposed LRA is that now, sampling from the given covariance (equation 6 with the low rank form in equation 8; equation 9 with an incorporation of priors) only involves the inversion of a L × L matrix (in offline settings) and matrix multiplications of smaller Kronecker factored matrices or diagonal matrices during a full Bayesian analysis. To this end, we derive the analytical form of the sampler in section B.2 which makes the sampling computations feasible. This enables us to bound the intractable complexity of figure 1 where we first show that IM of DNNs tend to be sparse in its spectrum (similar to the findings of). With this insight we propose to represent the parameter posterior in a sparse information form which is visualized with its graphical interpretations. From IM of EFB, we apply our LRA that weakens the strengths of weak nodes (diagonals of IM) and links (off-diagonals) in a preserving fashion. Then, a diagonal correction can be added to keep the information of each nodes exact. A key benefit is that the sampling computations can be achieved in a memory-wise feasible way. Algorithm 2 shows the overall procedures. 
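To make the selection rule of algorithm 1 concrete, the following is a minimal NumPy sketch of the structure-preserving LRA described in examples 2 and 3; the function name and the 0-based index convention are ours and not taken from the authors' implementation. Given the eigenvector factors U A, U G and the (rescaled) eigenvalues, it keeps the columns of U A and U G touched by the top-K eigenvalues and then retains every eigenvalue spanned by the kept columns, so that L = a·g ≥ K.

```python
import numpy as np

def lra_kron_structure(U_A, U_G, lam, K):
    """Structure-preserving low-rank approximation (sketch of algorithm 1).

    U_A (n x n) and U_G (m x m) are eigenvector factors; lam (length m*n) holds the
    eigenvalues ordered as in definition 2, i.e. lam[i] pairs column i // m of U_A
    with column i % m of U_G (0-based version of i = m(beta - 1) + zeta).
    """
    m = U_G.shape[0]
    top = np.argsort(lam)[::-1][:K]               # indices of the top-K eigenvalues
    beta = sorted(set(int(i) // m for i in top))  # U_A columns touched by them
    zeta = sorted(set(int(i) % m for i in top))   # U_G columns touched by them
    U_A_low, U_G_low = U_A[:, beta], U_G[:, zeta]
    # keep every eigenvalue spanned by the kept columns, so L = a * g >= K
    keep = [b * m + z for b in beta for z in zeta]
    return U_A_low, U_G_low, lam[keep]
```

Under the ordering of example 3 (m = 2, n = 3, K = 3), this returns two columns of U A, two columns of U G and L = 4 eigenvalues, i.e. the top 3 plus one additional eigenvalue, exactly as described above.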
Further note that, as the IM is estimated after training, our method can be applied to existing architectures. EFB is also computed in a different way, so that our EFB does not require a batch assumption for taking expectations, and the scheme is cheaper since the eigenvalue decompositions of A i−1 and G i are computed only once. Computing the diagonal correction term also does not involve data. As a result, our approach yields a sparse information form of MND where the IM has a low rank eigendecomposition plus diagonal structure that preserves the Kronecker structure in the eigenvectors (shown above; prior and scaling terms are omitted to keep the notation uncluttered; Ŵ IV MAP is the information vector associated to the proposed IM). Since this formulation of model uncertainty has not been studied before, we provide theoretical results in section C for further insights and justifications. Sparse Information Filters: A similar idea of sparsifying the information matrix while keeping the diagonals accurate can be found in sparse information filters. Here, Bayesian tracking is realized in the information form of MND instead of its canonical counterpart (Kalman Filters). As this leads to inefficiency in marginalization, sparsity is introduced while keeping the diagonals accurate . A main difference, however, is that DNNs typically have higher dimensions and a sparse structure in the spectrum (eigenvalues), in contrast to the parameter spaces of SLAM problems. Thus, we propose to explore Kronecker factorization and induce spectral sparsity or LRA respectively. Approximation of the Hessian: The Hessian of DNNs is prohibitively large as its size is quadratic in the number of parameters. For this problem an efficient approximation is a layer-wise Kronecker factorization, which has demonstrated notable scalability . In a recent extension, the eigenvalues of the Kronecker factored matrices are re-scaled so that the diagonal variance in its eigenbasis is exact. The work demonstrates a provable method of achieving higher accuracy. Yet, as this is harmed by inaccurate estimates of the eigenvectors, we further correct the diagonals in the parameter space. Laplace Approximation: Instead of methods rooted in variational inference (Hinton & van) and sampling , we build upon LA as a practical inference framework. Recently, diagonal and Kronecker-factored approximations to the Hessian have been applied to LA by Ritter et al. (2018a). The authors have further proposed to use LA in continual learning (b), and demonstrate competitive results by significantly outperforming their benchmarks . Building upon Ritter et al. (2018a) for approximate inference, we propose to use a more expressive posterior distribution than the matrix normal distribution. In the context of variational inference, SLANG shares a similar spirit to ours in using a low-rank plus diagonal form of the covariance, where the authors show the benefits of low-rank approximation in detail. Yet, SLANG is different to ours as it does not explore Kronecker structures and requires changes in the training procedure. Dimensionality Reduction: A vast literature is available for dimensionality reduction beyond principal component analysis and singular value decomposition . To our knowledge though, dimensionality reduction of a Kronecker factored eigendecomposition that maintains the Kronecker structure of the eigenvectors has not been studied before. Thus, we propose algorithm 1 and further provide its theoretical properties in section C.
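Before turning to the experiments, the diagonal correction mentioned above can be sketched in a few lines; it exploits the fact that the diagonal of (U A ⊗ U G)Λ(U A ⊗ U G) T can be evaluated without ever forming the Kronecker product (the derivation is in section B.1). The helper below is a NumPy illustration under our own naming and index ordering (consistent with definition 1), not the authors' code.

```python
import numpy as np

def diag_of_kron_eig(U_A, U_G, lam):
    """diag((U_A kron U_G) diag(lam) (U_A kron U_G)^T) without forming the Kronecker product.

    Uses diag = (U_A**2 kron U_G**2) @ lam and the identity (A kron B) vec(X) = vec(B X A^T);
    lam is assumed ordered so that index j pairs column j // g of U_A with column j % g of U_G.
    """
    a, g = U_A.shape[1], U_G.shape[1]
    Lam = lam.reshape(a, g).T            # (g, a): column-major pairing of the eigenvalues
    M = (U_G ** 2) @ Lam @ (U_A ** 2).T  # (m, n)
    return M.T.ravel()                   # entry i = alpha*m + gamma (0-based), as in definition 1

# diagonal correction of the information matrix (Proposition 1 then gives an exact diagonal):
# D = diag_exact_fisher - diag_of_kron_eig(U_A_low, U_G_low, lam_low)
```

The same routine applies unchanged to the non-square, low-rank factors U A 1:a and U G 1:g.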
An empirical study is presented with a toy regression and classification tasks across MNIST , notMNIST , CIFAR10 and SHVN data-sets. The experiments are designed to demonstrate the quality of predictive uncertainty, effects of varying LRA, the quality of approximate Hessian, and gains in reduction of computational complexity due to LRA. All experiments are implemented using Tensorflow . Predictive Uncertainty: Firstly, an evaluation on toy regression data-set is presented. This experiment has an advantage that we can not only evaluate the quality of predictive uncertainty, but also directly compare various approximations to the Hessian. For this a single-layered fully connected network with seven units in the first layer is considered. We have used 100 uniformly distributed points x ∼ U(−4, 4) and samples y ∼ N(x 3, 3 2). Visualization of predictive uncertainty is shown in figure 4. HMC , BBB , diagonal and KFAC Laplace Ritter et al. All the methods show higher uncertainty in the regimes far away from training data where BBB showing the most difference to HMC. Furthermore, Diag, KFAC and EFB Laplace predicts rather high uncertainty even within the regions that are covered by the training data. DEF variants slightly underestimate the uncertainty but produces the most comparable fit to the FB Laplace and HMC (our ground truths). We believe this is the direct effect of modelling the Hessian more accurately 3. Moreover, since the only difference between EFB and DEF Laplace is a diagonal correction term, this empirical suggest that keeping diagonals of IM exact in accurate predictive uncertainty. Effects of Low Rank Approximation: Next, we quantitatively study the effects of LRA by directly evaluating on the approximations of IM. This is because uncertainty estimation, despite being a crucial entity, are confounded from the problem itself and may not reveal the algorithmic insights to its full potential. For this, we revisit the toy regression problem and provide a direct evaluation of IM with measure on normalized Frobenius norm of error err NF in the first layer of the network. The are shown in figure 5. Here, the reduced dimension is not proportional to the ranks (e.g. many zero or close to eigenvalues). Figure 5 (a) depicts that DEF in accurate estimates on I ii regardless of the chosen dimensions L while EFB has the more approximation error, which we believe is due to inaccurate estimates of eigenvectors. KFAC on the other hand, produces the most errors on diagonal elements, which indicate that its assumption of Kronecker factorization induces crude approximation in this experiment. Regarding the off-diagonal errors EFB also outperforms KFAC and Diag estimates. Furthermore, error profile of off-diagonal error I i j also explains the principles of the LRA that as we decrease the ranks, the error increases but in a preserving manner. These can also be explained by Lemma 1 and 4 of section C which reflects the design principles of the method. Predictive Uncertainty: Next, we evaluate predictive uncertainty on classification tasks in which the proposed low-rank representation is strictly necessary. Furthermore, our goal is not to achieve the highest accuracy but evaluate predictive uncertainty. To this end, we choose classification tasks with known and unknown classes, e.g. a network is not only trained and evaluated on MNIST but also tested using notMNIST. 
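The predictive uncertainty used in these known/unknown-class tests is obtained by Monte Carlo integration over posterior samples (equation 7). A minimal sketch is given below; sample_weights and forward are placeholders for the low-rank Laplace sampler of section B.2 and the network's softmax output, respectively.

```python
import numpy as np

def predictive(x, sample_weights, forward, k_mc=100):
    """Monte Carlo approximation of the predictive distribution (equation 7)."""
    probs = np.mean([forward(x, sample_weights()) for _ in range(k_mc)], axis=0)
    # normalized predictive entropy, the out-of-distribution measure reported below
    norm_entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1) / np.log(probs.shape[-1])
    return probs, norm_entropy
```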
Note that under such tests, any probabilistic methods should report their evaluations on both known and unknown classes with the same hyperparameter settings. This is because a Bayesian Neural Network to be always highly uncertain, which may seem to work well on out-of-distribution samples but are always overestimating uncertainty, even for the correctly classified samples within the distribution similar to the train data. For evaluating predictive uncertainty on known classes, Expectation Calibration Error (ECE) has been used. As we found it more intuitive, normalized entropy is reported for evaluating predictive uncertainty on unknown classes. On MNIST-notMNIST experiments, we compare to MC-dropout , ensemble of size 15, Diag and KFAC Laplace (a). These methods are state-of-the-art baselines that have a merit of requiring no changes in the training procedure. The later is crucial for a fair comparison as we can use the same experiment settings . Regarding the architectures, LeNet with RELU and a L2 coefficient of 1e-8 has been the choice. In particular, this typically makes a neural network overconfident, and we can see the effects of model uncertainty. This architecture validates our claim as it has the parameters of size θ 3 ∈ R 3137×1024 in the 3 rd layer. Obviously, its covariance is intractable as it is quadratic in size (see figure 1). The can be found in table 1. Firstly all the methods improved significantly over the deterministic one (denoted NN). Furthermore, DEF Laplace achieved here the lowest ECE, at the same time, predicted with the highest mean entropy on out-of-distribution samples. Figure 6 shows this where our method separates between wrong and correct predictions which stems from the domain change. Further tests were performed on CIFAR10 (known) and SVHN (unknown) to see the generalization under batch normalization and data augmentation. For this, we trained a 5 layer architecture with 2 CNN and 3 FC layers. The are also reported in table 1. Similar to MNIST experiments, our method ed in a better calibration performance and out-of-distribution detection overall. Note that for Diag, KFAC and EFB Laplace, grid searches on hyperparameters were rather non-trivial here. Increasing τI had the tendency to reduce ECE on CIFAR10, but in return ed in underestimating the uncertainty on SVHN and vice versa. DEF Laplace instead, required smallest regularization hyperparameters to strike a good balance between these two objectives. We omitted dropout as using it as a stochastic regularization instead of batch normalization would in a different network and thus, comparison would be not meaningful. More implementation details are provided in section E. The proposed LRA has been imposed as a means to tackle the challenges of computational intractability of MND. To empirically access the reduction in complexity, we depict the parameter and low rank dimensions N and L respectively in table 2. As demonstrated, our LRA based sampling computations reduce the computational complexity significantly. Furthermore, this explains the necessity of LRA -certain layers (e.g. FC-1 of both MNIST and CIFAR experiments) are computationally intractable to store, infer and sample. As a , we demonstrate an alternative representation for DNNS without resorting to fully factorized and matrix normal distribution. Discussion and Limitations: Importantly, we demonstrate that when projected to different success criteria, no inference methods largely win uniformly. 
Yet these experiments also show empirical evidence that our method works in principle and compares well to the state-of-the-art. Representing layer-wise MND in a sparse information form, and demonstrating a low rank inverse/sampling computations, we show an alternative approach of designing scalable and practical inference framework. Finally, these also indicate that keeping the diagonals of IM accurate while sparsifying the off-diagonals can lead to outstanding performance in terms of predictive uncertainty and generalizes well to various data, models and even measures. For future works, we share the view that comparing different approximations to the true posterior is quite challenging for DNNs. Consequently, better metrics and benchmarks that show the benefits of model uncertainty can be an important direction. On the other hand, we also address a key limitation of our work which stems from two hypothesis: (a) when represented in information form, the spectrum of IM should be sparse, and (b) keeping the diagonals exact while sparsifying the off-diagonals should in a better estimates of model uncertainty (equivalently keeping the information content of a node exact while sparsifying the weak links between the nodes from a graphical interpretation of information matrix). While empirical evidence from prior works (; ;) along with our experiments validate these to some extent, there exists no theoretic guarantees to our knowledge. Consequently, theoretical studies that connect information geometry of DNNs and Bayesian Neural Networks can be an exciting venue of future research. Nevertheless, similar to sparse Gaussian Processes , we believe our work can be a stepping stone for sparse Bayesian Neural Networks that goes beyond approximate inference alone. We address an effective approach of representing model uncertainty in deep neural networks using Multivariate Normal Distribution, which has been thought computationally intractable so far. This is achieved by designing its novel sparse information form. With one of the most expressive representation of model uncertainty in current Bayesian deep learning literature, we show that uncertainty can be estimated more accurately than existing methods. For future works, we plan to demonstrate a real world application of this approach, pushing beyond the validity of concepts. The matrix normal distribution is a probability density function for the random variable X ∈ R n×m in matrix form. It can be parameterized with mean W MAP ∈ R n×m, scales U A ∈ R n×n and U G ∈ R m×m. It is essentially a multivariate Gaussian distribution with mean vec(W MAP) and covariance U ⊗ V. In section B, we denote this distribution with MN parameterized by W MAP, U A and U G so that p(X|W MAP, U A, U G) = MN(W MAP, U A, U G). Here, tr stands for trace operation and we omitted layer indicing i for better clarity. Refer to for more details. Information form of Multivariate Normal Distribution (MND) is a dual representation for the well known canonical form. Lets denotex = vec(X) ∈ R nm, µ = vec(W MAP)∈ R nm and Σ = I −1 ∈ R mn×nm as a random variable, mean and covariance respectively for N = mn. Then, equation 12 defines the canonical form. Now we denote its Information form in equation 13. Here,x ∈ R mn represent the random variable as well. W IV MAP = Σ −1 µ ∈ R mn and I = Σ −1 ∈ R mn×mn are information vector (denoted IV in the main text with superscript) and matrix respectively. 
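To make the duality between the two parameterizations concrete, the toy sketch below converts between the canonical form (equation 12) and the information form (equation 13) and draws a sample from each; it also illustrates why the naive route is hopeless for DNNs, since it requires an O(N^3) inversion and an O(N^3) symmetrical factor, which motivates the low-rank sampler derived in section B.2. The code is purely illustrative.

```python
import numpy as np

def sample_canonical(mu, Sigma, rng):
    # canonical form (eq. 12): x = mu + chol(Sigma) z with z ~ N(0, I)
    return mu + np.linalg.cholesky(Sigma) @ rng.standard_normal(mu.shape)

def sample_information(w_iv, I_mat, rng):
    # information form (eq. 13): Sigma = I^{-1}, mu = Sigma @ w_iv;
    # both the inversion and the Cholesky factor cost O(N^3) here
    Sigma = np.linalg.inv(I_mat)
    return Sigma @ w_iv + np.linalg.cholesky(Sigma) @ rng.standard_normal(w_iv.shape)
```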
We denote the information form as N −1 which is completely described by an information vector and matrix. Information matrix is also widely known as precision matrix. in Simultaneous Localization and Mapping (SLAM) literature provides a good overview and explanations. Directly evaluating U A ⊗ U G may not be computationally feasible for modern DNNs. Therefore, we derive the analytical form of the diagonal elements for (T without having to fully evaluate it. Let U A ∈ R n×n and U G ∈ R m×m be the square matrices. Λ ∈ R mn×mn is a diagonal matrix by construction. V = U A ⊗ U G ∈ R mn×mn is a Kronecker product with elements v i, j with i = m(α − 1) + γ and j = m(β − 1) + ζ (from definition of Kronecker product). Then, the diagonal entries of (U A ⊗ U G)Λ(U A ⊗ U G) T can be computed as follows: Derivation: As a first step of the derivation, we express (A ⊗ B)Λ(A ⊗ B) T in the following form: Then, diag(UU being again a diagonal matrix. Therefore, u i j = v i, j Λ j due to the multiplication with a diagonal matrix from a right hand side. Substituting back these in ( 2 which completes the derivation. Formulating equation 14 for the non-square matrices (which after a low rank approximation) such as U A 1:a ∈ R n×a and U G 1:g ∈ R m×g and paralleling this operation are rather trivial and hence, they are omitted. For a full Bayesian analysis which is approximated by a Monte Carlo integration, sampling is a crucial operation (see equation 7) for computing predictive uncertainty. We start by stating the problem. Problem statement: Consider drawing samples vec(W s) ∈ R nm from our sparse information form: Typically, drawing such samples vec(W s) from a canonical form of MND requires finding a symmetrical factor of the covariance matrix (e.g. Chloesky decomposition) which is cubic in cost O(N 3). Even worse, when represented in an information form as in equation 16, it requires first an inversion of information matrix and then the computation of a symmetrical factor which overall constitutes two operations of cost O(N 3). Clearly, if N lies in a high dimension such as 1 million, even storing is obvious not feasible, let alone the sampling computations. Therefore, we need a sampling computation that (a) keeps the Kronecker structure while sampling so that first, the storage is memory-wise feasible, and then (b) the operations that require cubic cost such as inversion, must be performed in the dimensions of low rank L instead of full parameter dimensions N. We provide the solution below. Analytical solution: Let us define X l ∈ R mn and X s ∈ R m×n as the samples from a standard Multivariate Normal Distribution in equation 17 where we denote the followings: 0 nm ∈ R nm, I mn ∈ R mn×mn, 0 n×m ∈ R n×m, I n ∈ R n×n and I m ∈ R m×m. Note that these sampling operations are cheap. Furthermore, we denote W l = vec(W s) ∈ R mn, θ MAP = vec(W MAP) ∈ R mn as a sample from equation 16 and its mean as a vector respectively. We also note that Λ 1:L ∈ R L×L and D ∈ R mn×mn are the low ranked form of the re-scaled eigen-values and the diagonal correction term as previously defined. U A 1:a ∈ R m×a and U G 1:g ∈ R n×g are the eigenvectors of low ranked eigen-basis so that m ≥ a, n ≥ g and L = ag. Then, the samples of 16 can be computed analytically as 4: Firstly, the symmetrical factor F c ∈ R mn×mn in equation 18 is a function of matrices that are feasible to store as they involve diagonal matrices or small matrices in a Kronecker structure. 
Furthermore, 4 We show how the Kronecker structure of F c can be exploited to compute F c X l in the derivation only. Consequently, the matrices in equation 18 are defined as C ∈ R L×L, (C −1 + V T V) ∈ R L×L and I L ∈ R L×L. In this way, the two operations namely Cholesky decomposition and inversion that are cubic in cost O(N 3) are reduced to the low rank dimension L with complexity O(L 3). Derivation: Firstly, note that sampling from a standard multivariate Gaussian for X l or X s is computationally cheap (see equation 17). Given a symmetrical factor for the covariance Σ = F c F c T (e.g. by Cholesky decomposition), samples can be drawn via θ MAP + F c X l as depicted in equation 18. Our derivation involves finding such symmetrical factor for the given form of covariance matrix while exploring the Kronecker structure for the sampling computations to bound the complexity as O(L 3). Let us first reformulate the covariance (inverse of information matrix) as follows. Here, we define: 1:L. Now, a symmetrical factor for Σ = F c F c T can be found by exploiting the above structure. We let W c be a symmetrical factor for VV T + I nm so that is the symmetrical factor of Σ. Following the work of the symmetrical factor W c can be found using equations below. Note that A and B are Cholesky decomposed matrices of V T V ∈ R L×L and V T V + I L ∈ R L×L respectively. As a first , this operation is bounded by complexity O(L 3) instead of the full parameter dimension N. Now the symmetrical factor for Σ can be expressed as follows. Woodbury's Identity is used here. Now, it follows simply by substitution: This completes the derivation of equation 18. As a , the inversion operation is bounded by complexity O(L 3). Furthermore, the derivation constitutes smaller matrices U A 1:a and U G 1:g or diagonal matrices D and I mn which can be stored as vectors. In short the complexity has significantly reduced. Now we further derive computations that exploits rules of Kronecker products. Consider: Then, it follows by defining inverted matrix L c = (We further reduce this by evaluating D L×L . We note that this multiplication operation is memory-wise feasible. Now, we map X l D to matrix normal distribution by an unvec(·) operation so that. Using a widely known relation for Kronecker product that is - Note that matrix multiplication is performed with small matrices. Repeating a similar procedure as above we obtain the equation below for This completes the derivation. Lastly, we provide a remark below to summarize the main points. Remark: We herein presented derivation is to sample from equation 16, a low-rank and information formulation of MND. This analytical solution ensures (a) for a matrix inversion, (c) storage of small matrices U G 1:g, U A 1:a, a diagonal matrix D and identity matrices and finally (d) matrix multiplications that only involve these matrices. This is a direct benefit of our proposed LRA that preserves Kronecker structure in eigenvectors. Some of the interesting theoretical properties are as follows with proofs provided in section D. A theoretical of adding a diagonal correction term is captured below. This relates to the work presented in section 2.2 where a diagonal correction term is added to EFB estimates of IM. Lemma 1: Let I ∈ R N×N be the real Fisher information matrix, and let I def ∈ R N×N and I efb ∈ R N×N be the DEF and EFB estimates of it respectively. It is guaranteed to have I − I efb F ≥ I − I def F. 
Corollary 1: Let I kfac ∈ R N×N and I def ∈ R N×N be KFAC and our estimates of real Fisher information matrix I ∈ R N×N respectively. Then, it is guaranteed to have I − I kfac F ≥ I − I def F. Remark: For interested readers, find the proof I − I k f ac F ≥ I − I efb F in. or vice versa. Yet, our proposed approximation yield better estimates than KFAC in the information form of MND. To our knowledge, the proposed sparse IM have not been studied before. Therefore, we theoretically motivate its design and validity for better insights. The analysis can be found below. Firstly, we study the effects of preserving Kronecker structure in eigenvectors. We define: as a low rank EFB estimates of true Fisher that preserves top K eigenvalues. Similarly,Î top 1:L can be defined which preserves top L eigenvalues. In contract, our proposal to preserve Kronecker structure in eigenvectorsÎ 1:L is denoted as shown below. Now, we provide our analysis with Lemma 2. Lemma 2: Let I ∈ R N×N be the real Fisher information matrix, and letÎ andÎ 1:L ∈ R N×N be the low rank estimates of I of EFB obtained by preserving top K, L and top K plus additional J ing in L eigenvalues. Here, we define K < L. Then, the approximation error of I 1:L is bounded as follows: Remark: This bound provides an insight that if preserving top L eigenvalues in prohibitively too large covariance matrix, our LRA provides an alternative to preserving top K eigenvalues given that K < L. In practise, note thatÎ 1:L is a memory-wise feasible option as we formulatê T which preserves the Kronecker structure in eigenvectors. This can be a case where evaluating (U A 1:a ⊗ U G 1:g) or (U A 1:a ⊗ U G 1:g) 1:K is not feasible to store. ∈ R N×N is a nondegenerate covariance matrix if the diagonal correction matrix D and T are both symmetric and positive definite. This condition is satisfied if i for all i ∈ {1, 2, · · ·, d} and with Λ 1:L 0. Remark: This Lemma comments on validity of ing parameter posterior and proves that sparsifying the matrix can lead to a valid non-degenerate covariance if two conditions are met. As non-degenerate covariance can have a uniquely defined inverse, it is important to check these two conditions. We note that searching the rank can be automated with off-line computations that does not involve any data. Thus, it does not introduce significant overhead. In case D does not turn out to be, there are still several techniques that can deal with it. We recommend eigen-value clipping or finding nearest positive semi-definite matrices . Lastly, D −1 does not get numerically unstable when we add a prior precision term and a scaling factor (ND + τI) −1. Let I ∈ R N×N be the real Fisher information matrix, and letÎ def ∈ R N×N, I efb ∈ R N×N and I kfac ∈ R N×N be the low rank DEF, EFB and KFAC estimates of it respectively. Then, it is guaranteed to have diag Furthermore, if the eigenvalues of I def contains all non-zero eigenvalues of I def, it follows: Remark: Lemma 4 shows the optimally in capturing the diagonal variance while indicating that our approach also becomes effective in estimating off-diagonal entries if IM contains many close to zero eigenvalues. Validity of this assumption has been studied by where it is shown that the Hessian of overparameterized DNNs tend to have many close-to-zero eigenvalues. Intuitively, from a graphical interpretation of IM, diagonal entries indicate information present in each nodes and off-diagonal entries are links of these nodes (depicted in figure 1). 
Our sparsification scheme reduces the strength of the weak links (their numerical values) while keeping the diagonal variance exact. This is a of the diagonal correction after LRA which exploits spectrum sparsity of IM. D.1 Diagonal correction leads to more accurate estimation of Information matrix Proposition 1: Let I ∈ R N×N be the real Fisher information matrix, and let I def ∈ R N×N and I def ∈ R N×N be our estimates of it with rank d and k such that k < d. Their diagonal entries are equal that is I ii = I def ii =Î def ii for all i = 1, 2,..., N. proof: The proof trivially follows from the definitions of I ∈ R N×N, I def ∈ R N×N andÎ def ∈ R N×N. As the exact Fisher is an expectation on outer products of back-propagated gradients, its diagonal entries equal I ii = E δθ 2 i for all i = 1, 2,..., N. In the case of full ranked I de f, substituting T ii in equation 32 for all i = 1, 2,..., N. T ii which in equation 33 for all i = 1, 2,..., N. Therefore, we have I ii = I def ii =Î def ii for all i = 1, 2,..., N. Lemma 1: Let I ∈ R N×N be the real Fisher information matrix, and let I def ∈ R N×N and I efb ∈ R N×N be the DEF and EFB estimates of it respectively. It is guaranteed to have I − I efb F ≥ I − I def F. proof: Let e 2 = A − B 2 F define a squared Frobenius norm of error between the two matrices A ∈ R N×N and B ∈ R N×N. Now, e 2 can be formulated as, The first term of equation 34 belongs to errors of diagonal entries in B wrt A whilst the second term is due to the off-diagonal entries. Now, it follows that, since by definition, I efb and I def have the same off-diagonal terms. Corollary 1: Let I k f ac ∈ R N×N and I def ∈ R N×N be KFAC and our estimates of real Fisher Information matrix I ∈ R N×N respectively. Then, it is guaranteed to have I − I k f ac F ≥ I − I def F. Find the proof I − I k f ac F ≥ I − I efb F in. Lemma 2: Let I ∈ R N×N be the real Fisher information matrix, and letÎ andÎ 1:L ∈ R N×N be the low rank estimates of I of EFB obtained by preserving top K, L and top K plus additional J ing in L eigenvalues. Here, we define K < L. Then, the approximation error of I 1:L is bounded as follows: For the second part of the proof, lets recap that Lemma 2 (Wely's idea on eigenvalue perturbation) that removing zero eigenvalues does not affect the approximation error in terms of Frobenius norm. This then implies that off-diagonal elements ofÎ def and I efb are equivalent. Then,: 2 ii = 0 according to proposition 1 for all the elements i. KFAC library from Tensorflow 5 was used to implement the Fisher estimator for our methods and the works of Ritter et al. (2018a). Note that empirical Fisher usually is not a good estimates as it is typically biased and therefore, we did not use it. KFAC library offers several estimation modes for both fully connected and convolutional layers. We have used the gradients mode for KFAC Fisher estimation (which is also crucial for our pipelines) whereas the exact mode was used for diagonal approximations. We did not use the exponential averaging for all our experiments as well as the inversion scheme in the library. However, when using it in practice, it might be useful especially if there are too many layers that one cannot access convergence of the Fisher estimation. We have used NVIDIA Tesla for grid searching the parameters of Diag and KFAC Laplace, and 1080Ti for all other experiments. Apart from the architecture choices discussed in section 4, the training details are as follows. 
A gradient descent optimizer from tensorflow has been used with a learning rate of 0.001 with zero prior precision or L2 regularization coefficient (τ = 0.2 for KFAC, τ = 0.45 for Diag, N = 1 and τ = 0 for both FB and DEF have been used). Mean squared error (MSE) has been used as its loss function. Interestingly, the exact block-wise Hessian and their approximations for the given experimental setup contained zero values on its diagonals. This can be interpreted as zero variance in information matrix, meaning no information, ing in information matrix being degenerate for the likelihood term. In such cases, the covariance may not be uniquely defined . Therefore, we treated these variances deterministic, making the information matrix non-degenerate (motivated from Lemma 3). Similar findings are interestingly reported by. More importantly, we present a detailed analysis to avoid misunderstanding about our toy dataset experiments. As a starting remark, a main advantage of this toy regression problem is that it simplifies the understandings of on-going process, in lieu of sophisticated networks with a large number of parameters. Typically, as of , Ritter et al. (2018a), , or even originating back to Gaussian processes literature, this example has been used to check the predictive uncertainty by qualitatively evaluating on whether the method predicts high uncertainty in the regimes of no training data. However, a drawback exists: no quantitative analysis has been reported to our knowledge other than qualitatively comparing it to community wide accepted ground truth such as Hamiltonian Monte Carlo Sampling , and LA using KFAC and Diag seem to be sensitive to hyperparameters in this dataset which makes the comparison difficult. This is illustrated in figure 7 where we additional introduce Random which is just a user-set τI for covariance estimation in order to demonstrate this. Qualitatively analyzing from the first look, all the methods look very similar in delivering high uncertainty estimates in the regimes of no training data. Here, we note that the same hyperparameter settings have been used for Diag, KFAC and FB Laplace whereas the user-set τ = 7 has been found for Random. This agrees to the discussions of Ritter et al. (2018a) as KFAC ed in less τ when compared to Diag Laplace. However, we also observed that without the approximation step of equation 3 (denoted OKF), using the same hyper parameter as above ed in visible over-prediction of uncertainty and inaccurate estimates on the prediction. This is shown in figure 8. Again, tuning the parameter to a higher precision τ, similar behavior to figure 7 can be reproduced. This can be analyzed by visualizing the covariance of KFAC and OKF. As it can be seen, in this experiment settings, figure 8 shows that equation 3 damps the magnitude of estimated covariance matrix. A possible explanation is that if the approximate Hessian is degenerate, then small τI places a big mass on areas of low posterior probabilities for some network parameters with no information (zero variance and correlations in the approximate Hessian). This can be seen in figure 8 part (a) where the approximate Hessian contains 3 parameters with exactly zero diagonal elements and zeros in its off-diagonal elements. If one tries to add a small τ = 0.001 here, then the covariance of these parameters get close to its inverse τ −1 = 1000 as shown in figure 8 part (c). This would in return in over prediction of uncertainty and inaccurate predictions which explains figure 8 part (a). 
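For reference, the toy regression setup analyzed here (uniformly distributed inputs x ∼ U(−4, 4), targets y ∼ N(x 3, 3 2) and a single-layer network with seven hidden units, trained with MSE and a gradient descent optimizer at learning rate 0.001) can be reproduced roughly along the following lines; the Keras wrapper is only a convenience and differs from the plain TensorFlow training loop used in the paper.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x = rng.uniform(-4.0, 4.0, size=(100, 1)).astype("float32")          # x ~ U(-4, 4)
y = (x ** 3 + rng.normal(0.0, 3.0, size=x.shape)).astype("float32")  # y ~ N(x^3, 3^2)

model = tf.keras.Sequential([
    # seven hidden units; the activation is not specified in the text, ReLU is an assumption
    tf.keras.layers.Dense(7, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001), loss="mse")
model.fit(x, y, epochs=2000, verbose=0)
```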
Another interesting experiments are studying the effects of dataset size to number of parameters. For this, we have increased the dataset size to 100 in oppose to 20. Again, we now compare the approximate Hessian by visualizing them. Notably, at using 100 data points ed in more number of zero diagonal entries and corresponding rows and columns. This is due to over parameterization of the model which in under determined Hessian. These insights hint for the followings. Accurately estimating the Hessian while forcing its estimates non-degeneracy via not considering zero eigenvalues for this data and model can lead to less sensitivity to its hyperparameters or τ in particular. Secondly, further increasing or decreasing the ratio of data points to number of parameters change the approximate Hessian (similarly found for estimates of Fisher) changes its structure, and can lead to under-determined approximation (therefore, changing its loss landscape). Finally, if the Hessian is under-determined, hyperparameters τ affects the ing predictive uncertainty (or covariance) if its magnitude significantly differs (and in case of KFAC). However, as more detailed experimental analysis is outside the scope of the paper, can be an interesting future work to further analyze the relation between the hyperparameters, their probabilistic interpretation and ing loss landscape of neural network. We have used Numpyro for the implementations of HMC. We have used 50000 MC samples to generate the in order to ensure the convergence. For the implementation of Bayes By Backprop we have used an open-source implementation https://github.com/ThirstyScholar/ bayes-by-backprop for which a similar experiment settings are implemented where the Gaussian noise is sampled in a batch initially, and a symmetric sampling technique is deployed. We note that the number of data samples and network architectures are different. Furthermore, we have used 10000 iterations to ensure convergence of the network. Most of the implementations for MNIST and CIFAR10 experiments were taken from Tensorflow tutorials 6 including the network architectures and training pipelines if otherwise stated in the main text. This is in line of argument that our method can be directly applied to existing, and well trained neural networks. For MNIST experiments, the architecture choices are the followings. Firstly, no down-scaling has been performed to its inputs. The architecture constitutes 2 convolutional layers followed by 2 fully connected layer (each convolutional layer is followed by a pooling layer of size 2 by 2, and a stride 2). For flattening from the second convolutional layer to the first fully connected layer, a pooling operation of 49 by 64 has been naturally used. RELU activation have been used for all the layers except the last layer which computes the softmax output. Dropout has been applied to the fully connected layer with dropout rate of 0.6 after a grid search (explained in section E.3.1). Regarding the loss functions, cross entropy loss has been used with ADAM as its optimizer and learning rate of 0.001. An important information is the size of each layers. The first layer constitutes 32 filters with 5 by 5 kernel, followed by the second layer with 64 filters and 5 by 5 kernel. The first fully connected layer then constitutes 1024 units and the last one ends with 10 units. 
We note that this validates our method on memory efficiency, as the third layer has a large number of parameters, and its covariance, being quadratic in its size, cannot be stored on our utilized GPUs. Regarding the architecture selection of the CIFAR10 experiments, no down-scaling of the inputs has been done. The chosen architecture is composed of 2 convolutional layers followed by 3 fully connected layers. Pooling layers of size 3 by 3 with strides 2 have been applied to the outputs of the convolutional layers. Obviously, the last convolutional layer is pooled to match the input size of the following fully connected layers. Batch normalization has been applied to the outputs of each convolutional layer before pooling, with bias 1, α of 0.001/9.0 and β of 0.75 (the notation differs from the main text and follows that of the tensorflow library). A weight decay factor of 0.004 has been used, and the network was again trained with a cross entropy loss, now with stochastic gradient descent and a learning rate of 0.001. Again, the most relevant settings are: the first layer constitutes a 5 by 5 kernel with 64 filters. This is then again followed by the same (but as the input to CIFAR10 is RGB, the second layer naturally has a larger number of parameters). Units of 384, 192, and 10 have been used for the fully connected layers, in that order. Lastly, random cropping, flipping, brightness changes and contrast have been applied as the data augmentation scheme. Similar to the MNIST experiments, the necessity of LRA is captured in CIFAR10 as well. Unlike Ritter et al. (2018a) we did not artificially augment the data for the MNIST experiments because the usual training pipeline did not require it. For our low rank approximation, we have always used the maximum rank we could fit, after removing all the zero eigenvalues and checking the conditions from Lemma 3. Lastly, we have used 1000 Monte-Carlo samples for MNIST, and 100 samples for the CIFAR10 and toy regression experiments. The implementation of the deep ensemble was kept rather simple by not using adversarial training, but we combined 15 networks that were trained with different initializations. The same architecture and training procedure were used for all. Note that CIFAR10 experiments with similar convolutional architectures were not present in the above works to the best of our knowledge. Similar findings to ours, that deep ensembles perform similarly to MC-dropout, have been reported . For dropout, we have tried a grid search of dropout probabilities of 0.5 and 0.8, and have reported the best results. For the methods based on Laplace approximation, we have performed a grid search on the hyperparameters N and τ, where 100 values of τ were tried using a known-class validation set. Note that for every method and for different data-sets, different values of τI were required to give a reasonable accuracy. The starting points of the grid search were determined based on whether the mean predictions obtained similar accuracy to the deterministic counterparts. The figures below are examples on MNIST where the minimum ECE points were selected and reported.
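The hyperparameter selection sketched above, a grid search over τ that picks the point of minimum ECE on the known-class validation set, can be summarized as follows; the binned ECE implementation is a standard one and is not taken from the paper's code.

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Standard binned Expected Calibration Error."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return err

def select_tau(tau_grid, evaluate):
    """Pick tau with minimum validation ECE; evaluate(tau) -> (probs, labels) is a placeholder."""
    scores = {tau: ece(*evaluate(tau)) for tau in tau_grid}
    return min(scores, key=scores.get)
```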
An approximate inference algorithm for deep learning
541
scitldr
The ever-increasing size of modern datasets combined with the difficulty of obtaining label information has made semi-supervised learning of significant practical importance in modern machine learning applications. In comparison to supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data. In order to utilize manifold information provided by unlabeled data, we propose a novel regularization called the tangent-normal adversarial regularization, which is composed by two parts. The two parts complement with each other and jointly enforce the smoothness along two different directions that are crucial for semi-supervised learning. One is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against the noise causing the observed data deviating from the underlying data manifold. Both of the two regularizers are achieved by the strategy of virtual adversarial training. Our method has achieved state-of-the-art performance on semi-supervised learning tasks on both artificial dataset and practical datasets. The recent success of supervised learning (SL) models, like deep convolutional neural networks, highly relies on the huge amount of labeled data. However, though obtaining data itself might be relatively effortless in various circumstances, to acquire the annotated labels is still costly, limiting the further applications of SL methods in practical problems. Semi-supervised learning (SSL) models, which requires only a small part of data to be labeled, does not suffer from such restrictions. The advantage that SSL depends less on well-annotated datasets makes it of crucial practical importance and draws lots of research interests. The common setting in SSL is that we have access to a relatively small amount of labeled data and much larger amount of unlabeled data. And we need to train a classifier utilizing those data. Comparing to SL, the main challenge of SSL is how to make full use of the huge amount of unlabeled data, i.e., how to utilize the marginalized input distribution p(x) to improve the prediction model i.e., the conditional distribution of supervised target p(y|x). To solve this problem, there are mainly three streams of research. The first approach, based on probabilistic models, recognizes the SSL problem as a specialized missing data imputation task for classification problem. The common scheme of this method is to establish a hidden variable model capturing the relationship between the input and label, and then applies Bayesian inference techniques to optimize the model BID10; BID21. Suffering from the estimation of posterior being either inaccurate or computationally inefficient, this approach performs less well especially in high-dimensional dataset BID10.The second line tries to construct proper regularization using the unlabeled data, to impose the desired smoothness on the classifier. One kind of useful regularization is achieved by adversarial training BID8, or virtual adversarial training (VAT) when applied to unlabeled data BID15. Such regularization leads to robustness of classifier to adversarial examples, thus inducing smoothness of classifier in input space where the observed data is presented. 
The input space being high dimensional, though, the data itself is concentrated on a underlying manifold of much lower dimensionality BID2 BID17; BID22. Thus directly performing VAT in input space might overly regularize and does potential harm to the classifier. Another kind of regularization called manifold regularization aims to encourage invariance of classifier on manifold BID25 BID0 BID18 BID11 BID22, rather than in input space as VAT has done. Such manifold regularization is implemented by tangent propagation BID25 BID11 or manifold Laplacian norm BID0 BID13, requiring evaluating the Jacobian of classifier (with respect to manifold representation of data) and thus being highly computationally inefficient. The third way is related to generative adversarial network (GAN) BID7. Most GAN based approaches modify the discriminator to include a classifier, by splitting the real class of original discriminator into K subclasses, where K denotes the number of classes of labeled data BID24 BID19 BID5 BID20. The features extracted for distinguishing the example being real or fake, which can be viewed as a kind of coarse label, have implicit benefits for supervised classification task. Besides that, there are also works jointly training a classifier, a discriminator and a generator BID14.Our work mainly follows the second line. We firstly sort out three important assumptions that motivate our idea:The manifold assumption The observed data presented in high dimensional space is with high probability concentrated in the vicinity of some underlying manifold of much lower dimensionality BID2 BID17; BID22. We denote the underlying manifold as M. We further assume that the classification task concerned relies and only relies on M BID22. The noisy observation assumption The observed data x can be decomposed into two parts as x = x 0 + n, where x 0 is exactly supported on the underlying manifold M and n is some noise independent of x 0 BID1 BID21. With the assumption that the classifier only depends on the underlying manifold M, the noise part might have undesired influences on the learning of the classifier. The semi-supervised learning assumption If two points x 1, x 2 ∈ M are close in manifold distance, then the conditional probability p(y|x 1) and p(y|x 2) are similar BID0 BID22 BID18. In other words, the true classifier, or the true condition distribution p(y|X) varies smoothly along the underlying manifold M. 
Figure 1: Illustration of the tangent-normal adversarial regularization. x = x0 + n is the observed data, where x0 is exactly supported on the underlying manifold M and n is the noise independent of x0. r is the adversarial perturbation along the tangent space to induce invariance of the classifier on the manifold; r ⊥ is the adversarial perturbation along the normal space to impose robustness on the classifier against the noise n. Inspired by the three assumptions, we introduce a novel regularization called the tangent-normal adversarial regularization (TNAR), which is composed of two parts. The tangent adversarial regularization (TAR) induces the smoothness of the classifier along the tangent space of the underlying manifold, to enforce the invariance of the classifier along the manifold. And the normal adversarial regularization (NAR) penalizes the deviation of the classifier along directions orthogonal to the tangent space, to impose robustness on the classifier against the noise carried in the observed data. The two regularization terms enforce different aspects of the classifier's smoothness and jointly improve the generalization performance, as demonstrated in Section 4. To realize our idea, we have two challenges to conquer: how to estimate the underlying manifold and how to efficiently perform TNAR. For the first issue, we take advantage of generative models equipped with an extra encoder, to characterize the coordinate chart of the manifold BID11 BID13 BID20. More specifically, in this work we choose the variational autoencoder (VAE) BID9 and the localized GAN to estimate the underlying manifold from data. For the second problem, we develop an adversarial regularization approach based on virtual adversarial training (VAT) BID16. Different from VAT, we perform virtual adversarial training in the tangent space and the normal space separately, as illustrated in Figure 1, which leads to a number of new technical difficulties, and we will elaborate the corresponding solutions later. Compared with the traditional manifold regularization methods based on tangent propagation BID25 BID11 or the manifold Laplacian norm BID0 BID13, our realization does not require explicitly evaluating the Jacobian of the classifier. All we need is to calculate the derivative of a matrix-vector product, which only costs a few times of back or forward propagation of the network. We denote the labeled and unlabeled dataset as D l = {(x l, y l)} and D ul = {x ul} respectively, thus D := D l ∪ D ul is the full dataset. The output of the classification model is written as p(y|x, θ), where θ denotes the model parameters to be trained. We use (·, ·) to represent the supervised loss function.
And the regularization term is denoted as R with specific subscript for distinction. The observed space of x is written as R D. And the underlying manifold of the observed data x is written as DISPLAYFORM0 We use z for the manifold representation of data x. We denote the decoder, or the generator, as x = g(z) and the encoder as z = h(x), which form the coordinate chart of manifold together. If not stated otherwise, we always assume x and z correspond to the coordinate of the same data point in observed space R D and on manifold M, i.e., g(z) = x and h(x) = z. The tangent space of M at point DISPLAYFORM1 where J z g is the Jacobian of g at point z. The tangent space T x M is also the span of the columns of J z g. For convenience, we define J:= J z g. The perturbation in the observed space R D is denoted as r ∈ R D, while the perturbation on the manifold representation is denoted as η ∈ R d. Hence the perturbation on manifold is g(z DISPLAYFORM2 When the perturbation η is small enough for the holding of the first order Taylor's expansion, the perturbation on manifold is approximately equal to the perturbation on its tangent space, g(z + η) − g(z) ≈ J · η ∈ T x M. Therefore we say a perturbation r ∈ R D is actually on manifold, if there is a perturbation η ∈ R d, such that r = J · η. VAT BID16 ) is an effective regularization method for SSL. The virtual adversarial loss introduced in VAT is defined by the robustness of the classifier against local perturbation in the input space R D. Hence VAT imposes a kind of smoothness condition on the classifier. Mathematically, the virtual adversarial loss in VAT for SSL is DISPLAYFORM0, where the VAT regularization R vat is defined as R vat (x; θ):= max r 2 ≤ dist(p(y|x, θ), p(y|x + r, θ)), where dist(·, ·) is some distribution distance measure and controls the magnitude of the adversarial example. For simplicity, define F (x, r, θ):= dist(p(y|x, θ), p(y + r, θ)).Then R vat = max r 2 ≤ F (x, r, θ). The so called virtual adversarial example is r *:= arg max r ≤ F (x, r, θ). Once we have r *, the VAT loss can be optimized with the objective as DISPLAYFORM1 To obtain the virtual adversarial example r *, BID16 suggested to apply second order Taylor's expansion to F (x, r, θ) around r = 0 as DISPLAYFORM2 where H:= ∇ 2 r F (x, r, θ)| r=0 denotes the Hessian of F with respect to r. The vanishing of the first two terms in Taylor's expansion occurs because that dist(·, ·) is a distance measure with minimum zero and r = 0 is the corresponding optimal value, indicating that at r = 0, both the value and the gradient of F (x, r, θ) are zero. Therefore for small enough, r * ≈ arg max r 2 ≤ 1 2 r T Hr, which is an eigenvalue problem and the direction of r * can be solved by power iteration. We take advantage of generative model with both encoder h and decoder g to estimate the underlying data manifold M and its tangent space T x M. As assumed by previous works BID11 BID13, perfect generative models with both decoder and encoder can describe the data manifold, where the decoder g(z) and the encoder h(x) together serve as the coordinate chart of manifold M. Note that the encoder is indispensable for it helps to identify the manifold coordinate z = h(x) for point x ∈ M. With the trained generative model, the tangent space is given by DISPLAYFORM0, or the span of the columns of J = J z g. In this work, we adopt VAE BID9 and localized GAN to learn the targeted underlying data manifold M as summarized below. 
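To make the power-iteration step above concrete, the following is a minimal PyTorch-style sketch of how the vanilla VAT direction r* can be approximated; it is an illustration of the standard VAT procedure that TNAR builds on, not the authors' implementation, and the names model (a classifier returning logits), xi, and power_iters are assumptions introduced here.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d, eps=1e-12):
    # normalize each sample of a batch to unit L2 norm
    return d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1))) + eps)

def vat_direction(model, x, xi=1e-6, power_iters=1):
    """Approximate the dominant eigenvector of H (the direction of r*) by power iteration."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(power_iters):
        d.requires_grad_(True)
        dist = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p_clean, reduction='batchmean')
        grad_d, = torch.autograd.grad(dist, d)
        d = _l2_normalize(grad_d.detach())
    return d   # the virtual adversarial perturbation is r* ≈ eps * d for a chosen radius eps
```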
VAE VAE BID9 ) is a well known generative model consisting of both encoder and decoder. The training of VAE is by optimizing the variational lower bound of log likelihood, DISPLAYFORM1 Here p(z) is the prior of hidden variable z, and q(z|x, θ), p(x|z, θ) models the encoder and decoder in VAE, respectively. The derivation of the lower bound with respect to θ is well defined thanks to the reparameterization trick, thus it could be optimized by gradient based method. The lower bound could also be interpreted as a reconstruction term plus a regularization term BID9. With a trained VAE, the encoder and decoder are given as h(x) = arg max z q(z|x) and g(z) = arg max x q(x|z) accordingly. Localized GAN Localized GAN BID20 suggests to use a localized generator G(x, z) to replace the global generator g(z) in vanilla GAN Goodfellow et al. (2014a). The key difference between localized GAN and previous generative model for manifold is that, localized GAN learns a distinguishing local coordinate chart for each point x ∈ M, which is given by G(x, z), rather than one global coordinate chart. To model the local coordinate chart in data manifold, localized GAN requires the localized generator to satisfy two more regularity conditions: 1) locality: G(x, 0) = x, so that G(x, z) is localized around x; 2) orthogonmality: DISPLAYFORM2 is non-degenerated. The two conditions are achieved by the following penalty during training of localized GAN: DISPLAYFORM3 Since G(x, z) defines a local coordinate chart for each x separately, in which the latent encode of x is z = 0, there is no need for the extra encoder to provide the manifold representation of x. In this section we elaborate our proposed tangent-normal adversarial regularization (TNAR) strategy. The TNAR loss to be minimized for SSL is DISPLAYFORM0 The first term in Eq. is a common used supervised loss. R tangent and R normal is the so called tangent adversarial regularization (TAR) and normal adversarial regularization (NAR) accordingly, jointly forming the proposed TNAR. We assume that we already have a well trained generative model for the underlying data manifold M, with encoder h and decoder g, which can be obtained as described in Section 2.3. Vanilla VAT penalizes the variety of the classifier against local perturbation in the input space R D BID16, which might overly regularize the classifier, since the semi-supervised learning assumption only indicates that the true conditional distribution varies smoothly along the underlying manifold M, but not the whole input space R D BID0 BID22 BID18. To avoid this shortcoming of vanilla VAT, we propose the tangent adversarial regularization (TAR), which restricts virtual adversarial training to the tangent space of the underlying manifold T x M, to enforce manifold invariance property of the classifier. DISPLAYFORM0 where F (x, r, θ) is defined as in Eq.. To optimize Eq., we first apply Taylor's expansion to F (x, r, θ) so that R tangent (x; θ) ≈ max r 2 ≤,r∈TxM=Jzg(R d) 1 2 r T Hr, where the notations and the derivation are as in Eq.. We further reformulate R tangent as DISPLAYFORM1 demand r being orthogonal to only one specific tangent direction, i.e., the tangent space adversarial perturbation r. Thus the constraint J T · r = 0 is relaxed to (r) T · r = 0. And we further replace the constraint by a regularization term, DISPLAYFORM2 where λ is a hyperparameter introduced to control the orthogonality of r. Since Eq. FORMULA15 is again an eigenvalue problem, and we can apply power iteration to solve it. 
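The tangent adversarial regularization above restricts the perturbation to lie approximately on the manifold, r = J · η, while the normal-space counterpart penalizes overlap with the tangent direction. A hedged PyTorch-style sketch of both power iterations is given below; decoder, xi_t, xi_n, lam, and the other names are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def _normalize(v, eps=1e-12):
    return v / (v.flatten(1).norm(dim=1).view(-1, *([1] * (v.dim() - 1))) + eps)

def tangent_adv_perturbation(model, decoder, x, z, xi_t=1e-6, eps_t=1.0, power_iters=1):
    """TAR direction: power iteration over a latent direction eta, so g(z+eta) - g(z) ≈ J·eta ∈ T_x M."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
        x_rec = decoder(z)                                  # g(z), the on-manifold reconstruction of x
    eta = torch.randn_like(z)
    for _ in range(power_iters):
        eta = _normalize(eta).requires_grad_(True)
        r = decoder(z + xi_t * eta) - x_rec                 # a small move that stays (approximately) on the manifold
        dist = F.kl_div(F.log_softmax(model(x + r), dim=1), p_clean, reduction='batchmean')
        eta = torch.autograd.grad(dist, eta)[0].detach()
    with torch.no_grad():
        r_tan = decoder(z + eps_t * _normalize(eta)) - x_rec
    return r_tan                                            # R_tangent is then F(x, r_tan, θ)

def normal_adv_perturbation(model, x, r_tan, lam=1.0, xi_n=1e-6, eps_n=1.0, power_iters=1):
    """NAR direction: power iteration in input space with a soft penalty on overlap with r_tan."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
        r_unit = _normalize(r_tan)
    d = torch.randn_like(x)
    for _ in range(power_iters):
        d = _normalize(d).requires_grad_(True)
        dist = F.kl_div(F.log_softmax(model(x + xi_n * d), dim=1), p_clean, reduction='batchmean')
        overlap = ((d * r_unit).flatten(1).sum(1) ** 2).mean()   # soft version of the orthogonality constraint
        d = torch.autograd.grad(dist - lam * overlap, d)[0].detach()
    return eps_n * _normalize(d)                            # R_normal is then F(x, r_norm, θ)
```

The two regularizers are then combined with the supervised loss (and the entropy term introduced below) to form the full semi-supervised objective.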
Note that a small identity matrix λ r I is needed to be added to keep 1 2 H − λr r T + λ r I semipositive definite, which does not change the optimal solution of the eigenvalue problem. The power iteration is as DISPLAYFORM3 And the evaluation of Hr is by Hr = ∇ r ∇ r F (x, 0, θ) · r, which could be computed efficiently. After finding the optimal solution of Eq. FORMULA15 as r ⊥, the NAR becomes R normal (x, θ) = F (x, r ⊥, θ).Finally, as in BID16, we add entropy regularization to our loss function. It ensures neural networks to output a more determinate prediction and has implicit benefits for performing virtual adversarial training, R entropy (x, θ):= − y p(y|x, θ) log p(y|x, θ). Our final loss for SSL is DISPLAYFORM4 The TAR inherits the computational efficiency from VAT and the manifold invariance property from traditional manifold regularization. The NAR causes the classifier for SSL being robust against the off manifold noise contained in the observed data. These advantages make our proposed TNAR, the combination of TAR and NAR, a reasonable regularization method for SSL, the superiority of which will be shown in the experiment part in Section 4. We also conduct experiments on FashionMNIST dataset 1. There are three sets of experiments with the number of labeled data being 100, 200 and 1, 000, respectively. The details about the networks are in Appendix. The corresponding are shown in TAB1, from which we observe at least two phenomena. The first is that our proposed TANR methods (TNAR-VAE, TNAR-LGAN) achieve lower classification errors than VAT in all circumstances with different number of labeled data. The second is that the performance of our method depends on the estimation of the underlying manifold of the observed data. In this case, TNAR-VAE brings larger improvement than TNAR-LGAN, since VAE produces better diverse examples according to our observation. As the development of generative model capturing more accurate underlying manifold, it is expected that our proposed regularization strategy benefits more for SSL. We conduct ablation study on FashionMNIST datasets to demonstrate that both of the two regularization terms in TNAR are crucial for SSL. The are reported in TAB1. Removing either tangent adversarial regularization (NAR) or normal adversarial regularization (TAR) will harm the final performance, since they fail to enforce the manifold invariance or the robustness against the off-manifold noise. Furthermore, the adversarial perturbations and adversarial examples are shown in FIG10. We can easily observe that the tangent adversarial perturbation focuses on the edges of foreground objects, while the normal space perturbation mostly appears as certain noise over the whole image. This is consistent with our understanding on the role of perturbation along the two directions that capture the different aspects of smoothness. There are two classes of experiments for demonstrating the effectiveness of TNAR in SSL, SVHN with 1, 000 labeled data, and CIFAR-10 with 4, 000 labeled data. The experiment setups are identical with BID16. We test two kinds of convolutional neural networks as classifier (denoted as "small" and "large") as in BID16. Since it is difficult to obtain satisfying VAE on CIFAR-10, we only conduct the proposed TNAR with the underlying manifold identified by Localized GAN (TNAR-LGAN) for CIFAR-10. Note that in BID16, the authors applied ZCA as pre-processing procedure, while other compared methods do not use this trick. 
For fair comparison, we only report the performance of VAT without ZCA. More detailed experimental settings are included in Appendix. BID6 7.41 17.99 Improved GAN BID24 8.11 18.63 Tripple GAN BID14 5.77 16.99 FM GAN BID11 4.39 16.20 LGAN BID20 4 In TAB2 we report the experiments on CIFAR-10 and SVHN, showing that our proposed TNAR outperforms other state-of-the-art SSL methods on both SVHN and CIFAR-10, demonstrating the superiority of our proposed TNAR. We present the tangent-normal adversarial regularization, a novel regularization strategy for semisupervised learning, composing of regularization on the tangent and normal space separately. The tangent adversarial regularization enforces manifold invariance of the classifier, while the normal adversarial regularization imposes robustness of the classifier against the noise contained in the observed data. Experiments on artificial dataset and multiple practical datasets demonstrate that our approach outperforms other state-of-the-art methods for semi-supervised learning. The performance of our method relies on the quality of the estimation of the underlying manifold, hence the breakthroughs on modeling data manifold could also benefit our strategy for semi-supervised learning, which we leave as future work. represent two different classes. The observed data is sampled as x = x 0 + n, where x 0 is uniformly sampled from M and n ∼ N (0, 2 −2). We sample 6 labeled training data, 3 for each class, and 3, 000 unlabeled training data, as shown in FIG9. In FashionMNIST 2 experiments, we preserver 1, 00 data for validation from the original training dataset. That is, we use 100/200/1, 000 labeled data for training and the other 100 labeled data for validation. For pre-processing, we scale images into 0 ∼ 1. The classification neural network is as following. (a, b) means the convolution filter is with a × a shape and b channels. The max pooling layer is with stride 2. And we apply local response normalization (LRN) BID23. The number of hidden nodes in the first fully connected layer is 512.Conv → ReLU → Conv → ReLU → MaxPooling → LRN → Conv → ReLU → Conv → ReLU → MaxPooling → LRN → FC1 → ReLU → FC2 For the labeled data, the batch size is 32, and for the unlabeled data, the batch size is 128. All networks are trained for 12, 000 updates. The optimizer is ADAM with initial learning rate 0.001, and linearly decay over the last 4, 000 updates. The hyperparameters tuned is the magnitude of the tangent adversarial perturbation, the magnitude of the normal adversarial perturbation and the hyperparameter λ in Eq.. Other hyperparameters are all set to 1. We tune λ from {1, 0.1, 0.01, 0.001}, and 1, 2 randomly from [0.05, 20]. BID24; BID12. All the convolutional layers and fully connected layers are followed by batch normalization except the fully connected layer on CIFAR-10. The slopes of all lReLU functions in the networks are 0.1. The encoder of the VAE for identify the underlying manifold is a LeNet-like one, with two convolutional layers and one fully connected layer. And the decoder is symmetric with the encoder, except using deconvolutional layers to replace convolutional layer. The latent dimensionality is 128. The localized GAN for identify the underlying manifold is similar as stated in BID20. And the implementation is modified from https://github.com/z331565360/Localized-GAN. We change the latent dimensionality into 128. We tried both joint training the LGAN with the classifier, and training them separately, observing no difference. 
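For reference, one possible PyTorch rendering of the FashionMNIST classifier described above (Conv → ReLU → Conv → ReLU → MaxPooling → LRN, repeated twice, followed by FC1 with 512 units and FC2); the kernel sizes and channel counts below are assumptions, since the text only specifies the layer ordering and the width of FC1.

```python
import torch.nn as nn

# hypothetical channel/kernel choices; only the layer ordering and FC1 = 512 units are given in the text
classifier = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.LocalResponseNorm(size=5),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.LocalResponseNorm(size=5),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 512), nn.ReLU(),   # 28x28 inputs shrink to 7x7 after two 2x2 poolings
    nn.Linear(512, 10),
)
```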
In SVHN 3 and CIFAR-10 4 experiments, we preserve 1, 000 data for validation from the original training set. That is, we use 1, 000/4, 000 labeled data for training and the other 1, 000 labeled data for validation. The only pre-processing on data is to scale the pixels value into 0 ∼ 1. We do not use data augmentation. The structure of classification neural network is shown in TAB4, which is identical as in BID16.For the labeled data, the batch size is 32, and for the unlabeled data, the batch size is 128. For SVHN, all networks are trained for 48, 000 updates. And for CIFAR-10, all networks are trained for 200, 000 updates. The optimizer is ADAM with initial learning rate 0.001, and linearly decay over the last 16, 000 updates. The hyperparameters tuned is the magnitude of the tangent adversarial perturbation, the magnitude of the normal adversarial perturbation and the hyperparameter λ in Eq.. Other hyperparameters are all set to 1. We tune λ from {1, 0.1, 0.01, 0.001}, and 1, 2 randomly from [0. 05, 20].The VAE for identify the underlying manifold for SVHN is implemented as in https:// github.com/axium/VAE-SVHN. The only modification is we change the coefficient of the regularization term from 0.01 to 1. The localized GAN for identify the underlying manifold for SVHN and CIFAR-10 is similar as stated in BID20. And the implementation is modified from https://github.com/z331565360/Localized-GAN. We change the latent dimensionality into 512 for both SVHN and CIFAR-10. More adversarial perturbations and adversarial examples in tangent space and normal space are shown in FIG12 and FIG13. Note that the perturbations is actually too small to distinguish easily, thus we show the scaled perturbations. From left to right: original example, tangent adversarial perturbation, normal adversarial perturbation, tangent adversarial example, normal adversarial example.
We propose a novel manifold regularization strategy based on adversarial training, which can significantly improve the performance of semi-supervised learning.
542
scitldr
Universal approximation property of neural networks is one of the motivations to use these models in various real-world problems. However, this property is not the only characteristic that makes neural networks unique as there is a wide range of other approaches with similar property. Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm which allows an efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from a large amount of data in different domains. Despite their abundant use in practice, neural networks are still not well understood and a broad range of ongoing research is to study the interpretability of neural networks. On the other hand, topological data analysis (TDA) relies on strong theoretical framework of (algebraic) topology along with other mathematical tools for analyzing possibly complex datasets. In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and common neural network training framework. We introduce the notion of automatic subdivisioning and devise a particular type of neural networks for regression tasks: Simplicial Complex Networks (SCNs). SCN's architecture is defined with a set of bias functions along with a particular policy during the forward pass which alternates the common architecture search framework in neural networks. We believe the view of SCNs can be used as a step towards building interpretable deep learning models. Finally, we verify its performance on a set of regression problems. It is well-known that under mild assumptions on the activation function, a neural network with one hidden layer and a finite number of neurons can approximate continuous functions. This characteristic of neural networks is generally referred to as the universal approximation property. There are various theoretical universal approximators. For example, a of the Stone-Weierstrass theorem; is that multivariate polynomials are dense in the space of continuous real valued functions defined over a hypercube. Another example is that the reproducing kernel Hilbert space (RKHS) associated with kernel functions with particular properties can be dense in the same space of functions. Kernel functions with this property are called universal kernels. A subsequent of this theory is that the set of functions generated by a Gaussian process regression with an appropriate kernel can approximate any continuous function over a hypercube with arbitrary precision. Although multivariate polynomials and Gaussian processes also have this approximation property, each has practical limitations that cause neural networks to be used more often in practice compared to these approaches. For instance, polynomial interpolations may a model that overfits to the data and suffers from a poor generalization, and Gaussian processes often become computationally intractable for a large number of training data Bernardo et al.. Neural networks, with an efficient structure for gradient computation using backpropagation, can be trained using gradient based optimization for large datasets in a tractable time. Moreover, in contrast to existing polynomial interpolations, neural networks generalize well in practice. Theoretical and empirical understanding of the generalization power of neural networks is an ongoing research;. 
Topological Data Analysis (TDA), a geometric approach for data analysis, is a growing field which provides statistical and algorithmic methods to analyze the topological structures of data often referred to as point clouds. TDA methods mainly relied on deterministic methods until recently where w l,0 w l,1...... statistical approaches were proposed for this purpose;. In general, TDA methods assume a point cloud in a metric space with an inducing distance (e.g. Euclidean, Hausdorff, or Wasserstein distance) between samples and build a topological structure upon point clouds. The topological structure is then used to extract geometric information from data. These models are not trained with gradient based approaches and they are generally limited to predetermined algorithms whose application to high dimensional spaces may be challenging. In this work, by leveraging geometrical perspective of TDA, we provide a class of restricted neural networks that preserve the universal approximation property and can be trained using a forward pass and the backpropagation algorithm. Motivated by the approximation theorem used to develop our method, Simplicial Complex Network (SCN) is chosen to refer these models. SCNs do not require an activation function and architecture search in the way that conventional neural networks do. Their hidden units are conceptually well defined, in contrast to feed-forward neural networks for which the role of a hidden unit is yet an ongoing problem. SCNs are discussed in more details in later sections. Our contribution can be summarized in building a novel class of neural networks which we believe can be used in the future for developing deep models that are interpretable, and robust to perturbations. The rest of this paper is organized as follows: Section 2 is specified for the explanation of SCNs and their training procedure. In section 3, related works are explained. Sections 4, 5, and 6 are specified to experiments, limitations, and . We first describe some necessary notation in the section 2.1. In section 2.2, we discuss the barycentric subdivision and the simplicial approximation theorem. In section 2.3, we modify the barycentric subdivision in order to develop an approach that allows us to learn a subdivision of the input space into small simplexes. We then introduce a general framework for defining an SCN for regression tasks that simultaneously learns a subdivision of the input space and a piece-wise linear mapping where the linear pieces are defined on each simplex of the subdivision. We consider a dataset D = {( Let f θ denotes a mapping from the input to the output space in which θ represents its parameters. In a regression task, we wish to minimize Similarly, we assume each y (i) also lies in the standard probability k-simplex. Simplicial approximation theorem allows the approximation of continuous functions using simplicial mappings. Before stating the theorem, we borrow a few definitions from the algebraic topology literature. Definition 1. (simplicial complex) A simplicial complex K is a set of simplexes such that: 1) Every face of a simplex of K is in K. 2) The intersection of any two simplexes in K is a face of each of them. (A simplicial complex can be informally defined as a set of simplexes glued together through their faces.) Definition 2. (simplicial mapping) A mapping between two simplicial complexes is called simplicial mapping if the images of the vertices are vertices. 
We also use the definition of a standard subdivisioning method which is used to break a d-simplex (or any other simplicial complex) into arbitrary small simplexes with the same dimension. Figure 2(a), (b) visualize examples of BCS for a 2-simplex, and a 3-simplex. Note that i-th BCS is the of applying BCS to each simplex in the (i − 1)-th BCS. Using these definitions, simplicial approximation theorem can be stated as follows, Theorem 1. (Simplicial Approximation Theorem) Let X and Y be two simplicial complexes and f: X → Y be a continuous function. Then for an arbitrary, there exist sufficiently large k and l and a simplicial mapping g: represent the k-th and l-th barycentric subdivision of X and Y, respectively. In the appendix A, we provide a short topological proof for this theorem. Although the theorem provides a general approximation framework, its approximation is through using BCS of the input and output spaces. Each time BCS is applied, the number of simplexes is multiplied by (d + 1)! and simplexes in higher order subdivisions become flatter and flatter. This fact limits the use of this subdivision algorithm in practice. Moreover, BCS subdivides input or output space completely independent of the data. In the next section, we modify the BCS to a data-driven approach which allows us to learn a suitable subdivision given data. Apart from BCS, in TDA, building simplicial complexes from data is often based on deterministic approaches. For instance, Vietoris-Rips complex; is the set of all simplexes which their vertices are data points that their pair distances are less than a threshold. Cech complex; , is the set of all those simplexes that completely lie in a closed ball with specific radius. While these non-parametric data driven methods can extract rich topological structures of the data using mathematical tools such as persistent homology, they are not often used as standard features in machine learning algorithms immediately;. In the next section, we modify BCS to a parametric framework that allows automation of the subdivisioning process and it can be used directly during the training of an SCN. In the Algorithm 1, we have shown the process of generating a random simplex from the set of all simplexes in BCS of a d-simplex. The algorithm gives an uncommon view for identification of a simplex in the BCS which is through a set of nested simplexes. We modify this view to a data driven approach in a way that BCS is a special case of the modified version. In the Algorithm 1, for a random permutation P, This fact is visualized in Figure 2 (c), (d). N d indicates the simplex in the BCS that corresponds to P. Knowing the nested sequence uniquely determines a simplex in the BCS. Also, we note that each N i can be obtained by replacing one of the vertices of N i−1 with a new vertex inside or on the boundary (closure) of it. Thereby, since the new vertex is in the closure of N i−1, it can be represented as a convex combination of the vertices of N i−1. The weights of these convex combinations can be computed in a straightforward way as shown in the Algorithm 1. Note that these weights are specified to the BCS. To build a data driven approach initially we eliminate two restrictions of the Algorithm 1. First, we allow the number of repeats to be any arbitrary integer l rather fixing it to d. This number is later referred as the subdivision depth. Second, at each repeat, we let the weights to freely have any arbitrary values as long as they can be used as a convex combination weights. 
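Both the subdivisioning procedure and the localization of an input rely on expressing a point as a convex combination of the vertices of a d-simplex. A small self-contained numpy sketch of that computation is given below; the variable names are illustrative.

```python
import numpy as np

def barycentric_coordinates(x, vertices):
    """Convex-combination (barycentric) weights of x w.r.t. the rows of `vertices` (d+1 vertices in R^d).

    Solves sum_i w_i * v_i = x together with sum_i w_i = 1; all w_i >= 0 iff x lies inside the simplex.
    """
    vertices = np.asarray(vertices, dtype=float)        # shape (d+1, d)
    d_plus_1, d = vertices.shape
    A = np.vstack([vertices.T, np.ones(d_plus_1)])      # (d+1) x (d+1) linear system
    b = np.append(np.asarray(x, dtype=float), 1.0)
    return np.linalg.solve(A, b)

# example: the standard 2-simplex with vertices (0,0), (1,0), (0,1)
w = barycentric_coordinates([0.2, 0.3], [[0, 0], [1, 0], [0, 1]])
# w ≈ [0.5, 0.2, 0.3]; all entries are non-negative, so the point lies inside the simplex
```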
This removes the restriction of having deterministic weights specified for BSC. This modification allows us to learn the weights through an optimization process. A natural approach to make the subdivision process data-driven is to, instead of randomly sampling a simplex as in the Algorithm 1, sample a simplex with the probability proportion to the likelihood that it contains data and update the subdivision parameters accordingly. However, the number of all possible simplexes grows exponentially as l increases, thereby making the computation of all these probabilities intractable. Alternatively, we sample from data and identify the simplex in the subdivision that contains the sample and use that simplex in an stochastic optimization framework to update the subdivision towards a desired one. But how can we identify the simplex that contains a given sample? Our parameterization of the subdivisioning process using the nested sequence of simplexes along with the following lemma help to locate the input using a forward pass, Proof. x can be represented with the following convex combination which the lemma, Similar to the steps of Algorithm 1 and based on our view for identification of the target simplex through a nested set of simplexes, at the first step, a point is added inside the simplex using a convex combination of vertices of σ with weights w 1. Then lemma 1 can be used to locate the x in the σ and accordingly replace one of its vertices with the new vertex. This process is repeated with the new d-simplex up to l times to extract the simplex in the subdivision that contains x. Having the final simplex and a given cost function, parameters of the subdivision can be updated accordingly using the backpropagation and gradient descent framework. The procedure is formally shown in the Algorithm 2. We refer the general procedure described in Algorithm 2 to as automatic subdivision. Note that the barycentric subdivision can be a particular outcome of the automatic subdivisioning. Algorithm 1 Generating a simplex in barycentric subdivision of a d-simplex Algorithm 2 One gradient step in automatic subdivisioning of a d-simplex using one data sample Set N j+1 = N j with k-th vertex replaced with u Set w x as convex combination weights of x represented with vertices in N j+1 As theorem 2 states, any continuous function from a simplicial complex to another can be approximated with a simplicial mapping from BCSs of the input space to BCSs of the output space. In the last section we explained how we can subdivision the input space through the automatic subdivision process where BCS was a particular outcome. In this section, we develop SCNs to parameterize a class of simplicial mappings with trainable values on vertices of the input space subdivision and it is defined linearly on the d-simplexes of the subdivision using the evaluations at its vertices, ing a piece-wise linear simplicial mapping. Parameters of this simplicial mapping are then optimized for function approximation or a regression tasks. We leverage a same technique used in the previous section to parameterize the SCN output. Recalling our initial assumption that inputs lie in a d-simplex σ = [v 0, ..., v d] (v 0 representing the origin and other v i are the standard basis vectors), a given input x can be represented with the following convex combination, In the previous section, we showed that how the position of x can be found in σ through the subdivisioning process. 
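A minimal numpy sketch of the per-step localization used in Algorithms 1 and 2: given the convex weights of the input x with respect to the current vertices and the convex weights of the newly added vertex, it determines which vertex can be replaced so that x remains inside the resulting sub-simplex, and updates the weights of x accordingly. The argmin rule below is our reconstruction of the lemma referenced above, not a verbatim quote of it.

```python
import numpy as np

def locate_step(w_x, w_new):
    """One localization step: replace one vertex of the current simplex by the newly added vertex.

    w_x   : barycentric weights of x w.r.t. the current vertices u_0..u_d (non-negative, sums to 1)
    w_new : convex weights of the new vertex w.r.t. the same u_0..u_d (strictly positive)
    Returns (k, w_upd): index of the replaced vertex and the weights of x in the new simplex,
    ordered as [u_0, ..., u_{k-1}, new_vertex, u_{k+1}, ..., u_d].
    """
    w_x, w_new = np.asarray(w_x, float), np.asarray(w_new, float)
    ratios = w_x / w_new
    k = int(np.argmin(ratios))          # the vertex whose weight is "used up" first
    a_new = w_x[k] / w_new[k]           # weight that x places on the new vertex
    w_upd = w_x - a_new * w_new         # remaining weight on the old vertices (non-negative by choice of k)
    w_upd[k] = a_new
    return k, w_upd
```

Repeating this step l times, starting from the barycentric weights of x in the initial simplex, yields the simplex of the subdivision that contains x, together with the convex weights needed for the backward pass.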
SCN's output at x is calculated using the values of its mapping at the vertices of σ and added vertices during locating x. Simplicial mapping at the m-th added vertex is defined recursively using its preceding vertices. Assuming that the m-th added vertex represented as h m = d i=0 w m,i u i, where u i are vertices of the preceding simplex in the nested sequence of simplexes used to locate x and w m,i are its corresponding convex combination weights, the value of the simplicial mapping at h m is defined as, In other words, f (h m) is defined using a convex combination of f (u i) with the same weights, added with a bias which is a function of h m. A geometrical view of equation 2 for a 2-simplex is shown in the Appendix C. Using our proof of simplicial approximation theorem in the appendix A, it is straightforward to show that as long as we consider each b m is an arbitrary function of the h m, the simplicial mapping holds the universal approximation property. In our experiments, however, we used a primitive model for biases and defined them as constant parameters. We will empirically show that even this simple model for biases can complicated simplicial mappings and accurate approximations. All in all, an SCN is defined using the mentioned process (Figure 1 visualizes a general architecture). Parameters that should be learned are bias function parameters, and the weights that we used to subdivision the input space. All these parameters can be updated using the backpropagation algorithm. Another point that must be noted here is that, as shown in Figure 1, inputs to h 2,..., h l are not specified. Even though we described how to obtain the value of these inputs using lemma 1, the order that the vertices of the preceding simplex are combined is not specified. We refer to the policy on the ordering of the vertices used to fed to the next convex combination as the SCN's network policy. Specifying the depth, network policy, and the bias functions fully determines the architecture of an SCN. To train an SCN, the derivative of the loss function over mini-batches is taken with respect to the weights and the bias function parameters, and these parameters are updated using a gradient descent approach such as stochastic gradient descent, or. General training process for an SCN is shown in the Algorithm 3. Note that after updating weights of the network, for each hidden node, a projection of weights is used such that their summation is equal to 1. This projection can be avoided through use of logits and the Softmax function in parameterizing the weights. Algorithm 3 training procedure for a general SCN bias function, and weight params), P (network policy), Extract j and update w x using x, S, and lemma 1 project w m on the standard d-simplex end for In TDA, Persistent homology methods are commonly used to extract features that are robust to perturbations of the input;. A range of works use these features in a feed-forward architecture. For instance, in , persistence landscape, a topological summary of data, is used to develop persistent layers. These layers are used in a neural network architecture along with convolutional layers and the ing architecture is applied to the problem of music tagging , and multi-label classification. A similar approach is applied in for the time series classification. introduces TopologyNet, an architecture that uses a persistent homology method for 3D biomolecular structures to extract features suitable for convolutional layers. 
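The projection step in Algorithm 3 ("project w_m on the standard d-simplex") can be realized with the standard sorting-based Euclidean projection onto the probability simplex; the sketch below is one common implementation and is not claimed to be the authors'.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum_i w_i = 1} (sorting-based algorithm)."""
    v = np.asarray(v, dtype=float)
    n = v.size
    u = np.sort(v)[::-1]                                        # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

# e.g. project_to_simplex([0.8, 0.6, -0.1]) -> array([0.6, 0.4, 0.0])
```

As noted in the text, parameterizing the weights through logits and a Softmax avoids this explicit projection.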
proposes a trainable layer to learn suitable representation from persistence diagram of the data. This layer which extracts topological signatures from the data is used along with a common neural network architecture and their approach achieved the state-of-the-art in a specific task on classification of social network graphs at the time. Although aforementioned methods try to improve the performance of neural networks through using topological scenarios, the TDA geometrical perspective vanishes as they are aligned with commonly used architectures. In addition, a specific persistent homology method is applied for a determined task. SCNs are single-model architectures that can be trained with the common forward and backward passes. In addition, no specific persistent homology is used to extract the topological geometry of the inputs, which enables its generalization to other domains. We perform regression tasks in order to evaluate the performance and the complexity of functions that SCN can learn. Mean squared error was used as the loss function in all the experiments. In some cases, an SCN with a few number of parameters can estimate a function better than a neural network with fully connected layers even with around 10 times more parameters. In the appendix B, with a straightforward derivation, we also show that how does an SCN without a hidden unit reformulates the linear regression problem. As a primary experiment, we approximate sum of three Gaussian functions with an SCN with a depth of 4 and constant parameters as the biases. The ing simplicial mapping and the learned subdivision is shown in Figure 3. We compare the performance of using an SCN in the problem of learning particular structures to the obtained by a neural network with fully-connected layers and ReLU activations. This is a more general experiment done in to learn the absolute value function. The experiment was used in to show the limitations of specific activation functions for neural networks in learning some of the basic functions. Here, we show that SCN can learn more complicated structures even with constant parameters as its bias functions. Models are trained using with learning rate 0.001, and a same mini-batch size. Figure 4 represents the comparison. More details about the experiments, and a binary classification experiment on the MNIST dataset can be found in the appendix F. here, to backpropagate through the SCN's architecture it is required to store only d out of d + 1 vectors and their function evaluations, which were used to extract the last hidden node and evaluate its function value, latest convex weights for the input in the forward pass, and an array of integers with a length of l indicating the order that nodes were removed during the forward pass. Storing these values is enough for the gradient computation of the network parameters. Interestingly, it is not needed to store the weights of the network and function evaluations at the previous nodes as they all can be extracted with these information (proof in the Appendix E). This approach trades the cost of memory with an additional computational cost and can be helpful in training very deep SCNs. In some conditions, the bias function for a hidden node may outputs high norm vectors. In these conditions, output may not be close to a smooth curve. Accordingly, SCN's output not behave well in practice and potentially overfits to data. 
In case of using a conventional neural networks, to increase the stability of the output, one approach is to enforce Lipschitz constraints Anil et al. In some cases, input weights to a hidden node might converge to degenerate values with the value one in one position and zero elsewhere. In these cases, the corresponding layer does not change the learned subdivision and it may be assumed as a redundant layer. We refer to this phenomena as weight drifting. In situations that we have a deep SCN architecture with higher than the true required capacity, these layers with a bias value close to zero may be helpful to automatically adjust the capacity of the model and prevent SCN from overfitting. Throughout this work we assumed that inputs lie in a given d-simplex. The assumption was used just for the sake of presentation. It can be assumed that samples lie in any given simplicial complex, not a specific d-simplex. The vertices of any simplex in the simplicial complex that a sample falls in can be used as the primary nodes in the network. Choosing an appropriate network policy for SCNs may be assumed as a challenging task in different domains. In our experiments, we observed that even simple policies a proper function approximation. In fact, in one dimensional case, a direct use of the Stone-Weierstrass theorem can be used to prove that SCNs are universal approximators even with random policies and fixed weights. In our experiments, the sensitivity of SCNs to the choice of their bias function observed to be larger than the network policy. In this work, we have used techniques from topological data analysis to build a class of neural network architectures with the universal approximation property which can be trained using the common neural network training framework. Topological data analysis methods are based on the geometrical structure of the data and have strong theoretical analysis. SCNs are made using the geometrical view of TDA and we believe that they can be used as a step towards building interpretable deep learning models. Most of the experiments in the paper are synthetic. More practical applications of the paper is considered as an immediate continual work. Moreover, throughout this work, bias functions of the simplest kinds (constant parameters) were used. We mentioned earlier that a bias function may be an arbitrary function of its input to keep the universal approximation property of SCNs. A natural idea is to use common neural network architectures as the bias function. In this case, backpropagation can be continued to the bias function parameters as well. This is also considered as another continuation of this work. We provide a short topological proof of the approximation theorem we used in main text. Some mathematical precision is dropped throughout the proof. (v) is less than δ (proof is shown in figure 6). So, for each v ∈ V, there exist a w ∈ W such that St(v) ⊂ f −1 (St(w)). We define the simplicial map g such that g(v) = w. Also, note that f (St(v)) ⊂ St(w). Now we prove that g approximates f as desired. Let σ ∈ X (k) be a simplex in A. Let v 1, v 2,..., v p denote the vertices of this simplex. For any x ∈ σ and a vertex v i, i ∈ {1, 2, ..., p}, x is in the. Using the last note in the previous paragraph, we have x ∈ ∩ p i=1 St(g(v i)). This fact means that f (x) lies in the simplex that its vertices are We now extend the definition of g for all non-vertex elements of X. 
Let x ∈ |X| be a non-vertex element represented as a convex combination of a number of vertices of V as x = Σt i v i. We define g(x) as, Straightforward steps can be used to prove that g is a continuous simplicial mapping. Using the facts in the last two paragraphs, we conclude that for each x ∈ |X| and a simplex σ ∈ X containing x, both f (x), and g(x), lie in the simplex with vertices {g(v)} v∈σ. Due to the initial choice of l, we know that diameter of this simplex is less than, which means sup x∈X f (x) − g(x) <. In case that an SCN has no hidden node (no subdivisioning process), it can be viewed as linear regression reformulation. A real valued linear function f: Assume a data matrix X ∈ R N ×d of N samples within ∆ d, and their corresponding output in a vector y. We formulate the linear regression problem with training a weight w that minimizes ||Xw − y|| We represent the coefficients of representing samples in X as a convex combination of v 0,..., v d in a matrix C ∈ R N ×(d+1) with a rank of at most d, where i-th row indicates the corresponding coefficients for i-th sample. Then the linear regression problem can be reformulated as, where f is a (d + 1) dimensional vector representing the function value at v i as its i-th element. With a straightforward computation, One can verifies that the optimal w or f can be computed from the optimal value of the other one. Figure 7: A geometrical view of how does equation 2 evaluates the simplicial mapping at m-th added point using its preceding vertices u i. A same convex combination of u i used to generate h m is applied to the corresponding f (u i). This value added with a bias term determines f (h m). x can be represented with the following convex combination which the lemma, In contrast to; , no separation of input into blocks is needed for SCNS to extract outputs of the previous layers. Retrieving the values for previous in SCN is a of the fact that knowing all weights, all vertices except one, and the ing vector of a convex combination is enough to extract the missing vertex. Formally, let x =. Assume an SCN with a depth of l. Given the SCN's last layer weights w l, the bias functions b 1,..., b l, and its network policy P, the following algorithm shows that knowing the last d hidden nodes, their function values, and an array indicating the order of vertices that was removed during the forward pass to locate x, is enough to extract all the previous function values, hidden nodes, and also the weights. For simplicity of the algorithm, u 0,..., u l+d is used as indicators for v 0,..., v d, h 1,..., h l with the same order. Algorithm 4 Retrieving preceding layers and parameters for the backward pass of an SCN given u l+d, u l+d−1,..., u l+1, f (u l+d), f (u l+d−1),..., f (u l+1), and convex weights w x of representing the input x as the convex combination of u l+d, u l+d−1,..., u l, array a = [a 1, ..., a l−d] of integers storing the indices of u i were removed during the forward pass with the exact order. Extract w l−j as the convex combination weights of representing u l+d−j using u l+d−j−1,..., u l−j Extract w (u l+d−j−i) l−j, (∀0 ≤ i ≤ d) (element of w l−j that is assigned to u l+d−j−i) using the network policy P, and a l−d−j. Update w x to the convex combination weights of x represented using u l+d−j−1,..., u l−j, u l−j−1 j+ = 1 until j = d − 1 We provide the details of the experiments in the main text as well as of a binary classification experiment on MNIST as a primary proof of concept for practical usages of SCNs. 
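To make the Appendix B reformulation concrete, the following numpy snippet checks numerically that fitting the vertex values f by minimizing ||Cf − y||² agrees with an ordinary regression fit on the standard d-simplex; since part of the original statement is garbled, the inclusion of an intercept here is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 3
X = rng.dirichlet(np.ones(d + 1), size=N)[:, 1:]           # samples inside the standard d-simplex
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3                    # a linear target (with intercept)

# coefficients of each sample as a convex combination of v_0 = origin and the basis vectors e_1..e_d
C = np.hstack([1.0 - X.sum(axis=1, keepdims=True), X])      # shape (N, d+1)

f_hat, *_ = np.linalg.lstsq(C, y, rcond=None)               # values of the mapping at the vertices
w_hat, *_ = np.linalg.lstsq(np.hstack([np.ones((N, 1)), X]), y, rcond=None)  # intercept + weights

# the two parameterizations coincide: f(v_0) plays the role of the intercept, f(e_i) = intercept + w_i
assert np.allclose(f_hat[0], w_hat[0])
assert np.allclose(f_hat[1:], w_hat[0] + w_hat[1:])
```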
In all the three toy experiments, the neural network consisted of two fully connected layers with 300 and 2 hidden neurons respectively. ReLU activations was used for both layers. The SCN model had a depth of 4. Accordingly, the network had 4 bias functions which all were constant parameters. A learning rate of 0.001, mini-batches of size 100 were used. The network policy was random. Thereby, preceding nodes were assigned randomly to the next convex combination weights (recall the fact that in one dimensional case, SCNs are universal approximators even with a random policy and fix convex combination weights. Thereby the optimization can be done through their biases only). In the sum of Gaussians experiment, inputs lied in a 2-simplex with vertices,, and as the way that SCNs are presented throughout the paper. Similarly, in the one dimensional case inputs lied in the interval (1-simplex). To provide an observation that SCNs can be used in practice, we did a primary experiment on the MNIST data set. The aim was the binary classification of zeros and ones. We lessened the dimensionality of the data to 20 using PCA. We compared SCN to a simple logistic regression, and a neural network with the same architecture used in our synthetic experiments. Logistic regression can be seen as an SCN without any hidden nodes followed by a Sigmoid. We used a single unit SCN where its bias function was again a single constant parameter. Models were trained using the binary cross entropy loss and a same learning rate and mini-batch size were used in training the models. The performance is shown in the Figure 8 and the test accuracy and the number of parameters are shown in the Table 1. Although the performance of the SCN with a single hidden unit is not as good as the neural network with fully connected layers, SCN could improve the accuracy of logistic regression by around 7% with adding a single hidden node to its architecture. Scaling up the SCN architecture to higher dimensional spaces is considered as a continuation of the paper.
A novel method for supervised learning through subdivisioning the input space along with function approximation.
543
scitldr
The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models, and are successful even when the attacker is restricted. Unsupervised node embedding (network representation learning) approaches are becoming increasingly popular and achieve state-of-the-art performance on many network learning tasks BID5. The goal is to embed each node in a low-dimensional feature space such that the graph's structure is captured. The learned embeddings are subsequently used for downstream tasks such as link prediction, node classification, community detection, and visualization. Among the variety of proposed approaches, techniques based on random walks (RWs) (Perozzi et al.; Grover & Leskovec) are highly successful since they incorporate higher-order relational information. Given the increasing popularity of these method, there is a strong need for an analysis of their robustness. In particular, we aim to study the existence and effects of adversarial perturbations. A large body of research shows that traditional (deep) learning methods can easily be fooled/attacked: even slight deliberate data perturbations can lead to wrong BID17 BID28 BID6 BID12 BID26 BID10.So far, however, the question of adversarial perturbations for node embeddings has not been addressed. This is highly critical, since especially in domains where graph embeddings are used (e.g. the web) adversaries are common and false data is easy to inject: e.g. spammers might create fake followers on social media or fraudsters might manipulate friendship relations in social networks. Can node embedding approaches be easily fooled? The answer to this question is not immediately obvious. On one hand, the relational (non-i.i.d.) nature of the data might improve robustness since the embeddings are computed for all nodes jointly rather than for individual nodes in isolation. On the other hand, the propagation of information might also lead to cascading effects, where perturbations in one part of the graph might affect many other nodes in another part of the graph. Compared to the existing works on adversarial attacks our work significantly differs in various aspects. First, by operating on plain graph data, we do not perturb the features of individual instances but rather their interaction/dependency structure. Manipulating the structure (the graph) is a highly realistic scenario. For example, one can easily add or remove fake friendship relations on a social network, or write fake reviews to influence graph-based recommendation engines. Second, the node embedding works are typically trained in an unsupervised and transductive fashion. This means that we cannot rely on a single end-task that our attack might exploit to find appropriate perturbations, and we have to handle a challenging poisoning attack where the model is learned after the attack. That is, the model cannot be assumed to be static as in most other adversarial attack works. 
Lastly, since graphs are discrete classical gradient-based approaches BID28 for finding adversarial perturbations that were designed for continuous data are not well suited. Particularly for RW-based methods, the gradient computation is not directly possible since they are based on a non-differentiable sampling procedure. How to design efficient algorithms that are able to find adversarial perturbations in such a challenging -discrete and combinatorial -graph domain?We propose a principled strategy for adversarial attacks on unsupervised node embeddings. Exploiting from eigenvalue perturbation theory BID35 we are able to efficiently solve a challenging bi-level optimization problem associated with the poisoning attack. We assume an attacker with full knowledge about the data and the model, thus, ensuring reliable vulnerability analysis in the worst case. Nonetheless, our experiments on transferability demonstrate that our strategy generalizes -attacks learned based on one model successfully fool other models as well. Overall, we shed light on an important problem that has not been studied so far. We show that node embeddings are sensitive to adversarial attacks. Relatively few changes are needed to significantly damage the quality of the embeddings even in the scenario where the attacker is restricted. Furthermore, our work highlights that more work is needed to make node embeddings robust to adversarial perturbations and thus readily applicable in production systems. We focus on adversarial attacks on unsupervised node embedding approaches based on random walks (RWs), and further show how one can easily apply a similar analysis to attack other node embeddings based on factorization. For a recent extensive survey, also of other non-RW based approaches, we refer to BID5. Moreover, while many (semi-)supervised learning methods BID22 BID15 have been introduced, we focus on unsupervised methods since they are often used in practice due to their flexibility in solving various downstream tasks. Adversarial attacks. Attacking machine learning models has a long history, with seminal works on SVMs and logistic regression BID4 BID28. Deep neural networks were also shown to be highly sensitive to small adversarial perturbations to the input BID36 BID17. While most works focus on image classification, recent works have shown the existence of adversarial examples also in other domains BID18.Different taxonomies exist characterizing the attacks/adversaries based on their goals, knowledge, and capabilities BID30. The two dominant attacks types are poisoning attacks that target the training data (the model is trained after the attack) and evasion attacks that target the test data/application phase (the learned model is assumed fixed). Compared to evasion attacks, poisoning attacks are far less studied BID23 BID30 BID28 BID10 since they require solving a challenging bi-level optimization problem. Attacks on semi-supervised graph models. The robustness of semi-supervised graph classification methods to adversarial attacks has recently been analyzed (Zügner et al., 2018; BID13 . The first work, introduced by Zügner et al., linearizes a graph convolutional network (GCN) BID22 to derive a closed-form expression for the change in class probabilities for a given edge/feature perturbation. They calculate a score for each possible edge flip based on the classification margin and greedily pick the top edge flips with highest scores. 
Later, BID13 proposed a reinforcement (Q-)learning formulation where they decompose the selection of relevant edge flips into selecting the two end-points. Both approaches focus on targeted attacks (misclassify a given node) for the semi-supervised graph classification task. In contrast, our work focuses on general attacks (decrease the overall quality) on unsupervised node embeddings. Manipulating graphs. In the context of graph clustering, BID11 measure the changes in the when injecting noise to a bi-partite graph of DNS queries, but do not focus on automatically generating attacks. There is an extensive literature on works that optimize the graph structure to manipulate e.g. information spread in a network (Chen et al.; Khalil et al.), user opinions BID1 BID8, shortest paths (Phillips; Israeli & Wood), page rank scores and other metrics (Avrachenkov & Litvak; Chan et al.). Remotely related are poisoning attacks on multi-task relationship learning . While they exploit the relations between different tasks, they still deal with the classical scenario of i.i.d. instances within each task. Robustness and adversarial training. The robustification of machine learning models has also been studied -known as adversarial machine learning or robust machine learning. Such approaches are out of scope for this paper and we do not discuss them. The goal of adversarial training (e.g. via GANs BID14) is to improve the embeddings, while our goal is to damage the embeddings produced by existing models by perturbing the graph structure. Here we explore poisoning attacks on the graph structure -the attacker is capable of adding or removing (flipping) edges in the original graph within a given budget. We focus mainly on approaches based on random walks and extend the analysis to spectral approaches (Sec. 6.2 in the appendix). Let G = (V, E) be an undirected unweighted graph where V is the set of nodes, E is the set of edges, and A ∈ {0, 1} |V |×|V | is the adjacency matrix. The goal of network representation learning is to find a low-dimensional embedding z v ∈ R K for each node with K |V |. This dense lowdimensional representation should preserve information about the network structure -nodes similar in the original network should be close in the embedding space. DeepWalk (Perozzi et al.) and node2vec (Grover & Leskovec) learn an embedding based on RWs by extending and adapting the skip-gram architecture BID29 for learning word embeddings. They sample finite (biased) RWs and use the co-occurrence of node-context pairs in a given window in each RW as a measure of similarity. To learn z v they maximize the probability of observing v's neighborhood. We denote with the adjacency matrix of the graph obtained after the attacker has modified certain entries in A. We assume the attacker has a given, fixed budget and is only capable of modifying f entries, i.e. || − A|| 0 = 2f (we have 2f since G is undirected). The goal of the attacker is to damage the quality of the learned embeddings, which in turn harms subsequent learning tasks such as node classification or link prediction that use the embeddings as features. We consider both a general attack that aims to degrade the embeddings of the network as a whole, as well as a targeted attack that aims to damage the embedding regarding a specific target or specific task. 
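As background for the attack derivation that follows, a minimal sketch of the sampling step used by DeepWalk-style methods, i.e., uniform random walks and window-based context pairs that form the skip-gram training corpus; the function names and default values are illustrative.

```python
import numpy as np

def sample_walks(adj_lists, walk_length=80, walks_per_node=10, rng=None):
    """Uniform random walks (DeepWalk-style); adj_lists[v] is the list of neighbors of node v."""
    rng = rng or np.random.default_rng()
    walks = []
    for _ in range(walks_per_node):
        for start in range(len(adj_lists)):
            walk = [start]
            while len(walk) < walk_length and adj_lists[walk[-1]]:
                walk.append(rng.choice(adj_lists[walk[-1]]))
            walks.append(walk)
    return walks

def context_pairs(walks, window=5):
    """(node, context) co-occurrence pairs within the skip-gram window; this is the training corpus."""
    pairs = []
    for walk in walks:
        for i, v in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pairs.append((v, walk[j]))
    return pairs
```

It is precisely this stochastic corpus-generation step that makes direct gradient-based attacks awkward, which motivates the matrix-factorization view used below.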
The quality of the embeddings is measured by the loss L(A, Z) of the model under attack, with lower loss corresponding to higher quality, where Z ∈ R N ×K is the matrix containing the embeddings of all nodes. Thus, the goal of the attacker is to maximize the loss. We can formalize this as the following bi-level optimization problem: DISPLAYFORM0 Here, Z * is always the'optimal' embedding ing from the (to be optimized) graphÂ, i.e. it minimizes the loss, while the attacker tries to maximize the loss. Solving such a problem is highly challenging given its discrete and combinatorial nature, thus we derive efficient approximations. Since the first step in the embedding approaches is to generate a set of random walks that serve as a training corpus for the skip-gram model, the bi-level optimization problem is even more complicated. We have DISPLAYFORM0 where RW l is an intermediate stochastic procedure that generates RWs of length l given the graph which we are optimizing. By flipping (even a few) edges in the original graph, the attacker necessarily changes the set of possible RWs, thus changing the training corpus. Therefore, this RW generation process precludes any gradient-based methods. To tackle this challenge we leverage recent that show that (given certain assumptions) RW based node embedding approaches are implicitly factorizing the Pointwise Mutual Information (PMI) matrix (; BID34 . We study DeepWalk as an RW-based representative approach since it's one of the most popular methods and has many extensions. Specifically, we use the from BID34 to sidestep the RW stochasticity. Lemma 1 BID34). DeepWalk is equivalent to factorizingM = log(max(M, 1)) with DISPLAYFORM1 where the embedding Z * is obtained by the Singular Value Decomposition ofM = U ΣV T using the top-K largest singular values / vectors, i.e. DISPLAYFORM2 Here, D is the diagonal degree matrix with D ii = j A ij, T is the window size, b is the number of negative samples and vol(A) = i,j A ij is the volume. Since M is sparse and has many zero entries the matrix log(M) where the log is elementwise is ill-defined and dense. To cope with this, similar to the Shifted Positive PMI (PPMI) approach the elementwise maximum is introduced to formM. Using this insight, we see that DeepWalk is equivalent to optimizing minM DISPLAYFORM3 F wherẽ M K is the best rank-K approximation toM. This in turn means that the loss for DeepWalk when using the optimal embedding Z * for a given graph A is L DW1 (A, Z *) = |V | p=K+1 σ 2 p where σ p are the singular values ofM (A) ordered decreasingly σ 1 ≥ σ 2 · · · ≥ σ |V |. This shows that we do not need to construct random walks, nor do we have to (explicitly) learn the embedding Z * -it is implicitly considered via the singular values ofM (A). Accordingly, we have transformed the bi-level problem into a single-level optimization problem. However, maximizing L DW1 is still challenging due to the singular value decomposition and the discrete nature of the problem. Gradient based approach. Maximizing L DW1 with a gradient-based approach is not straightforward since we cannot easily backpropagate through the SVD. To tackle this challenge we exploit ideas from eigenvalue perturbation theory BID35 ) to approximate L DW1 (A) in closed-form without needing to recompute the SVD. This enables us to efficiently calculate the gradient. Theorem 1. Let A be the initial adjacency matrix andM (A) be the respective co-occurrence matrix. Let u p be the p-th eigenvector corresponding to the p-th largest eigenvalue ofM. 
Given a perturbed matrix A, with A = A + ∆A, and the respective change ∆M. We can approximately compute the loss: DISPLAYFORM4 The proof is given in the appendix. For a small ∆A and thus small ∆M we obtain a very good approximation, and if ∆A = ∆M = 0 then the loss is exact. Intuitively, we can think of using eigenvalue perturbation as analogous to taking the gradient of the loss w.r.t. M (A). Now, gradient-based optimization is efficient since ∇ A L DW2 (A) avoids recomputing the eigenvalue decomposition. The gradient provides useful information for a small change, however, here we are considering discrete flips, i.e. = ±1 so its usefulness is limited. Furthermore, using gradient-based optimization requires a dense instantiation of the adjacency matrix, which has complexity O(N 2) in both runtime and memory (infeasible for large graphs). This motivates the need for our more advanced approach. Sparse closed-form approach. Our goal is to efficiently compute the change in the loss L DW1 (A) given a set of flipped edges. To do so we will analyze the change in the spectrum of some of the intermediate matrices and then derivate a bound on the change in the spectrum of the co-occurrence matrix, which in turn will give an estimate of the loss. First, we need some . Lemma 2. The matrix S in Eq. 2 is equal to S = U (T r=1 Λ r)U T where the matrices U and Λ contain the eigenvectors and eigenvalues solving the generalized eigen-problem Au = λDu. The proof is given in the appendix. We see that the spectrum of S (and, thus, the one of M by taking scalars into account) is obtainable from the generalized spectrum of A. The difference to BID34's derivation where a factorization of S using A norm:= D −1/2 AD −1/2 is important. As we will show, our formulation using the generalized spectrum of A is key for an efficient approximation. Let A = A + ∆A be the adjacency matrix after the attacker performed some edge flips. As above, by computing the generalized spectrum of A, we can estimate the spectrum of the ing S and M. However, recomputing the eigenvalues λ of A for every possible set of edge flips is still not efficient for large graphs, preventing an effective application. Thus, we derive our first main : an efficient approximation bounding the change in the singular values of M for any edge flip. Theorem 2. Let ∆A be a matrix with only 2 non-zero elements, namely ∆A ij = ∆A ji = 1 − 2A ij corresponding to a single edge flip (i, j), and ∆D the respective change in the degree matrix, i.e. A = A + ∆A and D = D + ∆D. Let u y be the y-th generalized eigenvector of A with generalized eigenvalue λ y. Then the generalized eigenvalue λ y of A solving λ y A = λ y D u y is approximately: DISPLAYFORM5 where u yi is the i-th entry of the vector u y, and ∆w ij = (1 − 2A ij) indicates the edge flip, i.e ±1.The proof is provided in the appendix. By working with the generalized eigenvalue problem in Theorem 2 we were able to express A and D after flipping an edge as additive changes to A and D, this in turn enabled us to leverage from eigenvalue perturbation theory to efficiently approximate the change in the spectrum. If we used A norm instead, the change to A norm would be multiplicative preventing efficient approximations. Using Eq. 3, instead of recomputing λ we only need to compute ∆λ, significantly reducing the complexity when evaluating different edge flips (i, j). Using this , we can now efficiently bound the change in the singular values of S. Lemma 3. Let A be defined as before and S be the ing matrix. 
The singular values of S are bounded: DISPLAYFORM6 r where π is a permutation simply ensuring that the finalσ p (i, j) are sorted decreasingly, where d min is the smallest degree in A.We provide the proof in the appendix. Using this , we can efficiently compute the loss for a rank-K approximation/factorization of M, which we would obtain when performing the edge flip DISPLAYFORM7 based on the matrixM = log(max(M, 1)), there are unfortunately currently no tools available to analyze the spectrum ofM given the spectrum of M. Therefore, we use L DW3 as a surrogate loss for L DW1 (Yang et al. similarly exclude the element-wise logarithm). As our experimental analysis shows, the surrogate loss is effective and we are able to successfully attack the node embeddings that factorize the actual co-occurrence matrixM, as well as the original skip-gram model. Similarly, methods based on spectral embedding, factorize the graph Laplacian and have a strong connection to the RW based approaches. We provide a similar detailed analysis in the appendix (Sec. 6.2).The overall algorithm. Our goal is to maximize L DW3 by performing f edge flips. While Eq. 3 enables us to efficiently compute the loss for a single edge, there are still O(n 2) possible flips. To reduce the complexity when adding edges (see Sec. 4.2 for removing) we instead form a candidate set by randomly sampling C candidate flips. This introduces a further approximation that nonetheless works well in practice. For every candidate we compute its impact on the loss via L DW3 and greedily choose the top f flips.1 The runtime complexity of our overall approach is: O(N ·|E|+C ·N log N). First, we can compute the generalized eigenvectors of A in a sparse fashion in O(N · |E|). Then we sample C candidate edges, and for each we can compute the approximate eigenvalues in constant time (Theorem 2). To obtain the final loss, we sort the values leading to the overall complexity. The approach is easily parallelizable since every candidate edge flip can be evaluated in parallel. If the goal of the attacker is to attack a specific node t ∈ V, called the target, or a specific downstream task, it is suboptimal to maximize the overall loss via L DW *. Rather, we should define some other target specific loss that depends on t's embedding -replacing the loss function of the outer optimization in Eq. 1 by another one operating on t's embedding. Thus, for any edge flip (i, j) we now need the change in t's embedding -meaning changes in the eigenvectors -which is inherently more difficult to compute compared to changes in eigen/singular-values. We study two cases: misclassifying a target node and manipulating the similarity of node pairs (i.e. link prediction task).Surrogate embeddings. To efficiently compute the change in eigenvectors, we define surrogate embeddingsZ *. Specifically, instead of performing an SVD decomposition on M (or equivalently S with upscaling) and using the from Lemma 2 we defineZ DISPLAYFORM0 Experimentally, usingZ * instead of Z * as the embedding showed no significant change in the performance on downstream tasks (even on the clean graph; suggesting its general use since it is more efficient to compute). Now, we can approximate the generalized eigenvectors, and thusZ * (A), in closed-form: Theorem 3. Let ∆A, ∆D and ∆w ij be defined as before, and ∆λ y be the change in the y-th generalized eigenvalue λ y as derived in Theorem 2. 
Then, the y-th generalized eigenvector u y of A after performing the edge flip (i, j) can be approximated with: DISPLAYFORM1 where E i (x) returns a vector of zeros except at position i where the value is x, d is a vector of the node degrees, • is the Hadamard product, and (·) + is the pseudo inverse. We provide the proof in the appendix. Computing Eq. 4 seems expensive at first due to the pseudo inverse term. However, note that this term does not depend on the particular edge flip we perform. Thus, we can pre-compute it once and furthermore, parallelize the computation for each y. Similarly, we can pre-compute u y d, while the rest of the terms are all computable in O. For any edge flip we can now efficiently compute the optimal embeddingZ * (A) using Eqs. 3 and 4. The t-th row of Z * (A) is the desired embedding for a target node t after the attack. Targeting node classification. The goal is to enforce misclassification of the target t for the downstream task of node classification (i.e. node labels are partially given). To fully specify the targeted attack we need to define the candidate flips and the target-specific loss responsible for scoring the candidates. As candidates we use {(v, t)|v = t}. For the loss, we first pre-train a classifier C on the clean embeddingZ *. Then we predict the class probabilities p t of the target t using the compromisedZ * t,· and we calculate the classification margin m(t) = p t,c(t) − max c =c(t) p t,c, where c(t) is the ground-truth class for t. That is, our loss is the difference between the probability of the ground truth and the next most probable class after the attack. Finally, we select the top f flips with smallest margin m (note when m(t) < 0 node t is misclassified). In practice, we average over 10 randomly trained classifiers. Another (future work) approach is to treat this as a tri-level optimization problem. Targeting link prediction. The goal of the attack is: given a set of target node pairs T ⊂ V × V, decrease the similarity between the nodes that have an edge, and increase the similarity between nodes that do not have an edge, by modifying other parts of the graph -i.e. it is not allowed to directly flip pairs in T. For example, in an e-commerce graph representing users and items, the goal might be to increase the similarity between a certain item and user, by adding/removing connections between other users/items. To achieve this, we first train the initial clean embedding without the target edges. Then, for a candidate set of flips, we estimateZ * using Eqs. 3 and 4 and use them to calculate the average precision score (AP score) on the target set T, withZ * DISPLAYFORM2 T as a similarity measure. Finally, we pick the top f flips with lowest AP scores and use them to poison the network. Since this is the first work considering adversarial attacks on node embeddings there are no known baselines. Similar to works that optimize the graph structure (Chen et al.) we compare with several strong baselines. B rnd randomly flips edges (we report averages over ten seeds), B eig removes edges based on their eigencentrality in the line graph L(A), and B deg removes edges based on their degree centrality in L(A) -or equivalently sum of degrees in the original graph. When adding edges we use the same baselines as above, now calculated on the complement graph, except for B eig since it is infeasible to compute even for medium size graphs. 
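Before reporting results, the closed-form general attack developed above can be summarized in a short NumPy/SciPy sketch: the generalized spectrum of A, the per-flip eigenvalue update of Theorem 2, a surrogate loss in the spirit of Lemma 3, greedy selection of the top f flips, and the classification margin used by the targeted variant. Because the displayed equations did not survive extraction, the eigenvalue update and the singular-value bound below are reconstructions and should be read as assumptions; all names and defaults are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def generalized_spectrum(A):
    """Generalized eigenpairs A u = lam D u (cf. Lemma 2); columns of U are eigenvectors."""
    A = np.asarray(A, dtype=float)
    D = np.diag(A.sum(axis=1))
    return eigh(A, D)                      # lam ascending, U with u^T D u = I

def eigenvalue_update(A, lam, U, i, j):
    """First-order estimate (Theorem 2) of the generalized eigenvalues after flipping
    edge (i, j); the exact constants are an assumed reconstruction."""
    dw = 1.0 - 2.0 * A[i, j]               # +1 if the edge is added, -1 if removed
    return lam + dw * (2.0 * U[i] * U[j] - lam * (U[i] ** 2 + U[j] ** 2))

def surrogate_loss(lam_new, d_min, K=32, T=5):
    """L_DW3-style surrogate: sum of squared trailing singular-value bounds of S;
    the bound |sum_{r=1..T} lam^r| / d_min stands in for Lemma 3."""
    sigma = np.abs(sum(lam_new ** r for r in range(1, T + 1))) / d_min
    sigma = np.sort(sigma)[::-1]
    return float(np.sum(sigma[K:] ** 2))

def general_attack(A, f=100, n_candidates=20000, K=32, T=5, seed=0):
    """Greedily pick the f candidate flips with the largest surrogate loss (A_DW3-style)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    lam, U = generalized_spectrum(A)
    d_min = A.sum(axis=1).min()
    cands = set()
    while len(cands) < min(n_candidates, n * (n - 1) // 2):
        i, j = sorted(rng.integers(0, n, size=2))
        if i != j:
            cands.add((i, j))
    scored = sorted(cands, reverse=True,
                    key=lambda e: surrogate_loss(eigenvalue_update(A, lam, U, *e), d_min, K, T))
    return scored[:f]

def classification_margin(probs, true_class):
    """Targeted score m(t) = p_{c(t)} - max_{c != c(t)} p_c; negative means misclassified."""
    probs = np.asarray(probs, dtype=float)
    return probs[true_class] - np.delete(probs, true_class).max()
```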
A DW2 denotes our gradient based attack, A DW3 our closed-form attack, A link our link prediction attack, A class our node classification attack. The size of the sampled candidate set for adding edges is 20K (for removing edges see Sec. 4.2).We aim to answer the following questions: (Q1) how good are our approximations of the loss; (Q2) how much damage is caused to the embedding quality by our attacks/baselines; (Q3) can we still perform a successful attack when restricted; FORMULA10 FORMULA2 ). In all experiments, after choosing the top f flips we retrain the embeddings and report the final performance since this is a poisoning attack. Note, for the general attack, the downstream node classification task is only a proxy for estimating the quality of the embeddings after the attack, it is not our goal to damage this task, but rather to attack the unsupervised embeddings in general. To estimate the approximation quality we randomly select a subset of 20K candidate flips and compute the correlation between the actual loss and our approximation as measured by Pearson's R score. For example, for K = 32 we have R(L DW2, L DW1) = 0.11 and R(L DW3, L DW1) = 0.90, clearly showing that our closed-form strategy approximates the loss significantly better compared to the gradient-based one. Similarly, L DW 3 is a better approximation than L DW2 for K = 16, 64, 128. To obtain a better understanding we investigate the effect of removing and adding edges separately. Since real graphs are usually sparse, for removing we set the candidate set to be the set of all edges, with one edge set aside for each node to ensure we do not have singleton nodes. To obtain candidate edges for adding we randomly sample a set of edges. We then simply select the top f edges from the candidate set according to our scoring function. For adding edges, we also implemented an alternative add-by-remove strategy denoted as A abr. Here, we first add cf -many edges randomly sampled from the candidate set to the graph and subsequently remove (c − 1)f -many of them. This strategy performed better empirically. Since the graph is undirected, for each (i, j) we also flip (j, i).. Removed/added edges are denoted on the x-axis with negative/positive values respectively. On FIG2 we see that our strategies achieve a significantly higher loss compared to the baselines when removing edges. To analyze the change in the embedding quality we consider the node classification task (i.e. using it as a proxy to evaluate quality; this is not our targeted attack). Interestingly, B deg is the strongest baseline w.r.t. to the loss, but this is not true for the downstream task. As shown in FIG2, our strategies significantly outperform the baselines. As expected, A DW3 and A abr perform better than A DW2. On Cora our attack can cause up to around 5% more damage compared to the strongest baseline. On PolBlogs, by adding only 6% edges we can decrease the classification performance by more than 23%, while being more robust to removing edges. Restricted attacks. In the real world, attackers cannot attack any node, but rather only specific nodes under their control, which translates to restricting the candidate set. To evaluate the restricted scenario, we first initialize the candidate sets as before, then we randomly choose a given percentage p r of nodes as restricted and discard every candidate that includes them. As expected, the in FIG2 show that for increasingly restrictive sets with p r = 10%, 25%, 50%, our attack is able to do less damage. 
However, we always outperform the baselines (not plotted), and even in the case when half of the nodes are restricted (p r = 50%) we are still able to damage the embeddings. With this we are can answer question (Q3) affirmatively -the attacks are successful even when restricted. Analysis of selected adversarial edges. In Fig. 2a we analyze the top 1K edges on Cora-ML. For each edge we consider its source node degree (destination node, resp.) and plot it on the x-axis (yaxis). The heatmap shows adversarial edge counts divided by total edge counts for each bin. We see that low, medium and high degree nodes are all represented. In Fig. 2b we plot the edge centrality distribution for the top 1K adversarial edges and compare it with the distribution of the remaining edges. There is no clear distinction. The findings highlight the need for a principled method such as ours since using intuitive heuristics such as degree/edge centrality cannot identify adversarial edges. To obtain a better understanding of the performance we study the margin m(t) before and after the attack considering every node t as a potential target. We allow only (d t + 3) flips for attacking each node ensuring the degrees stay similar. Each dot in Fig. 4 represents one node grouped by its degree in the clean graph (logarithmic bins). We see that low-degree nodes are easier to misclassify (m(t) < 0), and that high degree nodes are more robust in general -the baselines have 0% success. Our method, however, can successfully attack even high degree nodes. In general, our attack is significantly more effective across all bins -as shown by the numbers on top of each box -with 77.89% nodes successfully misclassified on average compared to e.g. only 33.64% for B rnd. For the link prediction task (Fig. 3) we are similarly able to cause significant damage -e.g. A link achieves almost 10% decrease in performance by flipping around 12.5% of edges on Cora, significantly better than all other baselines. Here again, compared to adding edges, removing has a stronger effect. Overall, answering (Q5), both experiments confirm that our attacks hinder the downstream tasks. The question of transferability -do attacks learned for one model generalize to other models -is important since in practice the attacker might not know the model used by the system under attack. However, if transferability holds, such knowledge is not required. To obtain the perturbed graph, we remove the top f adversarial edges with the A DW3 attack. The same perturbed graph is then used to learn node embeddings using several other state-of-the-art approaches. TAB0 shows the change in node classification performance compared to the embeddings learned on the clean graph for each method respectively. We tune the key hyperparameters for each method (e.g. p and q for node2vec). Answering (Q6), the show that our attack generalizes: the adversarial edges have a noticeable impact on other models as well. We can damage DeepWalk trained with the skip-gram objective with negative sampling (SGNS) showing that the factorization analysis is successful. We can even damage the performance of semi-supervised approaches such as GCN and Label Propagation. Compared to the transferability of the baselines (Sec. 6.3) our attack causes significantly more damage. We demonstrate that node embeddings are vulnerable to adversarial attacks which can be efficiently computed and have a significant negative effect on node classification and link prediction. 
Furthermore, successfully poisoning the system is possible with relatively small perturbations and under restriction. More importantly, our attacks generalize -the adversarial edges are transferable across different models. Future work includes modeling the knowledge of the attacker, attacking other network representation learning methods, and developing effective defenses against such attacks. Attacking spectral embedding. Finding the spectral embedding is equivalent to the following trace minimization problem: DISPLAYFORM0 subject to orthogonality constraints, where L xy is the graph Laplacian. The solution is obtained via the eigen-decomposition of L, with Z * = U K where U K are the K-first eigen-vectors corresponding to the K-smallest eigenvalues λ i. The Laplacian is typically defined in three different ways: the unnormalized Laplacian L = D − A, the normalized random walk Laplacian From Lemma 5 we see that we can attack both normalized versions of the graph Laplacian with a single attack strategy since they have the same eigenvalues. It also helps us to do that efficiently similar to our previous analysis (Theorem. 3). DISPLAYFORM1 Theorem 4. Let L rw (or equivalently L sym) be the initial graph Laplacian before performing a flip and λ y and u y be any eigenvalue and eigenvector of L rw. The eigenvalue λ y of L rw obtained after flipping a single edge (i, j) is DISPLAYFORM2 where u yi is the i-th entry of the vector u y.Proof. From Lemma 5 we can estimate the change in L rw (or equivalently L sym) by estimating the eigenvalues solving the generalized eigen-problem Lu = λDu. Let ∆L = L − L be the change in the unnormalized graph Laplacian after performing a single edge flip (i, j) and ∆D be the corresponding change in the degree matrix. Let e i be defined as before. Then ∆L = (1 − 2A ij)(e i − e j)(e i − e j) T and ∆D = (1 − 2A ij)(e i e T i + e j e T j). Based on the theory of eigenvalue perturbation we have λ y ≈ λ y + u T y (∆L − λ y ∆D)u y. Substituting ∆L and ∆D are re-arranging we get the above . Using now Theorem 4 and Eq. 5 we finally estimate the loss of the spectral embedding after flipping an edge L SC (L rw, Z) ≈ K p=1 λ p. Note that here we are summing over the K-first smallest eigenvalues. We see that spectral embedding and the random walk based approaches are indeed very similar. We provide similar analysis for the the unnormalized Laplacian:Theorem 5. Let L be the initial unnormalized graph Laplacian before performing a flip and λ y and u y be any eigenvalue and eigenvector of L. The eigenvalue λ y of L obtained after flipping a single edge (i, j) can be approximated by: DISPLAYFORM3 Proof. Let ∆A = A − A be the change in the adjacency matrix after performing a single edge flip (i, j) and ∆D be the corresponding change in the degree matrix. Let e i be defined as before. Then Upper bound on singular values. From Lemma 3 we have that L DW 3 is an upper bound on L DW 1 (excluding the elementwise logarithm) so maximizing L DW 3 is principled. To gain a better understanding of the tightness of the bound we visualize the singular values of S and their respective upper-bound for all datasets. As we can see in FIG6, the gap is different for different datasets and relatively small. Furthermore we can notice that the gap tends to increase for larger singular values. Transferability of the baselines. To further support the transferability of our proposed attack we also examine the transferability of the baseline attacks. 
Specifically, we examine the transferability of B eig since it is the strongest baseline when removing edges as shown in FIG2. We use the same experimental setup as in Sec. 4.4 and show the in TAB1. We can see that compared to our proposed attack the baseline can do a significantly smaller amount of damage (compare to in TAB0). Interestingly, it can do significant damage to GCN when removing 250 edges on Cora, but not when removing 500 edges. We plan on exploring this counterintuitive finding in future work.
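For completeness, the spectral-embedding attack analyzed above (Theorems 4 and 5) admits the same style of per-flip sketch. Substituting the rank-one changes ΔL = Δw_ij (e_i − e_j)(e_i − e_j)^T and ΔD = Δw_ij (e_i e_i^T + e_j e_j^T) into the first-order perturbation gives the update below; the spectral loss after a flip is then approximated by the sum of the K smallest updated eigenvalues. Variable names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenvalue_update(A, i, j):
    """Change of the generalized Laplacian eigenvalues (L u = lam D u, L = D - A)
    after flipping edge (i, j), via lam' ~ lam + u^T (dL - lam dD) u."""
    A = np.asarray(A, dtype=float)
    D = np.diag(A.sum(axis=1))
    lam, U = eigh(D - A, D)                         # generalized eigenpairs
    dw = 1.0 - 2.0 * A[i, j]
    return lam + dw * ((U[i] - U[j]) ** 2 - lam * (U[i] ** 2 + U[j] ** 2))

def spectral_loss(lam_new, K=32):
    """L_SC surrogate: sum of the K smallest (updated) eigenvalues."""
    return float(np.sort(lam_new)[:K].sum())
```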
Adversarial attacks on unsupervised node embeddings based on eigenvalue perturbation theory.
Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains. However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge. We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework. The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks. In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain. We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains. Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions. Hierarchical reinforcement learning methods hold the promise of faster learning in complex state spaces and better transfer across tasks, by exploiting planning at multiple levels of detail BID0. A taxi driver, for instance, ultimately must execute a policy in the space of torques and forces applied to the steering wheel and pedals, but planning directly at this low level is beset by the curse of dimensionality. Algorithms like HAMS, MAXQ, and the options framework permit powerful forms of hierarchical abstraction, such that the taxi driver can plan at a higher level, perhaps choosing which passengers to pick up or a sequence of locations to navigate to BID19 BID3 BID13. While these algorithms can overcome the curse of dimensionality, they require the designer to specify the set of higher level actions or subtasks available to the agent. Choosing the right subtask structure can speed up learning and improve transfer across tasks, but choosing the wrong structure can slow learning BID17 BID1. The choice of hierarchical subtasks is thus critical, and a variety of work has sought algorithms that can automatically discover appropriate subtasks. One line of work has derived subtasks from properties of the agent's state space, attempting to identify states that the agent passes through frequently BID18. Subtasks are then created to reach these bottleneck states (van ; BID17 BID4 . In a domain of rooms, this style of analysis would typically identify doorways as the critical access points that individual skills should aim to reach (Şimşek &). This technique can rely only on passive exploration of the agent, yielding subtasks that do not depend on the set of tasks to be performed, or it can be applied to an agent as it learns about a particular ensemble of tasks, thereby suiting the learned options to a particular task set. Another line of work converts the target MDP into a state transition graph. Graph clustering techniques can then identify connected regions, and subtasks can be placed at the borders between connected regions BID11. In a rooms domain, these connected regions might correspond to rooms, with their borders again picking out doorways. 
Alternately, subtask states can be identified by their betweenness, counting the number of shortest paths that pass through each specific node (Şimşek & ; BID17 . Other recent work utilizes the eigenvectors of the graph laplacian to specify dense rewards for option policies that are defined over the full state space BID10 . Finally, other methods have grounded subtask discovery in the information each state reveals about the eventual goal (van). Most of these approaches aim to learn options with a single or low number of termination states, can require high computational expense BID17, and have not been widely used to generate multiple levels of hierarchy (but see BID24 ; BID12).Here we describe a novel subtask discovery algorithm based on the recently introduced Multitask linearly-solvable Markov decision process (MLMDP) framework BID14, which learns a basis set of tasks that may be linearly combined to solve tasks that lie in the span of the basis BID21. We show that an appropriate basis can naturally be found through non-negative matrix factorization BID8 BID3, yielding intuitive decompositions in a variety of domains. Moreover, we show how the technique may be iterated to learn deeper hierarchies of subtasks. In line with a number of prior methods, BID17 BID12 our method operates in the batch off-line setting; with immediate application to probabilistic planning. The subtask discovery method introduced in BID10, which also utilizes matrix factorization techniques to discover subtasks albeit from a very different theoretical foundation, is notable for its ability to operate in the online RL setting, although it is not immediately clear how the approach taken therein might achieve a deeper hierarchical architecture, or enable immediate generalization to novel tasks. In the multitask framework of BID14, the agent faces a set of tasks where each task has an identical transition structure, but different terminal rewards, modeling the setting where an agent pursues different goals in the same fixed environment. Each task is modeled as a finite-exit LMDP BID21. The LMDP is an alternative formulation of the standard MDP that carefully structures the problem formulation such that the Bellman optimality equation becomes linear in the exponentiated cost-to-go. As a of this linearity, optimal policies compose naturally: solutions for rewards corresponding to linear combinations of two optimal policies are simply the linear combination of their respective exponentiated cost-to-go functions BID22. This special property of LMDPs is exploited by BID14 to develop a multitask reinforcement learning method that uses a library of basis tasks, defined by their boundary rewards, to perform a potentially infinite variety of other tasks-any tasks that lie in the subspace spanned by the basis can be performed optimally. Briefly, the LMDP BID21 is defined by a three-tuple L = S, P, R, where S is a set of states, P is a passive transition probability distribution P: S × S →, and R is an expected instantaneous reward function R: S → R. The'action' chosen by the agent is a full transition probability distribution over next states, a(·|s). A control cost is associated with this choice such that a preference for energy-efficient actions is inherently specified: actions corresponding to distributions over next states that are very different from the passive transition probability distribution are expensive, while those that are similar are cheap. 
In this way the problem is regularized by the passive transition structure. Finally, the LMDP has rewards r i (s) for each interior state, and r b (s) for each boundary state in the finite exit formulation. The LMDP can be solved by finding the desirability function z(s) = e V (s)/λ which is the exponentiated cost-to-go function for a specific state s. Here λ is a temperature-like parameter related to the stochasticity of the solution. Given z(s), the optimal control can be computed in closed form (see BID20 for details). Despite the restrictions inherent in the formulation, the LMDP is generally applicable; see the supplementary material in BID14 for examples of how the LMDP can be applied to non-navigational, and conceptual tasks. A primary difficulty in translating standard MDPs into LMDPs is the construction of the action-free passive dynamics P (although a general way of approximating MDPs using LMDPs is given in BID20); however, in many cases, this can simply be taken as the ing Markov chain under a uniformly random policy. In this instance the problem is said to be'entropy regularized'. A similar problem set-up appears in a number of recent works BID15 BID6.The Multitask LMDP (MLDMP) BID14 operates by learning a set of N t tasks, defined by LMDPs L t = S, P, q i, q t b, t = 1, · · ·, N t with identical state space, passive dynamics, and internal rewards, but different instantaneous exponentiated boundary reward structures q for the multitask module. With this machinery in place, if a new task with boundary reward q can be expressed as a linear combination of previously learned tasks, q = Qw. Then the same weighting can be applied to derive the corresponding optimal desirability function, z = Zw, due to the compositionality of the LMDP. More generally, if the new task cannot be exactly expressed as a linear combination of previously learned tasks, a significant jump-start in learning may nevertheless be gained by finding an approximate representation. The multitask module can be stacked to form deep hierarchies BID14 by iteratively constructing higher order MLMDPs in which higher levels select the instantaneous reward structure that defines the current task for lower levels in a feudal-like architecture. This recursive procedure is carried out by firstly augmenting the layer l state spaceS l = S l ∪S l t with a set of N t terminal boundary states S l t called subtask states. Transitioning into a subtask state corresponds to a decision by the layer l MLMDP to access the next level of the hierarchy, and is equivalent to entering a state of the higher layer. These subtask transitions are governed by a new N ] are then suitably defined BID14. Solving the higher layer MLMDP will yield an optimal action a(·|s) making some transitions more likely than they would be under the passive dynamic, indicating that they are more desirable for the current task. Similarly, some transitions will be less likely than they would be under the passive dynamic, indicating that they should be avoided for the current task. The instantaneous rewards for the lower layer are therefore set to be proportional to the difference between the controlled and passive dynamic, r DISPLAYFORM0 Return control to lower layer to execute g) Figure 1: Execution model for the hierarchical MLMDP BID14. a) Beginning at some start state, the agent will make a transition underP 1. This transition may be to an interior, boundary, or subtask state. 
b) Transitioning into a subtask state is equivalent to entering a state of the higher layer MLMDP. No'real' time passes during this transition. c) The higher layer MLMDP is then solved and a next higher layer state is drawn. d) Knowing the next state at the higher layer allows us to specify the reward structure defining the current task at the lower layer. Control is then passed back to the lower layer to achieve this new task. Notice that the details of how this task should be solved are left to the lower layer (one possible trajectory being shown). e) At some point in the future the agent may again elect to transition into a subtask state -in this instance the transition is into a different subtask corresponding to a different state in the higher layer. f) The higher layer MLMDP is solved, and a next state drawn. This specifies the reward structure for a new task at the lower layer. g) Control is again passed back to the lower layer, which attempts to solve the new task. This process continues until the agent transitions into a boundary state. Prior work has assumed that the task basis Q is given a priori by the designer. Here we address the question of how a suitable basis may be learned. A natural starting point is to find a basis that retains as much information as possible about the ensemble of tasks to be performed, analogously to how principal component analysis yields a basis that maximally preserves information about an ensemble of vectors. In particular, to perform new tasks well, the desirability function for a new task must be representable as a (positive) linear combination of the desirability basis matrix Z. This naturally suggests decomposing Z using PCA (i.e., the SVD) to obtain a low-rank approximation that retains as much variance as possible in Z. However, there is one important caveat: the desirability function is the exponentiated cost-to-go, such that Z = exp(V /λ). Therefore Z must be non-negative, otherwise it does not correspond to a well-defined cost-to-go function. Our approach to subtask discovery is thus to uncover a low-rank representation through non-negative matrix factorization, to realize this positivity constraint BID8 BID3. We seek a decomposition of Z into a data matrix D ∈ R (m×k) and a weight matrix W ∈ R (k×n) as: DISPLAYFORM0 where d ij, w ij ≥ 0. The value of k in the decomposition must be chosen by a designer to yield the desired degree of abstraction, and is referred to as the decomposition factor. A small value of k corresponds to a high degree of abstraction since the variance in the desirability space Z must be captured in a k dimensional subspace spanned by the vectors in the data matrix D. Conversely, a large value of k corresponds to a low degree of abstraction. Since Z is strictly positive, the non-negative decomposition is not unique for any value of k BID5. Formally then, we seek a decomposition which minimizes the cost function DISPLAYFORM1 where d denotes the β-divergence, a subclass of the more familiar Bregman Divergences BID7, between the true basis Z and the approximate basis D. The β-divergence collapses to the better known statistical distances for β ∈ {1, 2}, corresponding to the Kullback-Leibler and Euclidean distances respectively BID2.Crucially, since Z depends on the set of tasks that the agent will perform in the environment, the representation is defined by the tasks taken against it, and is not simply a factorization of the domain structure. 
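The pieces described so far — solving the basis LMDPs to obtain Z, composing a new task as z = Zw, and factorizing Z ≈ DW — can be sketched compactly. The finite-exit LMDP solution below follows the standard linearly-solvable formulation, and the non-negative blend and KL (β = 1) objective match the choices above; layer sizes, solver settings and function names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

def solve_lmdp(P_ii, P_ib, r_i, r_b, lam=1.0):
    """Desirability z = exp(V / lam) of a finite-exit LMDP.  Interior states satisfy
    z_i = Q (P_ii z_i + P_ib z_b) with Q = diag(exp(r_i / lam)), a linear system;
    boundary states have z_b = exp(r_b / lam)."""
    Q = np.diag(np.exp(np.asarray(r_i, dtype=float) / lam))
    z_b = np.exp(np.asarray(r_b, dtype=float) / lam)
    z_i = np.linalg.solve(np.eye(Q.shape[0]) - Q @ P_ii, Q @ P_ib @ z_b)
    return z_i, z_b

def compose_task(Q_basis, Z_basis, q_new):
    """Express a new boundary reward as a (positive) blend of the basis tasks,
    then reuse the same weights on the desirability basis: z = Z w."""
    w, _ = nnls(Q_basis, q_new)        # q_new ~ Q w with w >= 0
    return w, Z_basis @ w

def discover_subtasks(Z, k):
    """Subtask discovery: non-negative factorization Z ~ D W under the KL (beta = 1)
    divergence.  Columns of D are the desirability functions of the k subtasks."""
    model = NMF(n_components=k, init="nndsvda", beta_loss="kullback-leibler",
                solver="mu", max_iter=2000)
    D = model.fit_transform(Z)         # n_states x k
    W = model.components_              # k x n_tasks
    return D, W, model.reconstruction_err_
```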
To keep the focus on the decomposition strategy, we assume, here and throughout, that Z ∈ R n×n is given. The basis set of tasks can be a tiny fraction of the set of possible tasks in the space. As an example, suppose we consider tasks with boundary rewards at any of two separate locations in an n-dimensional world such that there are n-choose-2 possible tasks (corresponding to tasks like 'navigate to point A or B'). We require only an n-dimensional Z matrix containing tasks to navigate to each point individually. The ing subtasks we uncover will aid in solving all of these n-choose-2 tasks. More generally we might consider tasks in which boundary rewards are placed at three or more locations, etc. To know Z therefore means to know an optimal policy to achieve n of ∼ 2 n tasks in a space. An online version of this method would estimate Z from data, either directly or by learning a transition model (see BID10 for some possibilities). Nested roomHairpin maze DISPLAYFORM0 Desirability functions for subtasksFigure 2: Intuitive decompositions in structured domains. All colour-plots correspond to the desirability functions for subtasks overlaid onto the base domains shown in panels a) and e). b,c,d) Subtasks correspond to'regions', distributed patterns over preferred states, rather than single states. Where the decomposition factor is chosen to match the structure of the domain (here k = 16 for example), subtasks correspond to an intuitive semantic -"go to room X". f,g,h) Again, subtasks correspond to regions rather than single states. Collectively the subtasks form an approximate cover for the space. To demonstrate that the proposed scheme recovers an intuitive decomposition, we consider the ing low-rank approximation to the desirability basis in two domains, for a few hand-picked decomposition factors. All presented in this section correspond to solutions to Eqn. for β = 1 so that the cost function is taken to be the KL-divergence (although the method does not appear to be overly sensitive to β ∈). Note that in the same way that the columns of Z represent the exponentiated cost-to-go for the single-states tasks in the basis, so the columns in D represent the exponentiated cost-to-go for the discovered subtasks. In Fig. 2, we compute the data matrix D ∈ R m×k for k = {4, 9, 16} for both the nested rooms domain, and the hairpin domain. The desirability functions for each of the subtasks is then plotted over the base domain. All of the decompositions share a number of properties intrinsic to the proposed scheme. Most notably, the subtasks themselves do not correspond to single states (like bottle-neck states), but rather to complex distributions over preferred states. By way of example, semantically, a single subtask in Fig. 2-d corresponds to the task'Go to Room', where any state in the room is suitable as a terminal state for the subtask. Also, since Z is taken to be the full basis matrix in this example, the distributed patterns of the subtasks collectively form an approximate cover for the full space. This is true regardless of the decomposition factor chosen. It is worthwhile noting that the decompositions discovered are refactored for larger values of k. That is to say that the decomposition for k = 5 is not the same as the decomposition for k = 4 just with the addition of an extra subtask. Instead all five of the subtasks in the decomposition are adjusted allowing for maximum expressiveness in the representation. It follows that there is no intrinsic ordering of the subtasks. 
It only matters that they collectively form a good representation of the task space Z. While we have shown only spatial decomposition thus far, our scheme is applicable to tasks more general than simple navigation-like tasks. To make this point clear, we consider the scheme's application to the standard TAXI domain BID3 with one passenger and four pickup/drop-off locations. The 5 × 5 TAXI domain considered is depicted in FIG2. Here the agent operates in the product space of the base domain (5 × 5 = 25), and the possible passenger locations (5-choose-1 = 5) for a complete state-space of 125 states. We consider a decomposition with factor k = 5. FIG2 is a depiction of the subtask structure we uncover. Each column of FIG2 is one of the subtasks we discover. Each of these is a policy over the full state space. For visual clarity, these are then divided into the five copies of the base domain, each being defined by the passenger's location. The color-map corresponds to the desirability function for each subtask. To help interpret the semantic nature of the subtasks discovered, consider the first column of FIG2. This subtask has almost all of its desirability function mass focused at states in which the passenger is in the Taxi. This task is thus a general pick-up action. By a similar analysis, column two of FIG2 depicts a subtask whose desirability function is essentially uniform over all states where the passenger is at location A. Semantically this subtask seeks to enter states with the passenger at location A regardless of taxi position. This subtask thus corresponds to the drop-off action at location A. Also note the slight probability leakage into the'in taxi' state for the drop off point -the precondition for the passenger to be dropped off. Considered as a whole, the subtask basis represents policies for getting the passenger to each of the pick-up/drop-off locations, and for having the passenger in the taxi. The proposed scheme discovers a set of subtasks by finding a low-rank approximation to the desirability basis matrix Z. By leveraging the stacking mechanism defined in BID14, this approximation procedure can simply be reapplied to find an approximate desirability basis for each subsequent layer of the hierarchy, by factoring the desirability matrix Z l+1 at each layer. However, As a demonstration of the recursive and multiscale nature of the scheme, we consider a spacial domain inspired by the multiscale nature of cities, see FIG3. At the highest level we consider a city which is comprised of three major communities, each of which is comprised of five houses. Each house is further comprised of four rooms, each of which is comprised of sixteen base states in a 4 × 4 grid. We consider a decomposition in line with the natural scales of the domain and take k l = {3 × 5 × 4 = 60, 3 × 5 = 15, 3} respectively for l = 2, 3, 4. As expected, the scheme discovers subtasks corresponding to the multiscale nature of the domain with the highest layer subtasks intuitively corresponding to whole communities, etc. Of course the semantic clarity of the subtasks is due to the specific decomposition factors chosen, but any decomposition factors would work to solve tasks in the domain. At this point the scheme has automated the discovery of the subtasks themselves, and the transitions into these subtasks. What remains is for a designer to specify the decomposition factors k l at each layer. 
In an analogy to neural network architectures, the scheme has defined the network connectivity but not the number of neurons at each layer. While this is a typical hyperparameter, by leveraging the unique construction in Eqn., a good value for this parameter may be estimated from data. By increasing the decomposition factor k l the approximation error, given by Eqn. FORMULA2, is monotonically decreased. For some domains there is an obvious inflection point at which increasing the decomposition factor only slightly improves the approximation. Let us denote the dependence of d β (·) on the decomposition factor simply as f (k). Then we may somewhat naively take the smallest value that demonstrates diminishing incremental returns as a good value for k. In this instance the (LEFT) By projecting the state at each layer back into the base domain, it becomes apparent that subtasks correspond to distributed patterns of preferred states, rather than single goal states. In this way hierarchical subtasks are ever more abstracted in space and time, as higher layers are accessed. Tangibly, where the states at the lowest layer correspond to individual locations, higher layer states correspond to entire rooms, houses, and communities correspondingly. (RIGHT) An abstract representation of'subtasks' as states of a higher layer MLMDP. A key contribution of this paper is to define an autonomous way of uncovering the contents of higher layer states, and the transition structures into these states. approximation error, Eqn. FORMULA2, is said to exhibit elbow-joint behaviour: DISPLAYFORM0 In practice, when the task ensemble is drawn uniformly from the domain, the observed elbow-joint behaviour is an encoding of the high-level domain structure. Choosing the right set of subtasks is known to speed up learning and improve transfer between tasks. However, choosing the wrong subtasks can actually slow learning. While in general it is not possible to assess a priori whether a set of subtasks is'good' or'bad', the new approach taken here provides a natural measure of the quality of a set of subtasks, by evaluating the quality of the approximation in Eqn.. It follows immediately that different sets of subtasks can be compared simply by evaluating Eqn. for each set individually. This leads naturally to the notion of subtask equivalence. Suppose some standard metric is defined on the space of matrices as m(A, B). Then a formal pseudoequivalence relation may be defined on the set of subtasks, encoded as the columns of the data matrix D, by assigning subtasks that provide similar approximations to the desirability basis to the same classes. Explicitly, for DISPLAYFORM0 The pseudo-equivalence class follows as: DISPLAYFORM1 A full equivalence relation here fails since transitivity does not hold. As noted above, our scheme typically uncovers subtasks as complex distributions over preferred states, rather than individual states themselves. As in Fig., we uncover regions such as'rooms', whereas other methods typically uncover single states such as'doorways'. There is a natural duality between these abstractions, which we consider below. A weight vector can be assigned to each state by solving Eqn. for a specific z: DISPLAYFORM0 This weight vector can be thought of as the representation of s in D. To each state we then assign a real-valued measure of stability, by considering how much this representation changes under state-transition. 
Explicitly, we consider the stability function g: S → R: DISPLAYFORM1 which is a measure of how the representations of neighbour states differ from the current state, weighted by the probability of transitioning to those neighbours. States for which g(s) takes a high value are considered to be unstable, whereas states for which g(s) takes a small value are considered to be stable. Unstable states are those which fall on the boundary between subtask'regions'. A cursory analysis of Fig. immediately identifies doorways as being those unstable states. Figure 6: A natural duality exists between the subtasks uncovered by our scheme, and those typically uncovered by other methods. a) A filter-stack of four subtasks corresponding to the layer one decomposition. Here k 1 = 4 and we present the full set of subtasks. Each of the four subtasks corresponds to one of the four rooms in the domain. b) A hand picked example path through the domain, chosen to illustrate the changing representation for different domain states in terms of the higher layer states. This path and does not correspond to a real agent trajectory. c) For each state along the example path we compute the desirability function z s and approximate it using a linear blend of our subtasks according to Eqn.. The task weights are plotted as a function of steps, revealing the change in representation for different states along the example path. d) Agnostic to any particular path, we compute the stability function g(s) for each state in the domain. It is immediately clear that unstable states, those for which the representation in D l changes starkly, correspond to'doorways'. We present a novel subtask discovery mechanism based on the low rank approximation of the desirability basis afforded by the LMDP framework. The new scheme reliably uncovers intuitive decompositions in a variety of sample domains. Unlike methods based on pure state abstraction, the proposed scheme is fundamentally dependent on the task ensemble, recovering different subtask representations for different task ensembles. Moreover, by leveraging the stacking procedure for hierarchical MLMDPs, the subtask discovery mechanism may be straightforwardly iterated to yield powerful hierarchical abstractions. Finally, the unusual construction allows us to analytically probe a number of natural questions inaccessible to other methods; we consider specifically a measure of the quality of a set of subtasks, and the equivalence of different sets of subtasks. A current drawback of the approach is its reliance on a discrete, tabular, state space. Scaling to high dimensional problems will require applying state function approximation schemes, as well as online estimation of Z directly from experience. These are avenues of current work. More abstractly, the method might be extended by allowing for some concept of nonlinear regularized composition allowing more complex behaviours to be expressed by the hierarchy. AMS thanks the Swartz Program in Theoretical Neuroscience at Harvard University for support.
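Two quantities used above can be made concrete in short sketches: the elbow criterion for choosing the decomposition factor k, and the stability function g(s). Since the corresponding displayed equations did not survive extraction, the thresholding rule and the Euclidean distance between neighbouring representations below are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def choose_k(Z, k_values, factorize, tol=0.05):
    """Elbow heuristic: smallest k whose relative improvement in the reconstruction
    error f(k) over the previous k falls below `tol`.  `factorize(Z, k) -> error`
    can be the NMF routine sketched earlier."""
    errors = np.array([factorize(Z, k) for k in k_values], dtype=float)
    rel_gain = (errors[:-1] - errors[1:]) / errors[:-1]
    for idx, gain in enumerate(rel_gain):
        if gain < tol:
            return k_values[idx]
    return k_values[-1]

def stability(Z, D, P):
    """g(s): expected change of a state's subtask representation under the passive
    dynamics P.  w_s solves z_s ~ D w_s with w_s >= 0 (z_s taken as column s of Z);
    high g(s) flags boundary states such as doorways."""
    n_states = Z.shape[1]
    W = np.stack([nnls(D, Z[:, s])[0] for s in range(n_states)])
    g = np.empty(n_states)
    for s in range(n_states):
        g[s] = P[s] @ np.linalg.norm(W - W[s], axis=1)   # E_{s'~P(.|s)} ||w_s' - w_s||
    return g
```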
We present a novel algorithm for hierarchical subtask discovery which leverages the multitask linear Markov decision process framework.
This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems. We introduce a new memory augmented neural model in which the memory is not resettable (i.e the information stored in the memory after processing an input example is kept for the next seen examples). We used deep reinforcement learning to train a memory controller agent to store useful memories. Our model was able to outperform hand-crafted solver on Binary Linear Programming (Binary LP). The proposed model is tested on different Binary LP instances with large number of variables (up to 1000 variables) and constrains (up to 700 constrains). An intelligent agent with a long-term memory processes raw data (as images, speech and natural language sentences) and then transfer these data streams into knowledge. The knowledge stored in the long-term memory can be used later in inference either by retrieving segments of memory during recalling, matching stored concepts with new raw data (e.g. image classification tasks) or solving more complex mathematical problems that require memorizing either the method of solving a problem or simple steps during solving. For example, the addition of long-digit numbers requires memorizing both the addition algorithm and the carries produced from the addition operations BID28.In neural models, the weights connecting the layers are considered long term memories encoding the algorithm that transform inputs to outputs. Other neural models as recurrent neural networks (RNNs) introduce a short-term memory encoded as hidden states of previous inputs BID12 BID7.In memory augmented neural networks (MANNs), a controller writes memories projected from its hidden state to a memory bank (usually in the form of a matrix), the controller then reads from the memory using some addressing mechanisms and generates a read vector which will be fed to the controller in the next time step BID6. The memory will contain information about each of the input sequence tokens and the controller enriches its memory capacity by using the read vector form the previous time step. Unfortunately, In MANNs the memory is not a long-term memory and is re-settable when new examples are processed, making it unable to capture general knowledge about the inputs domain. In context of natural language processing, one will need general knowledge to answer open-ended questions that do not rely on temporal information only but also on general knowledge from previous input streams. In long-digits multiplication, it will be easier to store some intermediate multiplication steps as digit by digit multiplications and use them later when solving other instances than doing the entire multiplication digit by digit each time from scratch. Neural networks have a large capacity of memorizing, a long-term persistent memory will even increase the network capacity to memorize but will decrease the need for learning coarse features of the inputs that requires more depth. Storing features of the inputs will create shortcut paths for the network to learn the correct targets. Such a network will no longer need to depend on depth to learn good features of the inputs but instead will depend on stored memory features. In other words a long-term memory can provide intermediate answers to the network. Unlike regular MANNs and RNNs, a long-term memory can provide shortcut connections to both inputs features and previous time steps inputs. 
Consider when the memory contains the output of previous examples, the network would cheat from the memory to provide answers. Training such a network will focus on two stages: Learning to find similarities between memory vectors and current input data, learning to transform memory vectors into meaningful representations for producing the final output. The No Free Lunch Theorem of optimization BID25 states that: any two algorithms are equivalent when their performance is averaged across all possible problems, this means that an algorithm that solve certain classes of problems efficiently will be incompetent in other problems. In the setting of combinatorial optimization, there is no algorithm able to do better than a random strategy in expectation. The only way an algorithm outperforms another is to be specialized to a certain class of optimization problems BID0. Learning optimization algorithms from scratch using pairs of input-output examples is a way to outperform other algorithms on certain classes. It is further interesting to investigate the ability of learned models to generate better solutions than hand crafted solvers. The focus of this paper is on designing neural models to solve Binary Linear Programming (or 0-1 Integer Programming) which is a special case of Integer Linear Programming problems where all decision variables are binary. The 0-1 integer programming is one of Krap's 21 NP-complete problems introduced in BID9. The goal of Binary LP is to optimize a linear function under certain constrains. It is proved by BID3 that Binary LP expresses the complexity class NP (i.e any problem in the complexity class NP can be modeled as Binary LP instance).The standard form of a Binary LP problem is: DISPLAYFORM0 where c and b are vectors and A is a matrix. We propose a general framework for long-term memory neural models that uses reinforcement learning to store memories from a neural network. A long-term memory is not resettable and may or may not store hidden states from individual time steps. Instead a long term memory stores information that is considered to be useful for solving similar instances. The controller that decides to write memories follows a policy function that properly constructs the memory contents. We train and test this framework on synthetic data set of Binary LP instances. We analyze the model capability of generalization to more complex instances beyond the training data set. A related line of work is learning to learn and meta-learning, argued that it is an important building block in artificial intelligence. BID26 and BID0 used recurrent neural networks to act as a gradient descent procedure to train other networks. The application of neural networks to combinatorial optimization has a long distinguished history. Most common approach is using Hopfield networks BID8 for solving Travelling Salesman Problem (TSP). Other neural approaches applied to Travelling Salesman Problem include Elastic nets BID4 and Self-organized maps BID5.Pointer networks BID24 an architecture similar to sequence-to-sequence neural network BID20 to solve combinatorial problems as TSP. Pointer networks solve class of problems where the number of target classes in each step of the output depends on the input length which is variable. Recently, BID2 introduced a pointer network for solving combinatorial problems which is optimized using policy gradient methods. This pointer network was applied to TSP and Knapsack. 
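Before turning to the output computation, the memory bank and pointer just described can be sketched as a small data structure. The operations (store at the pointer, delete the current slot, never clear the whole memory) follow the text; the wrap-around pointer moves and all names are assumptions.

```python
import numpy as np

class MemoryBank:
    """Non-resettable memory: an R x C matrix addressed by a pointer PTR in [0, R - 1].
    The controller may store a vector at the pointer or delete (zero) the current
    slot; clearing the whole memory is deliberately not provided."""
    def __init__(self, n_slots, slot_dim):
        self.M = np.zeros((n_slots, slot_dim))
        self.ptr = 0

    def store(self, vector):
        self.M[self.ptr] = vector

    def delete(self):
        self.M[self.ptr] = 0.0

    def move(self, step):
        # increment/decrement the pointer; wrapping around the R slots is an assumption
        self.ptr = (self.ptr + step) % self.M.shape[0]
```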
Our work is different from this recent approaches, we applied our model to a general mathematical formulation of combinatorial optimization problems, instead of learning solutions from problem dependent data. We introduce a memory neural network where the coupled memory is persistent and is not resettable. The model is able to learn and store inputs features as neural memories and utilizes them in solving Binary LP instances. The model can learn from minimal amount of data and yet generate better solutions than handcrafted solver.3 LONG-TERM MEMORY NETWORK FOR BINARY LP Figure 1: The operation of an LTMN processing one Binary LP instance. First the encoder receives a sequence of constrains coefficients and produces a memory vector via a linear projection of encoder final hidden state. The memory controller receives this linear projection and decides whether to store it in memory M or delete the current slot it points to. A control signal (sequence of costs) is then passed to the decoder to produce un-normalized weights w t over the memory locations. A dot product operation between w t and M Finally the output o t at each time step is generated using the memory vector (or encoder final state), the decoder hidden state and the read vector. A sequence-to-sequence approach to solve Binary LP requires formulating the entire problem inputs (costs vector c T, bounds vector b and constrains matrix A) as one sequence and mapping it to an output sequence. A naive linear programming solver constructs the set of feasible solutions from the constrains matrix A, we denote this set as S F, then it iterates over S F using the cost function to find the optimal solution. For a Binary LP instance, the number of possible feasible solutions is 2 N where N is the number of variables. The model we describe works in in a similar way to the naive solver, an encoder encodes the entire constrains matrix into one fixed vector v a, this vector is a continuous representation of the set S F. The vector v a is then passed as the initial state of a decoder along with each cost c t that is used as a control signal to read from memory. Figure 1 describes the operation of an LTMN for a single Binary LP instance. A general Long-Term Memory Network (LTMN) consists of three basic modules: a core network, a memory bank in the form of a matrix and a memory controller. The core network has three main functions: providing memory fragments that are useful information from input signals to be stored in the memory, providing a reading mechanism to read from the memory, and finally producing an output. The memory bank is a matrix R × C, were each row is addressed by a pointer P T R which is a value in range [0, R − 1]. The pointer is provided to the memory controller to know the last accessed memory slot. Clearing the memory contents is not allowed by the memory controller. A memory controller is another module that uses reinforcement learning as a hard attention mechanism to store memory fragments from the core network as in BID27. Upon processing of each example, the controller basically chooses either to store or discard the memory fragments. Writing to the memory in LTMN is totally a discrete process, the final computational graph will not be fully differentiable. Thus it is essential to pass the memories to the output layer for backpropagation to work. 
The final output depends on both memory contents representation (a read vector) and inputs embedding, this is similar to end-to-end memory networks BID19 that is used in question answering tasks. In end-to-end memory networks the final answer depends on both the output memory representation and question embedding. For non-sequential data the core network can be a regular feed forward neural network and the memories can be the hidden vectors from each layer, while for sequential data the core network can be a recurrent neural network and memories can be its hidden states at each time step. The attention mechanism BID1 used over to attend to memory locations will depend either on a control signal or previous output. Let the sequence A = {a 11, a 12, ..., a 1N, SEP ARAT OR, b i, SEP ARAT OR, ..., a M N} be the sequence describing the constrains set, a ij represents the coefficient of the j th variable in the i th constrain. The SEP ARAT OR token separates between the left and right hand sides of the constrains inequality, the same token is used to separate between constrains in the sequence. Finally b i is the bound for the i th constrain. The encoder module encodes the entire sequence A in to a fixed vector. The encoder then produces a linear projection of this vector to be stored in memory. Equations and describe the constrains encoder operation. DISPLAYFORM0 where W hv and b h are weight matrix and bias vector respectively. The input to the decoder is a sequence of costs C = {c 1, ...c N}. The decoder uses the fixed vector v a as initial hidden state. At each time step the decoder reads a cost c t of the costs vector C and produces a hidden state vector: DISPLAYFORM0 The reader uses the decoder hidden state h d t to produce weights that are used to address the memory: DISPLAYFORM1 A simple dot product operation between the memory and the weight vector produces a read vector: DISPLAYFORM2 where M (i) is the memory produced by the memory controller after processing instance i in the data set. The linear projection h a, the cost embedding w t and the read vector r t are used to generate a final output: DISPLAYFORM3 The final output layer is a soft-max layer, and the whole model can be thought of estimating the probability distribution P (o t | h a, w t, r t). Each cost is embedded via a linear layer first, then the embedding is passed to recurrent layers. The reader is implemented as a recurrent layer, in this sense the weights produced will depend on both the cost at time step t (the control signal) and the weight from the previous time step w t−1.The reader produces un-normalized weights over memory slots N in range [-1,1]. When the weight for a slot i is 0 this means that the memory slot does not contribute at all to the read vector, when it is 1 the whole memory slot is used. The interesting case is when the weight is −1 which means the inverse of the current memory slot contents, when the weight is between between 0 and 1 the information from the memory slot is preserved to a certain extent, instead a −1 weight transforms the entire slot into a new representation. Hopefully, the reader weights learned by backpropgation through the whole network will not only act as attention weights over memory slots but also transform the memory contents in to useful representations. A typical memory controller is provided a memory fragment at a discrete time t and makes a decision whether to store or discard this memory. 
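Before turning to the memory controller, a small NumPy sketch of the read path just described may help: the reader's weights over memory slots (in [-1, 1]) produce a read vector via a dot product with the memory matrix, and the read vector is combined with the constraint projection and the cost embedding to produce output logits. The shapes and the final linear output head are illustrative assumptions; the paper implements these components with GRU layers and a softmax output.

```python
import numpy as np

def read_memory(memory, weights):
    """memory: (R, C) matrix of stored slots; weights: (R,) un-normalized reader
    weights in [-1, 1]. A weight of 0 ignores a slot, 1 uses it as-is, and -1
    uses a negated transformation of the slot, as described above."""
    return weights @ memory  # read vector r_t of size C

def output_logits(h_a, cost_emb, r_t, W_out):
    """Combine the constraint projection h_a, the cost embedding and the read
    vector r_t into pre-softmax logits (illustrative linear head)."""
    features = np.concatenate([h_a, cost_emb, r_t])
    return W_out @ features

# Toy dimensions: 4 memory slots of size 8, projection/embedding of size 8, 2 output classes.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 8))
w_t = np.tanh(rng.normal(size=4))        # reader weights in [-1, 1]
r_t = read_memory(M, w_t)
logits = output_logits(rng.normal(size=8), rng.normal(size=8), r_t,
                       rng.normal(size=(2, 24)))
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the binary output
```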
Learning to store useful memories is described within a reinforcement learning (RL) framework. An agent senses the environment s, takes an action a from an action space A using a policy function π, and receives an immediate reward r. The total future reward from point t is expressed by: DISPLAYFORM0 where γ is the discount factor. The environment is the memory bank at discrete time step t, along with the current pointer PTR t and a window of previous memories W t that the controller has been subjected to. A window is similar to a short-term memory with limited size τ, and the controller chooses to store one of the contents of that memory. In all our experiments, the controller stores the last memory vector in the window; in this sense, the window gives the controller insight into which memories have been produced by the core network. The basic actions a controller may perform are storing the current memory (we refer to this as a STR) and not storing anything at all (we refer to this as a NO-OP). We include three more actions: delete the current slot (a DELETE), increment the pointer then store (a STRINC), and decrement the pointer then store (a STRDEC). An action taken by the controller is evaluated through a reward function. One could evaluate the controller the same way we evaluate the core network, through common metrics such as final loss or accuracy: for each example the model solves correctly, the controller gets a reward. In this way, the rewards received would only account for how much better the whole model gets at solving tasks. Instead, we want the controller to store useful memories that will be used in the next processed examples. The reward function answers a simple question: did the current memory result in a better response than an empty memory? The reward function is: DISPLAYFORM0 where M Zero is the memory with zero entries, M t is the memory produced by the memory controller, and eval is an evaluation function for the core network. When the entries of a read vector are all zeros, the output will depend only on information coming from the control signal (the costs c t in the case of Binary LP). The memory controller receives a reward only when the non-empty memory results in solving an example correctly and the empty memory results in an incorrect response. An optimal training procedure will make the core network depend only on the non-empty memory to minimize a loss function, while an optimal training procedure for the memory controller will result in useful memories only. At the heart of the memory controller is an RL agent which interacts with the environment. The environment is basically the memories received for each example the LTMN solves. For each instance the LTMN processes, the RL agent receives a state s t, selects an action a t following a policy π(a t |s t), and receives a reward r t and the next state s t+1 BID21. To train the memory controller agent, we used the deep Q-learning algorithm with experience replay BID15 as described in Algorithm 1. A neural network is used to estimate the action-value function Q(s, a) BID23 BID18 BID17. We choose to implement the action-value neural network as a stacked LSTM which receives the memory contents M t, one slot per time step, followed by the window contents W t and the pointer PTR. We draw a similarity between using the deep Q-learning algorithm for playing Atari games and using it for storing long-term memories.
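A minimal sketch of the reward rule just described, assuming an eval function that reports whether the core network solves the instance correctly with a given memory; the exact numeric reward values are assumptions, since the text above only specifies when a positive reward is given.

```python
import numpy as np

def controller_reward(eval_fn, instance, memory):
    """Reward the controller only when the non-empty memory results in a correct
    answer while the empty (all-zero) memory results in an incorrect one."""
    m_zero = np.zeros_like(memory)
    correct_with_memory = eval_fn(instance, memory)   # bool
    correct_with_empty = eval_fn(instance, m_zero)    # bool
    if correct_with_memory and not correct_with_empty:
        return 1.0   # memory was genuinely useful
    return 0.0       # assumed: no reward otherwise

# Example with a dummy evaluation function that only succeeds when memory is non-empty.
dummy_eval = lambda inst, mem: bool(np.any(mem))
print(controller_reward(dummy_eval, None, np.ones(8)))  # 1.0
```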
During each episode the core network is given a sequence of instances form a training dataset X controller and the memory agent should take proper actions on memory to construct useful memory entries that helps solving theses instances. One can consider the agent is learning to construct the memory as constructing a puzzle from pieces, where the pieces come from a bucket. There will be no final state for the agent to reach, because of the fact that one can not know exactly how the memory should be constructed. In the setting of Atari games the agent will reach the final state (game over) quickly in the first few epochs as the agent will still be struggling to learn an optimal action-value function. In the case of the memory agent, we have to determine an episode length T that both simulates the agent failure in the earlier training phase and the agent reaching a final state. The final state of the agent will be reached when the LTMN processes the last example in an episode. Thus, there will be an infinite number of final states. We can keep the episode length constant so the agent have a limited time to succeed in accumulating rewards. Both the reader and the memory controller agent should be trained jointly together. This procedure is critical since both the reader and the memory controller depend on each other, the reader learns to generate proper weights over memory slots that depend on a control signal and the memory controller learns how to construct memories that are useful for the reader. For each action the memory agent perform on the memory, one should train the core network. We sample a batch of instances from training data X train, run the memory agent using this batch for one session (memory is not cleared between instances) and perform the backward pass on the core network. A typical learning algorithm for neural networks as stochastic gradient descent, updates the network parameters using data batches. The number of iterations (one forward pass followed by one backward pass) is equal to N/m where N is the size of dataset and m is the batch size. The number of iterations can get large as the size of the dataset increases. To effectively train the LTMN core network, the episode length should be as the same as the number of iterations. In our experiments, we keep the episode length T as small as possible and then increase it after K epochs. To avoid similar episode lengths and simulate various solving sessions (where the memory controller agent has to store memories), each episode T is chosen randomly between [k, N/m], where k is the minimum length of an episode. We generate two separate data sets: one for training the core network and the other for training the memory controller. We generate 14k Binary LP instances as the core network training set, 3k instances as the memory controller training set and 3k as a validation/test set. We use mixed curriculum learning strategy as suggested in BID28. For each instance, we generate two random numbers n and m, where n is the number of variables and m is the number of constrains for the instance. The maximum number of variables in an instance is N and maximum number of constrains is M. In our experiments, n is chosen uniformly between [3,N] and m is between [1,M]. For our training dataset N is 10 and M is 5.All the coefficients of the objective function and the constrains matrix A is generated between [-99,99]. 
To ensure that the constraints matrix is sparse, we generate a random number SL, called the sparsity level; for each constraint in the problem we ensure that at most 1/3 of the coefficients are zeros. To generate supervised targets for the problems, we used a Python interface to the COIN-OR branch-and-cut solver BID13 called PuLP BID14. We ensure that all the generated problems have a feasible solution. We denote the COIN-OR branch-and-cut solver as the baseline solver and compare its results with the LTMN results. Algorithm 1 (deep Q-learning with experience replay for the memory controller) proceeds as follows: for each episode, select a random episode length T; for t = 1, ..., T, select a random example x i from X controller, solve x i using the encoder to get the memory vector h ai, and store h ai into the window W t; with probability ε select a random action a t, otherwise select a t = max a Q*(s t, a; θ); execute action a t on memory M t, observe the next memory state M t+1 and the reward r t; store the transition and sample a random minibatch of transitions (s t, a t, r t, s t+1) from D to update the Q-network. We implement both the encoder and decoder as recurrent neural networks with gated recurrent units (GRU). We use three GRU layers, each with 256 units, followed by a linear layer with 128 units to project the memory vector of the constraints sequence. As suggested by BID2, it is harder to learn from input-output examples of optimization problems due to subtle features that the model cannot figure out. We suggest a technique to learn more features by connecting layers 1, 2, ..., L − 1 to layer L instead of connecting only the previous layer L − 1. In this way an encoder can learn more features using combined features from the previous layers. The decoder is implemented as two stacked GRU layers, and the reader is implemented as one GRU layer. The costs are first embedded using a linear layer of 64 units. The rest of the decoder has 256 units. All the weights are initialized uniformly in [-0.08, 0.08]. We set the dropout ratio to 0.2 in all GRU layers. We constrain the norm of the gradients to be no greater than 5 (gradient clipping). We set the learning rate initially to 0.001 and drop it by a factor of 0.5 every 10 epochs. The memory controller agent is implemented as a stacked LSTM of 2 layers, each with 200 units. We used batch normalization over the inputs to normalize the memory contents and window contents to zero mean and unit variance. We used the Adam optimizer BID10 to train the agent. The window size τ is 5 in all our experiments. We used generalization loss as an early stopping criterion for the core network, as suggested by BID16. We then allow the memory controller agent to train until a reasonable average total reward per episode is reached. We compare the performance of LTMN to the baseline solver and the sequence-to-sequence approach. We trained a network of GRU units with a comparable number of parameters (seq-to-seq has 1.3M parameters and LTMN has 1.9M parameters). The seq-to-seq network receives the input as one long sequence. We train the LTMN core network and the seq-to-seq model using the RMSprop algorithm BID22 with cross-entropy as the loss function. To test the quality of solutions produced by the model, we define the average cost over N Binary LP instances: DISPLAYFORM0 All the instances in the training data and testing data are maximization problems, so the higher the average cost, the better.
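Before turning to the evaluation, a condensed Python sketch of the deep Q-learning procedure for the memory controller summarized above (Algorithm 1), with an epsilon-greedy policy and an experience replay buffer D; the encoder, memory, and q_network interfaces are hypothetical placeholders introduced only for illustration.

```python
import random
from collections import deque

ACTIONS = ["STR", "NO-OP", "DELETE", "STRINC", "STRDEC"]

def train_memory_controller(encoder, q_network, memory, controller_set,
                            episodes=100, eps=0.1, batch_size=32):
    replay = deque(maxlen=10_000)                 # experience replay buffer D
    for _ in range(episodes):
        T = random.randint(5, 50)                 # random episode length
        for _ in range(T):
            x = random.choice(controller_set)     # random Binary LP instance
            h_a = encoder(x)                      # memory fragment from the encoder
            state = memory.observe(h_a)           # memory contents + window + pointer
            # Epsilon-greedy action selection over the controller actions.
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q_network.value(state, a))
            # Apply the action; the reward follows the rule sketched earlier.
            reward, next_state = memory.apply(action, h_a, x)
            replay.append((state, action, reward, next_state))
            # Q-learning update on a random minibatch of past transitions.
            if len(replay) >= batch_size:
                q_network.update(random.sample(replay, batch_size))
```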
We sample only feasible solutions using the output probability distribution, with temperatures [0.6, 1.0, 1.2, 1.5, 2.2]. First, we evaluate the LTMN model against the baseline solver: we generate a test set of 1000 instances where all instances have 10 variables and the number of constraints is between 1 and 5. Table 1 shows, at different sampling temperatures, the average costs and the number of instances where the model outperformed the solver and found a better solution. The LTMN model outperformed the baseline solver by a large margin (average costs are nearly 51.7% higher). To validate that the sampling technique is effective, we used an initial untrained model to sample solutions. TAB1 shows that the untrained model performed poorly on the same test set; hence the random sampling is not generating solutions by chance, and the trained model learned to generate solutions effectively. We also evaluate the LTMN against the seq-to-seq GRU model, which fails to generate any feasible solutions. While the number of instances is small (only 14K instances), other similar models, such as pointer networks BID24, used training data of 1M instances for the Travelling Salesman Problem; our model did not require a large training dataset. We test the generalization ability of LTMN to longer sequences beyond those in the training set. FIG2 shows the results on test sets of Binary LP instances (each test set contains 1000 instances) where the number of variables is incremented while the maximum number of constraints is 5. LTMN outperformed the baseline solver by a large margin; even when the number of variables is larger than 100, the LTMN was still able to generate better solutions. We also test LTMN generalization on very large instances. TAB2 shows that LTMN outperformed the baseline solver even for very large instances with 1000 variables and 700 constraints. To understand the effectiveness of using an augmented long-term memory network for solving Binary LP instances, we conduct tests to verify that the long-term memory improves the model's results. We calculate the average costs for the same test set (10 variables and constraints between 1 and 5), but we reset the memory each time a new example is processed. TAB3 shows that the average cost drops slightly when the memory is reset between examples. We also conduct a per-example test where we identify whether the memory helped in generating good solutions or not. Consider two problems identified by their cost vectors and constraints matrices as [c 1, A 1] and [c 2, A 2]. Let A 1 ≠ A 2, but both of them construct the same set of feasible solutions S F. The encoder produces two memory vectors h A1 and h A2 for constraints A 1 and A 2, respectively. To ensure that both h A1 and h A2 represent the same set of feasible solutions S F, we measure the cosine similarity between these two memory vectors such that CosSim(h A1, h A2) ≥ S, where S is set to 0.8. We then force the controller to store h A1, and generate a solution for [c 2, A 2]. We define three metrics: Memory Faults: the number of examples where the non-empty memory results in a worse solution than the empty memory. Memory Trues: the number of examples where the non-empty memory results in a better solution than the empty memory. Memory Equals: the number of examples where both the non-empty memory and the empty memory result in the same solution. We generate 10K instances (with 10 variables and constraints in the range of ), compare the feasible solutions of each two consecutive instances, and calculate the cosine similarity between the two memory vectors. We found 181 instances with the same feasible solutions.
We record the solutions of these instances using both the empty memory and the non-empty memory containing h A1. TAB4 shows a high number of memory trues, where the non-empty memory helped generate better solutions than the empty memory. The table also shows a high percentage of memory equals; for these 28 instances the non-empty memory did not help much in generating a better solution. In fact, both memory faults and memory equals are similar metrics and can be thought of as the number of instances where the non-empty memory failed to generate a better solution. A good memory controller agent should maximize the memory trues and minimize both the memory faults and equals at the same time. We conclude that a long-term memory is quite effective in generating better solutions, and that the memory controller learns effectively how to store input features useful for longer interaction with the model. This paper introduced a long-term memory coupled with a neural network that is able to memorize useful input features to solve similar instances. We applied the LTMN model to solve Binary LP instances. The LTMN was able to learn from supervised targets provided by a handcrafted solver, and generate better solutions than the solver. The LTMN model was able to generalize to more complex instances beyond those in the training set.
We propose a memory network model to solve Binary LP instances where the memory information is preserved for long-term use.
546
scitldr
Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition. Performance has further been improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data. Sequence-to-sequence (Seq2Seq) BID1 models have achieved state-of-the-art on many natural language processing problems including automatic speech recognition BID2 BID4, neural machine translation, conversational modeling and many more. These models learn to generate a variable-length sequence of tokens (e.g. texts) from a variable-length sequence of input data (e.g. speech or the same texts in another language). With a sufficiently large labeled dataset, vanilla Seq2Seq can model sequential mapping well, but it is often augmented with a language model to further improve the fluency of the generated text. Because language models can be trained from abundantly available unsupervised text corpora which can have as many as one billion tokens BID13 BID19, leveraging the rich linguistic information of the label domain can considerably improve Seq2Seq's performance. A standard way to integrate language models is to linearly combine the score of the task-specific Seq2Seq model with that of an auxiliary langauge model to guide beam search BID5 BID20. BID10 proposed an improved algorithm called Deep Fusion that learns to fuse the hidden states of the Seq2Seq decoder and a neural language model with a gating mechanism, after the two models are trained independently. While this approach has been shown to improve performance over the baseline, it has a few limitations. First, because the Seq2Seq model is trained to output complete label sequences without a language model, its decoder learns an implicit language model from the training labels, taking up a significant portion of the decoder capacity to learn redundant information. Second, the residual language model baked into the Seq2Seq decoder is biased towards the training labels of the parallel corpus. For example, if a Seq2Seq model fully trained on legal documents is later fused with a medical language model, the decoder still has an inherent tendency to follow the linguistic structure found in legal text. Thus, in order to adapt to novel domains, Deep Fusion must first learn to discount the implicit knowledge of the language. In this work, we introduce Cold Fusion to overcome both these limitations. Cold Fusion encourages the Seq2Seq decoder to learn to use the external language model during training. This means that Seq2Seq can naturally leverage potentially limitless unsupervised text data, making it particularly proficient at adapting to a new domain. The latter is especially important in practice as the domain from which the model is trained can be different from the real world use case for which it is deployed. In our experiments, Cold Fusion can almost completely transfer to a new domain for the speech recognition task with 10 times less data. 
Additionally, the decoder only needs to learn task-relevant information, and thus trains faster. The paper is organized as follows: Section 2 outlines the background and related work. Section 3 presents the Cold Fusion method. Section 4 details experiments on the speech recognition task that demonstrate Cold Fusion's generalization and domain adaptation capabilities. 2 BACKGROUND AND RELATED WORK 2.1 SEQUENCE-TO-SEQUENCE MODELS A basic Seq2Seq model comprises an encoder that maps an input sequence x = (x 1, . . ., x T) into an intermediate representation h, and a decoder that in turn generates an output sequence y = (y 1, . . ., y K) from h BID21. The decoder can also attend to a certain part of the encoder states with an attention mechanism. The attention mechanism is called hybrid attention BID7 if it uses both the content and the previous context to compute the next context. It is soft if it computes the expectation over the encoder states BID1, as opposed to selecting a slice out of the encoder states. For the automatic speech recognition (ASR) task, the Seq2Seq model is called an acoustic model (AM) and maps a sequence of spectrogram features extracted from a speech signal to characters. During inference, we aim to compute the most likely sequence ŷ: DISPLAYFORM0 Here, p(y|x) is the probability that the task-specific Seq2Seq model assigns to sequence y given input sequence x. The argmax operation is intractable in practice, so we use a left-to-right beam search algorithm similar to the one presented in BID20. We maintain a beam of K partial hypotheses starting with the start symbol ⟨s⟩. At each time step, the beam is extended by one additional character and only the top K hypotheses are kept. Decoding continues until the stop symbol ⟨/s⟩ is emitted, at which point the hypothesis is added to the set of completed hypotheses. A standard way to integrate the language model with the Seq2Seq decoder is to change the inference task to: ŷ = argmax DISPLAYFORM1 where p LM (y) is the language model probability assigned to the label sequence y. Prior work describes several heuristics that can be used to improve this basic algorithm. We refer to all of these methods collectively as Shallow Fusion, since p LM is only used during inference. BID10 proposed Deep Fusion for machine translation, which tightens the connection between the decoder and the language model by combining their states with a parametric gating: DISPLAYFORM2 where s t and s LM t are the states of the task-specific Seq2Seq model and the language model, respectively. In Deep Fusion, the Seq2Seq model and the language model are first trained independently and later combined as in the equation above. The parameters v and b are trained on a small amount of data keeping the rest of the model fixed, and allow the gate to decide how important each of the models is for the current time step.
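As a concrete illustration of Shallow Fusion, where p LM enters only at inference time, the following sketch rescores beam-search candidates with a weighted language-model log-probability; the interpolation weight and the two scoring callables are assumptions made for illustration.

```python
def shallow_fusion_rescore(candidates, am_logprob, lm_logprob, lam=0.3):
    """candidates: list of token sequences from beam search.
    am_logprob(y): log p(y | x) from the Seq2Seq (acoustic) model.
    lm_logprob(y): log p_LM(y) from the language model.
    Returns the candidate maximizing log p(y|x) + lam * log p_LM(y)."""
    scored = [(am_logprob(y) + lam * lm_logprob(y), y) for y in candidates]
    return max(scored)[1]

# Toy example with made-up scores in which the LM favours the grammatical hypothesis.
beam = ["the cat sat", "the cat sad"]
am = {"the cat sat": -2.1, "the cat sad": -2.0}
lm = {"the cat sat": -3.0, "the cat sad": -6.5}
print(shallow_fusion_rescore(beam, am.get, lm.get))  # "the cat sat"
```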
Ground Truth where's the sport in that greer snorts and leaps greer hits the dirt hard and rolls Plain Seq2Seqwhere is the sport and that through snorks and leaps clear its the dirt card and rules Deep Fusion where is the sport and that there is north some beliefs through its the dirt card and rules Cold Fusion where's the sport in that greer snorts and leaps greer hits the dirt hard and rolls Cold Fusion (Fine-tuned) where's the sport in that greer snorts and leaps greer hits the dirt hard and rollsGround Truth jack sniffs the air and speaks in a low voice Plain Seq2Seqjacksonice the air and speech in a logos Deep Fusion jacksonice the air and speech in a logos Cold Fusion jack sniffs the air and speaks in a low voice Cold Fusion (Fine-tuned) jack sniffs the air and speaks in a low voice Ground Truth skipper leads her to the dance floor he hesitates looking deeply into her eyes Plain Seq2Seq skip er leadure to the dance floor he is it takes looking deeply into her eyes Deep Fusion skip er leadure to the dance floor he has it takes looking deeply into her eyes Cold Fusion skipper leads you to the dance floor he has a tates looking deeply into her eyes Cold Fusion (Fine-tuned) skipper leads her to the dance floor he hesitates looking deeply into her eyes Table 1: Some examples of predictions by the Deep Fusion and Cold Fusion models. The biggest disadvantage with Deep Fusion is that the task-specific model is trained independently from the language model. This means that the Seq2Seq decoder needs to learn a language model from the training data labels, which can be rather parsimonious compared to the large text corpora available for language model training. So, the fused output layer of should learn to overcome this bias in order to incorporate the new language information. This also means that a considerable portion of the decoder capacity is wasted. A few methods have been proposed for leveraging unlabeled text corpora in the target domain, for both better generalization and domain transfer. BID18 proposed backtranslation as a way of using unlabeled data for machine translation. Backtranslation improves the BLEU score by increasing the parallel training corpus of the neural machine translation model by automatically translating the unlabeled target domain text. However, this technique does not apply well to other tasks where backtranslation is infeasible or of very low quality (like image captioning or speech recogntion). BID17 proposed warm starting the Seq2Seq model from language models trained on source and target domains separately. Unsupervised pre-training shows improvements in the BLEU scores. BID17 also show that this improvement is from improved generalization, and not only better optimization. While this is a promising approach, the method is potentially difficult to leverage for the transfer task since training on the parallel corpus could end up effectively erasing the knowledge of the language models. Both back-translation and unsupervised pre-training are simple methods that require no change in the architecture. Our proposed Cold Fusion method is largely motivated from the Deep Fusion idea but with some important differences. The biggest difference is that in Cold Fusion, the Seq2Seq model is trained from scratch together with a fixed pre-trained language model. 
Because the Seq2Seq model is aware of the language model throughout training, it learns to use the language model for language specific information and capture only the relevant information conducive to mapping from the source to the target sequence. This disentanglement can increase the effective capacity of the model significantly. This effect is demonstrated empirically in Section 4 where Cold Fusion models perform well even with a very small decoder. We also improve on some of the modeling choices of the fusion mechanism.1. First, both the Seq2Seq hidden state s t and the language model hidden state s LM t can be used as inputs to the gate computation. The task-specific model's embedding contains information about the encoder states which allows the fused layer to decide its reliance on the language model in case of input uncertainty. For example, when the input speech is noisy or a token unseen by the Seq2Seq model is presented, the fusion mechanism learns to pay more attention to the language model. 2. Second, we employ fine-grained (FG) gating mechanism as introduced in BID24. By using a different gate value for each hidden node of the language model's state, we allow for greater flexibility in integrating the language model because the fusion algorithm can choose which aspects of the language model it needs to emphasize more at each time step. 3. Third, we replace the language model's hidden state with the language model probability. The distribution and dynamics of s LM t can vary considerably across different language models and data. As a concrete example, any fusion mechanism that uses the LM state is not invariant to the permutation of state hidden nodes. This limits the ability to generalize to new LMs. By projecting the token distribution onto a common embedding space, LMs that model novel uses of the language can still be integrated without state discrepancy issues. This also means that we can train with or swap on n-gram LMs during inference. The Cold Fusion layer (Figure 1) works as follows: DISPLAYFORM0 is the logit output of the language model, s t is the state of the task specific model, and s CF t is the final fused state used to generate the output. Since logits can have arbitrary offsets, the maximum value is subtracted off before feeding into the layer. In (4a), (4d), the DNN can be a deep neural network with any number of layers. In our experiments, we found a single affine layer, with ReLU activation prior to softmax, to be helpful. For our experiments, we tested the Cold Fusion method on the speech recognition task. The are compared using the character error rate (CER) and word error rate (WER) on the evaluation sets. For all models which were trained on the source domain, the source CER and WER indicate in-domain performance and the target CER and WER indicate out-of-domain performance. We collected two data sets: one based on search queries which served as our source domain, and another based on movie transcripts which served as our target domain. For each dataset, we used Amazon Mechanical Turk to collect audio recordings of speakers reading out the text. We gave identical instructions to all the turkers in order to ensure that the two datasets only differed in the text domain. The source dataset contains 411,000 utterances (about 650 hours of audio), and the target dataset contains 345,000 utterances (about 676 hours of audio). We held out 2048 utterances from each domain for evaluation. The text of the two datasets differ significantly. 
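Before turning to the results, a sketch of the Cold Fusion layer described by Eqs. (4a)-(4d) earlier in this section, written as a PyTorch module; the hidden sizes are placeholders, and the single affine layer with ReLU before softmax follows the choice reported above.

```python
import torch
import torch.nn as nn

class ColdFusionLayer(nn.Module):
    """Fuses the decoder state s_t with the language model logits (Eqs. 4a-4d)."""
    def __init__(self, dec_dim, vocab_size, lm_hidden=256):
        super().__init__()
        self.lm_proj = nn.Linear(vocab_size, lm_hidden)        # (4a) h_lm = DNN(l_lm)
        self.gate = nn.Linear(dec_dim + lm_hidden, lm_hidden)  # (4b) fine-grained gate
        self.out = nn.Sequential(                              # (4d) DNN before softmax
            nn.Linear(dec_dim + lm_hidden, 256), nn.ReLU(),
            nn.Linear(256, vocab_size))

    def forward(self, s_t, lm_logits):
        # Logits can have arbitrary offsets, so the maximum is subtracted first.
        lm_logits = lm_logits - lm_logits.max(dim=-1, keepdim=True).values
        h_lm = self.lm_proj(lm_logits)                                   # (4a)
        g = torch.sigmoid(self.gate(torch.cat([s_t, h_lm], dim=-1)))    # (4b)
        s_cf = torch.cat([s_t, g * h_lm], dim=-1)                        # (4c)
        return self.out(s_cf)                                            # logits; softmax applied outside

# Example: batch of 2 decoder states (dim 960) and LM logits over 64 characters.
layer = ColdFusionLayer(dec_dim=960, vocab_size=64)
logits = layer(torch.randn(2, 960), torch.randn(2, 64))
probs = torch.softmax(logits, dim=-1)
```

Because the gate sees both the decoder state and the projected LM output, the fused layer can lean on the language model more heavily when the acoustic input is uncertain, as discussed above.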
Table 2 shows the results of training character-based recurrent neural network language models BID15 on each of the datasets and evaluating on both datasets. Language models very easily overfit the training distribution, so models trained on one corpus will perform poorly on a different distribution. We see this effect in Table 2. Table 2: Dev set perplexities for character RNN language models trained on different datasets and evaluated on the source and target domains. Note that i) the model trained on the source domain does poorly on the target domain and vice versa, indicating that the two domains are very different, and ii) the best model on both domains is a larger model trained on a superset of both corpora. We use the model trained on the full dataset (which contains the source and target datasets along with some additional text) for all of the LM integration experiments. The language model described in the final row of Table 2 was trained on about 25 million words. This model contains three layers of gated recurrent units (GRU) BID8 with a hidden state dimension of 1024. The model was trained to minimize the cross-entropy of predicting the next character given the previous characters. We used the Adam optimizer BID14 with a batch size of 512. The model gets a perplexity of 2.49 on the source data and 2.325 on the target data. TAB4: Results from models trained on the publicly available Librispeech data. All of the acoustic models were trained on the Librispeech training data and evaluated on Librispeech test-clean, WSJ test-92, and our proprietary target domain data. Results from the Wav2Letter model BID9 are presented for reference. For the acoustic models, we used the Seq2Seq architecture with soft attention based on BID2. The encoder consists of 6 bidirectional LSTM (BLSTM) BID12 layers, each with a dimension of 480. We also use max pooling layers with a stride of 2 along the time dimension after the first two BLSTM layers, and add residual connections BID11 for each of the BLSTM layers to help speed up the training process. The decoder consisted of a single layer of 960-dimensional Gated Recurrent Units (GRU) with hybrid attention BID7. The final Cold Fusion mechanism had one dense layer of 256 units followed by ReLU before softmax. The input sequence consisted of 40 mel-scale filter bank features. We expanded the datasets with noise augmentation; random noise is added with a 40% probability at a uniformly random SNR between 0 and 15 dB. We did not use any other form of regularization. We trained the entire system end-to-end with Adam BID14 with a batch size of 64. The learning rates were tuned separately for each model using random search. To stabilize training early on, the training examples were sorted by increasing input sequence length in the first epoch BID0. During inference, we used beam search with a fixed beam size of 128 for all of our experiments. We also used scheduled sampling with a sampling rate of 0.2, which was kept fixed throughout training. Scheduled sampling helped reduce the effect of exposure bias due to the difference in the training and inference mechanisms. Leveraging a language model that has a better perplexity on the distribution of interest should directly mean an improved WER for the ASR task. In this section, we compare how the different fusion methods fare in achieving this effect. Swapping the language model is not possible with Deep Fusion because of the state discrepancy issue motivated in Section 3.
All fusion models were therefore trained and evaluated with the same language model, which achieved a low perplexity on both the source and target domains (see Table 2). This way, we can measure improvements in transfer capability over Deep Fusion due to the training and architectural changes. TAB2 compares the performance of Deep Fusion and Cold Fusion on the source and target held-out sets. Clearly, Cold Fusion consistently outperforms the baselines on both metrics in both domains. For the task of predicting in-domain, the baseline model gets a word error rate of 14.68%, while our best model gets a relative improvement of more than 21% over that number. Even compared to the recently proposed Deep Fusion model BID10, the best Cold Fusion model gets a relative improvement of 15%. We get even bigger improvements in out-of-domain results. The baseline attention model, when trained on the source domain but evaluated on the target domain, gets 43.5% WER. This is significantly worse than the 17.6% that we can get by training the same model on the target dataset. The goal of domain adaptation is to bridge the gap between these numbers. The final column in TAB2 shows the remaining gap as a fraction of the difference for each model. We present additional results on the publicly available Librispeech dataset BID16 in TAB4. Librispeech contains 1000 hours of read audio speech and comes with train, development, and test splits. We used similar architectures for the Seq2Seq model and the language model as in the previous experiments, but we trained the Seq2Seq model only on the 960-hour Librispeech training dataset. We test whether Cold Fusion does indeed relieve the decoder of learning a language model. We do so by checking how a decrease in the decoder capacity affects the error rates. As evidenced in TAB3, the performance of the Cold Fusion models degrades gradually as the decoder cell size is decreased, whereas the performance of the attention models deteriorates abruptly beyond a point. It is remarkable that the Cold Fusion decoder still outperforms the full attentional decoder with 4× fewer parameters. Also, we find that training is accelerated by a factor of 3 (see Figure 2). Attention models typically need hundreds of thousands of iterations to converge BID6. Most of the training time is spent in learning the attention mechanism. One can observe this behavior by plotting the attention context over time and seeing that the diagonal alignment pattern emerges in later iterations. Because the pretrained, fixed language model infuses the model with lower-level language features like the likely spelling of a word, error signals propagate more directly into the attention context. In the presence of limited data from the target distribution, fine-tuning a model for domain transfer is often a promising approach. We test how much labeled data from the target distribution is required for Cold Fusion models to effectively close the domain adaptation gap. The same language model from Section 4.4, trained on both the source and target domains, was used for all fine-tuning experiments. The learning rate was restored to its initial value. Then, we fine-tuned only the fusion mechanism of the best Cold Fusion model from TAB2 on various amounts of the labeled target dataset. Results are presented in TAB5. With just 0.6% of labeled data, the domain gap decreases from 38.2% to 21.3%. With less than 10% of the data, this gap is down to only 8%.
Note that because we keep the Seq2Seq parameters fixed during the fine-tuning stage, all of the improvements from fine-tuning come from combining the acoustic and the language model better. It's possible that we can see bigger gains by fine-tuning all the parameters. We do not do this in our experiments because we are only interested in studying the effects of language model fusion in the Seq2Seq decoder. Some examples are presented in Table 1. Recall that all models are trained on the source domain consisting of the read speech of search queries and evaluated on the read speech of movie scripts to measure out-of-domain performance. Because search queries tend to be sentence fragments, we see that the main mode of error for vanilla attention and Deep Fusion is due to weak grammar knowledge. Cold Fusion on the other hand demonstrates a better grasp of grammar and is able to complete sentences. In this work, we presented a new general Seq2Seq model architecture where the decoder is trained together with a pre-trained language model. We study and identify architectural changes that are vital for the model to fully leverage information from the language model, and use this to generalize better; by leveraging the RNN language model, Cold Fusion reduces word error rates by up to 18% compared to Deep Fusion. Additionally, we show that Cold Fusion models can transfer more easily to new domains, and with only 10% of labeled data nearly fully transfer to the new domain.
We introduce a novel method to train Seq2Seq models with language models that converge faster, generalize better and can almost completely transfer to a new domain using less than 10% of labeled data.
547
scitldr
A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks. Two distinct research paradigms have studied this question. Meta-learning views this problem as learning a prior over model parameters that is amenable to fast adaptation on a new task, but typically assumes the set of tasks is available together as a batch. In contrast, online (regret-based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally trains only a single model without any task-specific adaptation. This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning. We propose the follow the meta leader (FTML) algorithm, which extends the MAML algorithm to this setting. Theoretically, this work provides an O(log T) regret guarantee for the FTML algorithm. Our experimental evaluation on three different large-scale tasks suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches. Two distinct research paradigms have studied how prior tasks or experiences can be used by an agent to inform future learning. Meta-learning casts this as the problem of learning to learn, where past experience is used to acquire a prior over model parameters or a learning procedure. Such an approach, where we draw upon related past tasks and form associated priors, is particularly crucial to learning effectively when data is scarce or expensive for each task. However, meta-learning typically studies a setting where a set of meta-training tasks is made available together upfront as a batch. In contrast, online learning considers a sequential setting where tasks are revealed one after another, but aims to attain zero-shot generalization without any task-specific adaptation. We argue that neither setting is ideal for studying continual lifelong learning. Meta-learning deals with learning to learn, but neglects the sequential and non-stationary nature of the world. Online learning offers an appealing theoretical framework, but does not generally consider how past experience can accelerate adaptation to a new task. In this work, we motivate and present the online meta-learning problem setting, where the agent simultaneously uses past experiences in a sequential setting to learn good priors, and also adapts quickly to the current task at hand. Our contributions: In this work, we first formulate the online meta-learning problem setting. Subsequently, we present the follow the meta-leader (FTML) algorithm, which extends MAML to this setting. FTML is analogous to follow the leader in online learning. We analyze FTML and show that it enjoys an O(log T) regret guarantee when competing with the best meta-learner in hindsight. In this endeavor, we also provide the first set of results (under any assumptions) where MAML-like objective functions can be provably and efficiently optimized. We also develop a practical form of FTML that can be used effectively with deep neural networks on large-scale tasks, and show that it significantly outperforms prior methods in terms of learning efficiency on vision-based sequential learning problems with the MNIST, CIFAR, and PASCAL 3D+ datasets. Due to space constraints, we review the foundations of our work (meta-learning and online learning) in Appendix A.
We consider a general sequential setting where an agent faces tasks one after another. Each task corresponds to a round, denoted by t. In each round, the goal of the learner is to determine model parameters w t that perform well for the corresponding task at that round. This is monitored by a loss function f t : W → R, which we would like to be minimized. Crucially, we consider a setting where the agent can perform some local task-specific updates to the model before it is deployed and evaluated. Let {w t} T t=1 be the sequence of models generated by the algorithm. Then, the regret we consider is: DISPLAYFORM0 Notice that we allow the comparator to adapt locally to each task at hand; thus the comparator has strictly more capabilities than the learning agent, since it is presented with all the task functions in batch mode. Achieving sublinear regret suggests that the agent is improving over time and is competitive with the best meta-learner in hindsight (since Regret T /T → 0 as T → ∞). In the batch setting, meta-learning has been observed to perform better than jointly training a single model to work on all the tasks. Thus, we may hope that learning sequentially, but still being competitive with the best meta-learner in hindsight, provides a significant leap in continual learning. Our algorithmic approach, follow the meta leader (FTML), takes inspiration from follow the leader (FTL) and adapts it to the online meta-learning setting. FTML chooses the parameters according to: DISPLAYFORM0 This can be interpreted as the agent playing the best meta-learner in hindsight if the learning process were to stop at round t. In practice, we may not have full access to f k (·), such as when it is the population risk and we only have a finite-size dataset. In such cases, we will draw upon stochastic approximation algorithms to solve this optimization problem. We concentrate on the case where the update procedure is one step of stochastic gradient descent, as in the case of MAML, i.e. U t (w) = w − α∇f t (w). We assume that each loss function, {f t, f̂ t} ∀t, is C 2 -smooth (i.e. G-Lipschitz, β-smooth, and with ρ-Lipschitz Hessian) and µ-strongly convex. See Appendix B for more details and implications of these assumptions, and connections to the standard online learning setting. Importantly, these assumptions do not trivialize the meta-learning setting. There is a clear separation between meta-learning and joint training even for linear regression (the simplest strongly convex problem). See Appendix E for an example illustration. Theorem 1. Suppose f and f̂ : R d → R satisfy the stated assumptions. Let f̃ be the function evaluated after the gradient update procedure, i.e. f̃(w) := f(w − α∇f̂(w)). If the step size is selected as α ≤ min{1/(2β), µ/(8ρG)}, then f̃ is β̃ = 9β/8 smooth and µ̃ = µ/8 strongly convex. Since the objective function is convex, we may expect first-order optimization methods to be effective, since gradients can be efficiently computed with standard automatic differentiation libraries. In fact, this work provides the first set of results (under any assumptions) under which a MAML-like objective function can be provably and efficiently optimized. An immediate corollary of our main theorem is that FTML now enjoys the same regret guarantees (up to constant factors) as FTL does in the comparable setting (with strongly convex losses). Corollary 1. (inherited regret bound for FTML) Suppose that for all t, f t and f̂ t satisfy assumptions 1 and 2. Suppose that the update procedure in FTML (Eq.
2) is chosen as U t (w) = w − α∇f t (w) with α ≤ min{1 2β, µ 8ρG}. Then, FTML enjoys the following regret guarantee DISPLAYFORM1 More generally, our main theorem implies that there exists a large family of online meta-learning algorithms that enjoy sub-linear regret, based on the inherited smoothness and strong convexity of f (· DISPLAYFORM2 Here, ν t (·) denotes a sampling distribution for the previously seen tasks (we use a uniform distribution in our experiments). L(D, w) is the loss function (e.g. cross-entropy) averaged over the datapoints (x, y) ∈ D for the model with parameters w. While U t in Eq. includes only one gradient step, we observed that it is beneficial to take multiple gradient steps in the inner loop (i.e., in U t), which is consistent with prior works (; ; BID3 .The overall algorithmic procedure proceeds as follows. We first initialize a task buffer B = []. When presented with a new task at round t, we add task T t to B and initialize a task-specific dataset D t = [], which is appended to as data incrementally arrives for task T t. As new data arrives for task T t, we iteratively compute and apply the gradient in Eq., which uses data from all tasks seen so far. Once all of the data (finite-size) has arrived for T t, we move on to task T t+1. Our experimental evaluation studies the practical FTML algorithm (Section 2.1) in the context of vision-based online learning problems. These problems include synthetic modifications of the MNIST dataset, pose detection with synthetic images based on PASCAL3D+ models , and realistic online image classification experiments with the CIFAR-100 dataset. The aim of our experimental evaluation is to study the following questions: can online meta-learning (and specifically FTML) be successfully applied to multiple non-stationary learning problems? and does online meta-learning (FTML) provide empirical benefits over prior methods?To this end, we compare to the following algorithms: (a) Train on everything (TOE) trains on all available data so far (including D t at round t) and trains a single predictive model. This model is directly tested without any specific adaptation since it has already been trained on D t. (b) Train from scratch, which initializes w t randomly, and finetunes it using D t. (c) Joint training with fine-tuning, which at round t, trains on all the data jointly till round t − 1, and then finetunes it specifically to round t using only D t. This corresponds to the standard online learning approach where FTL is used (without any meta-learning objective), followed by task-specific fine-tuning. We compare the algorithms on two metrics: task performance (e.g. classification accuracy) for each task in the sequence; and learning efficiency or amount of data needed to reach a proficiency threshold (e.g. 90% classification accuracy). We note that TOE is a very strong point of comparison, capable of reusing representations across tasks, as has been proposed in a number of prior continual learning works (; BID1). However, unlike FTML, TOE does not explicitly learn the structure across tasks. Thus, it may not be able to fully utilize the information present in the data, and will likely not be able to learn new tasks with only a few examples. Further, the model might incur negative transfer if the new task differs substantially from previously seen ones, as has been observed in prior work . 
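Before turning to the remaining comparisons, a minimal PyTorch sketch of the practical FTML meta-update described in Section 2.1: tasks are sampled uniformly from the buffer B, one inner gradient step adapts the parameters on a task minibatch, and the meta-gradient of the post-update loss is applied to w. The linear model, single inner step, and batch handling are simplifying assumptions made for illustration.

```python
import random
import torch
import torch.nn.functional as F

def ftml_meta_step(w, task_buffer, alpha=0.1, meta_lr=0.01, n_tasks=4):
    """One practical FTML update on the parameters w of a linear model logits = x @ w.
    task_buffer: list of tasks, each a dict with 'train' and 'val' minibatches (x, y).
    For each sampled task, take one inner gradient step on the train minibatch, then
    accumulate the meta-gradient of the validation loss at the adapted weights."""
    tasks = random.sample(task_buffer, min(n_tasks, len(task_buffer)))
    meta_grad = torch.zeros_like(w)
    for task in tasks:
        x_tr, y_tr = task["train"]
        x_va, y_va = task["val"]
        inner_loss = F.cross_entropy(x_tr @ w, y_tr)
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_adapted = w - alpha * g                    # U_t(w): one inner SGD step
        outer_loss = F.cross_entropy(x_va @ w_adapted, y_va)
        (mg,) = torch.autograd.grad(outer_loss, w)   # differentiates through U_t
        meta_grad += mg
    new_w = w - meta_lr * meta_grad / len(tasks)
    return new_w.detach().requires_grad_(True)       # fresh leaf for the next round

# Toy usage: 10-dimensional inputs, 3 classes, two synthetic tasks in the buffer.
w = torch.randn(10, 3, requires_grad=True)
make_batch = lambda: (torch.randn(8, 10), torch.randint(0, 3, (8,)))
buffer = [{"train": make_batch(), "val": make_batch()} for _ in range(2)]
w = ftml_meta_step(w, buffer, n_tasks=2)
```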
FTL with fine-tuning represents a natural online learning comparison, which in principle should combine the best parts of learning from scratch and TOE, since this approach adapts specifically to each task and benefits from prior data. However, in contrast to FTML, this method does not explicitly meta-learn and hence may not fully utilize any structure in the tasks, and may also overfit in the fine-tuning stage. In the rainbow MNIST experiment, we transform the digits in a number of ways to create different tasks, such as 7 different colored backgrounds, 2 scales (half size and original size), and 4 rotations at 90 degree intervals. A task involves correctly classifying digits with a randomly sampled background, scale, and rotation. As seen in the curves in FIG0, FTML learns tasks more and more quickly with each new task. We also observe that FTML substantially outperforms the baselines in both efficiency and end performance. FTL is better than TOE since it performs task-specific adaptation, but still worse than FTML. We hypothesize that, while TOE and FTL improve in efficiency over the course of learning as they see more tasks, they struggle to prevent negative transfer on each new task. Our last observation is that training independent models does not learn efficiently, compared to models that incorporate data from other tasks; however, their asymptotic performance with a large data size is similar. Our next experiment studies a 3D pose prediction problem. Each task involves learning to predict the global position and orientation of an object in an image. We construct a dataset of synthetic images using 50 object models from 9 different object classes in the PASCAL3D+ dataset, rendering the objects on a table using the renderer accompanying the MuJoCo physics engine. To place an object on the table, we select a random 2D location, as well as a random azimuthal angle. Each task corresponds to a different object with a randomly sampled camera angle. For the loss functions, we use mean-squared error, and set the proficiency threshold to an error of 0.05. We show the results of this experiment in FIG0. The results demonstrate that meta-learning can significantly improve both the efficiency and performance of new tasks over the course of learning, solving many of the tasks with only 10 datapoints. Unlike the previous settings, TOE substantially outperforms the independent task models, indicating that it can effectively make use of the previous data from other tasks (likely due to greater structural similarity in this task). However, the efficiency and performance of online meta-learning demonstrate that even better transfer can be accomplished by explicitly optimizing for the ability to quickly and effectively learn new tasks. Appendix C presents a more detailed evaluation and results on the CIFAR task. In this paper, we introduced the online meta-learning problem statement, with the aim of connecting the fields of meta-learning and online learning. Online meta-learning provides, in some sense, a more natural perspective on the ideal real-world learning procedure. An intelligent agent interacting with a constantly changing environment should utilize streaming experience to both master the task at hand, and become more proficient at learning new tasks in the future. We summarize prior work related to our setting in Appendix D. For the online meta-learning setting, we proposed the FTML algorithm and showed that it enjoys logarithmic regret. We then illustrated how FTML can be adapted to a practical algorithm.
Our experimental evaluations demonstrated that the proposed practical variant outperforms prior methods. Here, we summarize the foundations of our online meta-learning formulation. In particular, we concentrate on model agnostic meta-learning and online (i.e. regret based) learning. To illustrate the differences in setting and algorithms, we will use the running example of few-shot learning, which we describe below first. We emphasize that online learning, MAML, and the online meta-learning formulations have a broader scope than few-shot supervised learning. We use the few-shot supervised learning example primarily for illustration. In the few-shot supervised learning setting , we are interested in a family of tasks, where each task T is associated with a notional and infinite-size population of input-output pairs. In the few-shot learning, the goal is to learn a task while accessing only a small, finite-size labeled dataset D i:= {x i, y i} corresponding to task T i. If we have a predictive model, h(·; w), with parameters w, the population risk of the model is f i (w):= E (x,y)∼Ti [(x, y, w)], where the expectation is over the task population and (·) is a loss function, such as the square loss or crossentropy between the model prediction and the correct label. For example, in th e case of square loss we have, (x, y, w) = ||y − h(x; w)|| 2. Let L(D i, w) represent the average loss on the dataset D i.Being able to effectively minimize f i (w) is likely hard if we rely only on D i due to the small size of the dataset. However, we are exposed to many such tasks from the family -either in sequence or as a batch, depending on the setting. By being able to draw upon the multiplicity of tasks, we may hope to perform better, as for example demonstrated in the meta-learning literature. A.2 META-LEARNING AND MAML Meta-learning, or learning to learn , aims to effectively bootstrap from a set of tasks to learn faster on a new task. It is assumed that tasks are drawn from a fixed distribution, DISPLAYFORM0 are drawn from this distribution and datasets corresponding to them are made available to the agent. At deployment time, we are faced with a new test task T j ∼ P(T), for which we are again presented with a small labeled dataset D j:= {x j, y j}. Meta-learning algorithms attempt to find a model using the M training tasks, such that when D j is revealed from the test task, the model can be quickly updated to minimize f j (w). MAML does this by learning an initial set of parameters w MAML, such that at meta-test time, performing a few steps of gradient descent from w MAML using D j minimizes f j (·). To get such an initialization, at meta-training time, MAML solves the optimization problem: DISPLAYFORM1 The inner gradient, ∇f i (w), is based on a mini-batch of data from D i. Hence, MAML optimizes for few-shot generalization. Note that the optimization problem is subtle: we have a gradient descent step embedded in the actual objective function. show that gradient-based methods can be used on this optimization objective with existing automatic differentiation libraries. Stochastic optimization techniques are used to solve the optimization problem in Eq. since the population risk is not known directly. At meta-test time, the solution to Eq. 4 is fine-tuned as: w j ← w MAML − α∇f j (w MAML) with the gradient obtained using D j.MAML and other meta-learning algorithms are not directly applicable to sequential settings for two reasons. 
First, they have two distinct phases: meta-training and meta-testing or deployment. We would like the algorithms to work in a continuous learning fashion. Second, meta-learning methods generally assume that the tasks come from some fixed distribution, whereas we would like methods that work for non-stationary task distributions. In the online learning setting, an agent faces a sequence of loss functions {f_t}_{t=1}^∞, one in each round t. These functions need not be drawn from a fixed distribution, and could even be chosen adversarially over time. The goal for the learner is to sequentially decide on model parameters {w_t}_{t=1}^∞ that perform well on the loss sequence. In particular, the standard objective is to minimize some notion of regret, defined as the difference between our learner's loss, Σ_{t=1}^T f_t(w_t), and the best performance achievable by some family of methods (comparator class). The most standard notion of regret is to compare to the cumulative loss of the best fixed model in hindsight: Regret_T = Σ_{t=1}^T f_t(w_t) − min_w Σ_{t=1}^T f_t(w). The goal in online learning is to design algorithms such that this regret grows with T as slowly as possible. In particular, an agent (algorithm) whose regret grows sub-linearly in T is non-trivially learning and adapting. One of the simplest algorithms in this setting is follow the leader (FTL), which updates the parameters as: w_{t+1} = arg min_w Σ_{k=1}^t f_k(w). FTL enjoys strong performance guarantees depending on the properties of the loss function, and some variants use additional regularization to improve stability. For the few-shot supervised learning example, FTL would consolidate all the data from the prior stream of tasks into a single large dataset and fit a single model to this dataset. As observed in the meta-learning literature, such a "joint training" approach may not learn effective models. To overcome this issue, we may desire a more adaptive notion of a comparator class, and algorithms that have low regret against such a comparator, as done in the online meta-learning formulation. In this section, we outline the assumptions and proofs. We make the following assumptions about each loss function {f_t, f̂_t} ∀t in the learning problem. Let θ and φ represent two arbitrary choices of model parameters. Assumption 1. (C^2-smoothness) 1. (Lipschitz in function value) f has gradients bounded by G, i.e. ||∇f(θ)|| ≤ G ∀ θ. 2. (Lipschitz gradient) f is β-smooth, i.e. ||∇f(θ) − ∇f(φ)|| ≤ β||θ − φ||. 3. (Lipschitz Hessian) f has ρ-Lipschitz Hessians, i.e. ||∇^2 f(θ) − ∇^2 f(φ)|| ≤ ρ||θ − φ||. Assumption 2. (Strong convexity) Suppose that f is convex. Furthermore, suppose f is µ-strongly convex, i.e. ||∇f(θ) − ∇f(φ)|| ≥ µ||θ − φ||. These assumptions are largely standard in online learning in various settings (see, e.g., Cesa-Bianchi & Lugosi, 2006), except 1.3. Examples where these assumptions hold include logistic regression and L2 regression over a bounded domain. Assumption 1.3 is a statement about the higher-order smoothness of functions which is common in non-convex analysis (Nesterov & Polyak, 2006). In our setting, it allows us to characterize the landscape of the MAML-like function which has a gradient update step embedded within it. Importantly, these assumptions do not trivialize the meta-learning setting. A clear difference in performance between meta-learning and joint training can be observed even in the case where the f(·) are quadratic functions, which correspond to the simplest strongly convex setting. See Appendix E for an example illustration. We analyze the FTML algorithm when the update procedure is a single step of gradient descent, as in the formulation of MAML. Concretely, the update procedure we consider is U_t(w) = w − α∇f̂_t(w).
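To make this concrete, the one-step-adapted loss f̃_t(w) = f_t(w − α∇f̂_t(w)) can be computed directly with automatic differentiation. The following is a minimal illustrative sketch (assuming PyTorch; the function and variable names are ours and are not taken from the original implementation), in which the inner and outer losses may come from separate mini-batches of the same task:

import torch

def adapted_loss(params, inner_loss_fn, outer_loss_fn, alpha):
    # f_tilde(w) = f(w - alpha * grad f_hat(w)): take one gradient step on the
    # inner-batch loss f_hat, then evaluate the outer-batch loss f.
    inner = inner_loss_fn(params)
    # create_graph=True keeps the inner gradient in the graph so the outer
    # optimizer can differentiate through the embedded update step.
    grads = torch.autograd.grad(inner, params, create_graph=True)
    adapted = [w - alpha * g for w, g in zip(params, grads)]
    return outer_loss_fn(adapted)

def ftml_outer_step(params, task_buffer, alpha, meta_lr):
    # task_buffer: list of (inner_loss_fn, outer_loss_fn) pairs, one per seen task.
    meta_loss = sum(adapted_loss(params, f_hat, f, alpha)
                    for f_hat, f in task_buffer) / len(task_buffer)
    meta_grads = torch.autograd.grad(meta_loss, params)
    return [w - meta_lr * g for w, g in zip(params, meta_grads)]

In practice the two losses for each task would be evaluated on freshly sampled mini-batches from that task's dataset, as described above.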
We restate our main theorem below for completeness. Theorem. Suppose f and f̂ : R^d → R satisfy assumptions 1 and 2. Let f̃ be the function evaluated after a one step gradient update procedure, i.e. f̃(w) := f(w − α∇f̂(w)). If the step size is selected as α ≤ min{1/(2β), µ/(8ρG)}, then f̃ is convex. Furthermore, it is also β̃ = 9β/8 smooth and µ̃ = µ/8 strongly convex. Proof. First, the smoothness and strong convexity assumptions imply µ ≤ ||∇^2 f̂(θ)|| ≤ β ∀θ. Thus, (1 − αβ) I ⪯ I − α∇^2 f̂(θ) ⪯ (1 − αµ) I ∀θ. Also recall the earlier notation θ̃ = U(θ) = θ − α∇f̂(θ). For α < 1/β, we have the following bounds: (1 − αβ)||θ − φ|| ≤ ||U(θ) − U(φ)|| ≤ (1 − αµ)||θ − φ||, since we have U(θ) − U(φ) = (I − α∇^2 f̂(ψ))(θ − φ) for some ψ that connects θ and φ, due to the mean value theorem on ∇f̂. Using the chain rule and our definitions, ∇f̃(θ) = (I − α∇^2 f̂(θ)) ∇f(θ̃). Taking norms and combining the above bounds, for the specified α, we have the upper bound ||∇f̃(θ) − ∇f̃(φ)|| ≤ (9β/8)||θ − φ||. Similarly, we obtain the lower bound ||∇f̃(θ) − ∇f̃(φ)|| ≥ (µ/8)||θ − φ||, which completes the proof. The following corollary is now immediate. Corollary. (convexity of the MAML objective) If for all i the functions f_i and f̂_i satisfy assumptions 1 and 2, and α ≤ min{1/(2β), µ/(8ρG)}, then the MAML optimization problem, min_w (1/M) Σ_{i=1}^M f_i(w − α∇f̂_i(w)), is convex. Furthermore, it is 9β/8-smooth and µ/8-strongly convex. Since the objective function is convex, we may expect first-order optimization methods to be effective, since gradients can be efficiently computed with standard automatic differentiation libraries (as discussed above). In fact, this work provides the first set of results (under any assumptions) under which a MAML-like objective function can be provably and efficiently optimized. Another immediate corollary of our main theorem is that FTML now enjoys the same regret guarantees (up to constant factors) as FTL does in the comparable setting (with strongly convex losses). Corollary. (inherited regret bound for FTML) Suppose that for all t, f_t and f̂_t satisfy assumptions 1 and 2. Suppose that the update procedure in FTML (Eq. 2) is chosen as U_t(w) = w − α∇f̂_t(w) with α ≤ min{1/(2β), µ/(8ρG)}. Then, FTML enjoys the following regret guarantee: Σ_{t=1}^T f̃_t(w_t) − min_w Σ_{t=1}^T f̃_t(w) = O((G^2/µ) log T). Proof. From Theorem 1, we have that each function f̃_t(w) = f_t(U_t(w)) is µ̃ = µ/8 strongly convex. The FTML algorithm is identical to FTL on the sequence of loss functions {f̃_t}_{t=1}^T, which has an O(log T) regret guarantee for strongly convex losses with bounded gradients, completing the proof.
(Figure caption: task performance after 100 datapoints on the current task. Right: the task performance after all 900 datapoints for the current task have been received. Lower is better for all plots. FTML can learn new tasks more and more efficiently as each new task is received, demonstrating effective forward transfer.)
In this appendix, we present the extended experimental evaluation that is introduced in Section 3. In this experiment, we create a sequence of tasks based on the MNIST character recognition dataset. We transform the digits in a number of ways to create different tasks, such as 7 different colored backgrounds, 2 scales (half size and original size), and 4 rotations at 90 degree intervals. As illustrated in FIG1, a task involves correctly classifying digits with a randomly sampled background, scale, and rotation. This leads to 56 total tasks. We partitioned the MNIST training dataset into 56 batches of examples, each with 900 images, and applied the corresponding task transformation to each batch of images. The ordering of tasks was selected at random and we set 90% classification accuracy as the proficiency threshold. The learning curves in FIG3 show that FTML learns tasks more and more quickly, with each new task added. We also observe that FTML substantially outperforms the alternative approaches in both efficiency and final performance.
FTL performs better than TOE since it performs task-specific adaptation, but its performance is still inferior to FTML. We hypothesize that, while the prior methods improve in efficiency over the course of learning as they see more tasks, they struggle to prevent negative transfer on each new task. Our last observation is that training independent models does not learn efficiently, compared to models that incorporate data from other tasks; but their final performance with 900 data points is similar. In this experiment, we create a sequence of 5-way classification tasks based on the CIFAR-100 dataset, which contains more challenging and realistic RGB images than MNIST. Each classification problem involves a newly-introduced class from the 100 classes in CIFAR-100. Thus, different tasks correspond to different label spaces. The ordering of tasks is selected at random, and we measure performance using classification accuracy. Since it is less clear what the proficiency threshold should be for this task, we evaluate the accuracy on each task after varying numbers of datapoints have been seen.
(Figure 4 caption: Online CIFAR-100 results, evaluating task performance after 50, 250, and 2000 datapoints have been received for a given task. We see that FTML learns each task much more efficiently than models trained from scratch, while both achieve similar asymptotic performance after 2000 datapoints. We also observe that FTML benefits from adapting all layers rather than learning a shared feature space across tasks while adapting only the last layer.)
(Figure caption: On the left, we observe that online meta-learning generally leads to faster learning as more and more tasks are introduced, learning with only 10 datapoints for many of the tasks. In the center and right, we see that meta-learning enables transfer not just for faster learning but also for more effective performance when 60 and 400 datapoints of each task are available. Note that the order of tasks is randomized, hence leading to spikes when more difficult tasks are introduced.)
Since these tasks are mutually exclusive (as the label space is changing), it makes sense to train the TOE model with a different final layer for each task. An extremely similar approach to this is to use our meta-learning approach but to only allow the final layer parameters to be adapted to each task. Further, such a meta-learning approach is a more direct comparison to our full FTML method, and the comparison can provide insight into whether online meta-learning is simply learning features and performing training on the last layer, or if it is adapting the features to each task. Thus, we compare to this last-layer online meta-learning approach instead of TOE with multiple heads. The results (see Figure 4) indicate that FTML learns more efficiently than independent models and a model with a shared feature space. The results on the right indicate that training from scratch achieves good performance with 2000 datapoints, reaching similar performance to FTML. However, the last-layer variant of FTML seems to not have the capacity to reach good performance on all tasks. In our final experiment, we study a 3D pose prediction problem. Each task involves learning to predict the global position and orientation of an object in an image. We construct a dataset of synthetic images using 50 object models from 9 different object classes in the PASCAL3D+ dataset, rendering the objects on a table using the renderer accompanying the MuJoCo physics engine (see FIG1).
To place an object on the table, we select a random 2D location, as well as a random azimuthal angle. Each task corresponds to a different object with a randomly sampled camera angle. We place a red dot on one corner of the table to provide a global reference point for the position. Using this setup, we construct 90 tasks (with an average of about 2 camera viewpoints per object), with 1000 datapoints per task. All models are trained to regress to the global 2D position and the sine and cosine of the azimuthal angle (the angle of rotation along the z-axis). For the loss functions, we use mean-squared error, and set the proficiency threshold to an error of 0.05. We show the results of this experiment in FIG4. The results demonstrate that meta-learning can improve both the efficiency and performance of new tasks over the course of learning, solving many of the tasks with only 10 datapoints. Unlike the previous settings, TOE substantially outperforms training from scratch, indicating that it can effectively make use of the previous data from other tasks, likely due to the greater structural similarity between the pose detection tasks. However, the performance of FTML suggests that even better transfer can be accomplished by explicitly optimizing for the ability to quickly and effectively learn new tasks. Finally, we find that FTL performs comparably to or worse than TOE, indicating that task-specific fine-tuning can lead to overfitting when the model is not explicitly trained for the ability to fine-tune effectively. Our work proposes to use meta-learning, or learning to learn, in the context of online (regret-based) learning. We reviewed the foundations of these approaches in Section A, and we summarize additional related work along different axes. Meta-learning: With few exceptions, nearly all prior meta-learning algorithms assume that the meta-training tasks come from a stationary distribution. Furthermore, most prior work has not evaluated versions of meta-learning algorithms when presented with a continuous stream of tasks. Recent work has considered handling non-stationary task distributions in meta-learning using Dirichlet process mixture models over meta-learned parameters. Unlike this prior work, we introduce a simple extension onto the MAML algorithm without mixtures over parameters, and provide theoretical guarantees. Continual learning: Our problem setting is related to (but distinct from) continual, or lifelong, learning. In this paper, we sidestep the problem of catastrophic forgetting by maintaining a buffer of all the observed data. In future work, we hope to understand the interplay between limited memory and catastrophic forgetting for variants of the FTML algorithm. Here, we instead focus on the problem of forward transfer: maximizing the efficiency of learning new tasks within a non-stationary learning setting. Prior works have also considered settings that combine joint training across tasks with task-specific adaptation. As in some of these works, we also focus on the setting where there are several tens or hundreds of tasks. This setting is interesting since there is significantly more information that can be transferred from previous tasks, and we can employ more sophisticated techniques such as meta-learning for transfer, enabling the agent to move towards few-shot learning after experiencing a large number of tasks. Online learning: Similar to continual learning, online learning deals with a sequential setting with streaming tasks.
It is well known in online learning that FTL has good regret guarantees, but is often computationally expensive. Thus, there is a large body of work on developing computationally cheaper algorithms. Again, in this work, we sidestep the computational considerations to first study if the meta-learning analog of FTL can provide performance gains. For this, we derived the FTML algorithm, which has low regret when compared to a powerful adaptive comparator class that performs task-specific adaptation. We leave the design of more computationally efficient versions of FTML to future work. To avoid the pitfalls associated with a single best model in hindsight, the online learning literature has also studied alternate notions of regret, with the closest settings being dynamic regret and adaptive or tracking regret. In the dynamic regret setting, the performance of the online learner's model sequence is compared against the sequence of optimal solutions corresponding to each loss function in the sequence. Unfortunately, lower bounds suggest that the comparator class is too powerful and may not provide for any non-trivial learning in the general case. To overcome these limitations, prior work has placed restrictions on how quickly the loss functions or the comparator model can change. In contrast, we consider a different notion of adaptive regret, where the learner and comparator both have access to an update procedure. The update procedures allow the comparator to produce different models for different loss functions, thereby serving as a powerful comparator class (in comparison to a fixed model in hindsight). For this setting, we derived sublinear regret algorithms without placing any restrictions on the sequence of loss functions. We believe that this setting captures the spirit and practice of continual lifelong learning, and also leads to promising empirical results.
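To make the comparator class concrete, the notion of regret used in this setting compares against an adaptive comparator that is also allowed to apply the update procedure. With our notation (the exact statement in the formal results may differ in minor details), it can be written as:

\[
\text{Regret}_T \;=\; \sum_{t=1}^{T} f_t\big(U_t(w_t)\big) \;-\; \min_{w} \sum_{t=1}^{T} f_t\big(U_t(w)\big),
\]

so the comparator may adapt a single parameter vector w to every loss f_t through U_t, rather than being a fixed model in hindsight.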
We introduce the online meta learning problem setting to better capture the spirit and practice of continual lifelong learning.
548
scitldr
We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations. We employ two methods—taking the maximum attention weight and computing the maximum spanning tree—to extract implicit dependency relations from the attention weights of each layer/head, and compare them to the ground-truth Universal Dependency (UD) trees. We show that, for some UD relation types, there exist heads that can recover the dependency type significantly better than baselines on parsed English text, suggesting that some self-attention heads act as a proxy for syntactic structure. We also analyze BERT fine-tuned on two datasets—the syntax-oriented CoLA and the semantics-oriented MNLI—to investigate whether fine-tuning affects the patterns of their self-attention, but we do not observe substantial differences in the overall dependency relations extracted using our methods. Our suggest that these models have some specialist attention heads that track individual dependency types, but no generalist head that performs holistic parsing significantly better than a trivial baseline, and that analyzing attention weights directly may not reveal much of the syntactic knowledge that BERT-style models are known to learn. Pretrained Transformer models like OpenAI GPT BID9 and BERT BID1 have shown stellar performance on language understanding tasks. BERT and BERTbased models significantly improve the state-ofthe-art on many tasks such as constituency parsing BID5, question answering BID11, and have attained top positions on the GLUE leaderboard. As BERT becomes a staple component of many NLP models, many researchers have attempted to analyze the linguistic knowledge that BERT has learned by analyzing the BERT model BID3 or training probing classifiers on the contextualized embeddings of BERT BID12.BERT, as a Transformer-based language model, computes the hidden representation at each layer for each token by attending to all the tokens in an input sentence. The attention heads of Transformer have been claimed to capture the syntactic structure of the sentences BID13. Intuitively, for a given token, some specific tokens in the sentence would be more linguistically related to it than the others, and therefore the selfattention mechanism should be expected to allocate more weight to the linguistically related tokens in computing the hidden state of the given token. In this work, we aim to investigate the hypothesis that syntax is implicitly encoded by BERT's self-attention heads. We use two relation extraction methods to extract dependency relations from all the self-attention heads of BERT. We analyze the ing dependency relations to investigate whether the attention heads of BERT implicitly track syntactic dependencies significantly better than chance, and what type of dependency relations BERT learn. We extract the dependency relations from the self-attention heads instead of the contextualized embeddings of BERT. In contrast to probing models, our dependency extraction methods require no further training. Our experiments suggest that the attention heads of BERT encode most dependency relation types with substantially higher accuracy than our baselines-a randomly initialized Transformer and relative positional baselines. Finetuning BERT on the syntax-oriented CoLA does not appear to impact the accuracy of extracted dependency relations. 
However, when fine-tuned on the semantics-oriented MNLI dataset, there is a slight improvement in accuracy for longer-term clausal relations and a slight loss in accuracy for shorter-term relations. Overall, while BERT models obtain non-trivial accuracy for some dependency types such as nsubj, obj, nmod, aux, and conj, they do not substantially outperform the trivial right-branching trees in terms of undirected unlabeled attachment scores (UUAS). Therefore, although the attention heads of BERT reflect a small number of dependency relation types, it does not reflect the full extent of the significant amount of syntactic knowledge BERT is shown to learn by the previous probing work. There has been substantial work so far on extracting syntactic trees from the attention heads of Transformer-based neural machine translation (NMT) models. BID6 aggregate the attention weights across the self-attention layers and heads to form a single attention weight matrix. Using this matrix, they propose a method to extract constituency and (undirected) dependency trees by recursively splitting and constructing the maximum spanning tree respectively. In contrast, BID10 train Transformer-based machine translation model on different language-pairs and extract the dependency trees using the maximum spanning tree algorithm on the attention weights of the encoder for each layer and head individually. In work concurrent with ours, BID14 focus on finding confident attention heads of the Transformer encoder based on a heuristic of the concentration of attention weights on single tokens. They identify that these heads appear to serve three specific functions: attending to relative positions, syntactic relations, and rare words. Prior work on the analysis of the contextualized embeddings of BERT has shown that BERT learns significant knowledge of syntax BID3. BID12 introduce a probingstyle method for evaluating syntactic knowledge in BERT and show that BERT encodes syntax more than semantics. BID4 train a structural probing model that maps the hidden representations of each token to an innerproduct space that corresponds to syntax tree distance. They show that the learned spaces of strong models such as BERT and ELMo BID7 are better able to reconstruct dependency trees compared to baselines that can encode features for training a parser but aren't capable of parsing themselves. BERT BID1 ) is a Transformer-based masked language model pretrained on BooksCorpus BID21 and English Wikipedia that has attained stellar performance on a variety of downstream NLP tasks. We run our experiments on the pretrained cased and uncased versions of the BERT-large model, which is a Transformer model consisting of 24 self-attention layers with 16 heads each. For a given dataset, we feed each input sentence through BERT and capture the attention weights for each individual head and layer. BID8 report that they achieve performance gains on the GLUE benchmark by supplementing pre-trained BERT with data-rich supervised tasks such as the Multi-Genre Natural Language Inference dataset (MNLI; BID18 . Therefore, we also run experiments on the uncased BERT-large model fine-tuned on the Corpus of Linguistic Acceptability (CoLA; BID16 and MNLI, to investigate the impact of fine-tuning on a syntax-related task (CoLA) and a semantic-related task (MNLI) on the structure of attention weights and ant extracted dependency relations. We refer to these fine-tuned models as CoLA-BERT and MNLI-BERT. 
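For concreteness, per-head attention maps of the kind analyzed below can be collected with the HuggingFace transformers library roughly as follows. This is an illustrative sketch under our own assumptions (a recent library version, and no claim that it matches the authors' exact pipeline):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertModel.from_pretrained("bert-large-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); rows index the attending tokens and
# columns the attended tokens.
attn = torch.stack(outputs.attentions).squeeze(1)  # (num_layers, num_heads, T, T)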
As a baseline, we apply the same relation extraction methods to the BERT-large model with randomly initialized weights (which we refer to as random BERT), as previous work has shown that randomly initialized sentence encoders perform surprisingly well on a suite of NLP tasks BID20 BID17. We aim to test the hypothesis that the attention heads of BERT learn syntactic relations implicitly, and that self-attention between two words encodes information about their dependency relation. We use two methods for extracting relations from the attention weights in BERT. Both methods operate on the weight matrix W ∈ R^{T×T} for a given head at a given layer, where T is the number of tokens in the sequence, and the rows and columns correspond to the attending and attended tokens respectively (such that each row sums to 1). We exclude [CLS] and [SEP] tokens from the attention matrices, which allows us to focus on inter-word attention. Where the tokenization of our parsed corpus does not match the BERT tokenization, we merge the non-matching tokens until they are mutually compatible, and sum the attention weights for the corresponding columns and rows. We then apply either of the two extraction methods to the attention matrix. To handle the subtokens within the merged tokens, we set all subtokens except for the first to depend on the first subtoken. This approach is largely similar to that in BID4. We use the English Parallel Universal Dependencies (PUD) treebank from the CoNLL 2017 shared task BID19 as the gold standard for our evaluation. Maximum Attention Weight We assign a relation (w_i, w_j) between word w_i and w_j if j = argmax_j W[i, j] for each row i in attention matrix W. Based on this simple method, we extract relations for all sentences in our evaluation datasets. The relations extracted using this method need not form a valid tree, or even be fully connected. The resulting edge directions may or may not match the canonical directions in a tree, so we evaluate the resulting arcs as undirected. Maximum Spanning Tree To extract valid dependency trees from the attention weights for a given layer and head, we follow the approach of BID10 and treat the matrix of attention weights between tokens as a complete weighted directed graph, with the edges pointing from the output token to each attended token. As in Raganato and Tiedemann, we take the root of the gold dependency tree as the starting node and apply the Chu-Liu-Edmonds algorithm BID0 BID2 to compute the maximum spanning tree. The resulting tree is a valid undirected dependency tree. Relative position baselines Many dependency relations tend to occur in specific positions relative to the parent word. For example, nsubj mostly occurs between a verb and the adjacent word before the verb. As an example, FIG0 shows the distribution of relative positions for four major UD relation types. For each dependency relation, we compute the most common positional offset between a parent and child word, and formulate a baseline based on that most common relative positional offset. FIG1 and Table 1 describe the accuracy for nsubj, obj, advmod, and amod and the 10 most frequent relation types in the dataset using relations extracted based on the maximum attention weight method. We also include advcl and csubj in Table 1, as they show the behavior of MNLI-BERT, which tends to track longer-term clausal dependencies better than BERT and CoLA-BERT. Additionally, FIG2 shows the accuracy for nsubj, obj, advmod, and amod relations extracted based on the maximum spanning tree algorithm.
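The maximum-attention-weight heuristic and the positional baseline described above are simple enough to state in a few lines of code. The sketch below is our own illustration (it assumes a single head's T×T attention matrix with the special tokens already removed, and uses NumPy); the maximum-spanning-tree variant can be implemented on the same matrix with an off-the-shelf Chu-Liu-Edmonds routine:

import numpy as np

def max_attention_relations(W):
    # For each attending token i, link it to its most-attended token j.
    # W: (T, T) attention matrix for one head and layer, rows summing to 1.
    # Returns a set of undirected (i, j) pairs; they need not form a tree.
    relations = set()
    for i in range(W.shape[0]):
        j = int(np.argmax(W[i]))
        if i != j:
            relations.add((min(i, j), max(i, j)))
    return relations

def most_common_offset(parent_positions, child_positions):
    # Most frequent (child - parent) offset for one relation type, which
    # defines the trivial relative-position baseline for that relation.
    offsets = np.asarray(child_positions) - np.asarray(parent_positions)
    values, counts = np.unique(offsets, return_counts=True)
    return int(values[np.argmax(counts)])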
The pre-trained and fine-tuned BERT models outperform random BERT substantially for all dependency types. They also outperform the relative position baselines for more than 75% of relation types. They outperform all baselines by a large margin for nsubj and obj, but are only slightly better for advmod and amod. These results suggest that the self-attention weights in trained BERT models implicitly encode certain dependency relations. Moreover, we do not observe very substantial changes in accuracy from fine-tuning on CoLA and MNLI. However, both BERT and CoLA-BERT have similar or slightly better performance than MNLI-BERT, except for clausal dependencies such as advcl (adverbial clause modifier) and csubj (clausal subject), where MNLI-BERT outperforms BERT and CoLA-BERT by more than 5 absolute points in accuracy. This suggests that the semantics-oriented fine-tuning task encourages the tracking of long-distance dependencies. FIG3 describes the maximum undirected unlabeled attachment scores (UUAS) across each layer.
(Table 1 caption: Highest accuracy for the most frequent dependency types (excluding nsubj, obj, advmod, and amod). We include advcl and csubj although they are not among the ten most frequent relation types, as MNLI-BERT outperforms the other models for these dependency types. Bold marks the highest accuracy for each dependency type. Italics marks accuracies that outperform our trivial baselines.)
BERT, CoLA-BERT, and MNLI-BERT achieve significantly higher UUAS than random BERT. Although the BERT models perform better than the right-branching baseline in most cases, the performance gap is not very large. However, as the performance gap between the BERT models and the trivial right-branching baseline is not substantial, we cannot confidently conclude that the attention heads of BERT track syntactic dependencies. Additionally, the performance of BERT and the fine-tuned BERTs is similar, as fine-tuning on CoLA and MNLI does not have a large impact on UUAS. In this work, we investigate whether the attention heads of BERT implicitly track syntactic dependencies by extracting and analyzing the dependency relations from the attention heads of BERT at all layers. We use two simple dependency relation extraction methods that require no additional training, and observe that there are attention heads of BERT that track more than 75% of the dependency types with higher accuracy than our baselines. However, the hypothesis that the attention heads of BERT track dependency syntax is not well supported, as the linguistically uninformed baselines outperform BERT on nearly 25% of the dependency types. Additionally, BERT's performance in terms of UUAS is only slightly higher than that of the trivial right-branching trees, suggesting that the dependency syntax learned by the attention heads is trivial. Additionally, we observe that fine-tuning on CoLA and MNLI does not affect the pattern of self-attention, although the fine-tuned models show different performance from BERT on the GLUE benchmark.
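For reference, the UUAS numbers discussed above can be computed with a short routine of the following form; this is an illustrative sketch under our own conventions rather than the authors' evaluation code:

def uuas(predicted_edges, gold_edges):
    # Undirected unlabeled attachment score: the fraction of gold dependency
    # edges recovered, ignoring edge direction and relation labels.
    norm = lambda e: (min(e), max(e))
    pred = {norm(e) for e in predicted_edges}
    gold = {norm(e) for e in gold_edges}
    return len(pred & gold) / max(len(gold), 1)

def right_branching_edges(num_tokens):
    # The trivial right-branching baseline attaches each word to the word
    # immediately to its right.
    return [(i, i + 1) for i in range(num_tokens - 1)]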
Attention weights don't fully expose what BERT knows about syntax.
549
scitldr
State-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploring local appearance knowledge. However, most of these methods do not well exploit facial structures and identity information, and struggle to deal with facial images that exhibit large pose variation and misalignment. In this paper, we propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures. Firstly, the 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge. Secondly, the Spatial Attention Mechanism is used to better exploit this hierarchical information (i.e. intensity similarity, 3D facial structure, identity content) for the super-resolution problem. Extensive experiments demonstrate that the proposed algorithm achieves superior face super-resolution and outperforms the state-of-the-art. Face images provide crucial clues for human observation as well as computer analysis . However, the performance of most face image tasks, such as face recognition and facial emotion detection , degrades dramatically when the resolution of a facial image is relatively low. Consequently, face super-resolution, also known as face hallucination, was coined to restore a low-resolution face image to its high-resolution counterpart. A multitude of deep learning methods (; ; ; ; a; b) have been successfully applied in face Super-Resolution (SR) problems and achieve state-of-the-art . But super-resolving arbitrary facial images, especially at high magnification factors, is still an open and challenging problem due to the ill-posed nature of the SR problem and the difficulty in learning and integrating strong priors into a face hallucination model. Some researches (; a;) on exploiting the face priors to assist neural networks to capture more facial details have been proposed recently. A face hallucination model incorporating identity priors is presented in. But the identity prior is extracted only from the multi-scale up-sampling in the training procedure and therefore cannot provide enough extra priors to guide the network to achieve a better . Yu et al. (2018a) employ facial component heatmaps to encourage the upsampling stream to generate super-resolved faces with higher-quality details, especially for large pose variations. Although heatmaps can provide global component regions, it cannot learn the reconstruction of detailed edges, illumination or expression priors. Besides, all of these aforementioned face SR approaches ignore facial structure and identity recovery. In contrast to previous methods, we propose a novel face super-resolution method that embeds 3D face structures and identity priors. Firstly, a deep 3D face reconstruction branch is set up to explicitly obtain 3D face render priors which facilitate the face super-resolution branch. Specifically, the 3D face render prior is generated by the ResNet-50 network . It contains rich hierarchical information, such as low-level (e.g., sharp edge, illumination) and perception level (e.g., identity). The Spatial Attention Mechanism is proposed here to adaptively integrate the 3D facial prior into the network. Specifically, we employ the Spatial Feature Transform (SFT) to generate affine transformation parameters for spatial feature modulation. 
Afterwards, it encourages the network to learn the spatial interdepenencies of features between 3D facial priors and input images after adding the attention module into the network. The main contributions of this paper are: 1. A novel face SR model is proposed by explicitly exploiting facial structure in the form of facial-prior estimation. The estimated 3D facial prior provides not only spatial information of facial components but also their visibility information, which are ignored by the pixel-level content. 2. We propose a feature-fusion-based network to better extract and integrate the face rendered priors by employing the Spatial Attention Mechanism (SAM). 3. We qualitatively and quantitatively explore multi-scale face super-resolution, especially at very low input resolutions. The proposed network achieves better SR criteria and superior visual quality compared to state-of-the-art face SR methods. Face hallucination relates closely to the natural image super-resolution problem. Thus, in this section, we discuss recent research on super-resolution and face hallucination to illustrate the necessary context for our work. Recently, neural networks have demonstrated a remarkable capability to improve SR . Since the pioneering network can learn to map the relationship between LR and HR (a), a lot of CNN architectures have been proposed for SR (b; ; ; ;). Most of the existing high-performance SR networks have residual blocks (Jiwon to go deeper in the network architecture, and achieve better performance. EDSR improves the performance by removing unnecessary batch normalization layers in residual blocks. A residual dense network (RDN) (a) was proposed to exploit the hierarchical features from all the convolutional layers. Zhang et al. (2018b) proposed the very deep residual channel attention networks(RCAN) to discard abundant low-frequency information which hinders the representational ability of CNNs. used a spatial feature transform layer to introduce the semantic prior as an additional input of SR network. presented a wavelet-based CNN approach that can ultra-resolve a very low resolution face image in a unified framework. However, these networks require a lot of time to train the large-scale parameters to obtain good . In our work, we largely decrease the training parameters, but still achieve the superior performance in SR criteria (SSIM and PSNR) and visible quality. Facial Prior Knowledge: Exploiting facial priors in face hallucination, such as spatial configuration of facial components, is the key factor that differentiates it from generic super-resolution tasks. There are some face SR methods that use facial prior knowledge to better super-resolve LR faces. learned subspaces from LR and HR face images respectively, and then reconstructed an HR output from the PCA coefficients of the LR input. set up a Markov Random Field (MRF) to reduce ghosting artifacts because of the misalignments in LR images. These methods are prone to generate severe artifacts, especially in large pose variations and misalignments in LR images. Yu & Porikli (2017b) interweaved multiple spatial transformer networks with the deconvolutional layers to handle unaligned LR faces. Dahl et al. (2017b) leveraged the framework of PixelCNN to super-resolve very low-resolution faces. presented a cascade bi-network, dubbed CBN, to localize LR facial components first and then upsample the facial components; however, CBN may produce ghosting faces when localization errors occur. Recently, Yu et al. 
(2018a) used a multi-task convolutional neural network (CNN) to incorporate structural information of faces. built a face recognition model that acts as identity priors for the super-resolution network during training. In our paper, we used the 3D face reconstruction branch to extract the facial structure, detailed edges, illumination, and identity priors. Furthermore, we recover these priors in an explicit way. The 3D shapes of facial images can be restored from unconstrained 2d images by the 3D face reconstruction. In this paper, we employ the 3D Morphable Model (3DMM) based on the fusion of parametric descriptions of face attributes (e.g., gender, identity, and distinctiveness) to reconstruct the 3D facial priors. The reconstructed face will inherit the facial features and present the clear and sharp facial components. Given a low-resolution facial image, the 3D rendering branch aims to extract the 3D face coefficients based on the 3D Morphable Model (3DMM). The high-resolution face rendered image is generated after obtaining the 3D coefficients and regarded as the high-resolution facial priors which facilitate the face super-resolution. The 3D coefficients contain abundant hierarchical knowledge, such as identity, facial expression, texture, illumination, and face pose. The proposed face super-resolution framework is presented in Figure 2, and it consists of two branches: the 3D rendering network to extract the facial prior and the Spatial Attention Mechanism aiming to exploit the prior for the face super-resolution problem. It is still a challenge for state-of-the-art edge prediction methods to acquire very sharp facial structures from low-resolution images. Therefore, a 3DMM-based model is proposed to localize the precise facial structure by generating the 3D facial images which are constructed by the 3D coefficient vector. Besides, there exist large face pose variations, such as inplane and out-of-plane rotations. A large amount of data is needed to learn the representative features varying with the facial poses. To address this problem, an inspiration came from the idea that the 3DMM coefficients can analytically model the pose variation with a simple math derivation (; and does not require a large training set, we utilize a face rendering network based on ResNet-50 to regress a face coefficient vector. The output of the ResNet-50 is the representative feature vector of x = (α, β, δ, γ, ζ) ∈ R 239, where α ∈ R 80, β ∈ R 64, δ ∈ R 80, γ ∈ R 9, and ζ ∈ R 6 represent the identity, facial expression, texture, illumination, and face pose, respectively. According to the Morphable model , we transform the face coefficients to a 3D shape S and texture T of the face image as and where S and T are the average values of the S and T. Besides, B t, B id and B exp denote the base vector of texture, identity, and expression calculated by the PCA method. A modified L 2 based loss function for the 3D face reconstruction is presented based on a paired training set where j is the paired image index, and L is the total number of training pairs. i and M denote the pixel index and face region, respectively. A, I and B represent the skin color based attention mask, the sharp image, and the up-sampling of low-resolution image, respectively. R(B i j (x)) denotes the reconstructed face image based on the learned face vector by the ResNet-50 network. The proposed face super-resolution architecture. 
Our model consists of two branches: the top block is a ResNet-50 Network to extract the 3D facial coefficients and restore a sharp face rendered structure. The bottom block is dedicated to face super-resolution guided by the facial coefficients and rendered sharp face structures which are concatenated by the Spatial Feature Transform. Given the LR images, the generated 3D face rendered reconstructions are shown in Figure 1. The rendered face predictions contain the clear spatial knowledge and good visual quality of facial components which are very close to the information of the ground-truths. The 3D priors grasp very well the pose variations and skin colour, and further embed pose variations into the SR networks which improve the accuracy and stability in face images with large pose variations. Therefore, we concatenate the reconstructed face image as an additional feature in the SR network. The face expression, identity, texture, illumination, and face pose are transformed into four feature maps and fed into the spatial feature transform block of the SR network. As shown in Figure 2, our Spatial Attention Mechanism aims to exploit the 3D face rendered priors which grasp the precise locations of face components and the facial identity. In order to explore the interdependence and correlation of priors and input images between channels, the attention block is added into the Spatial Attention Mechanism. The proposed network, also named the Spatial Attention Mechanism (SAM), consists of three simple parts: a spatial transform block, an attention block, and an upscale module. We import the 3D face priors into the Spatial Attention Transform Block after a convolutional layer. The 3D face priors consist of two parts: one directly from the rendered face images (as the RGB input), and the other from the feature transformation of the coefficient parameters. The feature transformation procedure is described as follows: firstly, the coefficients of (identity, expression, texture, and the fusion of illumination and face pose) are reshaped to a matrix by setting extra elements to zeros. Afterwards, it is expanded to the same size as the LR images by zero-padding, and then scaled to the interval. Finally, the coefficient features are concatenated with the priors from the rendered face images. The Spatial Feature Transform (SFT) learns a mapping function Θ that provides a modulation parameter pair (µ, ν) according to the priors ψ, such as segmentation probability. Instead, the 3D face priors are taken as the input. The outputs of the SFT layer are adaptively controlled by the modulation parameter pair by way of applying an affine transformation spatially to each intermediate feature map. Specifically, the intermediate transformation parameters (µ, ν) are derived from the priors ψ by a mapping function as: and then where N denotes the SR network, and θ represents trainable parameters of the network. The intermediate feature maps are modified by scaling and shifting feature maps according to the transformation parameters: where F denotes the feature maps, and ⊗ is referred to element-wise multiplication. At this step, the SFT layer implements the spatial-wise transformation. Attention mechanism can be viewed as a guide to bias the allocation of available processing resources towards the most informative components as input . Consequently, the channel module is presented to explore the most informative components and the interdependency between the channels. 
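Before turning to the attention blocks, the spatial modulation just described can be made concrete with a minimal SFT-style layer. The following PyTorch sketch is our own illustration; the layer sizes, the 1×1 convolutions, and the assumption that the prior maps are resized to the feature maps' spatial resolution are ours, not the authors' exact architecture:

import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    # Predicts a modulation pair (mu, nu) from the prior maps and applies
    # the spatial affine transformation mu * F + nu to the feature maps F.
    def __init__(self, prior_channels, feature_channels):
        super().__init__()
        self.scale = nn.Sequential(
            nn.Conv2d(prior_channels, feature_channels, 1), nn.ReLU(),
            nn.Conv2d(feature_channels, feature_channels, 1))
        self.shift = nn.Sequential(
            nn.Conv2d(prior_channels, feature_channels, 1), nn.ReLU(),
            nn.Conv2d(feature_channels, feature_channels, 1))

    def forward(self, features, priors):
        mu = self.scale(priors)    # per-position, per-channel scaling map
        nu = self.shift(priors)    # per-position, per-channel shifting map
        return mu * features + nu  # element-wise modulation

# Example usage: 3 channels for the rendered face plus 4 coefficient maps.
# sft = SFTLayer(prior_channels=7, feature_channels=64)
# modulated = sft(feature_maps, prior_maps)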
The attention module is composed of a series of residual channel attention blocks (RCAB), shown in Figure 2. Inspired by the integration of channel attention and residual blocks, we stack a series of residual channel attention blocks. For the b-th block, the output F_b of the RCAB is obtained by F_b = F_{b−1} + C_b(X_b) · X_b, where C_b denotes the channel attention function, F_{b−1} is the block's input, and X_b is calculated by two stacked convolutional layers. In order to evaluate the performance of our priors and algorithms, we compare them with the state-of-the-art methods qualitatively and quantitatively. We use open-source implementations from the authors and train all the networks on the same dataset for a fair comparison. We propose two models: the first is VDSR+, which is the basic VDSR model embedded with the 3D facial prior as extra RGB channel information, and the other is our SR network incorporating facial priors via the Spatial Attention Mechanism (SAM). The implementation code will be made available to the public. More results are shown in the supplementary material. The CelebA and Menpo datasets are used to verify the performance of the algorithm. The training phase uses 162,080 images from the CelebA dataset. In the testing phase, 40,519 images from the CelebA test set are used along with the large-pose-variation test set from the Menpo dataset. Each facial-pose test set of Menpo (left, right, and semi-frontal) contains 1,000 images. The HR ground-truth images are obtained by center-cropping the facial images and then resizing them to 128×128 pixels. The LR face images are generated by downsampling the HR ground-truths to 32×32 pixels (×4 scale) and 16×16 pixels (×8 scale). In our network, the ADAM optimizer is used with a batch size of 64 for training, and input images are center-cropped as RGB channels. The initial learning rate is 0.0002 and is divided by 2 every 50 epochs. The whole training process takes 2 days with an NVIDIA Titan X GPU. Quantitative evaluations of the network using PSNR and structural similarity (SSIM) scores for the CelebA test set are listed in Table 1. Furthermore, in order to analyze the proposed methods' performance and stability with regard to large face pose variations, three cases corresponding to different face poses (left, right, and semi-frontal) of the Menpo test data are listed in Table 2. CelebA Test: Ours (VDSR+) achieves significantly better results (1 dB higher than the next best method and 2 dB higher than the basic VDSR method in ×8 SR), even compared with large-scale-parameter methods such as RDN and RCAN. But it does perform slightly worse than ours (SAM). It should be noted that ours (VDSR+) is the same as VDSR except for the extra 3D face priors as the RGB channel inputs.
(Figure 5 caption: Visual comparison with state-of-the-art methods (×8). The results from the proposed method have fewer visual artifacts and more details on key face components (e.g., eyes, mouth, and nose).)
This indicates that the 3D priors make a great contribution to the performance improvement (1.6 dB improvement on average) of face super-resolution. Menpo Test: To verify the effectiveness and stability of the face priors and our proposed network towards large pose variations, the PSNR and SSIM for the different face poses are listed in Table 2. While ours (SAM) is the best method, superior to the others, VDSR+ achieves a 1.8 dB improvement compared with the basic VDSR method at the ×4 magnification factor. Super-resolution: The qualitative results of our methods at different magnifications (×4 and ×8) are shown in Figures 3 and 4, respectively.
It can be observed that our proposed method recovers clearer faces with finer component details (e.g., nose and eyes). Artifacts: The outputs of most methods (e.g., RCAN, RDN, and Wavelet-SRNet) contain some artifacts around facial components, such as the eyes, nose, and mouth shown in Figure 5. After adding the rendered face priors, ours show clear and sharp facial structures without any ghosting artifacts. It illustrates that the proposed 3D priors can help the network understand the spatial location and the entire face structure. Ablation Study: In this section, we conduct an ablation study to demonstrate the effectiveness of each module. We compare the proposed network with and without using the rendered 3D face priors and the Spatial Attention Mechanism (SAM) in terms of PSNR and SSIM on the test data. As shown in Figure 6 (b, f), the baseline method without rendered faces and SAM tends to generate blurry faces that cannot capture sharp edges. Figure 6 (c and g) shows clearer and sharper facial structures after adding the rendered priors. By using SAM, the visual quality is further improved in Figure 6 (d and h). The quantitative comparisons between (VDSR, our VDSR+, and our SAM) in Tables 1 and 2 also illustrate the effectiveness of the rendered priors and the Spatial Attention Mechanism. Model Size Analysis: Figure 7 shows comparisons of model size and performance. Our networks, VDSR+ and SAM, embedded with 3D priors are more lightweight while still achieving the best performance even compared with other state-of-the-art methods (e.g., RCAN and RDN) with a larger scale of parameters. In this paper, we proposed a novel network that incorporates 3D facial priors of rendered faces and identity knowledge. The 3D rendered branch utilizes the face rendering loss to encourage a highquality guided image providing clear spatial locations of facial components and other hierarchical information (i.e., expression, illumination, and face pose). To well exploit 3D priors and consider the channel correlation between priors and inputs, the Spatial Attention Mechanism is presented by employing the Spatial Feature Transform and Attention block. The comprehensive experimental have demonstrated that the proposed method can deliver the better performance and largely decrease artifacts in comparison with the state-of-the-art methods by using significantly fewer parameters. Semi-Frontal Facial Pose Visualization: For the semi-frontal pose, the SR of RCAN, RDN and Wavelet-SRNet have a lot of artifacts around facial components (e.g., eyes, teeth, nose, mouth). Fortunately, after incorporating the rendered face priors, it largely avoids the appearance of ghosting artifacts, seen in Figure.8 Left Facial Pose Visualization: For the left pose, the high-resolution of the proposed method perform much better. Ours (VDSR+) which exploiting the 3D facial priors can grasp the facial structure knowledge and restore the high-resolution facial components (e.g. mouth) much closer to the ground-truth compared with the basic VDSR method without priors. Right Facial Pose Visualization: For the right pose, the high-resolution of the proposed method are still the best. Adding the facial structure priors can help network to learn the location of facial components even for the large pose variation. High Magnification Factor × 8 Visualization: It is still a challenge to generate the sharp superresolution images for a large magnification factor (×8). 
The 3D rendered facial priors provide extra facial structure knowledge that are crucial for SR problems. As shown in Figure 12 and 13, the proposed method generates a high visible quality of SR images even for the large magnification factor. Learning Curves with Different Ablation Configurations:To verify the effectiveness of 3D facial structure priors, we design the three different configurations (w/o 3D priors, w/o Spatial Attention Mechanism): baseline methods (i.e., VDSR, SRCNN); baseline incorporating 3D facial priors (i.e., VDSR+,SRCNN+); the method using the Spatial Attention Mechanism and 3D priors (our proposed method: +priors and +SAM). The learning curves of each configuration are plotted to show the effectiveness of the each block. The priors are easy to insert into any network without increasing any parameters, but largely improve the accuracy and the convergence of the algorithms shown in Figure 14. Quantitative Results with Different Ablation Configurations: As shown in Table 3, each block boosts the accuracy of baseline algorithms: the average performance improvement stemming from 3D facial priors and from Spatial Attention Mechanism are 1.6db and 0.57db, respectively. Qualitative Evaluation with different ablation configurations: The baseline incorporated with the facial rendered priors tends to avoid some artifacts around the key facial components and generate more sharp edges compared with the basic baseline method without the facial priors. By adding the Spatial Attention Mechanism, it could help the network better exploit the priors and is easier to generate more sharp facial structures, shown in Figure 15.
We propose a novel face super resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
550
scitldr
We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets. The model then learns from these soft targets (acting as a ``"student"). We deviate from prior work by adding multiple auxiliary student prediction layers to the model. The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data. When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data. On all tasks CVT substantially outperforms supervised learning alone, ing in models that improve upon or are competitive with the current state-of-the-art. Deep learning classifiers work best when trained on large amounts of labeled data. However, acquiring labels can be costly, motivating the need for effective semi-supervised learning techniques that leverage unlabeled examples during training. Many semi-supervised learning algorithms rely on some form of self-labeling. In these approaches, the model acts as both a "teacher" that makes predictions about unlabeled examples and a "student" that is trained on the predictions. As the teacher and the student have the same parameters, these methods require an additional mechanism for the student to benefit from the teacher's outputs. One approach that has enjoyed recent success is adding noise to the student's input BID0 BID50. The loss between the teacher and the student becomes a consistency cost that penalizes the difference between the model's predictions with and without noise added to the example. This trains the model to give consistent predictions to nearby data points, encouraging smoothness in the model's output distribution with respect to the input. In order for the student to learn effectively from the teacher, there needs to be a sufficient difference between the two. However, simply increasing the amount of noise can in unrealistic data points sent to the student. Furthermore, adding continuous noise to the input makes less sense when the input consists of discrete tokens, such in natural language processing. We address these issues with a new method we call Cross-View Training (CVT). Instead of only training the full model as a student, CVT adds auxiliary softmax layers to the model and also trains them as students. The input to each student layer is a sub-network of the full model that sees a restricted view of the input example (e.g., only seeing part of an image), an idea reminiscent of cotraining BID1. The full model is still used as the teacher. Unlike when using a large amount of input noise, CVT does not unrealistically alter examples during training. However, the student layers can still learn from the teacher because the teacher has a better, unrestricted view of the input. 
Meanwhile, the student layers improve the model's representations (and therefore the teacher) as they learn to make accurate predictions with a limited view of the input. Our method can be easily combined with adding noise to the students, but works well even when no noise is added. We propose variants of our method for Convolutional Neural Network (CNN) image classifiers, Bidirectional Long Short-Term Memory (BiLSTM) sequence taggers, and graph-based dependency parsers. For CNNs, each auxiliary softmax layer sees a region of the input image. For sequence taggers and dependency parsers, the auxiliary layers see the input sequence with some context removed. For example, one auxiliary layer is trained to make predictions without seeing any tokens to the right of the current one. We first evaluate Cross-View Training on semi-supervised CIFAR-10 and semi-supervised SVHN. When combined with Virtual Adversarial Training BID39, CVT improves upon the current state-of-the-art on both datasets. We also train semi-supervised models on five tasks from natural language processing: English dependency parsing, combinatory categorial grammar supertagging, named entity recognition, text chunking, and part-of-speech tagging. We use the 1 billion word language modeling benchmark BID3 as a source of unlabeled data. CVT works substantially better than purely supervised training, resulting in models that improve upon or are competitive with the current state-of-the-art on every task. We consider these results particularly important because many recently proposed semi-supervised learning methods work best on continuous inputs and have only been evaluated on vision tasks BID0 BID50 BID26 BID59. In contrast, CVT can handle discrete inputs such as language very effectively. Semi-supervised learning in general has been widely studied BID2. Early approaches to deep semi-supervised learning pre-train neural models on unlabeled data, which has been successful for applications in computer vision BID21 BID28 and natural language processing BID7 BID46. More recent work incorporates generative models based on autoencoders BID22 BID47 or Generative Adversarial Networks BID55 BID51 into the training. Self-Training. One of the earliest approaches to semi-supervised learning is self-training BID52 BID11. Initially, a classifier is trained on labeled data only. In each subsequent round of training, the classifier, acting as a "teacher," labels some of the unlabeled data and adds it to the training set. Then, acting as a "student," it is retrained on the new training set. The new examples added each round act as noisy "pseudo labels" BID29 that the model can learn from. Many recent approaches train the student with soft targets from the teacher's output distribution rather than a hard label, making the procedure more akin to knowledge distillation BID16. Consistency Training and Distributional Smoothing. Recent works add noise to the student's input BID0 BID50. This trains the model to give consistent predictions to nearby data points, encouraging distributional smoothness in the model. Inspired by the success of adversarial training BID12, BID37 extend this idea by adversarially selecting the perturbation to the input. Other approaches focus on improving the targets provided by the teacher by tracking an exponential moving average of its predictions BID26 or its weights BID59. Our method is complementary to these previous approaches, and can be combined with them effectively. Co-Training.
Co-Training BID1 BID41 trains two models with disjoint views of the input. On unlabeled data, each one acts as a "teacher" for the other model. In contrast, our approach trains a single unified model where auxiliary prediction layers see different, but not necessarily independent, views of the input. Auxiliary Prediction Layers. Another way of leveraging unlabeled data is through the addition of auxiliary "self-supervised" losses. These approaches train auxiliary prediction layers on tasks where performance can be measured without human-provided labels. Previous work has jointly trained image classifiers with tasks like relative position and colorization BID9, sequence taggers with language modeling BID48, and reinforcement learning agents with predicting changes in the environment BID20. Unlike these approaches, our auxiliary losses are based on self-labeling, not labels deterministically constructed from the input. Data Augmentation. Data augmentation, such as random translations or crops of input images, bears some similarity to our method in that it also exposes the model to different views of input examples. Data augmentation has become a common practice for both supervised and semi-supervised training of image classifiers BID54. We first provide a general description of Cross-View Training. We then present specific constructions for auxiliary prediction layers that work well for image classification, sequence tagging, and dependency parsing. We use D_l = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} to represent a labeled dataset and D_ul = {x_1, x_2, ..., x_M} to represent an unlabeled dataset. We use p_θ(y|x_i) to denote the output distribution over classes produced by a model with parameters θ on input x_i. Our approach uses a standard cross-entropy loss over the labeled data: L_sup(θ) = (1/|D_l|) Σ_{(x_i, y_i) ∈ D_l} CE(y_i, p_θ(y|x_i)), where CE denotes cross-entropy. On unlabeled data, a popular approach is to add a consistency cost encouraging distributional smoothness in the model. First, the model produces soft targets for the current example: ŷ_i = p_θ(y|x_i). The model is then trained to minimize the consistency cost L_consistency(θ) = E_{x_i ∼ D_ul}[D(ŷ_i, p_θ(y|x_i + η))], where D is a distance function (we use KL divergence) and η is a perturbation to the input that can be chosen randomly or adversarially. As is common in prior work, we hold the teacher's prediction ŷ_i fixed during training (i.e., we don't back-propagate through it) so the student learns to imitate the teacher, but not vice versa. Cross-View Training adds k additional prediction layers p_θ^1, ..., p_θ^k to the model. Each prediction layer p_θ^j takes as input an intermediate representation h^j(x_i) produced by the model. It outputs a distribution over labels, usually with a softmax layer (an affine transformation followed by a softmax activation function) applied to this representation: p_θ^j(y|x_i) = SML(h^j(x_i)). At test time, only the main prediction layer p_θ is used. Each h^j is chosen such that it only uses a part of each input x_i; the particular choice can depend on the task and model architecture. We propose variants for CNN image classifiers, BiLSTM sequence taggers, and graph-based dependency parsers in sections 3.2, 3.3, and 3.4; our dependency parsing models use auxiliary layers analogous to the "forward" and "backward" sequence tagging ones. We add the distances between the output distributions of the teacher and the auxiliary students to the consistency loss, resulting in a cross-view consistency (CVC) loss: L_CVC(θ) = E_{x_i ∼ D_ul}[D(ŷ_i, p_θ(y|x_i + η)) + λ_1 Σ_{j=1}^{k} D(ŷ_i, p_θ^j(y|x_i + η))]. We combine the supervised and CVC losses into the total loss, L = L_sup + λ_2 L_CVC, and minimize it with stochastic gradient descent.
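To make the loss above concrete, the following is a minimal NumPy sketch of the cross-view consistency term for a single unlabeled example; the function and variable names, the toy class count, and the choice of k = 2 auxiliary layers are illustrative assumptions, not part of the original method.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two categorical distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def cvc_loss(teacher_probs, student_probs, aux_student_probs, lambda1=1.0):
    """Cross-view consistency loss for one unlabeled example.

    teacher_probs:     soft targets y_hat = p_theta(y|x_i), held fixed (no gradient)
    student_probs:     p_theta(y|x_i + eta), the full model on the perturbed input
    aux_student_probs: list of p^j_theta(y|x_i + eta) from the k auxiliary layers
    """
    loss = kl(teacher_probs, student_probs)                      # ordinary consistency term
    loss += lambda1 * sum(kl(teacher_probs, q) for q in aux_student_probs)
    return loss

# Toy example with 4 classes and k = 2 auxiliary prediction layers.
y_hat = np.array([0.7, 0.1, 0.1, 0.1])          # teacher soft targets (fixed)
full = np.array([0.6, 0.2, 0.1, 0.1])           # full model on the perturbed input
aux = [np.array([0.5, 0.3, 0.1, 0.1]),          # restricted-view students
       np.array([0.4, 0.2, 0.2, 0.2])]
print(cvc_loss(y_hat, full, aux, lambda1=1.0))
```

In a real model the teacher distribution would be computed on the clean input and treated as a constant during back-propagation, exactly as described above.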
At each step, L sup is computed over a minibatch of labeled examples and L CVC is computed over a minibatch of unlabeled examples. λ 1 and λ 2 are hyperparameters controlling the strength of the auxiliary prediction layers and the strength of the unsupervised loss. For all experiments we set λ 1 = k and λ 2 = 1 unless indicated otherwise. See FIG0 for an illustration of the training procedure. Although adding noise or an adversarial perturbation to the input generally improves , L CVC can be trained without this enhancement (i.e., setting η = 0). In this case, the first term inside the expectation disappears (the student will exactly match the teacher, so the distance is zero). In contrast, L consistency requires a nonzero η to make the student and teacher output different distributions. In most neural networks, a few additional softmax layers is computationally cheap compared to the portion of the model building up representations (such as a CNN or RNN). Therefore our method contributes little overhead to training time over consistency training. CVT does not change inference time because the auxiliary layers are only used during training. Our image recognition models are based on Convolutional Neural Networks, which produce a set of features H(x i) ∈ R n×n×d from an image x i. The first two dimensions of H index into the spatial coordinates of feature vectors and d is the size of the feature vectors. For shallower CNNs, a particular feature vector corresponds to a region of the input image. For example, H 0,0 would be a d-dimensional vector of features extracted from the upper left corner. For deeper CNNs, a particular feature vector would be extracted from the whole image, but still only use a "region" of the representations from an earlier layer. The CNNs in our experiment are all in the first category. The primary prediction layer for our CNNs take as input the mean of H over the first two dimensions, which in a d-dimensional vector that is fed into a softmax layer: p θ (y|x i) = SML(global average pool(H)).We add n 2 auxiliary softmax layers to the top of the CNN. The jth layer takes a single feature vector as input, as shown in the left of FIG1: p j θ (y|x i) = SML(H j/n,j mod n). We also experimented with adding auxiliary softmaxes to the outputs of earlier layers in the CNN, but found this did not improve performance. In sequence tagging, each example (DISPLAYFORM0 We assume an L-layer bidirectional RNN sequence tagging model, which has become standard for many sequence tagging tasks BID13 BID14 . Each layer runs an RNN such as an LSTM BID18 in the forward direction (taking x t i as input at each step t) and the backward direction (taking x T −t+1 i as input at each step) and concatenates the . A softmax layer on top of the outputs of the last BiRNN layer, DISPLAYFORM1 The auxiliary softmax layers take DISPLAYFORM2, the outputs of the forward and backward RNNs in the first BiRNN layer, as inputs. We add the following four softmax layers to the model (see the right of FIG1): DISPLAYFORM3 The "forward" and "backward" prediction layers use the RNN's current output to predict the current token. The "future" and "past" layers use the RNN's previous output (or, equivalently, they predict the label for the next token). The forward layer makes each prediction without seeing the right context of the current token. The future layer makes each prediction without the right context or the current token itself. 
Therefore it works like a neural language model that, instead of predicting which token comes next in the sequence, predicts which class of token comes next in the sequence. In a dependency parse, words in a sentence are treated as nodes in a graph. Typed directed edges connect the words, forming a tree structure describing the syntactic structure of the sentence. In particular, each word x To give a specific example, in the sentence "The small dog barked", the correct label for "small" would be the edge ("dog", "small", adjectival-modifier).We use a neural graph-based dependency parser similar to the one from BID10. It first runs a BiRNN encoder over the sentence as described in section 3.3, producing a sequence of DISPLAYFORM0 is passed through two separate multilayer perceptrons, one producing a representation for x t i as a head word and one producing a representation for it as a dependent. A bilinear classifier applied to these representations produces a score for each candidate edge. Lastly, these scores are passed through a softmax layer to produce probabilities. Mathematically, the probability of an edge is given as p θ ((u, t, r) DISPLAYFORM1 Where s is the scoring function s(z 1, z 2, r) = MLP head (z 1)(W r + W)MLP dep (z 2). The bilinear classifier uses a weight matrix W r specific to the candidate relation as well as a weight matrix W shared across all relations. We add four auxiliary prediction layers to our model for cross-view training: DISPLAYFORM2 Each auxiliary layer has some missing context (not seeing either the preceding or following words) for the candidate head and candidate dependent. All the parameters for the scoring function of each auxiliary prediction layer are layer-specific. To validate our approach, we evaluate Cross-View Training on two semi-supervised learning benchmarks. These discard most of the labels from standard image image recognition datasets to artificially make them semi-supervised. As a sterner test of our approach, we also apply CVT to five tasks from Natural Language Processing (NLP) using hundreds of millions of unlabeled sentences for semi-supervised learning. Data. We experiment on two semi-supervised image recognition benchmarks. These are constructed from the CIFAR-10 (BID23) and Street View House Numbers (SVHN) BID40 datasets. Following previous work, we make the datasets semi-supervised by only using the provided labels for a subset of the examples in the training set; the rest are treated as unlabeled examples. Model. We use the convolutional neural network from Miyato et al. We add 36 auxiliary softmax layers to the 6 × 6 collection of feature vectors produced by the CNN. Each auxiliary layer sees a patch of the image ranging in size from 21 × 21 pixels (the corner) to 29 × 29 pixels (the center) of the 32 × 32 pixel images. We optimize L with λ 1 = 1 and each minibatch consisting of 32 labeled and 128 unlabeled examples. Miyato et al. use Virtual Adversarial Training (VAT), minimizing L consistency with the input perturbation η chosen adversarially. We train our cross-view models (which instead use L CVC) both with and without this adversarial noise. We report with and without using data augmentation (random translations for SVHN and random translations and horizontal flipping for CIFAR-10) in Table 1.Results. CVT works well as semi-supervised learning method without any noise being added to the student. 
When random noise is added, it performs close to VAT (the standard-deviation-based confidence intervals intersect) while training much faster (requiring only one backwards pass for each training minibatch, while VAT requires an additional one to compute the adversarial perturbation). Our method can easily be combined with VAT, resulting in further improvements and state-of-the-art results. The benefit of CVT is smaller when data augmentation is applied, perhaps because random translations of the input expose the model to different "views" in a similar manner as with CVT. We believe the gains on SVHN are smaller than on CIFAR-10 because the digits in SVHN occur in the center of the image, so the auxiliary softmaxes seeing the sides and corners do not learn as effectively. We also note that incorporating auxiliary softmax layers into the supervised loss L sup does not improve results (see Appendix C). This indicates that the benefit of CVT comes from the improved self-training mechanism, not the additional losses regularizing the model. Model Analysis. To understand why CVT produces better results, we compare the behavior of the VAT and CVT (with adversarial noise) models trained on CIFAR-10. First, we record the average value of each feature vector produced by the CNNs when they run over the test set. As shown in the left of FIG5, the CVT model has higher activation strengths for the feature vectors corresponding to the edges of the image. We hypothesize that the VAT model fits the data while primarily using the feature vectors from the center of the image. [Table 1: Error rates on semi-supervised learning benchmarks. We report means and standard deviations from 5 runs. + after a dataset means data augmentation was applied. *We found Miyato et al.'s implementation produces slightly different results than the ones they report in their paper.] In contrast, the model with CVT must learn meaningful representations for the edge regions in order to train the corresponding auxiliary softmax layers. As these feature vectors are more useful, their magnitudes become larger, so they contribute more to the final representation produced by the global average pool. To compare the discriminative power of the feature vectors, we freeze the weights of the CNNs and add auxiliary softmax layers that are trained from scratch. We then measure the accuracies of the added layers (see the center and right of FIG5). Unsurprisingly, the VAT model, which only learns representations that will be useful after the average pool, has much lower accuracies from individual feature vectors. The difference is particularly striking in the sides and corners, where CVT accuracies are around 50% higher (they are about 25% higher in the center). This finding further indicates that CVT is improving the model's representations, particularly for the outside parts of images. Data. Although the widely-used benchmarks in the previous section provide validation of our approach, they are small datasets that are artificially made to be semi-supervised. In this section, we show CVT is successful on well-studied tasks where semi-supervised learning is rarely applied. In particular, we train semi-supervised models on the following NLP tasks: • Combinatory Category Grammar (CCG) Supertagging: Labeling words with CCG supertags: lexical categories that encode information about the predicate-argument structure of the sentence. CCG is widely used in syntactic and semantic parsing.
We use data from CCGBank BID19 and report word-level accuracy.• Text Chunking: Dividing a sentence into syntactically correlated parts (e.g., a noun phrase followed by a verb phrase). We use the CoNLLL-2000 shared task data (Tjong Kim BID60 and report the F1 score over predicted chunks.• Named Entity Recognition (NER): Identifying and classifying named entities (organizations, places, etc.) in a sentence. We use the CoNLL-2003 dataset (Tjong Kim BID61 and report entity-level F1 score.• Part-of-Speech (POS) Tagging: Labeling words with their syntactic categories (e.g., determiner, adjective, etc.). We use the Wall Street Journal (WSJ) portion of the Penn Treebank BID36 and report word-level accuracy.• Dependency Parsing: Inferring a tree-structure describing the syntactic structure of a sentence. We use the Penn Treebank converted to Stanford Dependencies (version 3.3.0) and report unlabeled and labeled attachment score (UAS and LAS).We use the 1 Billion Word Language Model Benchmark BID3 as a pool of unlabeled sentences for semi-supervised learning. Models. We use a CNN-BiLSTM sequence tagging model BID5. The model first represents each word as the sum of a word embedding and the output of a character-level CNN. This sequence of word representations is then fed through two BiLSTM layers and a softmax layer to produce predictions. See Appendix A for details about the model. Our dependency parser uses the same CNN-BiLSTM encoder as our sequence tagger. As described in Section 3.4, a MLP-Bilinear classifier on top of the encoder makes the predictions. Although it is common for dependency parsers to take words and part-of-speech tags as inputs, our model only takes words as inputs. See Appendix B for details about the model. BID38 were able to apply Virtual Adversarial Training to document classification, but we found VAT ineffective for our word-level tasks. Although we experimented with constraining the word embeddings to unit length and adding random or adversarial perturbations to them during training, it did not improve performance. This is perhaps because, unlike with RGB values in an image, words are discrete, so adding noise to their representations is less meaningful. Instead, we add dropout to the student but not the teacher. Recent work BID48 has shown that jointly training a neural language model with sequence taggers improves . We report accuracies with and without this enhancement (training the language model on the unlabeled data). See TAB1 for sequence tagging and TAB3 for dependency parsing . Results. CVT significantly improves over the supervised baseline on all tasks, both with and without the auxiliary language modeling objective. We report a new state-of-the-art for CCG-supertagging and pure dependency parsing (i.e., without using constituency parse annotations) and competitive with the current state-of-the-art on the other tasks. Our dependency parsing is particularly important because our model does not include part-of-speech tags as input, which other works have shown to improve performance notably BID10 BID4. Of the prior listed in the BID31 b c BID30 d BID62 e BID15 f g * The full TagLM model has many times more parameters than ours. TagLM-2048 is of more comparable size to our models, although still larger. TAB1: Results for sequence tagging tasks. We report the means and standard deviation of 5 runs. "Baseline" trains with L sup, "Consistency" trains with" L sup + L consistency, and "CVT" trains with L sup + L CVC. 
+LM means language modeling is added as an auxiliary task on the unlabeled data. Depparse UAS Depparse LAS BID15 94.67 92.90 BID35 94.9 93.0 BID53 95.33 - BID10 95 BID25, and BID32 because these train constituency parsers and convert the system outputs to dependency parses. They produce higher scores, but have access to more information during training and do not apply to datasets without constituency annotations. Although the large TagLM model is competitive with ours for Chunking and NER, reducing the size of TagLM to having 2048 hidden units already causes it to perform worse than our model. Although there has been a large body of work successfully applying consistency-cost-based learning to vision tasks, we find it does not provide the same gains for NLP. Training a model with the consistency loss L consistency did not improve over the baseline for sequence tagging and only slightly improved over the baseline for dependency parsing. This is perhaps due to the lack of benefit from adding noise when the input consists of discrete tokens as discussed earlier. CVT, on the other hand, works well as a semi-supervised learning method for NLP. mance, but the "future" and "past" layers are more beneficial than the "forward" and "backward" ones, perhaps because theses provide a more distinct and challenging view of the input. Training Larger NLP Models. Most sequence taggers and dependency parsers in prior use work small LSTMs (hidden state sizes of at most 500 units) because larger models yield little to no gains in performance BID49. We found our own supervised approaches and, to a lesser extent, our models when only using language modeling as the auxiliary task to also not benefit from increasing the model size. In contrast, when using CVT accuracy scales much better with model size (see FIG6 . This suggests the appropriate semi-supervised learning methods may enable the development of larger, more sophisticated models for natural language processing tasks with limited amounts of labeled data.found this to slightly improve accuracy and significantly reduce the variance in accuracy between models trained with different random initializations. For Chunking and Named Entity Recognition, we use a BIOES tagging scheme. The model is trained using SGD with momentum BID45 BID57 . Word embeddings are initialized with GloVe vectors BID42 We use the same 2-layer CNN-BiLSTM encoder and the same hyperparameters as listed in Appendix A. The MLPs used to produce representations for candidate head and dependent words have one hidden layer of size 512 with a ReLU activation and an output layer of size 256. We apply dropout to the output of the hidden layer. We omit punctuation from evaluation, which is standard practice for the PTB-SD 3.3.0 dataset. In initial experiments, we explored whether cross-view losses could benefit purely supervised classifiers. To do this, we trained models with the following objective: DISPLAYFORM0 See Section 3.1 for an explanation of the notation. We hoped that adding auxiliary softmax layers with different views of the input would act as a regularizer on the model. However, we found little to no benefit from this approach. For sequence tagging, improved slightly on CCG and POS but degraded on NER and Chunking. For image recognition, we augmented WideResNet BID63 with auxiliary softmax layers and evaluated it on CIFAR-10 and CIFAR-100. 
On both datasets, the augmented model performed slightly worse (by ∼0.2% on CIFAR-10 and ∼0.9% on CIFAR-100). We also experimented with using L sup-cv instead of L sup on semi-supervised CIFAR-10 and CIFAR-10+. Surprisingly, it (slightly) decreased performance for all of the methods we experimented with: supervised training, VAT, CVT, and CVT with adversarial noise. We note we only tried these experiments with λ 1 = 1, but this value of λ 1 did work well in the semi-supervised setting. These negative results suggest that the gains from CVT come from the improved self-training mechanism, not from the additional prediction layers regularizing the model.
Self-training with different views of the input gives excellent results for semi-supervised image recognition, sequence tagging, and dependency parsing.
551
scitldr
I show how it can be beneficial to express Metropolis accept/reject decisions in terms of comparison with a uniform value, and to then update this uniform value non-reversibly, as part of the Markov chain state, rather than sampling it independently each iteration. This provides a small improvement for random walk Metropolis and Langevin updates in high dimensions. It produces a larger improvement when using Langevin updates with persistent momentum, giving performance comparable to that of Hamiltonian Monte Carlo (HMC) with long trajectories. This is of significance when some variables are updated by other methods, since if HMC is used, these updates can be done only between trajectories, whereas they can be done more often with Langevin updates. This is seen for a Bayesian neural network model, in which connection weights are updated by persistent Langevin or HMC, while hyperparameters are updated by Gibbs sampling. A decision to accept or reject a Metropolis proposal to move from x to x* can be done by checking whether u < π(x*)/π(x), where π is the density function for the distribution being sampled, and u is a uniform random variable. Standard practice is to generate a value for u independently for each decision. I show here that it can be beneficial to instead update u each iteration without completely forgetting the previous value, using a non-reversible method. Doing non-reversible updates to u will not change the fraction of proposals that are accepted, but can result in acceptances being clustered together (with rejections similarly clustered). This can be beneficial, for example, when rejections cause reversals of direction, as in a variant of Langevin updates with persistent momentum. For any MCMC method, we can augment the variable of interest, x, with density π(x), by a variable s, whose conditional distribution given x is uniform over [0, π(x)]. The resulting joint distribution for x and s is uniform over the region where 0 < s < π(x). This is the view underlying "slice sampling" (Neal 2003), in which s is introduced temporarily, by sampling uniformly from [0, π(x)], and then forgotten once a new x has been chosen. Metropolis updates can also be viewed in this way, with the new x found by accepting or rejecting a proposal, x*, according to whether π(x*) > s, with s newly sampled from [0, π(x)]. However, it is valid to instead retain s in the state between updates, utilizing its current value for accept/reject decisions, and updating this value when desired by any method that leaves invariant the uniform distribution on [0, π(x)] (since this is the conditional distribution for s given x). Equivalently, one can retain in the state a value u whose distribution is uniform over [0, 1], independent of x, with u corresponding to s/π(x). Accept/reject decisions are then made by checking whether u < π(x*)/π(x). Note, however, that if x* is accepted, u must then be updated to u π(x)/π(x*), which corresponds to s not changing. Here, I will consider non-reversible updates for u, which translate it by some fixed amount, δ, and perhaps add some noise, reflecting off the boundaries at 0 and 1. It is convenient to express such an update with reflection in terms of a variable v that is uniform over [−1, +1], and define u = |v|. An update for v can then be done by translating it by δ, perhaps with some added noise, and mapping the result back into [−1, +1] so that u = |v| reflects off the boundaries at 0 and 1 (a concrete sketch is given below). For any δ and any distribution for the noise (not depending on the current value of v), this update leaves the uniform distribution over [−1, +1] invariant.
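Since the concrete update steps for v are not reproduced above, the following is a hedged NumPy sketch of one way such an update, and the corresponding accept/reject test with u = |v|, could be implemented; the wrap-around boundary handling, step size δ, noise scale, and the Gaussian example target are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_v(v, delta=0.05, noise_scale=0.01):
    """Translate v by delta (plus a little noise) and wrap around on [-1, 1].
    Since u = |v|, this makes u reflect off the boundaries at 0 and 1, and it
    leaves the uniform distribution on [-1, 1] invariant."""
    v = v + delta + noise_scale * rng.standard_normal()
    if v > 1.0:
        v -= 2.0
    elif v < -1.0:
        v += 2.0
    return v

def metropolis_update(x, v, log_pi, propose):
    """One Metropolis update that accepts when u < pi(x*)/pi(x), with u = |v|."""
    x_star = propose(x)
    log_ratio = log_pi(x_star) - log_pi(x)
    u = abs(v)
    if log_ratio >= 0.0 or u < np.exp(log_ratio):      # accept
        v *= np.exp(-log_ratio)   # rescale v by pi(x)/pi(x*) so that s is unchanged
        return x_star, v
    return x, v                                        # reject

# Example: random-walk Metropolis on a standard normal target.
log_pi = lambda x: -0.5 * float(x @ x)
propose = lambda x: x + 0.3 * rng.standard_normal(x.shape)
x, v = np.zeros(10), rng.uniform(-1.0, 1.0)
for _ in range(2000):
    v = update_v(v)
    x, v = metropolis_update(x, v, log_pi, propose)
```

Note that on acceptance v is rescaled by π(x)/π(x*), which keeps s = |v|π(x) unchanged, as required for the reverse move.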
The full state consists of x and v, with x having density π(x) and, independently, v being uniform over [−1, +1], which corresponds to the conditional distribution of s = |v|π(x) given x being uniform over [0, π(x)]. If a proposed move from x to x* is accepted, we change v to v π(x)/π(x*), which leaves s unchanged (allowing the reverse move). Because of this change to v on acceptance, when π(x) varies continuously, it may not be necessary to include noise in the update for v, but if π(x) has only a finite number of possible values, adding noise may be necessary to ensure ergodicity. The hope with these non-reversible updates is that u will move slowly (if δ and the noise amount are small) between values near 0, where acceptance is easy, and values near 1, where acceptance is harder. (But note that u may change in either direction when proposals are accepted, though s will not.) Non-reversibly updating u will not change the overall acceptance rate, but it is expected that acceptances and rejections will become more clustered, with an accepted proposal more likely to be followed by another acceptance, and a rejected proposal more likely to be followed by another rejection. We might wish to intermingle Metropolis updates for x that use v to decide on acceptance with other sorts of updates for x, for example Gibbs sampling updates, or Metropolis updates accepted in the standard fashion. We can do these updates while ignoring v, and then simply resume use of v afterwards, since v is independent of x. We could also include several independent v variables in the state, using different v values for different classes of updates, but this generalization is not explored here. A small benefit from non-reversible updating of u can be seen with simple random-walk Metropolis updates. Such updates propose x* by adding zero-mean Gaussian noise with standard deviation σ to the current x, and then accept or reject as described above. Here are results when sampling a 40-dimensional Gaussian distribution with identity covariance matrix: the values for σ used were the stepsizes shown above scaled down by 40^{1/2}, and the autocorrelation times (one plus twice the sum of autocorrelations) shown are for groups of 40 iterations (hence the single-iteration autocorrelation time is about 40 times larger). When estimating the mean of a single coordinate, little difference is seen between the standard method and using a non-reversible update for u. But for estimating the mean of the log of the probability density, the non-reversible method is about 1.14 times better. A similar small benefit is seen for simple Langevin updates. One possible explanation for the improvement is that, as has previously been noted, the performance of Metropolis methods in high dimensions is limited by their ability to sample different values for the log of the density. In D dimensions, the log density typically varies over a range proportional to D^{1/2}. A Metropolis update will typically change the log density only by about one: larger decreases in the log density are unlikely to be accepted, and it follows from reversibility that increases in the log density of much more than one must also be rare (once equilibrium has been reached). Since standard Metropolis methods are reversible, these changes of order one will occur in a random walk, and so around D steps will be needed to traverse the range of log densities of about D^{1/2}, limiting the speed of mixing. The gain seen from using non-reversible updates for u may come from helping with this problem.
When u is small, few proposals will be rejected, and the chain will tend to drift towards smaller values for the log density, with the opposite behaviour at times when u is near one. This could reduce the random walk nature of changes in the log density. I obtained more interesting results when applying the non-reversible acceptance method to the one-step, non-reversible version of Hamiltonian Monte Carlo (Duane et al. 1987) due to Horowitz. This method is a "persistent" form of "Langevin" update; see the review literature for more discussion of these methods. Hamiltonian Monte Carlo works in an extended state space with momentum variables, p, newly sampled each iteration. It proposes a new value for (x, p) by simulating Hamiltonian dynamics with some number of "leapfrog" steps (and then negating p, so the proposal is reversible). A leapfrog step updates p by a half step using the gradient of the log density, updates x by a full step using this new p, and then updates p by another half step. In the limit as the stepsize η goes to zero, the proposed point will always be accepted. If many leapfrog steps are used, the proposed point can be distant from the current point, avoiding the slowdown from doing a random walk with small steps. In Horowitz's method, only one leapfrog step is done, but a trick is used so that these steps nevertheless usually keep going in the same direction, except on a rejection. These updates operate as follows: 1) Partially refresh the momentum, setting p′ = αp + (1 − α²)^{1/2} n, with n drawn from a standard normal distribution, and let x′ = x. 2) Find (x*, p*) from (x′, p′) with one "leapfrog" step (as in HMC), with stepsize η, and propose (x*, −p*). 3) Accept or reject this proposal. 4) Negate the momentum. For α near 1, Step 1 only slightly changes p. If Step 3 accepts, the negation in the proposal is canceled by the negation in Step 4. But a rejection will reverse p, leading the chain to almost double back on itself. Unfortunately, even with this non-reversibility trick, Horowitz's method is not as efficient as HMC with long trajectories. To reduce the rejection rate, and hence random-walk inducing reversals of direction, a small, inefficient step size (η) is needed. But a higher rejection rate would be tolerable if rejections cluster in time, producing long runs of no rejections. For example, rejection-free runs of 20, 0, 20, 0, ... are better than rejection-free runs of 10, 10, 10, 10, ..., since N of the former runs will move via a random walk a distance proportional to 20(N/2)^{1/2} ≈ 14 N^{1/2}, whereas N of the latter runs will move only a distance proportional to 10 N^{1/2}. I tried sampling with the standard persistent Langevin method and with persistent Langevin updates with non-reversible updating of u, on a multivariate Gaussian distribution consisting of 16 independent pairs of variables having variances of 1 and correlations of 0.99 (i.e., a 32-dimensional Gaussian with a block-diagonal covariance matrix). The plots below show the results. The values for η used were the stepsizes shown scaled down by 32^{1/6}. The curves in different colours are for α ∈ {0.1 η, ..., 0.8 η}. The autocorrelation times shown are for groups of 31 iterations. The non-reversible method is 1.62 times better at estimating the mean log probability. The right plots show that rejections are indeed clustered when non-reversible updates are used, which reduces random walks, explaining the improvement. I tried using the persistent Langevin method with non-reversible updates for u to sample the posterior distribution of a Bayesian neural network model. Such models (Neal 1995) typically have hyperparameters controlling the variance of groups of weights in the network. It is convenient to use Gibbs sampling updates for these hyperparameters, alternating such updates with HMC updates for the network weights.
However, when long trajectories are used for HMC, as is desirable to reduce random-walk behaviour, the Gibbs sampling updates for hyperparameters are done infrequently. Using persistent Langevin updates for weights would allow hyperparameters to be updated more frequently, perhaps speeding overall convergence. We hope to make this work better by non-reversibly updating u. I tested this approach on a binary classification problem, with 5 inputs and 300 training cases. A network with one hidden layer of 12 tanh hidden units was used. Only two of the inputs for this problem were relevant, with two more being slightly noisy versions of the two relevant inputs, and one input being independent noise. Five separate hyperparameters controlled the variance of weights out of each input. For the best-tuned persistent Langevin method non-reversibly updating u, the average autocorrelation time for the four plausibly-relevant input hyperparameters was 1.25 times smaller than for the best-tuned HMC method. This is an encouraging preliminary result.
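As a companion to the description of the persistent Langevin updates above, here is a hedged NumPy sketch of a single Horowitz-style update (partial momentum refresh, one leapfrog step, and the accept/reject decision made against the persistent uniform value u = |v|); the exact ordering of the v update within the iteration, the constants, and the standard normal example target are assumptions made for illustration, not settings taken from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):                     # example target: standard normal
    return -0.5 * float(x @ x)

def grad_log_pi(x):
    return -x

def persistent_langevin_update(x, p, v, eta=0.1, alpha=0.95, delta=0.03):
    """One persistent-Langevin (single leapfrog) update, with the accept/reject
    decision made against the persistent uniform value u = |v|."""
    # partial momentum refresh: only slightly changes p when alpha is near 1
    p = alpha * p + np.sqrt(1.0 - alpha ** 2) * rng.standard_normal(p.shape)
    # one leapfrog step
    p_half = p + 0.5 * eta * grad_log_pi(x)
    x_star = x + eta * p_half
    p_star = p_half + 0.5 * eta * grad_log_pi(x_star)
    # non-reversible update of v (wrap around on [-1, 1]); u = |v| reflects off 0 and 1
    v = v + delta
    if v > 1.0:
        v -= 2.0
    elif v < -1.0:
        v += 2.0
    # accept/reject on the joint density pi(x) * N(p; 0, I), using u = |v|
    log_ratio = (log_pi(x_star) - 0.5 * float(p_star @ p_star)) \
              - (log_pi(x) - 0.5 * float(p @ p))
    if log_ratio >= 0.0 or abs(v) < np.exp(log_ratio):
        v *= np.exp(-log_ratio)    # keep s unchanged on acceptance
        return x_star, p_star, v
    return x, -p, v                # a rejection reverses the momentum

x, p, v = np.zeros(32), rng.standard_normal(32), rng.uniform(-1.0, 1.0)
for _ in range(5000):
    x, p, v = persistent_langevin_update(x, p, v)
```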
A non-reversible way of making accept/reject decisions can be beneficial
552
scitldr
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES). Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies. We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries. Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data. Our focus in this paper is on meta-learning in reinforcement learning (RL), where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world. A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (;, a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment. We provide a formal description of MAML in Section 2. MAML has proven to be successful for many applications. However, implementing and running MAML continues to be challenging. One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, ) and further analysis of the credit assignment mechanism in MAML . Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML , through careful adaptive hyperparameter tuning and learning rate annealing . To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms. We provide a detailed discussion of ES in Section 3.1. ES has several advantages: 1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives. This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details). 2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation. It does not use backpropagation, so it can be run on CPUs only. 3. ES is highly flexible with different adaptation operators (Section 3.3). 4. ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3). ES is also capable of learning linear and other compact policies (Section 4.2). On the point, a feature of ES algorithms is that exploration takes place in the parameter space. Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model. 
In the context of MAML, the notions of "exploration" and "task identification" have thus been shifted to the parameter space instead of the action space. This distinction plays a key role in the stability of the algorithm. One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies. Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode. While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles. This paper is organized as follows. In Section 2, we give a formal definition of MAML, and discuss related works. In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML. In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4). Additional material can be found in the Appendix. We first discuss the original formulation of MAML . Let T be a set of reinforcement learning tasks with common state and action spaces S, A, and P(T) a distribution over T. In the standard MAML setting, each task T i ∈ T has an associated Markov Decision Process (MDP) with transition distribution q i (s t+1 |s t, a t), an episode length H, and a reward function R Ti which maps a trajectory τ = (s 0, a 1, ..., a H−1, s H) to the total reward R(τ). A stochastic policy is a function π: S → P(A) which maps states to probability distributions over the action space. A deterministic policy is a function π: S → A. Policies are typically encoded by a neural network with parameters θ, and we often refer to the policy π θ simply by θ. The MAML problem is to find the so-called MAML point (called also a meta-policy), which is a policy θ * that can be'adapted' quickly to solve an unknown task T ∈ T by taking a (few) 1 policy gradient steps with respect to T. The optimization problem to be solved in training (in its one-shot version) is thus of the form: where: is called the adapted policy for a step size α > 0 and P T (·|η) is a distribution over trajectories given task T ∈ T and conditioned on the policy parameterized by η. Standard MAML approaches are based on the following expression for the gradient of the MAML objective function to conduct training: We collectively refer to algorithms based on computing using policy gradients as PG-MAML. Since the adaptation operator U (θ, T) contains the policy gradient is second-order in θ: Correctly computing the gradient with the term using automatic differentiation is known to be tricky. Multiple authors (; ;) have pointed out that the original implementation of MAML incorrectly estimates the term, which inadvertently causes the training to lose'pre-adaptation credit assignment'. Moreover, even when correctly implemented, the variance when estimating can be extremely high, which impedes training. To improve on this, extensions to the original MAML include ProMP , which introduces a new low-variance curvature (LVC) estimator for the Hessian, and T-MAML , which adds control variates to reduce the variance of the unbiased DiCE estimator . 
However, these are not without their drawbacks: the proposed solutions are complicated, the variance of the Hessian estimate remains problematic, and LVC introduces unknown estimator bias. Another issue that arises in PG-MAML is that policies are necessarily stochastic. However, randomized actions can lead to risky exploration behavior when computing the adaptation, especially for robotics applications where the collection of tasks may involve differing system dynamics as opposed to only differing rewards . We explore this further in Section 4.3. These issues: the difficulty of estimating the Hessian term, the typically high variance of ∇ θ J(θ) for policy gradient algorithms in general, and the unsuitability of stochastic policies in some domains, lead us to the proposed method ES-MAML in Section 3. Aside from policy gradients, there have also been biologically-inspired algorithms for MAML, based on concepts such as the Baldwin effect . However, we note that despite the similar naming, methods such as'Evolvability ES' bear little resemblance to our proposed ES-MAML. The problem solved by our algorithm is the standard MAML, whereas aims to maximize loosely related notions of the diversity of behavioral characteristics. Moreover, ES-MAML and its extensions we consider are all derived notions such as smoothings and approximations, with rigorous mathematical definitions as stated below. Formulating MAML with ES allows us to employ numerous techniques originally developed for enhancing ES, to MAML. We aim to improve both phases of MAML algorithm: the meta-learning training algorithm, and the efficiency of the adaptation operator. Evolution Strategies (ES) (; 2014), which recently became popular for RL , rely on optimizing the smoothing of the blackbox function f: R d → R, which takes as input parameters θ ∈ R d of the policy and outputs total discounted (expected) reward obtained by an agent applying that policy in the given environment. Instead of optimizing the function f directly, we optimize a smoothed objective. We define the Gaussian smoothing of The gradient of this smoothed objective, sometimes called an ES-gradient, is given as (see: ): Note that the gradient can be approximated via Monte Carlo (MC) samples: In ES literature the above algorithm is often modified by adding control variates to equation 4 to obtain other unbiased estimators with reduced variance. The forward finite difference (Forward-FD) estimator is given by subtracting the current policy value f (θ), yielding The antithetic estimator is given by the symmetric difference Notice that the variance of the Forward-FD and antithetic estimators is translation-invariant with respect to f. In practice, the Forward-FD or antithetic estimator is usually preferred over the basic version expressed in equation 4. In the next sections we will refer to Algorithm 1 for computing the gradient though we emphasize that there are several other recently developed variants of computing ES-gradients as well as applying them for optimization. We describe some of these variants in Section 3.3 and appendix A.3. A key feature of ES-MAML is that we can directly make use of new enhancements of ES. To formulate MAML in the ES framework, we take a more abstract viewpoint. For each task T ∈ T, let f T (θ) be the (expected) cumulative reward of the policy θ. We treat f T as a blackbox, and make no assumptions on its structure (so the task need not even be MDP, and f T may be nonsmooth). 
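Before stating the MAML objective in this notation, the following is a minimal NumPy sketch of the antithetic ES gradient estimator described above (Algorithm 1 in the text); the quadratic test function and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def es_grad_antithetic(f, theta, sigma=0.1, k=50, rng=None):
    """Antithetic ES estimate of the gradient of the Gaussian smoothing of f.

    Averages (f(theta + sigma g) - f(theta - sigma g)) / (2 sigma) * g over
    k i.i.d. Gaussian directions g ~ N(0, I).
    """
    rng = rng or np.random.default_rng()
    d = theta.shape[0]
    grad = np.zeros(d)
    for _ in range(k):
        g = rng.standard_normal(d)
        grad += (f(theta + sigma * g) - f(theta - sigma * g)) / (2.0 * sigma) * g
    return grad / k

# Example on a simple quadratic "reward" f(theta) = -|theta - 1|^2.
f = lambda th: -float(np.sum((th - 1.0) ** 2))
theta = np.zeros(10)
for _ in range(200):
    theta += 0.05 * es_grad_antithetic(f, theta, sigma=0.1, k=20)
print(np.round(theta, 2))   # should approach the optimum at 1.0 in each coordinate
```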
The MAML problem is then max_θ J(θ) = E_{T∼P(T)}[f_T(U(θ, T))]. As argued in prior work (see also Section 2), a major challenge for policy gradient MAML is estimating the Hessian, which is both conceptually subtle and difficult to correctly implement using automatic differentiation. The algorithm we propose obviates the need to calculate any second derivatives, and thus avoids this issue. Suppose that we can evaluate (or approximate) f_T(θ) and U(θ, T), but f_T and U(·, T) may be nonsmooth or their gradients may be intractable. We consider the Gaussian smoothing J_σ of the MAML reward, and optimize J_σ using ES methods. The gradient ∇J_σ(θ) is given by ∇J_σ(θ) = (1/σ) E_{T∼P(T)} E_{g∼N(0,I)}[f_T(U(θ + σg, T)) g], and can be estimated by jointly sampling over (T, g) and evaluating f_T(U(θ + σg, T)). This algorithm is specified in the Algorithm 2 box, and we refer to it as (zero-order) ES-MAML. Algorithm 2 (Zero-Order ES-MAML, with a general adaptation operator U(·, T)) takes as data an initial policy θ_0 and a meta step size β; for t = 0, 1, ..., it samples n tasks T_1, ..., T_n and iid vectors g_1, ..., g_n ∼ N(0, I), and updates θ_{t+1} = θ_t + (β/(nσ)) Σ_i f_{T_i}(U(θ_t + σg_i, T_i)) g_i. Algorithm 3 (Zero-Order ES-MAML with ES-Gradient Adaptation) additionally takes an adaptation step size α and a number of queries K, and instantiates U(·, T_i) with an ES-gradient step estimated from K queries inside the same loop. The standard adaptation operator U(·, T) is the one-step task gradient. Since f_T is permitted to be nonsmooth in our setting, we use the adaptation operator U(θ, T) = θ + α∇f^T_σ(θ), acting on its smoothing. Expanding the definition of J_σ, the gradient of the smoothed MAML is then given by ∇J_σ(θ) = (1/σ) E_{T∼P(T)} E_{g∼N(0,I)}[f_T(θ + σg + α∇f^T_σ(θ + σg)) g]. This leads to the algorithm that we specify in Algorithm 3, where the adaptation operator U(·, T) is itself estimated using the ES gradient in the inner loop. We can also derive an algorithm analogous to PG-MAML by applying a first-order method to the MAML reward E_{T∼P(T)} f_T(θ + α∇f_T(θ)) directly, without smoothing. The gradient is given by E_{T∼P(T)}[(I + α∇²f_T(θ)) ∇f_T(θ + α∇f_T(θ))], which corresponds to the policy gradient MAML expression in Section 2 when expressed in terms of policy gradients. Every term in this expression has a simple Monte Carlo estimator (see Algorithm 4 in the appendix for the MC Hessian estimator). We discuss this algorithm in greater detail in Appendix A.1. This formulation can be viewed as the "MAML of the smoothing", compared to the "smoothing of the MAML" which is the basis for Algorithm 3. It is the additional smoothing present in equation 6 which eliminates the gradient of U(·, T) (and hence, the Hessian of f_T). Just as with the Hessian estimation in the original PG-MAML, we find empirically that the MC estimator of the Hessian (Algorithm 4) has high variance, making it often harmful in training. We present some comparisons between Algorithm 3 and Algorithm 5, with and without the Hessian term, in Appendix A.1.2. Note that when U(·, T) is estimated, such as in Algorithm 3, the resulting estimator for ∇J_σ will in general be biased. This is similar to the estimator bias which occurs in PG-MAML because we do not have access to the true adapted trajectory distribution. We discuss this further in Appendix A.2.
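To make Algorithm 3 concrete, here is a minimal NumPy sketch of the zero-order ES-MAML loop with ES-gradient adaptation; the toy task distribution, the Forward-FD inner estimator, and all hyperparameter values are assumptions chosen for illustration rather than the settings used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_grad(f, theta, sigma, k):
    """Forward-FD ES gradient estimate (the variant the text says is usually preferred)."""
    f0 = f(theta)
    g = rng.standard_normal((k, theta.shape[0]))
    vals = np.array([f(theta + sigma * gi) - f0 for gi in g])
    return (vals[:, None] * g).mean(axis=0) / sigma

def adapt(f_T, theta, alpha=0.5, sigma=0.1, K=20):
    """Adaptation operator U(theta, T) = theta + alpha * ES gradient of the smoothed f_T."""
    return theta + alpha * es_grad(f_T, theta, sigma, K)

def es_maml_step(theta, sample_task, n=20, sigma=0.1, beta=0.1):
    """One outer step of zero-order ES-MAML (Algorithm 3 style)."""
    update = np.zeros_like(theta)
    for _ in range(n):
        f_T = sample_task()                          # draw a task T_i
        g = rng.standard_normal(theta.shape[0])      # perturbation g_i
        update += f_T(adapt(f_T, theta + sigma * g)) * g
    return theta + beta / (n * sigma) * update

# Toy task distribution: each task rewards proximity to a task-specific goal c.
def sample_task():
    c = rng.uniform(-2, 2, size=5)
    return lambda th: -float(np.sum((th - c) ** 2))

theta = np.zeros(5)
for _ in range(100):
    theta = es_maml_step(theta, sample_task)
```

Replacing the adapt function with a different operator (for example the hill climbing operator discussed next) changes only that one function, which is the flexibility referred to below.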
DPP sampling is a method of selecting a subset of samples so as to maximize the'diversity' of the subset. It has been applied to ES to select perturbations g i so that the gradient estimator has lower variance (a). The sampling matrix determining DPP sampling can also be data-dependent and use information from the meta-training stage to construct a learned kernel with better properties for the adaptation phase. In the experimental section we show that DPP-ES can help in improving adaptation in MAML. Nondifferentiable operators U (·, T) can be also used in Algorithm 2. One particularly interesting example is the local search operator given by U (θ, T) = argmax{f T (θ): θ − θ ≤ R}, where R > 0 is the search radius. That is, U (θ, T) selects the best policy for task T which is in a'neighborhood' of θ. For simplicity, we took the search neighborhood to be the ball B(θ, R) here, but we may also use more general neighborhoods of θ. In general, exactly solving for the maximizer of f T over B(θ, R) is intractable, but local search can often be well approximated by a hill climbing algorithm. Hill climbing creates a population of candidate policies by perturbing the best observed policy (which is initialized to θ), evaluates the reward f T for each candidate, and then updates the best observed policy. This is repeated for several iterations. A key property of this search method is that the progress is monotonic, so the reward of the returned policy U (θ, T) will always improve over θ. This does not hold for the stochastic gradient operator, and appears to be beneficial on some difficult problems (see Section 4.1). It has been claimed that hill climbing and other genetic algorithms are competitive with gradient-based methods for solving difficult RL tasks ). Another stochastic algorithm approximating local search is CMA-ES (; ;), which performs more sophisticated search by adapting the covariance matrix of the perturbations. The performance of MAML algorithms can be evaluated in several ways. One important measure is the performance of the final meta-policy: whether the algorithm can consistently produce metapolicies with better adaptation. In the RL setting, the adaptation of the meta-policy is also a function of the number K of queries used: that is, the number of rollouts used by the adaptation operator U (·, T). The meta-learning goal of data efficiency corresponds to adapting with low K. The speed of the meta-training is also important, and can be measured in several ways: the number of metapolicy updates, wall-clock time, and the number of rollouts used for meta-training. In this section, we present experiments which evaluate various aspects of ES-MAML and PG-MAML in terms of data efficiency (K) and meta-training time. Further details of the environments and hyperparameters are given in Appendix A.7. In the RL setting, the amount of information used drastically decreases if ES methods are applied in comparison to the PG setting. To be precise, ES uses only the cumulative reward over an episode, whereas policy gradients use every state-action pair. Intuitively, we may thus expect that ES should have worse sampling complexity because it uses less information for the same number of rollouts. However, it seems that in practice ES often matches or even exceeds policy gradients approaches . 
Several explanations have been proposed: In the PG case, especially with algorithms such as PPO, the network must optimize multiple additional surrogate objectives such as entropy bonuses and value functions as well as hyperparameters such as the TDstep number. Furthermore, it has been argued that ES is more robust against delayed rewards, action infrequency, and long time horizons . These advantages of ES in traditional RL also transfer to MAML, as we show empirically in this section. ES may lead to additional advantages (even if the numbers of rollouts needed in training is comparable with PG ones) in terms of wall-clock time, because it does not require backpropagation, and can be parallelized over CPUs. In this section, we present two experiments on environments with very sparse rewards where the meta-policy must exhibit exploratory behavior to determine the correct adaptation. The four corners benchmark was introduced in to demonstrate the weaknesses of exploration in PG-MAML. An agent on a 2D square receives reward for moving towards a selected corner of the square, but only observes rewards once it is sufficiently close to the target corner, making the reward sparse. An effective exploration strategy for this set of tasks is for the meta-policy θ * to travel in circular trajectories to observe which corner produces rewards; however, for a single policy to produce this exploration behavior is difficult. In Figure 1, we demonstrate the behavior of ES-MAML on the four corners problem. When K = 20, the same number of rollouts for adaptation as used in , the basic version of Algorithm 3 is able to correctly explore and adapt to the task by finding the target corner. Moreover, it does not require any modifications to encourage exploration, unlike PG-MAML. We further used K = 10, 5, which caused the performance to drop. For better performance in this low-information environment, we experimented with two different adaptation operators U (·, T) in Algorithm 2, which are HC (hill climbing) and DPP-ES. The standard ES gradient is denoted MC. Furthermore, ES-MAML is not limited to "single goal" exploration. We created a more difficult task, six circles, where the agent continuously accrues negative rewards until it reaches six target points to "deactivate" them. Solving this task requires the agent to explore in circular trajectories, similar to the trajectory used by PG-MAML on the four corners task. We visualize the behavior in Figure 2. Observe that ES-MAML with the HC operator is able to develop a strategy to explore the target locations. From Figure 1, we observed that both operators DPP-ES and HC were able to improve exploration performance. We also created a modified task by heavily penalizing incorrect goals, which caused performance to dramatically drop for MC and DPP-ES. This is due to the variance from the MC-gradient, which may in a adapted policy that accidentally produces large negative rewards or become stuck in local-optima (i.e. refuse to explore due to negative rewards). This is also fixed by the HC adaptation, which enforces non-decreasing rewards during adaptation, allowing the ES-MAML to progress. Additional examples on the classic Navigation-2D task are presented in Appendix A.4, highlighting the differences in exploration behavior between PG-MAML and ES-MAML. One of the main benefits of ES is due to its ability to train compact linear policies, which can outperform hidden-layer policies. 
We demonstrate this on several benchmark MAML problems in the HalfCheetah and Ant environments in Figure 3. In contrast, observed that PG-MAML empirically and theoretically suggested that training with more deeper layers under SGD increases performance. We demonstrate that on the Forward-Backward and Goal-Velocity MAML benchmarks, ES-MAML is consistently able to train successful linear policies faster than deep networks. We also show that, for the Forward-Backward Ant problem, ES-MAML with the new HC operator is the most performant. Using more compact policies also directly speeds up ES-MAML, since fewer perturbations are needed for gradient estimation. We find that deterministic policies often produce more stable behaviors than the stochastic ones that are required for PG, where randomized actions in unstable environments can lead to catastrophic outcomes. In PG, this is often mitigated by reducing the entropy bonus, but this has an undesirable side effect of reducing exploration. In contrast, ES-MAML explores in parameter space, which mitigates this issue. To demonstrate this, we use the "Biased-Sensor CartPole" environment from . This environment has unstable dynamics and sparse rewards, so it requires exploration but is also risky. We see in Figure 4 that ES-MAML is able to stably maintain the maximum reward. We also include in Figure 4 from two other environments, Swimmer and Walker2d, for which it is known that PG is surprisingly unstable, and ES yields better training . Notice that we again find linear policies (L) outperforming policies with one (H) or two (HH) hidden layers. For real-world applications, we may be constrained to use fewer queries K than has typically been demonstrated in previous MAML works. Hence, it is of interest to compare how ES-MAML compares to PG-MAML for adapting with very low K. One possible concern is that low K might harm ES in particular because it uses only the cumulative rewards; if for example K = 5, then the ES adaptation gradient can make use of only 5 values. In comparison, PG-MAML uses K · H state-action pairs, so for K = 5, H = 200, PG-MAML still has 1000 pieces of information available. However, we find experimentally that the standard ES-MAML (Algorithm 3) remains competitive with PG-MAML even in the low-K setting. In Figure 5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks across four environments (HalfCheetah, Swimmer, Walker2d, Ant) and two model architectures. While PG-MAML can generally outperform ES-MAML on the Goal-Velocity task, ES-MAML is similar or better on the Forward-Backward task. Moreover, we observed that for low K, PG-MAML can be highly unstable (note the wide error bars), with some trajectories failing catastrophically, whereas ES-MAML is relatively stable. This is an important consideration in real applications, where the risk of catastrophic failure is undesirable. We have presented a new framework for MAML based on ES algorithms. The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement. ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators. In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments. 
ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable. but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML. argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the'naive' estimator in Algorithm 4. Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively'easier' tasks like ForwardBackward walking but possibly increase the exploration on four corners. We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2. In this experiment, the two methods ran on servers with different number of workers available, so we measure the score by the total number of rollouts. We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts. Since the adapted policy U (θ, T) generally cannot be evaluated exactly, we cannot easily obtain unbiased estimates of f T (U (θ, T)). This problem arises for both PG-MAML and ES-MAML. We consider PG-MAML first as an example. In PG-MAML, the adaptation operator is U (θ, In general, we can only obtain an estimate of ∇ θ E τ ∼P T (τ |θ) [R(τ)] and not its exact value. However, the MAML gradient is given by which requires exact sampling from the adapted trajectories τ ∼ P T (τ |U (θ, T)). Since this is a nonlinear function of U (θ, T), we cannot obtain unbiased estimates of ∇J(θ) by sampling τ generated by an estimate of U (θ, T). In the case of ES-MAML, the adaptation operator is U (θ, We may question whether using an unbiased estimator of f T (U (θ, T)) is likely to improve performance. One natural strategy is to reformulate the objective function so as to make the desired estimator unbiased. This happens to be the case for the algorithm E-MAML, which treats the adaptation operator as an explicit function of K sampled trajectories and "moves the expectation outside". That is, we now have an adaptation operator U (θ, T ; τ 1, . . ., τ K), and the objective function becomes θ + α∇ f T σ (θ), we replace the estimation of U by ESGRAD on line 4 of Algorithm 3 with an improved estimator of ∇ f T σ (θ), which even may depend on data collected during the meta-training stage. Many techniques exist for reducing the variance of the estimator such as Quasi Monte Carlo sampling . Aside from variance reduction, there are also methods with special properties. A.3.1 ACTIVE SUBSPACES Active Subspaces is a method for finding a low-dimensional subspace where the contribution of the gradient is maximized. Conceptually, the goal is to find and update on-the-fly a low-rank subspace L so that the projection ∇f. This should be done in such a way that ∇f T (θ) does not need to be computed explicitly. Optimizing in lower-dimensional subspaces might be computationally more efficient and can be thought of as an example of guided ES methods, where the algorithm is guided how to explore space in the anisotropic way, leveraging its knowledge about function optimization landscape that it gained in the previous steps of optimization. 
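The FO-NoHessian variant compared above drops the second-order term and simply evaluates the task gradient at the adapted parameters, treating the adaptation as constant with respect to theta. A sketch of one meta-iteration under this approximation follows; the task interface and step sizes are placeholders, and grad_estimator stands for any zero-order estimator such as es_gradient above.

import numpy as np

def fo_nohessian_meta_step(tasks, theta, grad_estimator, alpha=0.05, beta=0.01):
    """One first-order (no-Hessian) meta-update.

    tasks          : list of callables f_T(theta) -> scalar task reward
    grad_estimator : zero-order gradient estimator, e.g. es_gradient
    The chain-rule term dU(theta, T)/dtheta is dropped: the meta-gradient is
    the task gradient evaluated at the adapted parameters only.
    """
    meta_grad = np.zeros_like(theta)
    for f_T in tasks:
        adapted = theta + alpha * grad_estimator(f_T, theta)   # U(theta, T)
        meta_grad += grad_estimator(f_T, adapted)
    return theta + beta * meta_grad / len(tasks)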
In the context of RL, the active subspace method ASEBO (b) was successfully applied to speed up policy training algorithms. This strategy can be made data-dependent also in the MAML context, by learning an optimal subspace using data from the meta-training stage, and sampling from that subspace in the adaptation step. Regression-Based Optimization (RBO) is an alternative method of gradient estimation. From Taylor series expansion we have. By taking multiple finite difference expressions f (θ + d) − f (θ) for different d, we can recover the gradient by solving a regularized regression problem. The regularization has an additional advantage -it was shown that the gradient can be recovered even if a substantial fraction of the rewards f (θ + d) are corrupted (c). Strictly speaking, this is not based on the Gaussian smoothing as in ES, but is another method for estimating gradients using only zero-th order evaluations. We present a preliminary experiment with RBO and ASEBO gradient adaptation in Figure A3. To be precise, the algorithms used are identical to Algorithm 3 except that in line 4, d On the left plot, we test for noise robustness on the Forward-Backward Swimmer MAML task, comparing standard ES-MAML (Algorithm 3) to RBO-MAML. To simulate noisy data, we randomly corrupt 25% of the queries f T (θ + σg) used to estimate the adaptation operator U (θ, T) with an enormous additive noise. This is the same type of corruption used in (c). Interestingly, RBO does not appear to be more robust against noise than the standard MC estimator, which suggests that the original ES-MAML has some inherent robustness to noise. On the right plot, we compare ASEBO-MAML to ES-MAML on the Goal-Velocity HalfCheetah task in the low-K setting. We found that when measured in iterations, ASEBO-MAML outperforms ES-MAML. However, ASEBO requires additional linear algebra operations and thus uses significantly more wall-clock time (not shown in plot) per iteration, so if measured by real time, then ES-MAML was more effective. Navigation-2D is a classic environment where the agent must explore to adapt to the task. The agent is represented by a point on a 2D square, and at each time step, receives reward equal to its distance from a given target point on the square. Note that unlike the four corners and six circles tasks, the reward for Navigation-2D is dense. We visualize the differing exploration strategies learned by PG-MAML and ES-MAML in Figure A4. Notice that PG-MAML makes many tiny movements in multiple directions to'triangulate' the target location using the differences in reward for different state-action pairs. On the other hand, ES-MAML learns a meta-policy such that each perturbation of the meta-policy causes the agent to move in a different direction (represented by red paths), so it can determine the target location from the total rewards of each path. In Figure A5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks for HalfCheetah, Swimmer, Walker2d, and Ant, using the same values of K that were used in the original experiments of . Figure A5: Comparisons between ES-MAML and PG-MAML using the queries K from . A.6 REGRESSION AND SUPERVISED LEARNING MAML has also been applied to supervised learning. We demonstrate ES-MAML on sine regression , where the task is to fit a sine curve f with unknown amplitude and phase given a set of K pairs (x i, f (x i)). 
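The regression-based gradient recovery (RBO) described above can be sketched as an L1-regularized least-squares fit of the gradient to finite differences; using Lasso and the particular regularization strength below are our own assumptions, chosen only to illustrate how a regularized regression can tolerate a fraction of corrupted evaluations.

import numpy as np
from sklearn.linear_model import Lasso

def rbo_gradient(f, theta, num_samples=50, delta=0.1, alpha=1e-3, rng=None):
    """Recover grad f(theta) by regularized regression on finite differences.

    Row i of D is a random perturbation d_i and y_i = f(theta + d_i) - f(theta);
    the regression fits g so that D @ g approximates y.
    """
    rng = np.random.default_rng() if rng is None else rng
    D = delta * rng.standard_normal((num_samples, theta.size))
    f0 = f(theta)
    y = np.array([f(theta + d) - f0 for d in D])
    return Lasso(alpha=alpha, fit_intercept=False).fit(D, y).coef_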
The meta-policy must be able to learn that all of tasks have a common periodic nature, so that it can correctly adapt to an unknown sine curve outside of the points x i. For regression, the loss is the mean-squared error (MSE) between the adapted policy π θ (x) and the true curve f (x). Given data samples {( 2 . Note that unlike in reinforcement learning, we can exactly compute ∇L(θ); for deep networks, this is by automatic differentiation. Thus, we opt to use Tensorflow to compute the adaptation operator U (θ, T) in Algorithm 3. This is in accordance with the general principle that when gradients are available, it is more efficient to use the gradient than to approximate it by a zero-order method . We show several in Figure A6. The adaptation step size is α = 0.01, which is the same as in . For comparison, reports that PG-MAML can obtain a loss of ≈ 0.5 after one adaptation step with K = 5, though it is not specified how many iterations the meta-policy was trained for. ES-MAML approaches the same level of performance, though the number of training iterations required is higher than for the RL tasks, and surprisingly high for what appears to be a simpler problem. This is likely again a reflection of the fact that for problems such as regression where the gradients are available, it is more efficient to use gradients. As an aside, this leads to a related question of the correct interpretation of the query number K in the supervised setting. There is a distinction between obtaining a data sample (x i, f (x i)), and doing a computation (such as a gradient) using that sample. If the main bottleneck is collecting the data {(x i, f (x i)}, then we may be satisfied with any algorithm that performs any number of operations on the data, as long as it uses only K samples. On the other hand, in the (on-policy) RL setting, samples cannot typically be're-used' to the same extent, because rollouts τ sampled with a given Figure A6: The MSE of the adapted policy, for varying number of gradient steps and query number K. Runs are averaged across 3 seeds. policy π θ follow an unknown distribution P(τ |θ) which reduces their usefulness away from θ. Thus, the corresponding notion to rollouts in the SL setting would be the number of backpropagations (for PG-MAML) or perturbations (for ES-MAML), but clearly these have different relative costs than doing simulations in RL. A.7 HYPERPARAMETERS AND SETUPS A.7.1 ENVIRONMENTS Unless otherwise explicitly stated, we default to K = 20 and horizon = 200 for all RL experiments. We also use the standard reward normalization in , and use a global state normalization (i.e. the same mean, standard deviation normalization values for MDP states are shared across workers). For the Ant environments (Goal-Position Ant, Forward-Backward Ant), there are significant differences in weighting on the auxiliary rewards such as control costs, contact costs, and survival rewards across different previous work (e.g. those costs are downweighted in whereas the coefficients are vanilla Gym weightings in ). These auxiliary rewards can lead to local minima, such as the agent staying stationary to collect the survival bonus which may be confused with movement progress when presenting a training curve. To make sure the agent is explicitly performing the required task, we opted to remove such costs in our work and only present the main goal-distance cost and forward-movement reward respectively. 
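For the sine-regression task described above, the adaptation operator is an exact gradient step on the MSE loss computed by automatic differentiation. The paper used TensorFlow for this; the sketch below uses PyTorch with a placeholder network and task purely for illustration, keeping only the stated adaptation step size alpha = 0.01.

import torch

def adapt_on_task(model, xs, ys, alpha=0.01):
    """One MAML adaptation step on K regression pairs, using exact autodiff gradients.
    Returns adapted parameter tensors (first-order view: detached from the meta-graph)."""
    loss = torch.mean((model(xs) - ys) ** 2)                      # MSE on the K samples
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return [p.detach() - alpha * g for p, g in zip(model.parameters(), grads)]

# Example task: a sine curve with unknown amplitude/phase, queried at K = 5 points.
model = torch.nn.Sequential(torch.nn.Linear(1, 40), torch.nn.ReLU(), torch.nn.Linear(40, 1))
A, phi = 2.0, 0.3
xs = torch.linspace(-5.0, 5.0, 5).unsqueeze(1)
ys = A * torch.sin(xs + phi)
adapted_params = adapt_on_task(model, xs, ys, alpha=0.01)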
For the other environments, we used the default weightings and rewards, since they do not change across previous works. Let N be the number of possible distinct tasks. We sample tasks without replacement, which is important when N is small, as each worker performs adaptations on all possible tasks. For standard ES-MAML (Algorithm 3), we used the following settings. For ES-MAML and PG-MAML, we took 3 seeded runs, using the default TRPO hyperparameters found in .
We provide a new framework for MAML in the ES/blackbox setting, and show that it allows deterministic and linear policies, better exploration, and non-differentiable adaptation operators.
553
scitldr
Transforming one probability distribution to another is a powerful tool in Bayesian inference and machine learning. Some prominent examples are constrained-to-unconstrained transformations of distributions for use in Hamiltonian Monte-Carlo and constructing flexible and learnable densities such as normalizing flows. We present Bijectors.jl, a software package for transforming distributions implemented in Julia, available at github.com/TuringLang/Bijectors.jl. The package provides a flexible and composable way of implementing transformations of distributions without being tied to a computational framework. We demonstrate the use of Bijectors.jl on improving variational inference by encoding known statistical dependencies into the variational posterior using normalizing flows, providing a general approach to relaxing the mean-field assumption usually made in variational inference. When working with probability distributions in Bayesian inference and probabilistic machine learning, transforming one probability distribution to another comes up quite often. For example, when applying Hamiltonian Monte Carlo on constrained distributions, the constrained density is usually transformed to an unconstrained density for which the sampling is performed . Another example is to construct highly flexible and learnable densities often referred to as normalizing flows (; ;); for a review see. When a distribution P is transformed into some other distribution Q using some measurable function b, we write Q = b * P and say Q is the push-forward of P. When b is a differentiable bijection with a differentiable inverse, i.e. a diffeomorphism or a bijector , the induced or pushed-forward distribution Qit is obtained by a simple application of change of variables. Specifically, given a distribution P on some Ω ⊆ R d with density p: Ω → [0, ∞), and a bijector b: Ω →Ω for someΩ ⊆ R d, the induced or pushed forward distribution Q = b * P has density q(y) = p b −1 (y) |det J b −1 (y)| or q b(x) = p(x) |det J b (x)| As mentioned, one application of this idea is learnable bijectors such as normalizing flows. One particular family of normalizing flow which has received a lot of attention is coupling flows (; ;). The idea is to use certain parts of the input vector x, say, x I 1 to construct parameters for a bijector f (the coupling law), which is then applied to a different part of the input vector, say, x I 2. In full generality, a coupling flow c I 1,I 2, the transformation in a coupling flow, is defined c I 1,I 2 (· ; f, θ): x I 2 → f x I 2; θ(x I 1) y I 2 → f −1 y I 2; θ(y I 1) where I 1, I 2 ⊂ I:= {1, . . ., d} are disjoint. As long as f ·; θ(x I 1): R I 2 → R I 2 is a bijector, c I 1,I 2 is invertible since y I 1 = x I 1. Note the parameter-map θ can be arbitrarily complex. Bijectors.jl is a framework for creating and using bijectors in the Julia programming language. The main idea is to treat standard constrained-to-unconstrained bijectors, e.g. log: R → (0, ∞), and more complex and possibly parameterized bijectors, e.g. coupling flows, as the same just as they are mathematically the same. This turns out to be quite a useful abstraction allowing seamless interaction between standard and learnable bijectors, making something like automatic differentiation variational inference (ADVI;) easy to implement (see Source Code 1). Table 1 shows supported mathematical operations. Only b(x) and b −1 (y) need to be manually implemented for a new bijector b. 
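The change-of-variables rule quoted above, q(y) = p(b^{-1}(y)) |det J_{b^{-1}}(y)|, is the single identity the package builds on. The following is a small numerical illustration of that identity in Python with the scalar bijector b = log mapping (0, infinity) to the real line; it illustrates the mathematics only and is not the Bijectors.jl API.

import numpy as np
from scipy.stats import gamma

# Base density p on (0, inf); bijector b = log with inverse b^{-1} = exp.
p = gamma(a=2.0)

def q_logpdf(y):
    """log q(y) = log p(b^{-1}(y)) + log |d b^{-1} / dy|, with b^{-1}(y) = exp(y)."""
    return p.logpdf(np.exp(y)) + y          # log |d exp(y)/dy| = y

# Sanity check against a histogram of y = log(x), x ~ p.
samples = np.log(p.rvs(size=200_000, random_state=0))
hist, edges = np.histogram(samples, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - np.exp(q_logpdf(centers)))))   # small: the two densities agree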
Another example is the introduction of neural autoregressive flows (NAFs;) where the inverse autoregressive flow (IAF;) is extended by replacing the affine coupling law used in IAF with a monotonic deep neural network. Despite the novel introduction of neural network, in Bijectors.jl the only difference between IAF and NAF is the choice of Bijector as the coupling law. A summarization of the related work and how it compares to Bijectors.jl can be seen in Table 2. More detailed comparisons can be found in Appendix A.1. 1. This refers to the torch.distributions submodule. After our submission, another transformer module based on PyTorch was released in Pyro : pyro.distributions.transforms. At the time of writing, we have not yet done a thorough comparison with this. At a first glance its features seem similar in natu re to tensorflow.probability which can be found in Table 2. 2. Bijectors.jl is agnostic to array-types used, therefore GPU functionality is provided basically for free by using the independent package CuArrays.jl to construct the arrays. forward(Q) , using neural networks from Flux.jl to define the coupling law in a coupling flow, and so on. For more examples, see the project website. Inversed{Bijectors. Logit{Float64},0}(Bijectors. Logit{Float64}(0.0, 1.0)) 12 13 julia> # Works like a standard`Distribution1 We demonstrate how to use Bijectors.jl by a possible approach to relaxing the meanfield assumption commonly made in variational inference through the use of normalizing flows. We consider a simple two-dimensional Gaussian with known covariance matrix with non-zero off-diagonal entries, i.e. different components are dependent, defined as follows In this case we can obtain an analytical expression for the posterior p(m | {x i} n i=1 ), and can indeed observe that the covariance matrix has non-zero off-diagonals. In this case, the mean-field assumption often made in variational inference is incorrect. Recall that in variational inference the objective is to maximize the evidence lower bound (ELBO) of the variational posterior q(m) and the true posterior For the ELBO of a transformed distribution, see Appendix B.2. Here we propose using a mean-field multivariate normal as a starting point, and then combine this with coupling flows to encode the structure of the model we are approximating into the variational posterior at a low computational cost. The idea is to encode an undirected edge between the random variables m 1 and m 2 by adding directed mappings in both directions; we do this by composing coupling flows c {2},{1} and c {1},{2}. For the coupling flows, we experimented with two different coupling laws f, affine and the recently introduced rational-quadratic splines . The parameter maps θ 1 and θ 2, respectively, were defined by a simple neural network in both cases. The ing density, letting Q µ,σ be the distribution of an isotropic multivariate Gaussian with mean µ and variance σ, is given by We then optimized the ELBO w.r.t. the parameters of the neural networks θ 1, θ 2, µ and σ to obtain our variational posteriors. The of the standard mean-field VI (MFVI) and this particular normalizing flow VI (NFVI) applied to the model in Equation can be seen in Figure 1. Here we observe that NFVI captures the correlation structure of the true posterior in Figure 1 (a) while MFVI, as expected, fails to do so. This is also reflected in the value of the ELBO for the two approaches (see Appendix B.1). 
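The coupling flow used in the experiment above conditions one coordinate on the other through a learned parameter map theta(.). Below is a minimal numpy sketch of an affine coupling c_{{1},{2}} on 2-D inputs; the tiny conditioner is a stand-in for the neural network, and the experiment in the paper additionally uses rational-quadratic splines as the coupling law, so this is an illustration of the mechanism rather than of the package.

import numpy as np

def conditioner(x1, w, b):
    """Toy parameter map theta(x_{I_1}) -> (log_scale, shift); in practice a neural net."""
    h = np.tanh(w[0] * x1 + b[0])
    return w[1] * h + b[1], w[2] * h + b[2]

def coupling_forward(x, params):
    """c_{{1},{2}}: keep x1, transform x2 affinely given x1. Returns y and log|det J|."""
    x1, x2 = x[..., 0], x[..., 1]
    log_s, t = conditioner(x1, *params)
    y2 = x2 * np.exp(log_s) + t
    return np.stack([x1, y2], axis=-1), log_s      # only one coordinate is transformed

def coupling_inverse(y, params):
    y1, y2 = y[..., 0], y[..., 1]
    log_s, t = conditioner(y1, *params)
    return np.stack([y1, (y2 - t) * np.exp(-log_s)], axis=-1)

params = (np.array([0.5, 1.0, 0.3]), np.array([0.0, 0.1, -0.2]))
x = np.random.default_rng(0).standard_normal((4, 2))
y, _ = coupling_forward(x, params)
assert np.allclose(coupling_inverse(y, params), x)

Composing c_{{2},{1}} with c_{{1},{2}} in this way is what encodes the undirected dependency between m_1 and m_2 in the variational posterior.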
This can potentially provide a flexible approach to taking advantage of structure in the joint distribution when performing variational inference without introducing a large number of parameters in addition to the mean-field parameters. See Appendix B.1 for specifics of the experiment. We presented Bijectors.jl, a framework for working with bijectors and thus transformations of distributions. We then demonstrated the flexibility of Bijectors.jl in an application of introducing correlation structure to the mean-field ADVI approach. We believe Bijectors.jl will be a useful tool for future research, especially in exploring normalizing flows and their place in variational inference. An interesting note about the NF variational posterior we constructed is that it only requires a constant number of extra parameters on top of what is required by mean-field normal VI. This approach can be applied in more general settings where one has access to the directed acyclic graph (DAG) of the generative model we want to perform inference. Then this approach will scale linearly with the number of unique edges between random variables. It is also possible in cases where we have an undirected graph representing a model by simply adding a coupling in both directions. This would be very useful for tackling issues faced when using mean-field VI and would be of interest to explore further. For related work we have mainly compared against Tensorflow's tensorflow probability, which is used by other known packages such pymc4, and PyTorch's torch.distributions, which is used by packages such as pyro. Other frameworks which make heavy use of such transformations using their own implementations are stan, pymc3, and so on. But in these frameworks the transformations are mainly used to transform distributions from constrained to unconstrained and vice versa with little or no integration between those transformation and the more complex ones, e.g. normalizing flows. pymc3 for example support normalizing flows, but treat them differently from the constrained-to-unconstrained transformations. This means that composition between standard and parameterized transformations is not supported. Of particular note is the bijectors framework in tensorflow probability introduced in . One could argue that this was indeed the first work to take such a drastic approach to the separation of the determinism and stochasticity, allowing them to implement a lot of standard distributions as a TransformedDistribution. This framework was also one of the main motivations that got the authors of Bijectors.jl interested in making a similar framework in Julia. With that being said, other than the name, we have not set out to replicate tensorflow probability and most of the direct parallels were observed after-the-fact, e.g. a transformed distribution is defined by the TransformedDistribution type in both frameworks. Instead we believe that Julia is a language well-suited for such a framework and therefore one can innovate on the side of implementation. For example in Julia we can make use of code-generation or meta-programming to do program transformations in different parts of the framework, e.g. the composition b • b −1 is transformed into the identity function at compile time. Similar to tensorflow probability we can use higher-order bijectors to construct new bijectors. Examples of such are Inverse, Compose, and Stacked. 
A significant difference is that in Bijectors.jl, the constructors are rarely called explicitly by the user but instead through a completely intuitive interface, e.g. inv (b) gives you the Inverse, b1 • b2 gives you the composition of b1 and b2, stack(b1, b2) gives you the two bijectors "stacked" together. Moreover, if b actually has a "named" inverse, e.g. b = Exp, then inv(b) will in Log rather than some thin wrapper Inversed(Exp). Irregardless of whether the bijector has a named inverse or not, the dual-nature is exploited in compositions so that b • inv(b) in Identity. For type-stable code, this is all done at compile-time. A particularly nice one is the Stacked(bijectors, ranges) which allows the user to specify which parts (or ranges) of the input vector should be passed to which of the "stacked". For all methods acting on a Stacked the loop for iterating through the different ranges and applying the corresponding Bijector will be unrolled, meaning that this abstraction has a zero-cost overhead and the only cost is the evaluation of corresponding methods on for the bijectors it wraps. In a limited sense Bijectors.jl can do what is known as program transformations. A good example is b • b −1 ing in identity at compile-time for simple transformations which we have mentioned before. In tensorflow probability indeed b • b −1 is reduced to the identity mapping, not by collapsing the computational graph but instead by the use of caching. This means that when (b • b −1)(x) is evaluated, work will only be done for the b −1 (x) evaluation. When b −1 (x) is evaluated by b, the cached value x used to evaluate b −1 just before will be returned immediately. torch.distributions take a similar approach but because caching can come with its own issues, especially when used in conjunction with automatic differentiation, there are cases where it will fail, e.g. dependency reversal. In Bijectors.jl there are two parts of this story. First off, b • b −1 will, as noted earlier, be compiled to the identity map upon compilation, i.e. there is zero-overhead at run-time to this evaluation. But one nice property of the Tensorflow and PyTorch approach which uses caching is that one can write code that looks like In Bijectors.jl this has to be done manually by the user through the forward method for a TransformedDistribution. Recall from Table 1 that forward returns a 4-tuple (x, b(x), logabsdetjac (b, x), logpdf(q, b(x))) using the most efficient computation path. Therefore to replicate the above example in Bijectors.jl, we can do # Samples x from base and returns y = b(x) x, y, _, _ = forward(transformed_distribution) # do some more computation potentially involving y #... Therefore "caching" in Bijectors.jl cannot be done across function barriers at the time of writing (unless the function explicitly returns all values used). On the bright side one can explicitly do caching, making it more difficult to do something wrong in addition to the fact that the computation is transparent from the users perspective. Appendix B. Adding structure to mean-field VI using coupling flows In the experimental setup we generate data by fixing m = 0 and generating n = 100 samples from Equation. This ed in a posterior multivariate normal with covariance matrix This was done by design, as we specifically chose L = 10 0 10 10 to get a posterior covariance matrix with non-zero off-diagonals. 
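For reference, the analytical posterior mentioned above follows from standard Gaussian conjugacy. The sketch below assumes an isotropic Gaussian prior m ~ N(0, s^2 I); the prior is not specified in this excerpt, so that choice and the value of s^2 are assumptions, while L and n = 100 are taken from the text.

import numpy as np

L = np.array([[10.0, 0.0],
              [10.0, 10.0]])
Sigma = L @ L.T                      # known observation covariance, non-diagonal
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=np.zeros(2), cov=Sigma, size=100)   # n = 100, true m = 0

s2 = 1.0                             # assumed prior variance of m
n = X.shape[0]
Sigma_inv = np.linalg.inv(Sigma)
post_cov = np.linalg.inv(np.eye(2) / s2 + n * Sigma_inv)
post_mean = post_cov @ (Sigma_inv @ X.sum(axis=0))
print(post_cov)                      # off-diagonal entries are non-zero, so mean-field is wrong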
Let q µ,σ denote the density of a multivariate Gaussian with mean µ and diagonal covariance σ, and b denote the coupling flow in Equation with f as a rational-quadratic spline (RQS) with K = 3 knot points and bin [−50, 50], θ as a neural network consisting of one layer with a (3K − 1) × 1 weight matrix and bias with identity activation, i.e. a simple affine transformation. See for more information on RQS. We use Distributions.jl for implementation of the Gaussian multivariate distribution . We then performed variational inference on the model in Equation ing in Figure 5 (c). The ing densities can be observed in Figure 3. Note that here we have used a slight abuse of notation writing max θ to mean "maximize wrt. parameters of θ". The expressions for the KL-divergence and the ELBO, which under the transformation by a bijector picks up an additional term, see Equation and Equation, respectively. In all cases we set the number of samples used in the Monte-Carlo estimate of the objective to be m = 50. In all cases we used a DecayedADAGrad from Turing.jl to perform gradient updates. This is a classical ADAGrad but with a decay for the accumulated gradient norms. This is to circumvent the possibility of large initial gradient norms bringing all subsequent optimization steps to practically zero step-size. For DecayedADAGrad we used a base step-size η = 0.001, post-factor decay β post = 1.0 and pre-factor decay β pre = 0.9, and we performed 5 000 optimization steps before terminating. In general we of course do not have access to the true posterior and so we cannot minimize the KL-divergence between the variational posterior and the true posterior directly, but instead have to do so implicitly by minimizing the ELBO. In theory there is no difference, but in practice one usually observe a significantly lower variance in the gradient estimates of the KL-divergence compared to the ELBO. We therefore also performed VI using the KLdivergence to verify that the NF did not lack the expressibility to capture the true posterior, but that the slight inaccuracy in the variational posterior obtained by maximizing the ELBO was indeed due to the variance in the gradient estimate. And, as expected, minimizing the KL-divergence directly in the MF-case did not provide much of a gain compared to maximizing the ELBO. Numerical for multiple runs where the ELBO was used as an objective can be seen in Table 3 and Figure 2; the NFVI approach consistently obtains lower KL divergence and a greater ELBO. The main quantity of interest is the KL-divergence which quantifies the difference between the variational posterior and the true posterior. The ELBO is a lower bound on the evidence and thus the actual values can vary widely across experiments. Additionally, the difference between the ELBO of two distributions with respect to the same set of observations is equal to the difference between KL-divergence on that set of observations, and so we gain no additional information of the difference between the variational posterior and the true posterior by looking at the ELBO. Therefore we visualize the KL-divergence instead of the ELBO in Figure 2, but still provide the numerical values for both in Table 3 Table 3: (Rational-Quadratic Spline coupling law) Exponentially smoothed estimates of the last 4000 (out of 5000) optimization steps. As can be seen in Figure 2, after the 1000th step is basically when the optimas are reached. Here the ELBO has been used as the objective. 
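The DecayedADAGrad optimizer described above is ADAGrad with decaying accumulated squared gradients, which keeps large early gradient norms from permanently shrinking the step size. The sketch below is one plausible reading of that description; the exact update used in Turing.jl may differ.

import numpy as np

class DecayedAdaGrad:
    def __init__(self, eta=0.001, beta_pre=0.9, beta_post=1.0, eps=1e-8):
        self.eta, self.beta_pre, self.beta_post, self.eps = eta, beta_pre, beta_post, eps
        self.acc = None

    def step(self, theta, grad):
        if self.acc is None:
            self.acc = np.zeros_like(theta)
        # decay the accumulated squared-gradient norms, then add the new contribution
        self.acc = self.beta_post * self.acc + self.beta_pre * grad ** 2
        return theta - self.eta * grad / (np.sqrt(self.acc) + self.eps)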
We also performed the same experiment using an affine transformation f as a coupling law. The setup is identical, but now θ is a neural network consisting of two layers; the first layer is a dense layer with 2 × 1 weight matrix and bias and ReLU activation, and the second layer is a dense layer with 2 × 2 weight matrix and bias and identity activation. In Flux.jl, which we have used for the neural network part of the bijector, this is given by Chain(Dense(1, 2, relu), Dense)) . As one can see in Table 4 and Figure 4, even with an affine coupling law we obtain very good approximations. Table 4: (Affine coupling law) Exponentially smoothed estimates of the last 4000 (out of 5000) optimization steps. As can be seen in Figure 2, after the 1000th step is basically when the optimas are reached. Here the ELBO has been used as the objective. Recall the definition of the Kullback-Leibler (KL) divergence, here relating the variational density q(z) and posterior p(z | {x} n i=1 ), As per usual, we can rewrite this where in the second-to-last equality we used the assumption that the observations are i.i.d. and in the last equality we used the fact that log p(x i) is independent of z for all i = 1,..., n. We can then arrange this into Observe that given a set of observations, the left-hand side is constant. Therefore we can minimize the KL-divergence by maximizing the remaining terms on the right-hand side of the equation, which we call the evidence lower bound (ELBO) Now suppose that the variational posterior q(z) is in fact a transformed distribution, say, with base density q 0 and using transformation b, i.e. Substituting these terms into the ELBO from Equation, we get This expression ise very useful when q 0 is a density which it is computationally cheap to sample from and we have an analytical expression for the entropy of q 0, e.g. if q 0 is the density of a mulitvariate Gaussian both of these conditions are satisfied. In practice we use a Monte-Carlo estimate of the ELBO where η k ∼ q 0 (η) for k = 1,..., m. From this we can then obtain a Monte-Carlo estimate of the gradient wrt. parameters.
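The Monte-Carlo ELBO estimate derived above samples from the base density, pushes the samples through the bijector, and adds the log-Jacobian correction and the base entropy. A generic numpy-style sketch follows; every callable is a placeholder to be supplied for the model at hand.

import numpy as np

def mc_elbo(log_joint, flow, base_sample, base_entropy, m=50):
    """Monte-Carlo ELBO for a variational posterior q = flow_* q0.

    log_joint(z)   : log p(x_{1:n}, z) for the observed data
    flow(eta)      : returns (b(eta), log|det J_b(eta)|)
    base_sample(m) : m draws eta_k ~ q0
    base_entropy   : analytic entropy H(q0), e.g. of a diagonal Gaussian
    """
    etas = base_sample(m)
    total = 0.0
    for eta in etas:
        z, logabsdetjac = flow(eta)
        total += log_joint(z) + logabsdetjac
    return total / m + base_entropy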
We present a software framework for transforming distributions and demonstrate its flexibility on relaxing mean-field assumptions in variational inference with the use of coupling flows to replicate structure from the target generative model.
554
scitldr
Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains. Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias. This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting. Our contributions include: i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence; ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation; iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks, and a novel bioimage dataset, Cell. Advances in technology have enabled large scale dataset generation by life sciences laboratories. These datasets contain information about overlapping but non-identical known and unknown experimental conditions. A challenge is how to best leverage information across multiple datasets on the same subject, and to make discoveries that could not have been obtained from any individual dataset alone. Transfer learning provides a formal framework for addressing this challenge, particularly crucial in cases where data acquisition is expensive and heavily impacted by experimental settings. One such field is automated microscopy, which can capture thousands of images of cultured cells after exposure to different experimental perturbations (e.g from chemical or genetic sources). A goal is to classify mechanisms by which perturbations affect cellular processes based on the similarity of cell images. In principle, it should be possible to tackle microscopy image classification as yet another visual object recognition task. However, two major challenges arise compared to mainstream visual object recognition problems BID51. First, biological images are heavily impacted by experimental choices, such as microscope settings and experimental reagents. Second, there is no standardized set of labeled perturbations, and datasets often contain labeled examples for a subset of possible classes only. This has limited microscopy image classification to single datasets and does not leverage the growing number of datasets collected by the life sciences community. These challenges make it desirable to learn models across many microscopy datasets, that achieve both good robustness w.r.t. experimental settings and good class coverage, all the while being robust to the fact that datasets contain samples from overlapping but distinct class sets. Multi-domain learning (MDL) aims to learn a model of minimal risk from datasets drawn from distinct underlying distributions BID20, and is a particular case of transfer learning BID46. As such, it contrasts with the so-called domain adaptation (DA) problem BID7 BID5 BID22 BID46. DA aims at learning a model with minimal risk on a distribution called "target" by leveraging other distributions called "sources". Notably, most DA methods assume that target classes are identical to source classes, or a subset thereof in the case of partial DA BID77.The expected benefits of MDL, compared to training a separate model on each individual dataset, are two-fold. First, MDL leverages more (labeled and unlabeled) information, allowing better generalization while accommodating the specifics of each domain BID20 BID72. 
Thus, MDL models have a higher chance of ab initio performing well on a new domain − a problem referred to as domain generalization BID44 or zero-shot domain adaptation BID74. Second, MDL enables knowledge transfer between domains: in unsupervised and semi-supervised settings, concepts learned on one domain are applied to another, significantly reducing the need for labeled examples from the latter BID46. Learning a single model from samples drawn from n distributions raises the question of available learning guarantees regarding the model error on each distribution. BID32 introduced the notion of H-divergence to measure the distance between source and target marginal distributions in DA. BID4 have shown that a finite sample estimate of this divergence can be used to bound the target risk of the learned model. The contributions of our work are threefold. First, we extend the DA guarantees to MDL (Sec. 3.1), showing that the risk of the learned model over all considered domains is upper bounded by the oracle risk and the sum of the H-divergences between any two domains. Furthermore, an upper bound on the classifier imbalance (the difference between the individual domain risk, and the average risk over all domains) is obtained, thus bounding the worst-domain risk. Second, we propose the approach Multi-domain Learning Adversarial Neural Network (MULANN), which extends Domain Adversarial Neural Networks (DANNs) BID22 to semi-supervised DA and MDL. Relaxing the DA assumption, MULANN handles the so-called class asymmetry issue (when each domain may contain varying numbers of labeled and unlabeled examples of a subset of all possible classes), through designing a new loss (Sec. 3.2). Finally, MULANN is empirically validated in both DA and MDL settings (Sec. 4), as it significantly outperforms the state of the art on three standard image benchmarks BID52 BID35, and a novel bioimage benchmark, CELL, where the state of the art involves extensive domain-dependent pre-processing. Notation. Let X denote an input space and Y = {1, . . ., L} a set of classes. For i = 1,..., n, dataset S i is an iid sample drawn from distribution D i on X × Y. The marginal distribution of D i on X is denoted by D X i. Let H be a hypothesis space; for each h in H (h : X → Y) we define the risk under distribution D i as i (h) = P x,y∼Di (h(x) = y). h i (respectively h) denotes the oracle hypothesis according to distribution D i (resp. with minimal total risk over all domains): DISPLAYFORM0 In the semi-supervised setting, the label associated with an instance might be missing. In the following, "domain" and "distribution" will be used interchangeably, and the "classes of a domain" denote the classes for which labeled or unlabeled examples are available in this domain. Machine learning classically relies on the iid setting: when training and test samples are independently drawn from the same joint distribution P (X, Y) BID71. Two other settings emerged in the 1990s, "concept drift" and "covariate shift". They respectively occur when conditional data distributions P (Y |X) and marginal data distributions P (X) change, either continuously or abruptly, across training data or between train and test data BID56. Since then, transfer learning has come to designate methods to learn across drifting, shifting or distinct distributions, or even distinct tasks (; BID46 In MDL, the different domains can be taken into account by maintaining shared and domain-specific parameters BID20, or through a domain-specific use of shared parameters. 
The domaindependent use of these parameters can be learned, e.g. using domain-guided dropout BID72, or based on prior knowledge about domain semantic relationships BID74 .Early DA approaches leverage source examples to learn on the target domain in various ways, e.g. through reweighting source datapoints BID41 BID26 BID24, or defining an extended representation to learn from both source and target BID19 . Other approaches proceed by aligning the source and target representations with PCA-based correlation alignment, or subspace alignment BID21 . In the field of computer vision, a somewhat related way of mapping examples in one domain onto the other is image-to-image translation, possibly in combination with a generative adversarial network (see references in Appendix A).Intuitively, the difficulty of DA crucially depends on the distance between source and target distribution. Accordingly, a large set of DA methods proceed by reducing this distance in the original input space X, e.g. via importance sampling BID7 or by modifying the source representation using optimal transport BID17 BID18. Another option is to map source and target samples on a latent space where they will have minimal distance. Neural networks have been intensively exploited to build such latent spaces, either through generative adversarial mechanisms BID67 BID23, or through combining task objective with an approximation of the distance between source(s) and target. Examples of used distances include the Maximum Mean Discrepancy due to BID25 BID66 BID10, some of its variants BID38, the L 2 contrastive divergence BID43, the Frobenius norm of the output feature correlation matrices, or the H-divergence BID4 BID22 BID47 BID40 ) (more in Sec. 3). Most DA methods assume that source(s) and target contain examples from the same classes; in particular, in standard benchmarks such as OFFICE BID52, all domains contain examples from the same classes. Notable exceptions are partial DA methods, where target classes are expected to be a subset of source classes e.g. BID77. DA and partial DA methods share two drawbacks when applied to semi-supervised MDL with non-identical domain class sets. First, neither generic nor partial DA methods try to mitigate the impact of unlabeled samples from a class without any labeled counterparts. Second, as they focus on target performance, (partial) DA methods do not discuss the impact of extra labeled source classes on source accuracy. However, as shown in Sec. 4.3, class asymmetry can heavily impact model performance if not accounted for. Bioinformatics is increasingly appreciating the need for domain adaptation methods BID55 BID73 BID68. Indeed, experimentalists regularly face the issues of concept drift and covariate shift. Most biological experiments that last more than a few days are subject to technical variations between groups of samples, referred to as batch effects. Batch effects in image-based screening data are usually tackled with specific normalization methods BID8 ). More recently, work by applied CorAl for this purpose, aligning each batch with the entire experiment. DA has been applied to image-based datasets for improving or accelerating image segmentation tasks BID3 BID70 BID6 BID30. However, to our knowledge, MDL has not yet been used in Bioimage Informatics, and this work is the first to leverage distinct microscopy screening datasets using MDL. The H-divergence has been introduced to bound the DA risk BID4 BID22. This section extends the DA theoretical to the MDL case (Sec. 
3.1), supporting the design of the MULANN approach (Sec. 3.2). The reader is referred to Appendix B for formal definitions and proofs. The distance between source and target partly governs the difficulty of DA. The H-divergence has been introduced to define such a distance which can be empirically estimated with proven guarantees BID2 BID32. This divergence measures how well one can discriminate between samples from two marginals. It inspired an adversarial approach to DA BID22, through the finding of a feature space in which a binary classification loss between source and target projections is maximal, and thus their H-divergence minimal. Furthermore, the target risk is upper-bounded by the empirical source risk, the empirical H-divergence between source(s) and target marginals, and the oracle DA risk BID4 BID76.Bounding the MDL loss using the H-divergence. A main difference between DA and MDL is that MDL aims to minimize the average risk over all domains while DA aims to minimize the target risk only. Considering for simplicity a binary classification MDL problem and taking inspiration from BID42 BID5, the MDL loss can be formulated as an optimal convex combination of domain risks. A straightforward extension of Ben-David et al. FORMULA0 (Theorem 2 in Appendix B.2) establishes that the compound empirical risk is upper bounded by the sum of: i) the oracle risk on each domain; ii) a statistical learning term involving the VC dimension of H; iii) the divergence among any two domains as measured by their H-divergence and summed oracle risk. This states that, assuming a representation in which domains are as indistinguishable as possible and on which every 1-and 2-domain classification task is well addressed, then there exists a model that performs well on all of them. In the 2-domain case, the bound is minimized when one minimizes the convex combination of losses in the same proportion as samples. Bounding the worst risk. The classifier imbalance w.r.t. the i-th domain is defined as DISPLAYFORM0 The extent to which marginal D i can best be distinguished by a classifier from H (i.e., the Hdivergence), and the intrinsic difficulty i of the i-th classification task, yield an upper-bound on the classifier imbalance (proof in Appendix B.3): Proposition 1. Given an input space X, n distributions D i over X × {0, 1} and hypothesis class H on X, for any h ∈ H, let i (h) (respectively¯ (h)) denote the classification risk of h w.r.t. distribution D i (resp. its average risk over all D i). The risk imbalance | i (h) −¯ (h)| is upper bounded as: DISPLAYFORM1 Accordingly, every care taken to minimize H-divergences or ∆ ij (e.g. using the class-wise contrastive losses BID43) improves the above upper bound. An alternative bound of the classifier imbalance can be obtained by using the H∆H-divergence (proposition 3, and corollaries 4, 5 for the 2-domain case in Appendix). As pointed out by e.g. BID47, when minimizing the H-divergence between two domains, a negative transfer can occur in the case of class asymmetry, when domains involve distinct sets of classes. For instance, if a domain has unlabeled samples from a class which is not present in the other domains, both global BID22 and class-wise BID47 ) domain alignments will likely deteriorate at least one of the domain risks by putting the unlabeled samples close to labeled ones from the same domain. A similar issue arises if a domain has no (labeled or unlabeled) samples in classes which are represented in other domains. 
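In practice, the H-divergence terms appearing in these bounds are estimated by training a domain classifier on the latent features and plugging its test error into the standard proxy 2(1 - 2*error) of Ben-David et al. The sketch below uses a logistic-regression domain classifier purely as an illustration; the choice of classifier and split are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_h_divergence(feats_a, feats_b):
    """Estimate d_H between two feature sets via domain-classification error."""
    X = np.vstack([feats_a, feats_b])
    d = np.concatenate([np.zeros(len(feats_a)), np.ones(len(feats_b))])
    X_tr, X_te, d_tr, d_te = train_test_split(X, d, test_size=0.5, random_state=0, stratify=d)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, d_tr)
    err = 1.0 - clf.score(X_te, d_te)
    return 2.0 * (1.0 - 2.0 * err)     # close to 0 when the domains are indistinguishable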
In general, unlabeled samples are only subject to constraints from the domain discriminator, as opposed to labeled samples. Thus, in the case of class asymmetry, domain alignment will tend to shuffle unlabeled samples more than labeled ones. This limitation is addressed in MULANN by defining a new discrimination task referred to as Known Unknown Discrimination (KUD). Let us assume that, in each domain, a fraction p of unlabeled samples comes from extra classes, i.e. classes with no labeled samples within the domain. KUD aims at discriminating, within each domain, labeled samples from unlabeled ones that most likely belong to such extra classes. More precisely, unlabeled samples of each domain are ranked according to the entropy of their classification according to the current classifier, restricted to their domain classes. Introducing the hyper-parameter p, the top p% examples according to this classification entropy are deemed "most likely unknown", and thus discriminated from the labeled ones of the same domain. The KUD module aims at repulsing the most likely unknown unlabeled samples from the labeled ones within each domain (FIG1, thus resisting the contractive effects of global domain alignment. DISPLAYFORM0 Overall, MULANN involves 3+n interacting modules, where n is the number of domains with unlabeled data. The first module is the feature extractor with parameters θ f, which maps the input space X to some latent feature space Ω. 2+n modules are defined on Ω: the classifier module, the domain discriminator module, and the n KUD modules, with respective parameters θ c, θ d and (θ u,i) i. All modules are simultaneously learned by minimizing loss DISPLAYFORM1 where ζ and λ are hyper-parameters, DISPLAYFORM2 is the domain discrimination loss (multi-class cross-entropy loss of classifying examples from S i in class i), and L i u (θ f, θ u,i) is the KUD loss (binary cross-entropy loss of discriminating labelled samples from S i from the "most likely unknown" unlabelled samples from S i).The loss minimization aims to find a saddle point (θ f,θ y,θ d,θ u), achieving an equilibrium between the classification performance, the discrimination among domains (to be prevented) and the discrimination among labeled and some unlabeled samples within each domain (to be optimized). The sensitivity w.r.t. hyperparameter p will be discussed in Sec. 4.3. This section reports on the experimental validation of MULANN in DA and MDL settings on three image datasets (Sec. 4.2), prior to analyzing MULANN and investigating the impact of class asymmetry on model performances (Sec. 4.3). Datasets The DA setting considers three benchmarks: DIGITS, including the well-known MNIST and MNIST-M BID35 BID22; Synthetic road signs and German traffic sign benchmark BID14 BID60 and OFFICE BID52. The MDL setting considers the new CELL benchmark, which is made of fluorescence microscopy images of cells (detailed in Appendix C). Each image contains tens to hundreds of cells that have been exposed to a given chemical compound, in three domains: California (C), Texas (T) and England (E). There are 13 classes across the three domains (Appendix FIG5 ; a drug class is a group of compounds targeting a similar known biological process, e.g. DNA replication. Four domain shifts are considered: C↔T, T↔E, E↔C and C↔T↔E.Baselines and hyperparameters. In all experiments, MULANN is compared to DANN BID22 and its extension MADA BID47 (that involves one domain discriminator module per class rather than a single global one). 
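The Known Unknown Discrimination step described above selects, within each domain, the top p% of unlabeled samples ranked by classification entropy restricted to that domain's labeled classes. The sketch below covers only this selection; the KUD discriminator heads, the gradient-reversal wiring, and the loss weights zeta and lambda are omitted, and all names are illustrative.

import numpy as np

def select_likely_unknowns(logits_unlabeled, domain_class_ids, p=0.3):
    """Indices of the top-p fraction of a domain's unlabeled samples by restricted entropy.

    logits_unlabeled : (N, L) classifier logits for the domain's unlabeled samples
    domain_class_ids : indices of the classes that have labeled samples in this domain
    """
    z = logits_unlabeled[:, domain_class_ids]
    z = z - z.max(axis=1, keepdims=True)            # stable softmax over the domain's classes
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    k = max(1, int(p * len(entropy)))
    return np.argsort(entropy)[-k:]                 # highest entropy: "most likely unknown"

The returned samples feed the domain's KUD loss, which repulses them from the labeled samples of the same domain and thereby counteracts the contraction induced by the domain discriminator.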
For DANN, MADA and MULANN, the same pre-trained VGG-16 architecture BID59 from Caffe BID28 ) is used for OFFICE and CELL 2; the same small convolutional network as BID22 is used for DIGITS (see Appendix D.1 for details). The models are trained in Torch BID16 using stochastic gradient descent with momentum (ρ = 0.9). As in BID22, no hyper-parameter grid-search is performed for OFFICE -double cross-validation is used for all other benchmarks. Hyper-parameter ranges can be found in Appendix D.2.Semi-supervised setting. For OFFICE and CELL, we follow the experimental settings from BID52. A fixed number of labeled images per class is used for one of the domains in all cases (20 for Amazon, 8 for DSLR and Webcam, 10 in CELL). For the other domain, 10 labeled images per class are used for half of the classes (15 for OFFICE, 4 for CELL). For DIGITS and RoadSigns, all labeled source train data is used, whereas labeled target data is used for half of the classes only (5 for DIGITS, 22 for RoadSigns). In DA, the evaluation is performed on all target images from the unlabeled classes. In MDL, the evaluation is performed on all source and target classes (considering labeled and unlabeled samples).Evaluation goals. A first goal is to assess MULANN performance comparatively to the baselines. A second goal is to assess how the experimental setting impacts model performance. As domain discriminator and KUD modules can use both labeled and unlabeled images, a major question regards the impact of seeing unlabeled images during training. Two experiments are conducted to assess this impact: a) the same unlabeled images are used for training and evaluation (referred to as fully transductive setting, noted FT); b) some unlabeled images are used for training, and others for evaluation (referred to as non-fully transductive setting, noted NFT). (The case where no unlabeled images are used during training is discarded due to poor ). DA on DIGITS, RoadSigns and OFFICE. BID43 ) (legend CCSA), that uses a contrastive loss to penalizes large (resp. small) distances between same (resp. different) classes and different domains in the feature space; Published from BID65, an extension of DANN that adds a loss on target softmax values ("soft label loss"; legend Tseng15). Overall, MULANN yields the best , significantly improving upon the former best on the most difficult cases, i.e., D→A, A→D or W→A. As could be expected, the fully transductive match or significantly outperform the non-fully transductive ones. Notably, MADA performs similarly to DANN on DIGITS and RoadSigns, but worse on OFFICE; a potential explanation is that MADA is hindered as the number of classes, and thus domain discriminators, increases (respectively 10, 32 and 43 classes).MDL on CELL. A state of the art method for fluorescence microscopy images relies on tailored approaches for quantifying changes to cell morphology BID31. Objects (cells) are segmented in each image, and circa 650 shape, intensity and texture features are extracted for each object in each image. The profile of each image is defined as the vector of its Kolmogorov-Smirnov statistics, computed for each feature by comparing its distribution to that of the same feature from pooled negative controls of the same plate 3. Classification in profile space is realized using linear discriminant analysis, followed by k-nearest neighbor (LDA+k-NN) ("Baseline P" in Table 2). As a state of the art shallow approach to MDL to be applied in profile space, CORAL was chosen ("P + CORAL" in Table 2). 
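The profile baseline above summarizes each image by per-feature Kolmogorov-Smirnov statistics against pooled negative controls from the same plate. A sketch with SciPy follows; per-object feature extraction and plate bookkeeping are assumed to have been done upstream, and the function names are ours.

import numpy as np
from scipy.stats import ks_2samp

def image_profile(cell_features, control_features):
    """Profile of one image: per-feature KS statistic vs. pooled negative controls.

    cell_features    : (num_cells_in_image, num_features) per-object measurements
    control_features : (num_control_cells, num_features) measurements pooled from
                       negative-control wells of the same plate
    """
    return np.array([
        ks_2samp(cell_features[:, j], control_features[:, j]).statistic
        for j in range(cell_features.shape[1])
    ])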
A third baseline corresponds to fine-tuning VGG-16 without any transfer loss ("Baseline NN"). Table 2 compares DANN, MADA and MULANN to the baselines, where columns 4-7 (resp. 8-9) consider raw images (resp. the profile representations). 4 The fact that a profile-based baseline generally outperforms an image-based baseline was expected, as profiles are designed to reduce the impact of experimental settings (column 4 vs. 8). The fact that standard deviations tend to be larger Table 2: CELL test classification accuracy on all domains (average and stdev on 5 folds), in the fully transductive setting (see TAB10 in Appendix for non-transductive ones, and sections C.4, C.5 for details about image and class selection).Shift Image set # classes Baseline NN DANN MADA MULANN Baseline P P+Coral E-C E 7 63.7 (7.0) 62.9 (7.6) 59.5 (9.5) 64.4 (8.0) 74.1 (3.9) 58.4 (6.1) C lab. here than for OFFICE, RoadSigns or DIGITS is explained by a higher intra-class heterogeneity; some classes comprise images from different compounds with similar but not identical biological activity. Most interestingly, MULANN and P+CORAL both improve classification accuracy on unlabeled classes at the cost of a slighty worse classification accuracy for the labeled classes (in all cases but one). This is explained as reducing the divergence between domain marginals on the latent feature space prevents the classifier from exploiting dataset-dependent biases. Overall, MULANN and P+CORAL attain comparable on two-domain cases, with MULANN performing significantly better in the three-domain case. Finally, MULANN matches or significantly outperforms DANN and MADA. Sensitivity w.r.t. the fraction p of "known unknowns". MULANN was designed to counter the negative transfer that is potentially caused by class asymmetry. This is achieved through the repulsion of labeled examples in each domain from the fraction p of unlabeled examples deemed to belong to extra classes (not represented in the domain). The sensitivity of MULANN performance to the value of p and its difference to the ground truth p is investigated on MNIST↔MNIST-M. A first remark is that discrepancies between p and p has no influence on the accuracy on a domain without unlabeled datapoints (Fig. 4 in Appendix). FIG1, right, displays the error depending on p for various values of p. As could have been expected, it is better to underestimate than to overestimate p; it is even better to slightly underestimate it than to get it right, as the entropy ranking of unlabeled examples can be perturbed by classifier errors. Impact of class/domain asymmetry. Section 4.2 reports on the classification accuracy when all classes are represented in all domains of a given shift. In the general case however, the classes represented by the unlabeled examples are unknown, hence there might exist "orphan" classes, with labeled or unlabeled samples, unique to a single domain. The impact of such orphan classes, referred to as class asymmetry, is investigated in the 2-domain case. Four types of samples are considered TAB5: A class might have labeled examples in both domains (α), labeled in one domain and unlabeled in the other domain (β), labeled in one domain and absent in the other one (orphan γ), and finally unlabeled in one domain and absent in the other one (orphan δ). 
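The P + CORAL baseline in Table 2 aligns the second-order statistics of profile features across domains before classification. The following is a sketch of standard CORAL (whiten with the source covariance, re-color with the target covariance); how it is applied per batch or per domain in the experiments is a choice not specified here.

import numpy as np

def _spd_power(C, power):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * w ** power) @ V.T

def coral_align(source, target, eps=1.0):
    """Align source features to the target's second-order statistics (CORAL).
    source, target : (n_samples, n_features) feature arrays."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    return source @ _spd_power(cs, -0.5) @ _spd_power(ct, 0.5)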
The impact of the class asymmetry is displayed on FIG3, reporting the average classification accuracy of α, β classes on domain 1 on the x-axis, and classification accuracy of unlabeled β classes on domain 2 on the y-axis, for MULANN, DANN and MADA on OFFICE (on CELL in Fig. 5, Appendix).A clear trend is that adding labeled orphans γ (case "2", FIG3) entails a loss of accuracy for all algorithms compared to the no-orphan reference (case "1"). This is explained as follows: on the one hand, the γ samples are subject to the classifier pressure as all labeled samples; on the other hand, they must be shuffled with samples from domain 2 due to the domain discriminator(s) pressure. Thus, the easiest solution is to shuffle the unlabeled β samples around, and the loss of accuracy on these β samples is very significant (the "2" is lower on the y-axis compared to "1" for all algorithms). The perturbation is less severe for the labeled (α, β) samples in domain 1, which are preserved by the classifier pressure (x-axis).The in case "3" are consistent with the above explanation: since the unlabeled δ samples are only seen by the discriminator(s), their addition has little impact on either the labeled or unlabeled data classification accuracy FIG3. Finally, there is no clear trend in the impact of both labeled and unlabeled orphans (case "4"): labeled (α, β) (resp. unlabeled β) are only affected for MADA on CELL (resp. MULANN on OFFICE). Overall, these show that class asymmetry matters for practical applications of transfer learning, and can adversely affect all three adversarial methods FIG3, with asymmetry in labeled class content ("2") being the most detrimental to model performance. This paper extends the use of domain adversarial learning to multi-domain learning, establishing how the H-divergence can be used to bound both the risk across all domains and the worst-domain risk (imbalance on a specific domain). The stress is put on the notion of class asymmetry, that is, when some domains contain labeled or unlabeled examples of classes not present in other domains. Showing the significant impact of class asymmetry on the state of the art, this paper also introduces MULANN, where a new loss is meant to resist the contractive effects of the adversarial domain discriminator and to repulse (a fraction of) unlabeled examples from labeled ones in each domain. The merits of the approach are satisfactorily demonstrated by comparison to DANN and MADA on DIGITS, RoadSigns and OFFICE, and obtained on the real-world CELL problem establish a new baseline for the microscopy image community. A perspective for further study is to bridge the gap between the proposed loss and importance sampling techniques, iteratively exploiting the latent representation to identify orphan samples and adapt the loss while learning. Further work will also focus on how to identify and preserve relevant domain-specific behaviours while learning in a domain adversarial setting (e.g., if different cell types have distinct responses to the same class of perturbations). This work was supported by NIH RO1 CA184984 (LFW), R01GM112690 (SJA) and the Institute of Computational Health Sciences at UCSF (SJA and LFW). We thank the Shoichet lab (UCSF) for access to their GPUs and Theresa Gebert for suggestions and feedback. In the field of computer vision, another way of mapping examples in one domain onto the other domain is image-to-image translation. 
In the supervised case (the true pairs made of an image and its translation are given), Pic2Pix trains a conditional GAN to discriminate true pairs from fake ones. In the unsupervised case, another loss is designed to enforce cycle consistency (simultaneously learning the mapping φ from domain A to B, ψ from B to A, and requiring φoψ =Id) BID75. Note that translation approaches do not per se address domain adaptation as they are agnostic w.r.t. the classes. Additional losses are used to overcome this limitation: Domain transfer network (DTN) BID64 uses an auto-encoder-like loss in the latent space; GenToAdapt BID53 ) uses a classifier loss in the latent space; UNIT BID36 ) uses a VAE loss. StarGAN BID15 combines image-to-image translation with a GAN, where the discriminator is trained to discriminate true from fake pairs on the one hand, and the domain on the other hand. ComboGAN BID1 learns two networks per domain, an encoder and a decoder. DIRT-T BID57 ) uses a conditional GAN and a classifier in the latent space, with two additional losses, respectively enforcing the cluster assumption (the classifier boundary should not cross high density region) and a virtual adversarial training (the hypothesis should be invariant under slight perturbations of the input). Interestingly, DA and MDL (like deep learning in general) tend to combine quite some losses; two benefits are expected from using a mixture of losses, a smoother optimization landscape and a good stability of the representation BID11. Definition. BID32 BID4 Given a domain X, two distributions D and D over that domain and a binary hypothesis class H on X, the H-divergence between D and D is defined as: DISPLAYFORM0 Theorem 2. Given an input space X, we consider n distributions D i over X ×{0; 1} and a hypothesis class H on X of VC dimension d. Let α and γ be in the simplex of dimension n. If S is a sample of size m which contains γ i m samples from D i, andĥ is the empirical minimizer of i α iˆ i on (S i) i, then for any δ > 0, with probability at least 1 − δ, the compound empirical error is upper bounded as: DISPLAYFORM0 with DISPLAYFORM1 A tighter bound can be obtained by BID5 operates on the symmetric difference hypothesis space H∆H. However, divergence H∆H does not lend itself to empirical estimation: even BID5 fall back on H-divergence in their empirical validation. DISPLAYFORM2 the n-dimensional simplex and h ∈ H, we note α (h) = i α i i (h).We have for α in the simplex of dimension n, h ∈ H and j ∈ {1, . . ., m}, using the triangle inequality (similarly to the proof of Theorem 4 in BID5) DISPLAYFORM0 The last line follows from the definitions of β i,j and H-divergence. Thus using lemma 6 in DISPLAYFORM1 Hence the . Proof of proposition 1 We have for h ∈ H and j ∈ [1, . . ., m], using the triangle inequality and the definition of i (similarly to the proof of Theorem 1 in BID4) DISPLAYFORM0 We have for i DISPLAYFORM1 The second line follows from the triangle inequality and the definition of the H-divergence. Thus DISPLAYFORM2 By symmetry we obtain 1 n DISPLAYFORM3 Thus the . Proposition 3. Given a domain X, m distributions D i over X × {0; 1} and a hypothesis class H on X, we have for h ∈ H and j ∈ [1, . . ., m] DISPLAYFORM4 The second line follows from Lemma 3 from BID5, and the third from the triangle inequality. From this and proposition 1 we obtain the . Corollaries for the 2-domain case Corollary 4. 
Given a domain X, two distributions D S and D T over X × {0, 1} and a hypothesis class H on X, we have for DISPLAYFORM5 Corollary 5. Given a domain X, two distributions D S and D T over X × {0; 1} and a hypothesis class H on X, we have for DISPLAYFORM6 This dataset is extracted from that published in BID31. It contains 455 biologically active images, in 11 classes, on four 384-well plates, in three channels: H2B-CFP, XRCC5-YFP and cytoplasmic-mCherry. Our analysis used 10 classes:'Actin','Aurora','DNA','ER','HDAC','Hsp90','MT','PLK','Proteasome','mTOR'.On top of the quality control from the original paper, a visual quality control was implemented to remove images with only apoptotic cells, and XRCC5-YFP channel images were smoothed using a median filter of size 2 using SciPy BID29. This dataset is designed to be similar to the Texas domain BID31, generated using the same cell line, but in a different laboratory, by a different biologist, and using different equipment. It contains 1,077 biologically active images, in 10 classes, on ten 384-well plates, in three channels: H2B-CFP, XRCC5-YFP and cytoplasmic-mCherry. The classes are:'Actin','Aurora','DNA','ER','HDAC','Hsp90','MT','PLK','Proteasome','mTOR'.Cell culture, drug screening and image acquisition Previously BID31, retroviral transduction of a marker plasmid "pSeg" was used to stably express H2B-CFP and cytoplasmicmCherry tags in A549 human lung adenocarcinoma cells. A CD-tagging approach BID58 was used to add an N-terminal YFP tag to endogenous XRCC5.Cells were maintained in RPMI1640 media containing 10% FBS, 2 mM glutamine, 50 units/ml penicillin, and 50 µg/ml streptomycin (all from Life Technologies, Inc.), at 37 • C, 5% CO 2 and 100% humidity. 24h prior to drug addition, cells were seeded onto 384-well plate at a density of 1200 cells/well. Following compound addition, cells were incubated at 37• C for 48 hours. Images were then acquired using a GE InCell Analyzer 2000. One image was acquired per well using a 10x objective lens with 2x2 binning. Image processing Uneven illumination was corrected as described in BID61. Background noise was removed using the ImageJ RollingBall plugin BID54. Images were segmented, object features extracted and biological activity determined as previously described BID31. A visual quality control was implemented to remove images with obvious anomalies (e.g. presence of a hair or out-of-focus image) and images with only apoptotic cells. YFP-XRCC5 channel images were smoothed using a median filter of size 2. This dataset was published by BID12 and retrieved from BID37. It contains 879 biologically active images of MCF7 breast adenocarcinoma cells, in 15 classes on 55 96-well plates, in 3 channels: Alexa Fluor 488 (Tubulin), Alexa Fluor 568 (Actin) and DAPI (nuclei). Classes with fewer than 15 images and absent from the other datasets ("Calcium regulation", "Cholesterol", "Epithelial", "MEK", "mTOR") were not used, which leaves 10 classes:'Actin','Aurora','DNA','ER','Eg5 inhibitor','HDAC','Kinase','MT','Proteasome','Protein synthesis'.Image processing As the images were acquired using a 20X objective, they were stitched using ImageJ plugin BID50 ) and down-scaled 2 times. Cells thus appear the same size as in the other domains. Images were segmented, object features extracted and biological activity obtained as previously described BID31. A visual quality control was implemented to remove images with obvious anomalies and images with only apoptotic cells. 
Images with too few cells were also removed: an Otsu filter BID45 was used to estimate the percentage of pixels containing nuclei in each image, and images with less than 1% nuclear pixels were removed. Tubulin channel images were smoothed using a median filter of size 2. Images which were not significantly distinct from negative controls were identified as previously BID31 and excluded from our analysis. Previous work on the England dataset further focused on images which "clearly [have] one of 12 different primary mechanims of action" BID37. We chose not to do so, since it in a simpler problem (90% accuracy easy to reach) with much less room for improvement. Images from all domains were down-scaled 4 times and flattened to form RGB images. Images were normalized by subtracting the intensity values from negative controls (DMSO) of the same plate in each channel. England, Texas and California share images for cell nucleus and cytoplasm, but their third channel differs: Texas and California shows the protein XRCC5, whereas England shows the Actin protein. Therefore, the experiments which combine Texas and England, and California and England used only the first two channels, feeding an empty third channel into the network. Similarly, profiles contain 443 features which are related to the first two channels, and 202 features which are related to the third channel. Only the former were used in experiments which involve the England dataset. Shift Dom. 2, labeled classes Domain 2, unlabeled classes E-C HDAC, Proteasome, Actin, Aurora DNA, MT, ER C-T DNA, HDAC, MT, ER, Aurora, mTOR, PLK Actin, Proteasome, Hsp90 T-E DNA, MT, Proteasome, Actin, ER Aurora, HDAC, Actin C-T-E DNA, MT, Proteasome, Actin, ER Aurora, HDAC, Actin BID22 BID66, a bottleneck fully connected layer is added after the last dense layer of VGG-16. Learning rates on weights (resp. biases) from "from scratch" layers is ten (resp, twenty) times that on parameters of fine-tuned layers. Instance normalization is used on DIGITS, whereas global normalization is used on OFFICE and CELL. Exponentially increasing, constant ζ 0.1, 0.8 TAB10: Range of hyper-parameters which were evaluated in cross-validation experiments. Exponentially decreasing schedule, exponentially increasing schedule, indiv. lr (learning rates from layers which were trained from scratch are multiplied by 10), as in BID22 ).E ADDITIONAL E.1 3-DOMAIN ON OFFICE We use tSNE BID69 to visualize the common feature space in the example of Webcam → Amazon. FIG3 shows that classes are overall better separated with MULANN. In particular, when using MULANN, unlabeled examples (blue) are both more grouped and closer to labeled points from the other domain. E.3 SEMI-SUPERVISED MDL ON THE BIO DATASET
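(As an aside on the image quality-control steps described in the dataset sections above: a rough sketch of the Otsu-based cell-count check and the median-filter smoothing, assuming scikit-image and SciPy are available. The 1% cut-off and the filter size follow the text; everything else is illustrative.)

import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def smooth_channel(channel):
    # Median filter of size 2, as applied to the YFP-XRCC5 / Tubulin channels.
    return median_filter(channel, size=2)

def keep_image(nuclei_channel, min_nuclear_fraction=0.01):
    # Estimate the fraction of nuclear pixels with an Otsu threshold and
    # discard images below the 1% cut-off mentioned above.
    thresh = threshold_otsu(nuclei_channel)
    return np.mean(nuclei_channel > thresh) >= min_nuclear_fraction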
Adversarial domain adaptation and multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting.
555
scitldr
We introduce our Distribution Regression Network (DRN) which performs regression from input probability distributions to output probability distributions. Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions. On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art. Furthermore, DRN generalizes the conventional multilayer perceptron (MLP). In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution. The field of regression analysis is largely established with methods ranging from linear least squares to multilayer perceptrons. However, the scope of the regression is mostly limited to real valued inputs and outputs BID4 BID14. In this paper, we perform distribution-todistribution regression where one regresses from input probability distributions to output probability distributions. Distribution-to-distribution regression (see work by BID17) has not been as widely studied compared to the related task of functional regression BID3. Nevertheless, regression on distributions has many relevant applications. In the study of human populations, probability distributions capture the collective characteristics of the people. Potential applications include predicting voting outcomes of demographic groups BID5 and predicting economic growth from income distribution BID19. In particular, distribution-to-distribution regression is very useful in predicting future outcomes of phenomena driven by stochastic processes. For instance, the Ornstein-Uhlenbeck process, which exhibits a mean-reverting random walk, has wide-ranging applications. In the commodity market, prices exhibit mean-reverting patterns due to market forces BID23. It is also used in quantitative biology to model phenotypic traits evolution BID0.Variants of the distribution regression task have been explored in literature BID18. For the distribution-to-distribution regression task, BID17 proposed an instance-based learning method where a linear smoother estimator (LSE) is applied across the inputoutput distributions. However, the computation time of LSE scales badly with the size of the dataset. To that end, BID16 developed the Triple-Basis Estimator (3BE) where the prediction time is independent of the number of data by using basis representations of distributions and Random Kitchen Sink basis functions. BID9 proposed the Extrapolating the Distribution Dynamics (EDD) method which predicts the future state of a time-varying probability distribution given a sequence of samples from previous time steps. However, it is unclear how it can be used for the general case of regressing distributions of different objects. Our proposed Distribution Regression Network (DRN) is based on a completely different scheme of network learning, motivated by spin models in statistical physics and similar to artificial neural networks. In many variants of the artificial neural network, the network encodes real values in the nodes BID21 BID10 BID1. DRN is novel in that it generalizes the conventional multilayer perceptron (MLP) by encoding a probability distribution in each node. Each distribution in DRN is treated as a single object which is then processed by the connecting weights. Hence, the propagation behavior in DRN is much richer, enabling DRN to represent distribution regression mappings with fewer parameters than MLP. 
We experimentally demonstrate that compared to existing methods, DRN achieves comparable or better regression performance with fewer model parameters. Figure 1: (Left) An example DRN with multiple input probability distributions and multiple hidden layers mapping to an output probability distribution. (Right) A connection unit in the network, with 3 input nodes in layer l − 1 connecting to a node in layer l. Each node encodes a probability distribution, as illustrated by the probability density function P (l) k. The tunable parameters are the connecting weights and the bias parameters at the output node. and Y i are univariate continuous distributions with compact support, the regression task is to learn the function f which maps the input distributions to the output distribution. DISPLAYFORM0 No further assumptions are made on the form of the distribution. It is trivial to generalize our method to regress to multiple output distributions but for simplicity of explanation we shall restrict to single output regressions in the following discussions. Fig. 1 illustrates how the regression in Eq. is realized. DRN generalizes the traditional neural network structure by encoding each node with a probability distribution and connecting the nodes with real-valued weights. The input data consists of one or more probability distributions which are fed into the first layer and propagated layerwise through the hidden layers. We emphasize our network is not a Bayesian network even though each node encodes a probability. Unlike bayes net where the conditional probability among variables are learnt by maximizing the likelihood over observed data, DRN regresses probability distributions using a feedforward network, similar to MLP. At each node in the hidden layer, the probability distribution is computed from the probability distributions of the incoming nodes in the previous layer and the network parameters consisting of the weights and bias parameters (see right of Fig. 1). Pk represents the probability density function (pdf) of the k th node in the l th layer and P DISPLAYFORM0 k ) is the density of the pdf when the node variable is s DISPLAYFORM1 Before obtaining the probability distribution P (l) k, we first compute its unnormalized formP DISPLAYFORM2 k is computed by marginalizing over the product of the unnormalized conditional probabilitỹ Q(s DISPLAYFORM3) and the incoming node probabilities. DISPLAYFORM4 represent the variables of the lower layer nodes and E is the energy given a set of node variables, which we define later in Eq.. The unnormalized conditional probability has the same form as the Boltzmann distribution in statistical mechanics, except that the partition function is omitted. This omission reduces the computational complexity of our model through factorization, shown later in Eq..Our energy function formulation is motivated by work on spin models in statistical physics where spin alignment to coupling fields and critical phenomena are studied BID12 BID8 BID27. Energy functions are also used in other network models where a scalar energy is associated to each configuration of the nodes BID24 BID11. In such energybased models, the parameters are learnt such that the observed configurations of the variables have lower energies than unobserved ones. However, the energy function used in DRN is part of the forward propagation process and is not directly optimized. 
For a given set of node variables, the energy function is DISPLAYFORM5 ki is the weight connecting the i th node in the lower layer to the upper layer node. b respectively. The support length of the distribution is given by ∆. All terms in Eq. are normalized by the support length so that the energy function is invariant with respect to the support. Eq. can be factorized such that instead of having multidimensional integrals, there are n univariate integrals: DISPLAYFORM6 k ) captures the bias terms of the energy function in Eq.. DISPLAYFORM7 Finally, the probability distribution from Eq. is normalized. DISPLAYFORM8 The propagation of probability distributions within a connection unit forms the basis for forward propagation. Forward propagation is performed layerwise from the input layer using Eq. to. The forward propagation in DRN has some important properties. Fig. 2 illustrates the propagation behavior for a connection unit with one input node where the bias values b DISPLAYFORM0 a,k and b DISPLAYFORM1 q,k are set as zero. DISPLAYFORM2 Figure 2: Propagation behavior for a connection unit with one input node. The biases are set as zero in these examples. When weight is zero, the output distribution is flat. Positive weights causes the output distribution to have the same peak position as the input distribution while negative weights causes the output pdf to'repel' away from the input peak. When the weight is a sufficiently large positive number, the propagation tends towards the identity mapping. When the weight is zero, the output distribution is flat and the output distribution is independent of the input. With a positive weight, the output distribution is'attracted' to the peak of the input distribution whereas a negative weight causes the output distribution to be'repelled' away from the input peak. In addition, the weight magnitude represents the strength of the'attraction' or'repulsion'. When the weight is a sufficiently large positive number, the propagation tends towards the identity mapping (top right example in Fig. 2). The implication is that like in neural networks, a deeper network should have at least the same complexity as a shallow one, as the added layers can produce the identity function. Conversely, a small positive weight causes the output peak to be at the same position as the input peak, but with more spread (second example on left column of Fig. 2).The remaining absolute and quadratic bias terms in Eq. have a similar role as the bias in a traditional neural network. Depending on the bias values b q,k respectively. The weight and bias values play a similar role as the inverse temperature in the Boltzmann distribution in statistical physics BID12 BID27. The cost function of the network given a set network parameters is measured by the Jensen-Shannon (JS) divergence between the label (Y i) and predicted DISPLAYFORM0 and D KL is the Kullback-Liebler divergence. The Jensen-Shannon divergence is a suitable cost function as it is symmetric and bounded. The network cost function C net is the average D JS over all M training data: DISPLAYFORM1 In our experiments, the integrals in Eq. and are performed numerically. This is done through discretization from continuous probability density functions (pdf) to discrete probability mass functions (pmf). Given a continuous pdf with finite support, the range of the continuous variable is partitioned into q equal widths and the probability distribution is binned into the q states. 
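To make the propagation and the cost concrete, the following is a small numpy sketch of one connection unit with a single input node in the discretised (q-state) setting, together with the Jensen-Shannon cost. The quadratic coupling and the absolute/quadratic bias terms follow the description above, but the exact energy expression is the one given in the paper's equations, so this should be read as an illustration rather than a reference implementation.

import numpy as np

def discretize_pdf(pdf, support=(0.0, 1.0), q=100):
    # Partition the support into q equal-width bins and bin the pdf mass.
    edges = np.linspace(support[0], support[1], q + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    pmf = pdf(centres) * (edges[1] - edges[0])
    return centres, pmf / pmf.sum()

def propagate(pmf_in, s, w, b_a, b_q, lam_a, lam_q):
    # One connection unit with a single input node; s are the bin centres and
    # pmf_in is the discretised input distribution (output support = input support).
    delta = s[-1] - s[0]                        # support length
    diff = (s[:, None] - s[None, :]) / delta    # all (s_k, s_i) pairs
    energy = (w * diff ** 2
              + b_a * np.abs((s[:, None] - lam_a) / delta)
              + b_q * ((s[:, None] - lam_q) / delta) ** 2)
    q_tilde = np.exp(-energy)                   # unnormalised Boltzmann factor
    pmf_out = q_tilde @ pmf_in                  # marginalise over the input node
    return pmf_out / pmf_out.sum()              # normalise

def js_divergence(p, t, eps=1e-12):
    m = 0.5 * (p + t)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(t, m)

s, pmf = discretize_pdf(lambda x: np.exp(-0.5 * ((x - 0.4) / 0.1) ** 2))
out = propagate(pmf, s, w=75.0, b_a=0.0, b_q=0.0, lam_a=0.0, lam_q=0.0)
print(js_divergence(out, pmf))

With several input nodes the per-input marginals would be multiplied together before the final normalisation, following the factorised form above.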
The estimation error arising from the discretization step will decrease with larger q. The network cost is a differentiable function over the network parameters. We derive the cost gradients similar to backpropagation in neural networks BID22. We use chain rule to derive at each node a q-by-q matrix which denotes the derivative of the final layer node distribution with respect to the current node distribution. DISPLAYFORM0 where DISPLAYFORM1 is the final layer output probability distribution. From the derivative DISPLAYFORM2 We evaluate DRN on synthetic and real-world datasets and compare its performance to the state-ofthe-art 3BE method and a fully-connected multilayer perceptron (MLP). For each of the datasets, DRN achieves similar or higher accuracy with fewer model parameters. In MLP, each discretized probability mass function is represented by q nodes. The MLP consists of fully connected hidden layers with ReLU units and a softmax final layer, and is optimized with mean squared error using Adam. Unlike DRN and MLP where the distribution pdfs are directly used by the methods, 3BE assumes the input and output distributions are observed through i.i.d. samples. Hence, for the first two datasets we provide 3BE with sufficient samples from the underlying distribution such that errors from density estimation are minimal. The first experiment involves a synthetic dataset similar to the one used by BID17 DISPLAYFORM0 The function h transforms the means and standard deviations using the non-linear function shown in Fig. 3a. The transformation is such that the two gaussian means will remain in their respective ranges. The sample input-output data pairs in Fig. 3b shows the complexity of the regression task with various behavior like peak splitting and peak spreading. 1000 training data and 1000 testing data were created to evaluate the regression methods. For DRN and MLP, the pdfs are discretized into q = 100 states and for 3BE, 10,000 samples from each data distribution are generated. While 3BE gives a continuous distribution as the output, DRN and MLP output the discrete pmf and require conversion to continuous pdf. Following BID18, the regression performance on the test set is measured by the L2 loss between the continuous predicted distribution,Ŷ (s) and the true distribution. We study how the regression accuracy varies with respect to the number of model parameters. For DRN and MLP, the number of parameters are varied using different depths and widths of the networks and for 3BE, we vary the number of Random Kitchen Sink features. We present the detailed DRN architecture in Appendix B. Fig. 4a shows the L2 loss on the test set as we vary the number of model parameters. Note that the x-axis is presented on the log scale. DRN's test performance is comparable to the other methods and uses fewer model parameters to attain reasonable performance. We note there is little overfitting for the three methods, as shown in the plots comparing train and test loss in Fig. 4b, though 3BE starts to exhibit overfitting when the number of model parameters approaches 10,000. Because of the Boltzmann distribution term (ref. Eq. 3), DRN models the diffusion process very well. For this experiment, we evaluate our model on data generated from the stochastic OrnsteinUhlenbeck (OU) process BID25 which combines the notion of random walk with a drift towards a long-term mean. The OU process has wide-ranging applications. 
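(Two small helpers matching the evaluation just described; this is a sketch that assumes the predicted and true pdfs are evaluated on the same uniform grid over the support.)

import numpy as np

def pmf_to_pdf(pmf, support=(0.0, 1.0)):
    # Convert a discrete pmf back to a piecewise-constant pdf by dividing by
    # the bin width, as required before computing the L2 loss.
    return pmf * len(pmf) / (support[1] - support[0])

def l2_loss(pred_pdf, true_pdf, support=(0.0, 1.0)):
    xs = np.linspace(support[0], support[1], len(pred_pdf))
    return np.trapz((pred_pdf - true_pdf) ** 2, xs)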
In the commodity market, prices exhibit mean-reverting patterns due to market forces and hence modelling the prices with the OU process helps form valuation strategies BID23 BID28.The OU process is described by a time-varying gaussian pdf. With the long-term mean set at zero, the pdf has a mean of µ(t) = y exp(−θt) and variance of σ 2 (t) = DISPLAYFORM0. t represents time, y is the initial point mass position, and D and θ are the diffusion and drift coefficients respectively. The regression task is to map from an initial gaussian distribution at t init to the ing distribution after some time step ∆t. The gaussian distributions are truncated with support of. With different sampled values for y ∈ [0.3, 0.9] and t init ∈ [0.01, 2], pairs of distributions are created for ∆t = 1, D = 0.003 and θ = 0.1. For DRN and MLP, q = 100 was used for discretization of the pdfs while 10,000 samples were taken for each distribution to train 3BE. We compare the number of model parameters required to achieve a small L2 test loss with 100 training data. We also increased the training size to 1000 and attained similar . TAB0 and FIG5 show that a simple DRN of one input node connecting to one output node with 5 parameters performs similarly as MLP and 3BE. MLP requires 1 fully-connected hidden layer with 3 nodes, with a total of 703 network parameters. 3BE requires 64 projection coefficients for both input and output distributions and 17 Random Kitchen Sink features, ing in 272 model parameters. The regression by DRN on two random test samples are shown in FIG5 and we see that DRN is able to demonstrate the OU process. FIG5 shows the 5 DRN parameters after training. The values of these parameters are interpreted as follows. The weight parameter is positive, hence the output peak position is positively correlated to the input peak position. Moreover, w = 75.3 is such that the network mimics the diffusion property of the OU process. The bias position λ a is negative and its magnitude is 5 times the distribution support, causing the output peak to be displaced leftwards of the input peak. These two observations reflect the random walk and mean-reverting properties of the OU process. Figure 6: Single-layer network used in DRN for the stock dataset with 7 model parameters (3 weights, 4 bias parameters). We demonstrate that DRN can be useful for an important real-world problem and outperforms 3BE and MLP in terms of prediction accuracy. With greater integration of the global stock markets, there is significant co-movement of stock indices BID6 BID2. In a study by BID26, it was found that the previous day stock returns of the Nikkei and Dow Jones Industrial Average (Dow) are good predictors of the FTSE return. Modelling the co-movement of global stock indices has its value as it facilitates investment decisions. Stock indices are weighted average of the constituent companies' prices in a stock exchange, and existing research has primarily focused on the movement of returns of the indices. However, for our experiment, we predict the future distribution of returns over the constituent companies in the index as it provides more information than just a weighted average. Our regression task is as follows. Given the current day's distribution of returns of constituent companies in FTSE, Dow and Nikkei, predict the distribution of returns for constituent companies in FTSE k days later. 
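(A side note on generating the OU training pairs described earlier: the sketch below uses the stated mean µ(t) = y exp(−θt) and, since the exact variance expression is not reproduced in the text, assumes the standard OU result σ²(t) = (D/θ)(1 − exp(−2θt)); the support range is an illustrative choice.)

import numpy as np

def ou_pdf(t, y, D=0.003, theta=0.1, q=100, support=(0.0, 1.0)):
    mu = y * np.exp(-theta * t)
    var = (D / theta) * (1.0 - np.exp(-2.0 * theta * t))
    s = np.linspace(support[0], support[1], q)
    pdf = np.exp(-0.5 * (s - mu) ** 2 / var)
    return pdf / np.trapz(pdf, s)               # truncate and renormalise

def make_pair(rng, dt=1.0):
    y = rng.uniform(0.3, 0.9)                   # sampled initial point mass position
    t0 = rng.uniform(0.01, 2.0)                 # sampled initial time
    return ou_pdf(t0, y), ou_pdf(t0 + dt, y)

x_pdf, y_pdf = make_pair(np.random.RandomState(0))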
The logarithmic return for the company's stock at day t is given by ln(V t /V t−1), where V t and V t−1 represent its closing price at day t and t − 1 respectively. The stock data consists of 9 years of daily returns from January 2007 to December 2015. To adapt to changing market conditions, we use a sliding-window training scheme where the data is split into windows of training, validation and test sets and moved foward in time BID7. A new window is created and the network is retrained after every 300 days (which is the size of test set). For each test set, the previous 500 and 100 days were used for training and validation. To reduce the noise in the data, we performed exponential window averaging on the price series for each stock with a window of 50 days following common practice BID15. The logarithmic returns of the constituent company stocks form the samples for the distributions of the returns. For DRN and MLP, the pdf is estimated using kernel density estimation with a gaussian kernel function with bandwidth of 0.001 and q = 100 was used for discretization of the pdf. The authors of 3BE have extended their method for multiple input functions (see joint motion prediction experiment in BID16). We followed their method and concatenated the basis coefficients obtained from the three input distributions. In addition, for 3BE we scale the return samples to before applying cosine basis projection. The predicted distribution is then scaled back to the original range for quantification of the regression performance. First, we performed evaluations for the task of predicting the next-day distributions. As we do not have the underlying true pdf for this real-world dataset, the regression performance is measured by the log-likelihood of the test samples. TAB1 shows the test log-likelihoods, where higher loglikelihood is favorable. Interestingly, the single-layer network in DRN (see Fig. 6) was sufficient to perform well, using just 7 network parameters. In comparison, MLP and 3BE require 4110 and 8100 parameters respectively. To visualize the regression on the test set, we compare for each day the first two moments (mean and variance) of the predicted distribution and the ground truth (see 1-day ahead panels of FIG6). Each point represents one test data and we show the Pearson correlation coefficients between the predicted and labelled moments. DRN has the best regression performance as the points lie closest to the diagonal line where the predicted and labelled moments are equal, and its correlation values are highest. As an extension, we predict the FTSE returns distribution several days ahead. The second and third rows of FIG6 and FIG6 show the moment plots for 5 and 10 days ahead respectively. Expectedly, the performance deterioriates as the number of days increases. Still, DRN outperforms the rest as shown by the moment plots and the correlation values. FIG7 summarizes the by showing the average absolute error of the mean and variance as the number of days-ahead increases. For all experiments, DRN consistently has the lowest error. Finally, we conducted experiments on a real-world cell dataset similar to the one used in BID17. The dataset is a time-series of images of NIH3T3 fibroblast cells. There are 277 time frames taken at 5-minute intervals, containing 176 to 222 cells each. In each frame, we measured the long and short-axis nuclear length of the cells and scaled the lengths to. 
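(Both this experiment and the cell experiment below turn observed samples into a discretised input distribution by Gaussian kernel density estimation. A minimal sketch with an absolute bandwidth is given here; the bandwidths 0.001 and 0.02 are the values quoted in the text, whereas the support range below is an illustrative assumption.)

import numpy as np

def kde_pmf(samples, bandwidth, q=100, support=(-0.05, 0.05)):
    # Gaussian KDE with an absolute bandwidth, evaluated on a q-point grid
    # and renormalised to a pmf.
    s = np.linspace(support[0], support[1], q)
    dens = np.exp(-0.5 * ((s[:, None] - samples[None, :]) / bandwidth) ** 2).sum(axis=1)
    return dens / dens.sum()

# e.g. one day's log returns over the constituent companies of an index
prices_t = np.array([101.0, 99.5, 103.2])
prices_tm1 = np.array([100.0, 100.0, 100.0])
pmf = kde_pmf(np.log(prices_t / prices_tm1), bandwidth=0.001)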
At each time-frame, given the distribution of long-axis length, we predict the distribution of the short-axis length. The first 200 frames were used for training and last 77 for testing. For DRN and MLP, the pdf is estimated using kernel density estimation with a gaussian kernel of bandwidth 0.02 and q = 100 was used for discretization. We compare the log-likelihood on test data in TAB3. DRN had the best loglikelihood with a simple network of one input node connecting to one output node. In contrast, MLP and 3BE used more model parameters but achieved lower log-likelihoods. This validated DRN's advantage at learning distribution regressions on real-world data with fewer model parameters. The distribution-to-distribution regression task has many useful applications ranging from population studies to stock market prediction. In this paper, we propose our Distribution Regression Network which generalizes the MLP framework by encoding a probability distribution in each node. Our DRN is able to learn the regression mappings with fewer model parameters compared to MLP and 3BE. MLP has not been used for distribution-to-distribution regression in literature and we have adapted it for this task. Though both DRN and MLP are network-based methods, they encode the distribution very differently. By generalizing each node to encode a distribution, each distribution in DRN is treated as a single object which is then processed by the connecting weight. Thus, the propagation behavior in DRN is much richer, enabling DRN to represent the regression mappings with fewer parameters. In 3BE, the number of model parameters scales linearly with the number of projection coefficients of the distributions and number of Random Kitchen Sink features. In our experiments, DRN is able to achieve similar or better regression performance using less parameters than 3BE. Furthermore, the runtime for DRN is competitive with other methods (see comparison of mean prediction times in Appendix C).For future work, we look to extend DRN for variants of the distribution regression task such as distribution-to-real regression and distribution classification. Extensions may also be made for regressing multivariate distributions. In this section, show the DRN network architecture used for the synthetic dataset presented in Fig. 4a. There is one input node and one output node connected by a number of hidden layers of arbitrary width. All layers are fully-connected. We compare the mean prediction time per data for DRN and the baseline methods. All runs were conducted on the CPU. For the synthetic dataset, we have shown the test loss for varying parameter sizes. For a fair comparison of runtime, for each method we chose a model size which gave a test L2 loss of about 0.37. For all the datasets, MLP has the fastest prediction time, followed by DRN and then 3BE.
A learning network which generalizes the MLP framework to perform distribution-to-distribution regression
556
scitldr
Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence. While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call. The time span between events can carry important information about the sequence dependence of human behaviors. In this work, we propose a set of methods for using time in sequence prediction. Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization. We also introduce two methods for using next event duration as regularization for training a sequence prediction model. We discuss these methods based on recurrent neural nets. We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks. The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings. Event sequence prediction is a task to predict the next event 1 based on a sequence of previously occurred events. Event sequence prediction has a broad range of applications, e.g., next word prediction in language modeling BID10, next place prediction based on the previously visited places, or next app to launch given the usage history. Depending on how the temporal information is modeled, event sequence prediction often decomposes into the following two categories: discrete-time event sequence prediction and continuous-time event sequence prediction. Discrete-time event sequence prediction primarily deals with sequences that consist of a series of tokens (events) where each token can be indexed by its order position in the sequence. Thus such a sequence evolves synchronously in natural unit-time steps. These sequences are either inherently time-independent, e.g, each word in a sentence, or ed from sampling a sequential behavior at an equally-spaced point in time, e.g., busy or not busy for an hourly traffic update. In a discrete-time event sequence, the distance between events is measured as the difference of their order positions. As a consequence, for discrete-time event sequence modeling, the primary goal is to predict what event will happen next. Continuous-time event sequence prediction mainly attends to the sequences where the events occur asynchronously. For example, the time interval between consecutive clinical visits of a patient may potentially vary largely. The duration between consecutive log-in events into an online service can change from time to time. Therefore, one primary goal of continuous-time event sequence prediction is to predict when the next event will happen in the near future. Although these two tasks focus on different aspects of a future event, how to learn a proper representation for the temporal information in the past is crucial to both of them. 
More specifically, even though for a few discrete-time event sequence prediction tasks (e.g., neural machine translation), they do not involve an explicit temporal information for each event (token), a proper representation of the position in the sequence is still of great importance, not to mention the more general cases where each event is particularly associated with a timestamp. For example, the next destination people want to go to often depends on what other places they have gone to and how long they have stayed in each place in the past. When the next clinical visit BID3 will occur for a patient depends on the time of the most recent visits and the respective duration between them. Therefore, the temporal information of events and the interval between them are crucial to the event sequence prediction in general. However, how to effectively use and represent time in sequence prediction still largely remains under explored. A natural and straightforward solution is to bring time as an additional input into an existing sequence model (e.g., recurrent neural networks). However, it is notoriously challenging for recurrent neural networks to directly handle continuous input that has a wide value range, as what is shown in our experiments. Alternatively, we are inspired by the fact that humans are very good at characterizing time span as high-level concepts. For example, we would say "watching TV for a little while" instead of using the exact minutes and seconds to describe the duration. We also notice that these high-level descriptions about time are event dependent. For example, watching movies for 30 minutes might feel much shorter than waiting in the line for the same amount of time. Thus, it is desirable to learn and incorporate these time-dependent event representations in general. Our paper offers the following contributions:• We propose two methods for time-dependent event representation in a neural sequence prediction model: time masking of event embedding and event-time joint embedding. We use the time span associated with an event to better characterize the event by manipulating its embedding to give a recurrent model additional resolving power for sequence prediction.• We propose to use next event duration as a regularizer for training a recurrent sequence prediction model. Specifically, we define two flavors of duration-based regularization: one is based on the negative log likelihood of duration prediction error and the other measures the cross entropy loss of duration prediction in a projected categorical space.• We evaluated these proposed methods as well as several baseline methods on five datasets (four are public). These datasets span a diverse range of sequence behaviors, including mobile app usage, song listening pattern, and medical history. The baseline methods include vanilla RNN models and those found in the recent literature. These experiments offer valuable findings about how these methods improve prediction accuracy in a variety of settings. In recent years, recurrent neural networks (RNN) especially with Long-Short Term Memory (LSTM) BID9 have become popular in solving a variety of discretetime event sequence prediction problems, including neural machine translation BID0, image captioning BID21 and speech recognition BID16. 
In a nutshell, given the sequence of previously occurred events, {e 1, e 2, ..., e t}, the conditional probability P(e t+1 |{e 1, e 2, ..., e t}) = P(e t+1 |h t, θ) of the next event e t+1 is estimated by using a recurrent neural network with parameters θ and the hidden state vector h t = f (h t−1, e t, θ) which is assumed to encode the information of the past events. To feed an event into a recurrent neural network, the event, often described as a categorical variable, needs to be represented in a continuous vector space. A common way to achieve this is to use embedding BID1 x t = 1(e t)E x where 1(e t) is a one-hot vector. For the jth event in the vocabulary V, e j, its one-hot vector has 0s for all the entries except the jth entry being 1. E x ∈ R |V |×E is the embedding matrix, where |V | is the number of unique events (the vocabulary size) and E is the embedding dimension. The use of embedding provides a dense representation for an event that improves learning BID18. Through training, the embedding vector of an event encodes its meaning relative to other events. Events that are similar tend to have embedding vectors closer to each other in the embedding space than those that are not. On the other hand, temporal point processes are mathematical abstractions for the continuous-time event sequence prediction task by explicitly modeling the inter-event interval as a continuous random variable. Since the occurrence of an event may be triggered by what happened in the past, we can essentially specify different models for the timing of the next event given what we have already known so far. Very recently, BID6 BID13 BID19 b) focus on expanding the flexibility of temporal point processes using recurrent neural networks where the prediction of the next event time is based on the current hidden state h t of RNN. However, all of these work use the direct concatenation between the inter-event interval and the respective event embedding as the input to the recurrent layer where the representation of the temporal information is limited. Because it is not clear how to properly represent time as input, in this work, we intend to let the model learn a proper representation for encoding temporal information in a sequence, similar to learning embeddings for words. Rather than proposing a new model, our approach should be considered an "embedding" approach for time that can be used by general event sequence prediction models, including models proposed previously BID6 BID13. There are two notions about time spans in a sequential behavior: duration and intervals. Duration is how long an event lasts, e.g., listening to music for an half hour, and an interval is the time span between two adjacent events. To unify both types of time spans, we treat the idle period when no event is occurring (e.g., the person is not using any app for an app usage sequence) as a special event. Thus, duration becomes an inherent property of an event-the interval between two events is the duration of an idle event (see FIG0). With this, h t = f (h t−1, e t, d t ; θ) where d t is the duration of event e t. We here propose two methods to bring continuous time, d t, into a neural sequence prediction model. Both achieve timedependent event representation by manipulating event embedding vectors using time. Our methods are schematically illustrated in Figure 2. Recent work by BID4 revealed that in neural machine translation the embedding vector of a word encodes multiple meanings of the word. 
As a , it requires a recurrent layer to sacrifice its capacity to disambiguate a word based on its context, instead of focusing on its main task for learning the higher-level compositional structure of a sentence. To address this problem, they used a mask computed based on all the words in a sentence to contextualize the embedding of a target word. Based on this recent work, we propose a method to learn a time mask to "contextualize" event embedding, by which we hope a time-dependent embedding would give the recurrent layer additional resolving power. Similar to the word mask proposed by Choi et al. BID4, we first compute a time context vector for duration, c d. DISPLAYFORM0 φ is a nonlinear transformation of d t and is implemented as a feedforward neural network parameterized by θ. d t is log transformed before it is fed to φ to effectively cover the wide numerical range of duration values, e.g., it can range from seconds to hours for app usage events. Figure 2: A time-dependent RNN for event sequence prediction. d t is used to generate time-dependent event embedding. Next event duration can be used as a regularizer, which can be applied to the recurrent layer and/or any post recurrent layer. We compute a time mask by linearly transforming c d with weights W d ∈ R C×E and bias b d ∈ R E, which is followed by a sigmoid nonlinear activation, σ, to generate a mask m d ∈ R E and R E →. C is the size of the time context vector, and E is the event embedding dimension. DISPLAYFORM1 We then apply the mask to an event embedding by performing an element-wise multiplication,, between the embedding vector and the mask. Finally, the product is fed to the recurrent layer. DISPLAYFORM2 3.2 EVENT-TIME JOINT EMBEDDING Humans developed many ways to tokenize continuous time in everyday life. For example, we would say "talk to someone briefly" instead of using exact minutes and seconds to characterize the length of the conversation. Such a kind of tokenization is extensively used in natural languages. In addition, our perception about the duration also depends on the specific event that we are experiencing. Based on these intuitions, we propose a method to first encode the duration of an event using soft one-hot encoding and then use the encoding to form the joint embedding with the event. To do so, we first project the scalar duration value onto a vector space, where W d ∈ R 1×P is the weight matrix, b d ∈ R P is the bias vector, and P is the projection size. DISPLAYFORM3 We then compute the soft one-hot encoding, s d, of a duration value by applying a softmax function to the projection vector, p d. Softmax has been typically used in the output layer BID7 and in the attention mechanisms BID0 BID21 for selecting one out of many. The ith entry of the encoding vector is calculated as the following and p d i is the ith entry in p d. DISPLAYFORM4 All the entries in the soft one-hot encoding are positive. Similar to a regular one-hot encoding, DISPLAYFORM5 We then project the soft one-hot encoding onto a time embedding space, g d. It has the same dimension as the event embedding. E s ∈ R P ×E is the embedding matrix. DISPLAYFORM6 Embedding for a regular one-hot encoding essentially takes a single row of the embedding matrix that is corresponding to the non-zero entry as the embedding vector. In contrast, embedding for a soft one-hot encoding computes a weighted sum over all the rows in the embedding matrix. 
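To make the two representations concrete, the following is a minimal numpy sketch of the forward computations (shapes follow the equations above; φ is reduced to a single ReLU layer here, and all parameter values are random placeholders rather than trained weights).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.RandomState(0)
E, C, P = 128, 64, 32                 # embedding, context and projection sizes
params = dict(
    W_phi=rng.randn(1, C) * 0.1, b_phi=np.zeros(C),   # phi: one ReLU layer here
    W_mask=rng.randn(C, E) * 0.1, b_mask=np.zeros(E),
    W_proj=rng.randn(1, P) * 0.1, b_proj=np.zeros(P),
    E_time=rng.randn(P, E) * 0.1)

def time_mask_embedding(event_emb, duration, p):
    # Time mask: context vector from the log-duration, sigmoid mask, then
    # element-wise product with the event embedding.
    c_d = np.maximum(0.0, np.log(duration) * p["W_phi"] + p["b_phi"]).ravel()
    m_d = sigmoid(c_d @ p["W_mask"] + p["b_mask"])
    return event_emb * m_d

def soft_time_embedding(duration, p):
    # Soft one-hot encoding of the duration (the projection and softmax of
    # Equations 4 and 5), followed by a weighted sum over the rows of the
    # time embedding matrix.
    s_d = softmax((duration * p["W_proj"] + p["b_proj"]).ravel())
    return s_d @ p["E_time"]

event_emb = rng.randn(E) * 0.1
masked = time_mask_embedding(event_emb, duration=30.0, p=params)
g_d = soft_time_embedding(duration=30.0, p=params)

In the joint-embedding variant, g_d is then combined with the event embedding as described next.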
Finally, we form the joint embedding of an event and its duration by taking the mean of their embedding vectors, which is then fed to the recurrent layer. DISPLAYFORM7 4 NEXT EVENT DURATION AS A REGULARIZER While our goal here is to predict next event, it can help learning by introducing an additional loss component based on the prediction of the next event duration (see Figure 2). The duration prediction of the next event at step t, d t+1, is computed from a linear transformation of the recurrent layer. A loss defined on the prediction error of d t+1 provides additional information during back propagation, acting like a regularizer. Optionally, one can use the concatenation of the recurrent layer output and a hidden layer on the path for event prediction to regularize more layers. We discuss two alternatives for the loss function over d t+1. A common way for the loss over a continuous value is to use the squared error. Here, it is (d t+1 − d t+1) 2 where d t+1 is the observed duration of the next event. However, such a loss needs to be at the same scale as that of of event prediction, which is typically a log likelihood of some form. Hinton and Van Camp (Hinton & van) have shown that minimizing the squared error can be in fact formulated as maximizing the probability density of a zero-mean Gaussian distribution. Note that this does not require duration to obey a Gaussian distribution but rather the prediction error. We define our regularizer, R N t, as the negative log likelihood of duration prediction error at step t. DISPLAYFORM0 The variance, σ i, is seeded with an initial value (e.g., the variance of duration values in the training data) and updated iteratively during training based on the duration prediction error distribution of the learned model at each update i. In Section 3.2, we proposed to use softmax to project a continuous duration value onto a categorical space. Using the same technique, by projecting both d t+1 and d t+1 onto a categorical space, we can then compute a cross entropy loss based on the two projections as another regularizer R X t. DISPLAYFORM0 P roj is the softmax projection process we defined in Equation 4 and 5, P roj k is the kth entry in the projection vector. When event-time joint embedding and R X t are both used, the embedding and the regularizer can use the same projection function, i.e., sharing the same projection weights (Equation 4). In this section, we evaluate the effectiveness of our proposed approaches on the following five real-world datasets across a diverse range of domains.• Electrical Medical Records. MIMIC II medical dataset is a collection of de-identified clinical visit records of Intensive Care Unit patients for seven years. The filtered dataset released by BID6 include 650 patients and 204 diseases. The goal is to predict which major disease will happen to a given patient.• Stack Overflow Dataset. The Stack Overflow dataset includes two years of user awards on a question-answering website. The awarded badges are treated as the events. BID6 collected 6,000 users with a total of 480,000 events. The goal is to predict the next badge a user will receive.• Financial Transaction Dataset. BID6 ) collected a long stream of high frequency transactions for a single stock from NYSE where the events correspond to the "buy" and "sell" actions. The task is to predict the next action a user might take.• App Usage Dataset. Mobile users often use a large number of apps, ranging from tens to hundreds. 
It is time consuming to find a target app on mobile devices. One promising way to address this problem is to predict the next app a user will use based on their app usage history. Being able to predict next apps also allows the mobile platform to preload an app in memory to speed up its startup. We have collected 5,891 app usage sequences comprising of 2.8 million app usage events. The task is to predict the next app that will be used for a given user.• Music Recommendation. The music dataset represents the longitudinal listening habits of 992 users BID12 BID2 involving millions of listening events. The goal is to predict the next five unique songs that the user has not listened given the user's listen history. For the MIMIC II, Stack Overflow, and Financial data, we follow BID6 to pre-process the data and seek to predict every single held-out event from the history. We evaluate the prediction accuracy with the binary 0-1 loss. For the app usage data, to avoid users who participated in the data collection only briefly, we exclude sequences that have fewer than 50 app launches or if the time span of the sequence is shorter than a week. This ed in 5,891 app usage sequences, one from each unique user. These sequences include 2,863,095 app usage events and the longest sequence spanned 551 days. We split the dataset on users into the training (80%), validation (10%) and test (10%) such that each user is only in one of these partitions. Hence there is no intersection of users between training, validation and test sets. For an event that has fewer than 5 occurrences in the training dataset, we assign it the OOV id for out of vocabulary. In total, there are 7,327 events in the vocabulary, including 7,325 unique apps, the idle event and the OOV (out of vocabulary). In practice, predicting the next 5 apps is often desired so we use Precision@K to evaluate the performance. For the music recommendation, each listen event has a timestamp. We removed sequences that are shorter than 50 and songs that have fewer than 50 listens. We thus generate a collection of examples where each example consists of a listen history and a set of 5 unique songs to recommend. To do so, we split each original listen sequence into segments. We first take the 40 events out in order from the beginning of the sequence as the listen history, and then take more events out from the beginning of the sequence until we find 5 unique songs that have not occurred in the listen history. We do so repeatedly to extract each example until we exhaust all the original sequences. This data processing ed in 221,920 sequence examples with 71,619 unique songs (the vocabulary size). We then allocate these sequence examples for the training (80%), validation (10%) and test (10%). Because the original dataset does not have the duration information for each listen event, we did not inject the additional idle event in the sequence to differentiate duration versus intervals. Because in practice, the ranking order of the recommended music often matters, we further use MAP@K and Prevision@K to evaluate the performance. We compare with the following five models: NoTime in which a simple LSTM sequence model is used; TimeConcat in which we feed time (log transformed) directly into the recurrent layer along the event embedding; TimeMask (Section 3.1) and TimeJoint (Section 3.2) for generating time-dependent event embedding as input to the recurrent layer; and RMTPP for the model introduced previously by BID6. 
Moreover, we also include four regularized models based on R FORMULA3 ). For the App Usage and Music Recommendation experiments, we use a two-layer hierarchical softmax BID14 for the output layer due to the large vocabulary size, while we use a full sofmax for the rest experiments. For the MIMIC II, Stack Overflow, and Financial data, we follow BID6 for RMTPP's model parameters. For the app usage data, we determined the parameters of each model based on the training and the validation datasets on a distributed parallel tuning infrastructure. We used LSTM units BID9 for the recurrent layer, and Rectified Linear Units (ReLu) BID15 for the activation function in the nonlinear projection layer. The event embedding dimension, the number of LSTM units, and the nonlinear projection layer size are all set to 128. For the music recommendation data, we use a setting similar to the app prediction experiment where we chose the embedding size as 128 and LSTM size as 256. We did not use the nonlinear projection layer after the LSTM layer for this task because it does not seem to help. We implemented all the models in TensorFlow . For the experiments based on MIMIC II, Stack Overflow and Financial Transaction datasets, we use the same training and testing strategy of BID6. For App Usage and Music Recommendation tasks, we selected the model architecture and hyper parameters with early stopping based on the validation dataset of each task, and report the performance of each model based on the test dataset. For the App Usage experiment, we used truncated back-propagation through time with the number of unroll to be 30. We used an adaptive gradient descent optimizer BID22, using a learning rate of 0.024 with a threshold for gradient clipping of 1.0, and a batch size of 32. We decided not to use dropout as it did not seem to improve accuracy on this task. For the Music Recommendation experiment, we used the full sequence back-propagation through time with 2% dropout ratio on the recurrent layer for better generalization. We used the Adam optimizer by BID11 for adaptive learning with a learning rate of 0.00005 and a gradient clipping threshold at 1.0. The mini-batch size is 256.We trained the models by minimizing the cross-entropy loss, plus the regularization loss if the duration regularizer is used, over all the sequences in the training dataset. The training for App Usage and Music Recommendation was conducted on a distributed learning infrastructure BID5 with 50 GPU cores where updates are applied asynchronously across multiple replicas. Effectiveness of Temporal Representation. FIG2 presents the comparisons between all the models on three released public datasets. We can observe a consistent performance gain with using the proposed methods for time-dependent event embedding compared to the NoTime baseline and the simple TimeConcat approach. TimeJoint significantly outperformed all other methods on both the Stack Overflow and the Financial dataset, with p<0.05 using Paired T-test. But none of the methods for using time is able to improve accuracy on the MIMIC II dataset. This indicates that using time might not always help. However, when it does, our methods such as TimeJoint enable more efficient representation of time than simply using the scalar value of time in RNN models. Our methods also outperformed RMTPP for event prediction. The performance gain of our models are more pronounced on the App Usage and Music Recommendation datasets as shown in TAB3 and 2. 
TimeJoint seems to outperform the rest on most measures and TimeMask also performs well compared to other previous methods. We also notice that using time directly without representing them appropriately in RNN, i.e., TimeConcat, can sometime hurt the performance. Effectiveness of Event Duration Regularization. We demonstrate the performance boosting gained from our proposed temporal regularization in TAB3 and 2, respectively. We can observe that our proposed regularizers can bring additional performance gain on many cases. In particular, the crossentropy regularizer, R X t, is able to give consistent performance gain with the temporal embedding approaches. Learned Time Representation. Our motivation in this work is to let the model learn a proper representation of time from data. We here briefly discuss what the TimeJoint approach learns about how to project a scalar value of time into a soft one-hot encoding 4. It seems that for small time periods, e.g., shorter than 20 seconds for the Next App prediction task, more dimensions are needed to express the differences of continuous time values. As the time period grows, we need less dimensions for representing time, e.g., two of the curves have converged to the same small values. We proposed a set of methods for leveraging the temporal information for event sequence prediction. Based on our intuition about how humans tokenize time spans as well as previous work on contextual representation of words, we proposed two methods for time-dependent event representation. They transform a regular event embedding with learned time masking and form time-event joint embedding based on learned soft one-hot encoding. We also introduced two methods for using next duration as a way of regularization for training a sequence prediction model. Experiments on a diverse range of real data demonstrate consistent performance gain by blending time into the event representation before it is fed to a recurrent neural network.
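For completeness, the ranking metrics used in these evaluations, Precision@K and MAP@K, can be computed as in the following illustrative sketch; function and variable names are illustrative rather than taken from the paper.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

def average_precision_at_k(recommended, relevant, k):
    """Average of precision values at the ranks where a relevant item appears."""
    score, hits = 0.0, 0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(all_recommended, all_relevant, k):
    """Mean average precision over a set of test examples."""
    aps = [average_precision_at_k(r, set(t), k)
           for r, t in zip(all_recommended, all_relevant)]
    return sum(aps) / len(aps)

# Example: 5 recommended songs, 2 of which are in the held-out set -> precision 0.4.
print(precision_at_k(["s1", "s2", "s3", "s4", "s5"], {"s2", "s5"}, k=5))
```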
Proposed methods for time-dependent event representation and regularization for sequence prediction; Evaluated these methods on five datasets that involve a range of sequence prediction tasks.
557
scitldr
Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources. We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots. Batch reinforcement learning (RL) is the problem of learning a policy from a fixed, previously recorded, dataset without the opportunity to collect new data through interaction with the environment. This is in contrast to the typical RL setting which alternates between policy improvement and environment interaction (to acquire data for policy evaluation). In many real world domains collecting new data is laborious and costly, both in terms of experimentation time and hardware availability but also in terms of the human labour involved in supervising experiments. This is especially evident in robotics applications (see e.g. Haarnoja et al. 2018b; Kalashnikov et al. 2018 for recent examples learning on robots). In these settings where gathering new data is expensive compared to the cost of learning, batch RL promises to be a powerful solution. There exist a wide class of off-policy algorithms for reinforcement learning designed to handle data generated by a behavior policy µ which might differ from π, the policy that we are interested in learning (see e.g. for an introduction). One might thus expect solving batch RL to be a straightforward application of these algorithms. Surprisingly, for batch RL in continuous control domains, however, found that policies obtained via the naïve application of off-policy methods perform dramatically worse than the policy that was used to generate the data. This highlights the key challenge in batch RL: we need to exhaustively exploit the information that is in the data but avoid drawing for which there is no evidence (i.e. we need to avoid over-valuing state-action sequences not present in the training data). As we will show in this paper, the problems with existing methods in the batch learning setting are further exacerbated when the provided data contains behavioral trajectories from different policies µ 1,..., µ N which solve different tasks, or the same task in different ways (and thus potentially execute conflicting actions) that are not necessarily aligned with the target task that π should accomplish. We empirically show that previously suggested adaptations for off-policy learning can be led astray by behavioral patterns in the data that are consistent (i.e. policies that try to accomplish a different task or a subset of the goals for the target task) but not relevant for the task at hand. This situation is more damaging than learning from noisy or random data where the behavior policy is sub-optimal but is not predictable, i.e. 
the randomness is not a correlated signal that will be picked up by the learning algorithm. We propose to solve this problem by restricting our solutions to'stay close to the relevant data'. This is done by: 1) learning a prior that gives information about which candidate policies are potentially supported by the data (while ensuring that the prior focuses on relevant trajectories), 2) enforcing the policy improvement step to stay close to the learned prior policy. We propose a policy iteration algorithm in which the prior is learned to form an advantage-weighted model of the behavior data. This prior biases the RL policy towards previously experienced actions that also have a high chance of being successful in the current task. Our method enables stable learning from conflicting data sources and we show improvements on competitive baselines in a variety of RL tasks -including standard continuous control benchmarks and multi-task learning for simulated and real-world robots. We also find that utilizing an appropriate prior is sufficient to stabilize learning; demonstrating that the policy evaluation step is implicitly stabilized when a policy iteration algorithm is used -as long as care is taken to faithfully evaluate the value function within temporal difference calculations. This in a simpler algorithm than in previous work . In the following we consider the problem of reinforcement learning, modeling the environment as a markov decision process (MDP) consisting of the continuous states s ∈ S, actions a ∈ A, and transition probability distribution p(s t+1 |s t, a t) -describing the evolution of the system dynamics over time (e.g. probability of reaching s t+1 from state s t when executing action a t) -together with the state-visitation distribution p(s). The goal of reinforcement learning is to find a policy π(a|s) that maximizes the cumulative discounted return, for the reward function r(s, a) ∈ R. We also define the state-action value function for taking action a t in state s t, and thereafter following π:, which we can relate to the objective J via J(π) =, where π * is the optimal policy. We parameterize the policy π θ (a|s) by θ but we will omit this dependency where unambiguous. In some of the experiments we will also consider a setting where we learn about multiple tasks k ∈ {1, . . . K}, each with their own reward function r k (s, a). We condition the policy and Q-function on the task index k (i.e. Q π k and π(a|s, k)), changing the objective to maximize the sum of returns across all tasks. For the batch RL setting we assume that we are given a dataset D µ containing trajectory snippets (i.e. sub-trajectories of length N) τ = {(s 0, a 0), · · ·, (s T, a T)}, with τ ∈ D µ. We assume access to the reward function r for the task of interest and can evaluate it for all transitions in D µ (for example, r may be some function of s). We further assume D µ was filled, prior to training, by following a set of arbitrary N behavior policies µ 1, · · ·, µ N. Note that these behavior policies may try to accomplish the task we are interested in; or might indeed generate trajectories unrelated to the task at hand. To stabilize off-policy RL from batch data, we want to restrict the learned policy to those parts of the state-action space supported by the batch. In practice this means that we need to approximately restrict the policy to the support of the empirical state-conditional action distribution. 
This prevents the policy from taking actions for which the Q-function cannot be trained and for which it might thus give erroneous, overly optimistic values . In this paper we achieve this by adopting a policy iteration procedure -in which the policy is constrained in the improvement step. As in standard policy iteration , the procedure consists of two alternating steps. First, starting with a given policy π i = π θi in iteration i (with π θ0 corresponding to a randomly initialized policy distribution), we find an approximate action-value function Q πi (s, a) ≈Q(s, a; φ i), with parameters φ (Section 3.1) (as with the policy we will drop the dependence on φ i and writeQ πi (s, a) where unambiguous). Second, we optimize for π i+1 with respect toQ πi subject to a constraint that ensures closeness to the empirical state-conditional action distribution of the batch (Section 3.2). Iterating these steps, overall, optimizes J(π). We realize both policy evaluation and improvement via a fixed number of gradient descent steps -holding π andQ πi fixed via the use of target networks . We refer to Algorithm 1 for details. To learn the task action-value function in each iteration we minimize the squared temporal difference error for a given reward -note that when performing offline RL from a batch of data the reward r might be computed post-hoc and does not necessarily correspond to the reward optimized by the behavior policies µ 1, · · ·, µ N. The after iteration i is given as We approximate the expectation required to calculateV πi (s) with M samples from As further discussed in the related work Section 4, the use of policy evaluation is different from the Q-learning approach pursued in; , which requires a maximum over actions and may be more susceptible to overestimation of Q-values. We find that when enough samples are taken (we use M = 20) and the policy is appropriately regularized (see Section 3.2) learning is stable without additional modifications. In the policy improvement step we solve the following constrained optimization problem where D µ is the behavior data, π the policy being learned, and π prior is the prior policy. This is similar to the policy improvement step in but instead of enforcing closeness to the previous policy here the constraint is with respect to a separately learned "prior" policy, the behavior model. The role of π prior in Equation 2 is to keep the policy close to the regime of the actions found in D µ. We consider two different ways to express this idea by learning a prior alongside the policy optimization. It should be noted that, if the prior itself is already good enough to solve the task we can set π i+1 = π prior (corresponding to = 0) -i.e. if the data stems from an expert and the prior takes the learned Q-values into account -skipping the policy improvement step and further simplifying the algorithm. For learning the prior, we first consider simply modeling the raw behavior data. This is similar to the approach of BCQ and BEAR-QL , but we use a parametric behavior model and measure distance by KL; we refer to the related work for a discussion. The behavior model can be learned by maximizing the log likelihood of the observed data where θ bm are the parameters of the behavior model prior. Regularizing towards the behavior model can help to prevent the use of unobserved actions, but it may also prevent the policy from improving over the behavior in D µ. 
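Concretely, the behavior-model prior of Equation 3 amounts to behavioral cloning on the batch. A minimal sketch of one update step, assuming the prior network returns a torch Distribution over actions, might look as follows (names are illustrative, not the authors' code).

```python
import torch

def bm_prior_update(prior_policy, optimizer, batch):
    """One gradient step on the plain behavior model of Equation 3: maximize the
    log-likelihood of the state-action pairs observed in the batch data D_mu."""
    states, actions = batch                        # (B, obs_dim), (B, act_dim)
    dist = prior_policy(states)                    # assumed to return a torch Distribution
    loss = -dist.log_prob(actions).sum(-1).mean()  # negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```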
In effect, the simple behavior prior in Equation 3 regularizes the new policy towards the empirical state-conditional action distribution in D µ. This may be acceptable for datasets dominated by successful trajectories for the task of interest or when the unsuccessful trajectories are not predictable (i.e. they correspond to random behaviour). However, we here are interested in the case where D µ is collected from imperfect data and from multiple tasks. In this case, D µ will contain a diverse set of trajectories -both (partially) successful and actively harmful for the target task. With this in mind, we consider a second learned prior, the advantage-weighted behavior model, π abm, with which we can bias the RL policy to choose actions that are both supported by D µ and also good for the current task (i.e. keep doing actions that work). We can formulate this as maximizing the following objective: where f is an increasing, non-negative function, and the difference R (τ t:N) −V πi is akin to an n-step advantage function, but here calculated off-policy representing the "advantage" of the behavior snippet over the policy π i. This objective still tries to maximize the log likelihood of observed actions, and avoids taking actions not supported by data. However, by "advantage weighting" we focus the model on "good" actions while ignoring poor actions. We let f = 1 + (the unit step function with f (x) = 1 for x ≥ 0 and 0 otherwise) both for simplicty -to keep the number of hyperparameters to a minimum while keeping the prior broad -and because it has an intuitive interpretation: such a prior will start by covering the full data and, over time, filter out trajectories that would lead to worse performance than the current policy, until it eventually converges to the best trajectory snippets contained in the data. We note that Equation 4 is similar to a policy gradient, though samples here stem from the buffer and it will thus not necessarily converge to the optimal policy in itself; π θabm will instead only cover the best trajectories in the data due to no importance weighting being performed for off-policy data. This bias is in fact desirable in a batch-RL setting; we want a broad prior that only considers actions present in the data. We also note that we tried several different functions for f including exponentiation, e.g. f (x) = exp(x), but found that choice of function did not make a significant difference in our experiments. Using either π θbm or π θabm as π prior, Equation 2 can be solved with a variety of optimization schemes. We experimented with an EM-style optimization following the derivations for the MPO algorithm, as well as directly using the stochastic value gradient ofQ πi wrt. policy parameters . We can optimize the objective from Equation 2 using a two-step procedure. Following, we first notice that the optimal π for Equation 2 can be expressed asπ(a|s) ∝ π prior (a|s) exp(Q π i (s,a) /η), where η is a temperature that depends on the used for the KL constraint and can be found automatically by a convex optimization (Appendix B.1). Conveniently, we can sample from this distribution by queryingQ πi using samples from π prior. These samples can then be used to learn the parametric policy by minimizing the divergence KL(π π θi+1), which is equivalent to maximizing the weighted log likelihood which we optimize via gradient descent subject to an additional trust-region constraint on π θ given as KL(π θi π θ) < trust to ensure conservative updates (Appendix B.1). 
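By contrast, the advantage-weighted behavior model of Equation 4 keeps the log-likelihood terms only for trajectory snippets whose return is at least as good as the current value estimate. A hypothetical sketch is given below, with the bootstrapped term at the end of the n-step return omitted for brevity; interfaces and shapes are assumptions.

```python
import torch

def abm_prior_loss(prior_policy, value_fn, snippet, gamma=0.99):
    """Advantage-weighted behavior model (Equation 4) with the unit-step filter f = 1_+:
    behavior cloning, but only on snippets that do at least as well as the current policy."""
    states, actions, rewards = snippet             # (T, obs_dim), (T, act_dim), (T,)
    T = rewards.shape[0]
    with torch.no_grad():
        values = value_fn(states)                  # V^{pi_i}(s_t), shape (T,)
        returns = torch.zeros(T)
        running = 0.0
        for t in reversed(range(T)):               # discounted return-to-go R(tau_{t:T})
            running = rewards[t] + gamma * running
            returns[t] = running
        weights = (returns - values >= 0).float()  # f = unit step on the advantage
    log_prob = prior_policy(states).log_prob(actions).sum(-1)
    return -(weights * log_prob).mean()
```

Either of the two optimizers for the constrained improvement step of Equation 2 can then be used on top of this prior: the EM-style procedure just described, or the value-gradient alternative discussed next.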
Stochastic value gradient optimization Alternatively, we can use Langrangian relaxation to turn Equation 2 into an objective amenable to gradient descent. Inserting π θ for π and relaxing in for η > 0 and which we can optimize by alternating gradient descent steps on θ and η respectively, taking the stochastic gradient of the Q-value through the sampling of a ∼ π θ (·, s) via re-parameterization. See Appendix B.2 for a derivation of this gradient. Full pseudocode for our approach can be found in Appendix A, Algorithm 1. There exist a number of off-policy RL algorithms that have been developed since the inception of the RL paradigm (see e.g. for an overview). Most relevant for our work, some of these have been studied in combination with function approximators (for estimating value functions and policies) with an eye on convergence properties in the batch RL setting. In particular, several papers have theoretically analyzed the accumulation of bootstrapping errors in approximate dynamic programming and approximate policy iteration ; for the latter of which there exist well known algorithms that are stable at least with linear function approximation (see e.g.). Work on RL with non-linear function approximators has mainly considered the "online" or "growing batch" settings, where additional exploration data is collected (; ;); though some success for batch RL in discrete domains has been reported . For continuous action domains, however, off-policy algorithms that are commonly used with powerful function approximators fail in the fixed batch setting. Prior work has identified the cause of these failures as extrapolation or bootstrapping errors which occur due to a failure to accurately estimate Q-values, especially for state-action pairs not present in the fixed data set. Greedy exploitation of such misleading Q-values (e.g. due to a max operation) can then cause further propagation of such errors in the Bellman backup, and to inappropriate action choices during policy execution (leading to suboptimal behavior). In non-batch settings, new data gathered during exploration allows for the Q-function to be corrected. In the batch setting, however, this feedback loop is broken, and correction never occurs. To mitigate these problems, previous algorithms based on Q-learning identified two potential solutions: 1) correcting for overly optimistic Q-values in the Bellman update, and 2) restricting the policy from taking actions unlikely to occur in the data. To address 1) prior work uses a Bellman backup operator in which the max operation is replaced by a generative model of actions which a learned policy is only allowed to minimally perturb; or via a maximum over actions sampled from a policy which is constrained to stay close to the data (implemented through a constraint on the distance to a model of the empirical data, measured either in terms of maximum mean discrepancy or relative entropy). To further penalize uncertainty in the Q-values this can be combined with Clipped Double-Q learning or an ensemble of Q-networks . To address 2) prior work uses a similarly constrained max also during execution, by considering only actions sampled from the perturbed generative model or the constrained policy , and choosing the best among them. Our work is based on a policy iteration scheme instead of Q-learning -exchanging the max for an expectation. Thus we directly learn a parametric policy that we also use for execution. 
We estimate the Q-function as part of the policy evaluation step with standard TD-0 backups. We find that for an appropriately constrained policy no special treatment of the backup operator is necessary, and that it is sufficient to simply use an adequate number of samples to approximate the expectation when estimating V (see Equation 1). The only modification required is in the policy improvement step where we constrain the policy to remain close to the adaptive prior in Equation 2. As we demonstrate in the empirical evaluation it is the particular nature of the adaptive prior -which can adapt to the task at hand (see Equation 4) -that makes this constraint work well. Additional measures to account for uncertainty in the Q values could also be integrated into our policy evaluation step but we did not find it to be necessary for this work; we thus forego this in favor of our simpler procedure. Our policy iteration scheme also bears similarity to previous works that use (relative) entropy regularized policy updates which implement constraints with respect to either a fixed (e.g. uniform) policy (e.g. a) or, in a trust-region like scheme, to the previous policy (e.g. . Other work has also focused on policy priors that are optimized to be different from the actual policy but so far mainly in the multi-task or transfer-learning setup, i.e. to share knowledge across or to transfer knowledge to new tasks . The constrained updates are also related to trustregion optimization in action space, e.g. in TRPO / PPO (; and MPO, which ensures stable learning in the standard RL setting by enforcing conservative updates. The idea of conservative policy optimization can be traced back to . Here we take a slightly different perspective: we enforce a trust region constraint not on the last policy in the policy optimization loop (conservative updates) but wrt. the advantage weighted behavior distribution. Figure 2: Control suite when using only the first 2k episodes of low-quality data from each run; return for best behavior episode in the data was 718/159/435/728 (left to right). While plain RL is able to learn on walker, learned priors improve performance and stabilize learning for cheetah and quadruped (with overall lower performance than when learning from good data). We experiment with continuous control tasks in two different settings. In a first set of experiments we compare our algorithm to strong off-policy baselines on tasks from the DeepMind control suite -to give a reference point as to how our algorithm performs on common benchmarks. We then turn to the more challenging setting of learning multiple tasks involving manipulation of blocks using a robot arm in simulation. These span tasks from reaching toward a block to stacking one block on top of another. Finally, we experiment with analogous tasks on a real robot. We use the same networks for all algorithms that we compare, optimize parameters using Adam , and utilize proprioceptive features (e.g. joint positions / velocities) together with task relevant information (mujoco state for the control suite, and position/velocity estimates of the blocks for the manipulation tasks). All algorithms were implemented in the same framework, including our reproduction of BCQ and BEAR, and differ only in their update rules. Note that for BEAR we use a KL instead of the MMD as we found this to work well, see appendix. 
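Referring back to the policy evaluation step of Equation 1 that is shared by all of these variants, a minimal sketch of the TD(0) critic update with a sampled value estimate (M = 20 action samples) might look as follows; the network interfaces are assumptions for illustration.

```python
import torch

def critic_loss(q_net, target_q_net, policy, batch, gamma=0.99, num_samples=20):
    """TD(0) policy-evaluation loss (Equation 1); V(s') is estimated by averaging the
    target Q-values of M actions sampled from the current policy. Networks are assumed
    to broadcast over a leading sample dimension."""
    s, a, r, s_next = batch
    with torch.no_grad():
        a_next = policy(s_next).sample((num_samples,))                  # (M, B, act_dim)
        s_rep = s_next.unsqueeze(0).expand(num_samples, *s_next.shape)  # (M, B, obs_dim)
        v_next = target_q_net(s_rep, a_next).mean(dim=0)                # Monte-Carlo V(s')
        td_target = r + gamma * v_next
    return ((q_net(s, a) - td_target) ** 2).mean()
```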
In the multi-task setting (Section 5.1) we learn a task conditional policy π θ (a|s, k) and Q-function Q π φ (s, a, k) where k is a one-hot encoding of the task identifier, that is provided as an additional network input. We refer to the appendix for additional details. We start by performing experiments on four tasks from the DeepMind control suite: Cheetah, Hopper, Walker, and Quadruped. To obtain data for the offline learning experiments we first generate a fixed dataset via a standard learning run using MPO, storing all transitions generated; we repeat this with 5 seeds for each environment. We then separate this collected data into two sets: for experiments in the high data regime, we use the first 10,000 episodes generated from each seed. For experiments with low-quality data we use the first 2,000 episodes from each seed. The high data regime therefore has both more data and data from policies which are of higher quality on average. A plot showing the performance of the initial training seeds over episodes is given in the appendix, Figure 6. Figure 4: (Left) Learning curves for the tasks "bring to corner" and "bring to center". These tasks were learned using only data from the seven initial stacking tasks. The stacking dataset was rich enough to learn these new tasks fully offline with ABM. (Right) Simulated Sawyer environment. For our offline learning experiments, we reload this data into a replay buffer. The dataset is then fixed and no new transitions are added; the offline learner never receives any data that any of its current or previous policies have generated. We evaluate performance by concurrently testing the policy in the environment. The results of this evaluation are shown in Figure 1. As can be observed, standard off-policy RL algorithms (MPO / SVG) can learn some tasks offline with enough data, but learning is unstable even on these relatively simple control suite tasks -confirming previous findings from . In contrast, the other methods learn stably in the high-data regime, with BCQ lagging behind in Hopper and Quadruped (sticking too close to the VAE prior actions, an effect already observed in ). Remarkably, our simple method of combining a policy iteration loop with a behavior model prior (BM+MPO in the plot) performs as well as or better than the more complex baselines (BEAR and BCQ from the literature). Further improvement can be obtained using our advantage-weighted behavior model even in some of these simple domains (ABM+SVG and ABM+MPO). Comparing the performance of the priors (BM [prior] vs ABM [prior], dotted lines) on Hopper, we can understand the advantage that ABM has over BM: the BM prior performs well on simple tasks, but struggles when the data contains conflicting trajectories (as in Hopper, where some of the seeds learn sub-optimal jumping), leading to constraints on the RL policy that are too hard. Interestingly, the ABM prior itself performs as well as or better than the baseline methods for the control-suite domains. Furthermore, in additional experiments presented in the appendix we find that competitive performance can be achieved, in simple domains, when training only an ABM prior (effectively setting ε = 0), providing an even simpler method when one does not care about squeezing out every last bit of performance. A test on lower quality data, Figure 2, shows similar trends, with our method learning to perform slightly better than the best trajectories in the data.
We experiment with a Sawyer robot arm simulated in Mujoco in a multi-task setting -as described above. The seven tasks are to manipulate blocks that are placed in the workspace of the robot. They include: reaching for the green block (Reach), grasping any block (Grasp), lifting the green block (Lift), hovering the green block over the yellow block (Place Wide), hovering the green block over the center of the yellow block (Place Narrow), stacking the green block on top of yellow (Stack and Stack/Leave i.e. without gripper contact). To generate the data for this experiment we again run MPO -here simultaneously learning all task-conditional policies for the full seven tasks. Data-was collected by randomly switching tasks after each episode (of 200 control steps) with random resets of the robot position every 20 episodes. As before, data from all executed tasks is collected in one big data-set annotating each trajectory snippet with all rewards (i.e. this is similar to the SAC-R setting from). During offline learning we then compare the performance of MPO and RL with a behavior modelling prior (BM+MPO and ABM+MPO). As shown in Figure 3, behavioral modelling priors improve performance across all tasks over standard MPO -which struggles in these more challenging tasks. This is likely due to the sequential nature of the tasks: later tasks implicitly include earlier tasks but only a smaller fraction of trajectories achieve success on stack and leave (and the actions needed for stack conflict, e.g., with lifting), this causes the BM prior to be overly broad (see plots in appendix). The ABM+MPO, on the other hand, achieves high performance across all tasks. Interestingly, even with ABM in place, the RL policy learned using this prior still outperforms the prior, demonstrating that RL is still useful in this setting. As an additional experiment we test whether we can learn new tasks entirely from previously recorded data. Since our rewards are specified as functions of observations, we compute rewards for two new tasks (bringing the green block to the center and bringing it to the corner) for the entire datasetwe then test the ing policy in the simulator. As depicted in Figure 4, this is successful with ABM+MPO, demonstrating that we can learn tasks which were not originally executed in the dataset (as long as trajectory snippets that lead to successful task execution are contained in the data). Finally, to validate that our approach is a feasible solution to performing fast learning for real-robot experiments, we perform an experiment using a real Sawyer arm and the same set of seven tasks (implemented on the real robot) from Section 5.2. As before, data from all executed tasks is collected in one big data-set annotating each trajectory snippet with all rewards. The full buffer after about two weeks of real robot training is used as the data for offline learning; which we here only performed with ABM+MPO due to the costly evaluation. The goal is to re-learn all seven original tasks. Figure 5 shows the of this experiment -we ran an evaluation script on the robot, continuously testing the offline learned policy, and stopped when there was no improvement in average reward (as measured over a window of 50 episodes). As can be seen, ABM with MPO as the optimizer manages to reliably re-learn all seven tasks purely from the logged data in less than 12 hours. 
All tasks can jointly be learned with only small differences in convergence time -while during the initial training run the harder tasks, of course, took the most time to learn. This suggests that gathering large data-sets of experience from previous robot learning experiments, and then quickly extracting the skills of interest, might be a viable strategy for making progress in robotics. In this work, we considered the problem of stable learning from logged experience with off-policy RL algorithms. Our approach consists of using a learned prior that models the behavior distribution contained in the data (the advantage weighted behavior model) towards which the policy of an RL algorithm is regularized. This allows us to avoid drawing for which there is no evidence in the data. Our approach is robust to large amounts of sub-optimal data, and compares favourably to strong baselines on standard continuous control benchmarks. We further demonstrate that our approach can work in challenging robot manipulation domains -learning some tasks without ever seeing a single trajectory for them. A ALGORITHM A full algorithm listing for our procedure is given in Algorithm 1. Input: N steps number of learning steps, N TU steps between target update, M number of action samples, KL regularization parameter, initial parameters for θ, η, α and φ initialize N = 0, θ = θ, φ = φ while i ≤ N steps do sample a batch B of trajectories τ from replay buffer D µ sample M actions from policies to estimate expectations below // compute gradient for prior model For BM prior: δ θprior ← ∇ θprior τ ∈B (st,at)∈τ log π θbm (a t |s t) For ABM prior: δ θprior ← ∇ θabm τ ∈B (st,at)∈τ 1 + (R(τ 1:|τ |) −V φ (s t)) log π θabm (a t |s t) // compute gradients for Q, π and η δ φ ← ∇ φ τ ∈B i∼I (st,at)∈τ r(s t, a t) + γV φ (s t) −Q φ (s t, a t) 2 For MPO: We here give additional details on the implementation of the policy improvement step in our algorithm. Depending on the policy optimizer used (MPO or SVG) different update rules are used to maximize the objective given in Equation 4. This is also outlined in Algorithm 1. We here describe the general form for both algorithms, using π prior to represent the prior which then will be instantiated as either the ABM or BM prior. For the EM-style optimization based on MPO we first notice that the optimal non-parametric policy that respects the KL constraint wrt. π prior iŝ π(a|s) = π prior (a|s) exp(Q π i (s,aj) /η − log Z), with Z = π prior (a|s) exp(Q π i (s,a) /η)da and where η is a temperature that depends on the desired constraint. In practice we estimate Z based on the M samples {a 1, . . ., a M} ∼ π prior (a|s) that we draw for each state to perform the optimization of π θ. That is we set Z ≈ 1 /M M j=1 exp(Q π i (s,aj) /η). Using these samples we can then optimize for η in a way analogous to what is described in. Specifically, we find that the objective for finding η is which can approximate based on a batch B of trajectories sampled from D µ (sampling M actions from π prior for each state therein) and corresponding action samples where we used the samples which can readily be differentiated wrt. η. We then use Adam (with standard settings and learning rate 2e − 4) to take a gradient step in direction of ∇ η g(η) (we want to maximize g(η)) for each batch. We start our optimization with η = 3 to ensure stable optimization (i.e. to avoid large changes in the policy parameters in the beginning of optimization). Further, after each gradient step, we project η to the positive numbers i.e. 
we set η = max(η, 0.001), as η is required to be positive. We find that this procedure is capable of fulfilling the desired KL constraints well. The same batch, and action samples, are then also used to take an optimization step for the policy parameters θ. In particular we find the parametric policy by minimizing the divergence KL(π π θi+1), which is equivalent to maximizing the weighted log likelihood of sampled actions (Equation 5). We only take M = 20 samples here, which is a relatively crude representation of the behavior model at state s; therefore, to prevent the policy from converging too quickly it can be useful to employ an additional trust region constraint in this step that ensures slow convergence. We do this by adjusting the maximum likelihood objective Equation 5 to contain an additional KL regularization towards the previous policy, yielding the following optimization problem using samples {a 1, . . ., a M} ∼ π prior (a|s): for α > 0, which is a Langrangian relaxation to the maximum likelihood problem under the additional constraint that KL(π θi π θ) < trust, where α is the Langrange multiplier. This objective can be differentiated wrt. both θ and α and we simply take alternating gradient descent steps (one per batch for both θ and α) using Adam (starting with a random θ 0 and α = 1) and projecting α back to the positive regime if it becomes negative; i.e. we set α = max(α, 0.001). To optimize the policy parameters θ via the stochastic value gradient (under a KL constraints) we can directly calculate the derivative of the Langrangian relaxation from Equation 6. In particular, again assuming that we have sampled a batch B, the gradient wrt. θ can be obtained via the reparameterization trick . For this we first require that a sample from our policy π θ (a|s) can be obtained via a deterministic function applied to a standard noise source. We first specify the policy class used in the paper to be that of Gaussian policies, parameterized as π θ (a|s) = N (a|µ θ (s), Iσ 2 θ (s)), where N (a|µ, Σ) denotes the pdf of a standard Normal distribution and where we assume µ θ is directly given as one output of the network whereas we parameterize the diagonal standard deviation as σ θ (s) = sof tplus(h θ (s)) with h θ (s) being output by the neural network. We can then obtain samples via the deterministic transformation f (s, ξ; θ) = µ θ (s) + σ θ (s)ξ, where ξ ∼ N (0, I) (I being the identity matrix). Using this definition we can obtain the following expression for the value gradient: where we use the Gaussian samples {ξ 1, . . ., ξ M} ∼ N (0, I). The gradient for the Langrangian multiplier is given as where we dropped terms independent of η in the second line. Following δ θ to maximize the objective, and conversely moving in the opposite direction of δ η to minimize the objective wrt. η, can then be performed by taking alternating gradient steps. We perform one step per batch for both η and θ via Adam; starting from an initially random θ and η = 1. As in the MPO procedure we ensure that η is positive by projecting it to the positive regime after each gradient step. Hyperparameters for our MPO, SVG, and BCQ single-task experiments are shown in tables 1, 2, and 3, respectively. For multitask experiments, we modify the parameters shown in 5. To provide strong off-policy learning baselines we re-implemented BCQ and BEAR in the same framework that we used to implement our own algorithm. As mentioned in the main paper we used the same network architecture for ll algorithms. 
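Putting the pieces of Appendix B.1 together, one MPO-style improvement step can be sketched roughly as below: sample M actions from the prior, weight them proportionally to exp(Q/eta), take a gradient step on the temperature dual g(eta), and fit the policy by weighted maximum likelihood. The trust-region term and the projection of eta to positive values are omitted for brevity, eta is assumed to be a learnable positive scalar, and this is an illustrative reconstruction rather than the authors' code.

```python
import torch

def policy_improvement_step(policy, prior, q_net, states, eta, eps=0.1, num_samples=20):
    """One MPO-style improvement step regularized towards the (A)BM prior (sketch).
    Assumes policy(s) / prior(s) return torch Distributions and q_net broadcasts over
    a leading sample dimension."""
    with torch.no_grad():
        actions = prior(states).sample((num_samples,))                   # (M, B, act_dim)
        s_rep = states.unsqueeze(0).expand(num_samples, *states.shape)   # (M, B, obs_dim)
        q = q_net(s_rep, actions)                                        # (M, B)
        weights = torch.softmax(q / eta, dim=0)                          # non-parametric target
    # Dual objective g(eta) for the temperature (sign convention and projection to
    # positive values handled as described in Appendix B.1).
    m = torch.tensor(float(num_samples))
    dual = eta * eps + eta * (torch.logsumexp(q / eta, dim=0) - torch.log(m)).mean()
    # Weighted maximum likelihood: fit pi_theta to the reweighted prior samples.
    log_prob = policy(s_rep).log_prob(actions).sum(-1)                   # (M, B)
    policy_loss = -(weights * log_prob).sum(dim=0).mean()
    return policy_loss, dual
```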
Algorithm specific hyperparameters where tuned via a coarse grid search on the control suite tasks; while following the advice from the original papers on good parameter ranges. To avoid bias in our comparisons we did not utilize ensembles of Q-functions for any of the methods (e.g. we removed them from BEAR), we note that ensembling did not seem to have a major impact on performance (see appendix in ). Parameters for all methods where optimized with Adam. Furthermore, to apply BEAR and BCQ in the multi-task setting we employed the same conditioning of the policy and Q-function on a one-hot task vector (which is used to select among multiple network "heads", yielding per task parameters, see description below). For BCQ we used a range of [0.25, 0.25] for the perturbative actions generated by the DDPG trained network, and chose a latent dimensionality of 64 for the VAE, see Table 3 for the full hyperparameters. For BEAR we used a KL constraint rather than the maximum mean discrepancy. This ensures comparability with our method and we did not see any issues with instability when using a KL. To ensure good satisfaction of constraints we used the exact same optimization for the Langragian multiplier required in BEAR that was also used for our method -see description of SVG above. The hyperparameters for the BEAR training run on the control suite are given in The task setup for both the simulated and real robot experiments is described in the following. A detailed description of the robot setup will be given in an accompanying paper. We nonetheless give a description here for completeness. We make no claim to have contributed these tasks specifically for this paper and merely use them as an evaluation test-bed. As the robot we utilize a Sawyer robotic arm mounted on a table and equipped with a Robotiq 2F-85 parallel gripper. A basket is positioned in front of the robot which contains three cubes (the proportions of cubes and basket sizes are consistent between simulation and reality). Three cameras on the basket track the cube using augmented reality tags. To model the tasks as an MDP we provide both proprioceptive information from the robot sensors (joint positions, velocities and torques) and the tracked cube position, velocity (both in 3 dimensions) and orientation to the policy and Q-function. Replay buffer size 2 × 10 6 Batch size 512 Table 5: Network parameters for multitask experiments. "->" indicates the network branching into separate "heads" for each task, which are selected based on the one-hot task vector. This architecture is similar to the design presented in. Note that transitions are duplicated in replay for each task (so that transitions are sampled independently for each task). Where d(a, b) denotes the euclidean distance between a and b. We also define two tolerance functions with outputs scaled between 0 and 1, i.e, D ADDITIONAL EXPERIMENTAL We present additional plots that show some aspects of the developed algorithm in more detail. We provide expanded plots for MPO on the control suite in Figure 7. These show in detail that the learned advantage weighted behavior model (ABM, in red in the left column) is far superior to the standard behavior model prior, leading to less constrained RL policies. Figure 8 shows full for the simulated robot stacking task, including all 7 intentions as well as performance of the prior policies themselves during learning. 
The task set is structured: earlier tasks like reaching and lifting are necessary to perform most other tasks, so the simple behavioral model performs well on these. For the more difficult stacking tasks, however, the presence of conflicting data means the simple behavioral model doesn't achieve high reward, though it still significantly improves performance of the regularized policy. Table 6 shows final performance for all methods on control suite tasks. Table 7 shows final performance on simulated robotics tasks to make comparison between algorithms easier. The episode returns are averaged over the final 10% of episodes. ABM provides a performance boost over BM alone, particularly for difficult tasks such as block stacking and quadruped. The RL policy further improves performance on the difficult tasks. As noted in the main paper, one additional option to further simplify the algorithm is to omit the policy improvement step, setting π i+1 = π prior (i.e. considering the case where ε = 0) and, conversely, learning the Q-values of the prior. In additional experiments we have found that this procedure roughly recovers the performance of the ABM prior when trained together with MPO (ABM+MPO); i.e. this is an option for a simpler algorithm to implement at the cost of some performance loss (especially on the most complicated domains). We included this setting as ABM (ε = 0, |τ| = 2) in the table -noting that in this case it is vital to choose short trajectory snippets in order to allow π prior to pick the best action in each state.
We develop a method for stable offline reinforcement learning from logged data. The key is to regularize the RL policy towards a learned "advantage weighted" model of the data.
558
scitldr
One of the main challenges in applying graph convolutional neural networks on gene-interaction data is the lack of understanding of the vector space to which they belong and also the inherent difficulties involved in representing those interactions on a significantly lower dimension, viz Euclidean spaces. The challenge becomes more prevalent when dealing with various types of heterogeneous data. We introduce a systematic, generalized method, called iSOM-GSN, used to transform ``multi-omic'' data with higher dimensions onto a two-dimensional grid. Afterwards, we apply a convolutional neural network to predict disease states of various types. Based on the idea of Kohonen's self-organizing map, we generate a two-dimensional grid for each sample for a given set of genes that represent a gene similarity network. We have tested the model to predict breast and prostate cancer using gene expression, DNA methylation and copy number alteration, yielding prediction accuracies in the 94-98% range for tumor stages of breast cancer and calculated Gleason scores of prostate cancer with just 11 input genes for both cases. The scheme not only outputs nearly perfect classification accuracy, but also provides an enhanced scheme for representation learning, visualization, dimensionality reduction, and interpretation of the . Large scale projects such as "The Cancer Genome Atlas" (TCGA) generate a plethora of multidimensional data by applying high-resolution microarrays and next generation sequencing. This leads to diverse multi-dimensional data in which the need for devising dimensionality reduction and representation learning methods to integrate and analyze such data arises. An earlier study by Shen et al. proposed algorithms iCluster (a) and iCluster+ (b), which made use of the latent variable model and principal component analysis (PCA) on multi-omic data and aimed to cluster cancer data into sub-types; even though it performed well, it did not use multi-omics data. In another study, attempted to apply heatmaps as a dimensionality reduction scheme on gene expression data to deduce biological insights and then classify cancer types from a Pan-cancer cohort. However, the accuracy obtained by using that method was limited to 97% on Pan-cancer data, lacking the benefits of integrated multi-omics data. In a recent study used self-Organizing maps (SOMs) to embed gene expression data into a lower dimensional map, while the works of (; ; ;) generate clusters using SOMs on gene expression data with different aims. In addition, the work of combines gene expression and DNA methylation to identify subtypes of cancer similar to those of , which identifies modules of co-expressing genes. On the other hand, the work of uses SOMs to create a generalized regression neural network, while the model proposed in uses SOMs to classify documents based on a word-tovector model. Apart from dimensionality reduction methods, attempts have been made by applying supervised deep machine learning, such as deepDriver , which predicts candidate driver genes based on mutation-based features and gene similarity networks. Although these works have been devised to use embedding and conventional machine learning approaches, the use deep neural networks on multi-omics data integration is still in its infancy. In addition, these methods lack Gleason Score Number of Samples Group 3+4 147 34 4+3 101 43 4+5,5+4 139 9 Table 1: Distribution of the different Gleason groups considered for PRCA. 
in adequacy to generalize them multi-omics data to predic disease states. More specifically, none of these models combine the strength of SOMs for representation learning combined with the CNN for image classification as we do in this work. In this paper, a deep learning-based method is proposed, and is used to predict disease states by integrating multi-omic data. The method, which we call iSOM-GSN, leverages the power of SOMs to transform multi-omic data into a gene similarity network (GSN) by the use of gene expression data. Such data is then combined with other genomic features to improve prediction accuracy and help visualization. To our knowledge, this the first deep learning model that uses SOMs to transform multi-omic data into a GSN for representation learning, and uses CNNs for classification of disease states or other clinical features. The main contributions of this work can be summarized as follows: • A deep learning method for prediction of tumor aggressiveness and progression using iSOM-GSN. • A new strategy to derive gene similarity networks via self-organizing maps. • Use of iSOM-GSN to identify relevant biomarkers without handcrafted feature engineering. • An enhanced scheme to interpret and visualize multi-dimensional, multi-omics data. • An efficient model for graph representation learning. We considered two datasets as part of our study: The Cancer Genome Atlas (TCGA) Prostate Adenocarcinoma (PRCA) and The Cancer Genome Atlas (TCGA) Breast Invasive Carcinoma (BRCA) . Our aim here is to classify patients based on Gleason scores for PRCA , and tumor stage for BRCA . The total number of samples for PRCA and BRCA were 499 and 570 respectively. Both datasets had approximately 60,000 features for gene expression data alone. Thus, a variance threshold of 0.2% was applied to these data, which removes all features that have at least 80% zero values; this step reduced the feature set size to 16,000. The data were then normalized on a common scale for all omics, including DNA methylation and CNA data. The gene names were preserved in HUGO format and the names considered irrelevant by HUGO were removed. All the three types of data were then combined, based on patient ID, which yielded data for 387 and 392 patients for PRCA and BRCA respectively, containing all three required omic data. Since imbalance was observed across all classes in the PRCA dataset, we considered only three distinct Gleason scores. It is worthwhile to note that samples with Gleason score 7 were considered as two different classes, i.e., 3+4 and 4+3, for example, since these two groups are clinically different. More details on class distribution are shown in Tables 1 and 2. MultisgCV was used to further process the data . The MutsigCV algorithm identifies significantly mutated genes by building a patient-specific mutation model based on gene expression and DNA methylation data. This method takes the whole genome or exome sequence as input and identifies genes that are mutated more often. The top 14 mutated genes from Multisig were considered for the rest of the experiment. 2A 179 2B 129 3A 84 Table 2: Distribution of the different tumor groups considered for BRCA. We consider the problem of integrating multiple types of omics data. For this purpose, we propose a three-step approach, which we call iSOM-GSN, and whose main steps are depicted in Figure 1. First, we create a GSN by extracting features from one data type, in our case, gene expression data. 
Then, for each sample, we integrate all data types by considering features extracted from the first step. Finally, we apply a CNN to perform classification with training and test split at 70:30 ratio to test the model. We assume the input data is a set of matrices S = {s oj}, where i = {1, 2, 3, . . ., n} represents the samples, j = {1, 2, 3, . . ., m} represents the genes, and o = {1, 2, 3, . . ., p} represents types of data (omics). Here, n is the number of samples, m is the number of genes, and p is the number of types of omics. The first step consists of creating a gene similarity network (GSN) by applying a self-organizing map learning algorithm. In this step, we consider only one type of data, i.e., gene expression. Let i,j=1 denote one omic data where j = {1, 2, 3, . . ., m} represents the set of genes and i = {1, 2, 3, . . ., n} represents the set of samples. S 1 is the input to the SOM. A SOM is a lower-dimensional representation of complex, higher-dimensional data in such a way that distances among vectors in the original space are preserved in the new representation. A SOM is learned via an unsupervised clustering algorithm, which takes sample vectors as inputs, and groups them based on the similarities derived by the features. In our case, the input vectors to the SOM are the samples with gene expression values of all samples as features. The following are the main steps that are followed to construct a SOM. 1. Initialize m neurons with random weights assigned to each neuron c k, where k = 1, 2,..., m, where m is the number of genes under consideration, in our case 14. 2. Calculate the Euclidean distance between each gene g j and its neuron c k, and identify the winning neuron, i.e., the neuron that has the smallest distance to its respective neuron. The Euclidean distance is calculated as follows: where s 1j = g j represents the gene vector for i th sample and c j1 represents neuron vector. 3. Suppose that c k is the winning neuron, i.e., it is the closest to gene g j. Then, we update the weight of c k using Equation. The winning neuron is also known as best matching unit (BMU). 4. Update the weights of the neurons that are in proximity to the BMU, c k. To account for this, we use a neighbourhood function that is defined by Equation. 5. Repeat steps 2 -4 for e iterations or until desired convergence (i.e., the weights remain unchanged or the change is less than a threshold). 6. Finally, obtain c m neurons, which represent g m genes in the two-dimensional space, represented by Equation. where L(t) is the learning rate regulation function defined in Equation. where L 0 is initial learning rate. where (x j, y j) represents the coordinates of g j. As a of running the training algorithm, a SOM is obtained in which the genes are organized based on their similarity, representing a GSN. This network is represented as a two-dimensional lattice whose coordinates are denoted as in Equation. Figure 2 shows the two SOMs derived from the two datasets, BRCA and PRCA. Observing the evolution of the SOM learning algorithms through the different epochs for both datasets (see Figures 10-13 of the Supplementary Material), show how complex, high-dimensional relationships among related genes are revealed and visualized in a simple way on a two-dimensional map. The second step of iSOM-GSN is to integrate multiple data types. We use the GSN generated in the first step as a template image; in the example, the genes are indexed with numbers by followng the mapping listed in Table 3. 
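Referring back to steps 1-6 of the SOM construction above, a generic NumPy sketch is shown below; the grid size, exponential learning-rate decay, and Gaussian neighbourhood are standard SOM choices rather than the exact settings used for iSOM-GSN, and all names are illustrative.

```python
import numpy as np

def train_som(gene_vectors, grid=(10, 10), epochs=1000, lr0=0.5, sigma0=3.0):
    """gene_vectors: (m_genes, n_samples); returns a 2-D coordinate for each gene."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h * w, gene_vectors.shape[1]))         # one neuron per grid cell
    coords = np.array([(x, y) for x in range(h) for y in range(w)], dtype=float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                            # learning-rate decay L(t)
        sigma = sigma0 * np.exp(-t / epochs)                      # shrinking neighbourhood
        for g in gene_vectors:
            dists = np.linalg.norm(weights - g, axis=1)           # Euclidean distance to g
            bmu = np.argmin(dists)                                # best matching unit
            grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
            theta = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))  # neighbourhood function
            weights += (lr * theta)[:, None] * (g - weights)      # pull neurons towards g
    # Each gene is finally placed at the coordinates of its best matching unit.
    return {j: coords[np.argmin(np.linalg.norm(weights - g, axis=1))]
            for j, g in enumerate(gene_vectors)}
```

The resulting two-dimensional gene coordinates then serve as the template for the image construction described next.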
We then expand a circular region around the points with a predefined radius and color the circles as shown in Figure 3. We color each circle by considering each data view by using the RGB color scheme, where Red is represented by gene expression, Green by DNA methylation and Blue by CNA. In our case, S Index Gene Name BRCA Gene Name PRCA 0 RUNX1 SPOP 1 PIK3CA FOXA1 2 GATA3 CTNNB1 3 FOXA1 CLPTM1L 4 SF3B1 DPYSL2 5 PTEN NEIL1 6 CBFB PITPNM2 7 CDH1 ATM 8 MAP2K4 EMG1 9 MAP3K1 ETV3 10 ERBB2 BRAF 11 NCOR1 NKX3-1 12 FAM86B2 ZMYM3 13 CDKN1B SALL1 Table 3: Indices of gene names for the BRCA and PRCA datasets. If point (p, q) is within certain radius of g j, 0 otherwise. where and B As a , we obtain a set of matrices, one per each sample, defined as follows: Figure 3 represents a sample image created after integrating multiple omics for the BRCA dataset. As can be observed, various shades of colors for different genes represent their values with respect to the three different types of omic data. The last step of iSOM-GSN is to feed the images generated in the previous step to the CNN, to predict the state of the disease as the final output. The architecture of the CNN is proven to be the most effective in learning visual representations. The CNN is also known to perform better than the human eye in many visual processing problems. The usage of the CNN in any method is just a variation in how the convolution and pooling layers are combined, and how the network is trained. A more detailed, schematic diagram of the entire network design is depicted in Figure 7 of the Supplementary Material. The network includes two convolutional layers and two fully-connected layers with a small number of neurons. Our choice of a smaller network design is motivated both from our desire to reduce the risk of over-fitting as well as to simplify the nature of the classification. All three color channels, i.e., RGB, are processed directly by the network. The subsequent convolutional and fully connected layers are then defined as follows: • 32 filters of size 3 × 3 pixels are applied to the input in the first convolutional layer, followed by a rectified linear operator (ReLU), a Max-pooling layer taking the maximal value of 2 × 2 regions with two-pixel strides and a local response normalization layer. • The output of the previous layer is then processed by the second convolutional layer, containing 32 filters of size 3 × 3 pixels. Again, this is followed by ReLU, a Max-pooling layer and a local response normalization layer with the same hyper-parameters as before. • First fully connected layer that receives the output of the second convolutional layer and contains 128 neurons, followed by ReLU and a dropout layer. • Second fully connected layer that receives the output of the first fully connected layer and output three neurons, followed by ReLU and a dropout layer. Finally, the output of the last fully-connected layer is fed to a Soft-max layer that assigns a probability to each class. The prediction itself is performed by applying Soft-max to choose the class with maximal probability for the given test image. Aside from using a lean network architecture, i.e., fewer layers, we apply two additional methods to further limit the risk of over-fitting. First, we apply dropout learning, i.e., randomly setting the output value of the network neurons to zero. The network includes three dropout layers with a dropout ratio of 0.5 (50% chance of setting a neuron's output value to zero). 
Second, we use data augmentation by taking a random input image, and scaling and mirroring it in each forward-backward training pass. Training is done using the Adam optimizer . To assess the performance of iSOM-GSN, the data was divided into training and test datasets with a ratio 70:30. Minmax scaling was then applied on the test dataset followed by ranging the training dataset accordingly. Note that the test data is scaled using the same criterion applied to the training dataset, and without any information about the classes. We then calculated the main performance measures that include categorical accuracy, precision, recall, F1-score and mean absolute error. iSOM-GSN has been run on two multi-omic datasets namely PRCA and BRCA using the model and parameters described earlier in this paper. Figure 4 depicts the plot of how various performance measures convolve with an increasing number of epochs. In general, it can be seen that the predictive performance is almost perfect with respect to various parameters. However, regarding the number of genes obtained after filtering, both datasets used the exact number of genes retained for effective classification. In addition, Figure 6 of the Supplementary Material depicts the plot of the receiver operating characteristic (ROC), area under curve (AUC). In general terms, it can be seen that the predictive performance is in the range 94-98% with respect to various parameters. However, regarding the number of genes obtained after filtering, both datasets used the same number of genes retained for effective classification. This shows that only 14 genes are enough to classify any clinical variable using the proposed model, and those genes are significantly mutated. We illustrate the ability to discover and visualize patterns of genomic interactions in biological comprehensive context for classification. When the goal is to identify potential biomarkers and factors that characterize biological and clinical aspects, the proposed approach comes to the rescue. As part of a feature selection step, we narrowed down the relevant features to just 14 genes, which are sufficient for classification and achieve 96% accuracy. These genes are listed in Table 5. All genes are identified either as tumor suppressor genes or oncogenes for known pathways as shown in Table 5. TP53 and PTEN are the common genes that are highly mutated in both PRCA and BRCA datasets. These gene are tumor suppressor genes, which are well known to express high gene expression values and are considered as biomarkers for cancer in general (https://www.genecards.org/, 2019). On the other hand, SALL1 and PITPNM2 are genes that are not known as cancer related genes. While validating the genes related to BRCA data set, we have found in that genes MAP3K1 and MAP2K4 are strong predictors of MEK inhibitors which are frequently found in breast, prostate and colon cancers. These can be potential targets for drugs . In another study, we have found co-occurring mutations of PIK3CA and MAP3K1 are functionally significant in breast cancer and MAP3K1 mutational status may be considered as a predictive biomarker for efficacy in PI3K pathway inhibitor trials . We have also found that expression of genes GATA-3 and FOXA1 are sufficient to differentiate breast carcinoma from other and hence are excellent bio-markers . This is also supported by the findings reported in Figure 5: Most relevant genes found to predict Gleason groups of PRCA and stages of BRCA. 
This is also supported by findings which claim that these two genes are associated with a less aggressive phenotype and give a better prognosis in patients with HR-positive or HER2-negative breast cancer. Thus, we confirm that the GSNs formed are helpful for finding potential novel biomarkers. A practical challenge for validating this framework is the unavailability of independent datasets with all data types. An intrinsic question arises as to what extent a single data type (e.g., gene expression) is effective for classification within our framework. The closest method to compare our work with is one that uses gene expression and DNA methylation, though it uses the two data types separately and applies them to a single disease. To that end, we ran iSOM-GSN on a single omic data type at a time. We discovered that gene expression data alone yielded 90% accuracy, whereas DNA methylation and CNA alone yielded 87% and 89% classification accuracy, respectively. This demonstrates the advantage of combining multiple omics, pairing the strengths of SOMs for representation learning with the power of deep CNNs for image-based data classification. This paper presents a framework that uses a self-organizing map and a convolutional neural network to conduct data integration, representation learning, dimensionality reduction, feature selection and classification simultaneously, in order to harness the full potential of integrated high-dimensional, large-scale cancer genomic data. We have introduced a new way to create gene similarity networks, which can lead to novel gene interactions. We have also provided a scheme to visualize high-dimensional, multi-omics data onto a two-dimensional grid. In addition, we have devised an approach that could also be used to integrate other types of multi-omic data and predict any clinical aspects or states of diseases, such as laterality of the tumor, survivability, or cancer subtypes, just to mention a few. This work can also be extended to classify pan-cancer data. Each omic can be considered as a vector, and more than three types of data (i.e., beyond RGB images) can be incorporated for classification. Apart from integrating multi-omics data, the proposed approach can be considered an unsupervised clustering algorithm, because of the competitive learning nature of SOMs. We can also apply iSOM-GSN to other domains, such as predicting music genres for users based on their music preferences. As a first step, we have applied the SOM to a Deezer dataset and the results are encouraging. Applications of iSOM-GSN can also be found in drug response or re-purposing, prediction of passenger genes or oncogenes, revealing topics in citation networks, and other prediction tasks. The source code has been posted in a Git repository, available at the following anonymous website: https://gitlab.com/NF2610/isom_gsn.
Figure 6: ROC plots for the proposed model run on the PRCA dataset.
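A minimal sketch of the evaluation protocol described above (70:30 split, min-max scaling fitted on the training split only, and the reported classification metrics), using scikit-learn; the data and predictions here are placeholders, not the actual multi-omic features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, mean_absolute_error

X = np.random.rand(200, 14)                 # placeholder features (one row per sample)
y = np.random.randint(0, 3, size=200)       # placeholder class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

scaler = MinMaxScaler().fit(X_tr)           # fit the scaler on the training split only
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

# ... train the CNN on X_tr_s / y_tr and obtain predictions y_pred for X_te_s ...
y_pred = y_te.copy()                        # stand-in prediction so the metric calls below run

print("accuracy :", accuracy_score(y_te, y_pred))
print("precision:", precision_score(y_te, y_pred, average="macro"))
print("recall   :", recall_score(y_te, y_pred, average="macro"))
print("F1       :", f1_score(y_te, y_pred, average="macro"))
print("MAE      :", mean_absolute_error(y_te, y_pred))
```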
This paper presents a deep learning model that combines self-organizing maps and convolutional neural networks for representation learning of multi-omics data
559
scitldr
State-of-the-art performances on language comprehension tasks are achieved by huge language models pre-trained on massive unlabeled text corpora, with very light subsequent fine-tuning in a task-specific supervised manner. It seems the pre-training procedure learns a very good common initialization for further training on various natural language understanding tasks, such that only few steps need to be taken in the parameter space to learn each task. In this work, using Bidirectional Encoder Representations from Transformers (BERT) as an example, we verify this hypothesis by showing that task-specific fine-tuned language models are highly close in parameter space to the pre-trained one. Taking advantage of such observations, we further show that the fine-tuned versions of these huge models, having on the order of $10^8$ floating-point parameters, can be made very computationally efficient. First, fine-tuning only a fraction of critical layers suffices. Second, fine-tuning can be adequately performed by learning a binary multiplicative mask on pre-trained weights, \textit{i.e.} by parameter-sparsification. As a , with a single effort, we achieve three desired outcomes: learning to perform specific tasks, saving memory by storing only binary masks of certain layers for each task, and saving compute on appropriate hardware by performing sparse operations with model parameters. One very puzzling fact about overparameterized deep neural networks is that sheer increases in dimensionality of the parameter space seldom make stochastic gradient-based optimization more difficult. Given an effective network architecture reflecting proper inductive biases, deeper and/or wider networks take just about the same, if not a lower, number of training iterations to converge, a number often by orders of magnitude smaller than the dimensionality of the parameter space. For example, ResNet-18 (parameter count 11.7M) and ResNet-152 (parameter count 60.2M) both train to converge, at similar convergence rates, in no more than 600K iterations on Imagenet . Meaningful optimization seems to happen in only a very low-dimensional parameter subspace, viz. the span of those relatively few weight updates, with its dimensionality not ostensibly scaling with the model size. In other words, the network seems already perfectly converged along most of the parameter dimensions at initialization, suggesting that training only marginally alters a high-dimensional parameter configuration. This phenomenon is epitomized in fine-tuning of pre-trained models. Pre-training is a, often unsupervised, learning procedure that yields a good common initialization for further supervised learning of various downstream tasks. The better a pre-trained model is, the fewer iterations are required on average to fine-tune it to perform specific tasks, ing in fine-tuned models hypothetically closer 1 to the pre-trained one in parameter space. However, better pre-trained models are, almost always, larger models , and nowhere is this trend more prominent than recent pretrained language models that achieved state-of-the-art natural language understanding performance, e.g. GPT-2 has 1.5B parameters. 
Thus, a problem naturally arises hand-in-hand with an obvious hint to its solution: as pre-trained models get larger, on the one hand, computation of each fine-tuned model becomes more expensive in terms of both memory and compute for inference, while on the other hand, greater closeness between the pre-trained and fine-tuned models in the parameter space prescribes a higher degree of computational redundancy that could be potentially avoided. Additionally, there might exist more computationally efficient fine-tuned networks that are not necessarily close to, but cheaply attainable from, the pre-trained parameters, which are shared across all tasks. In this study, we seek to address these questions, using Bidirectional Encoder Representations from Transformers (BERT) and the General Language Understanding Evaluation (GLUE) benchmark tasks as a working example. We first found that the fine-tuned and pre-trained parameters are both L 1 -close and angular-close in parameter space, consistent with the small number of fine-tuning iterations separating them. Next, we demonstrated that there also exist good fine-tuned models that are L 0 -close (i.e. having a small number of different components) to the pre-trained one. Further, we showed that there exist good fine-tuned parameters that are L 0 -small (i.e. sparse, or having a large fraction of zero components). Finally, we successfully found fine-tuned language models that are both L 0 -small and L 0 -close to the pre-trained models. We remark the practical implications of these constraints. By forcing fine-tuned parameters to be L 0 -close to the pre-trained ones, one only needs to store a small number of different weights per task, in addition to the common pre-trained weights, substantially saving parameter memory. By forcing fine-tuned parameters to be sparse, one potentially saves memory and compute, provided proper hardware acceleration of sparse linear algebraic operations. Surprisingly, our findings also reveal an abundance of good task-specific parameter configurations within a sparse L 0 -vicinity of large pre-trained language models like BERT: a specific task can be learned by simply masking anywhere between 1% to 40% of the pre-trained weights to zero. See Figure 1 for an explanation of the L 0 -and sparse L 0 -vicinities. Figure 1: An illustration of the L 0 -vicinity and the sparse L 0 -vicinity of a pre-trained parameter in a three-dimensional parameter space. The L 0 -vicinity is continuous and contains parameters that are L 0 -close, whereas the sparse L 0 -vicinity is a discrete subset of L 0 -close parameters that are also L 0 -small. Our search for L 0 -close fine-tuning solutions is motivated by the observation that sensitivities of the optimization objective to different layers in a network are highly variable . trained fine-grain sparse connectivity patterns over randomly initialized network parameters, termed supermasks, suggesting a similar and complementary role model sparsification plays to gradient-based learning of the objective. This is also related to network architecture search (NAS). The most similar study to ours is piggyback and its variants , where in a multi-task visual object classification scenario, the authors trained taskspecific binary masks on top of a shared set of pre-trained parameters. In this work, we not only applied similar techniques further to larger pre-trained language models, but also studied the trade- off between L 0 -closeness and sparseness in a systematic way. 
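A small illustration of the closeness measures discussed above (L1 distance, angular distance, fraction of changed components, and sparsity) for a pair of parameter tensors; the tensors below are synthetic stand-ins, not BERT weights.

```python
import torch

def closeness_stats(theta_pre, theta_ft):
    """Simple closeness measures between a pre-trained and a fine-tuned weight tensor."""
    diff = theta_ft - theta_pre
    l1 = diff.abs().sum()
    cos = torch.nn.functional.cosine_similarity(theta_pre.flatten(), theta_ft.flatten(), dim=0)
    angular = torch.acos(cos.clamp(-1, 1))          # angular distance in radians
    changed = (diff != 0).float().mean()            # L0-closeness: fraction of components that changed
    zeros = (theta_ft == 0).float().mean()          # L0-smallness: fraction of zero components
    return {"L1": l1.item(), "angular": angular.item(),
            "changed_frac": changed.item(), "zero_frac": zeros.item()}

# Toy example: a "fine-tuned" tensor obtained by zeroing ~10% of a random matrix
pre = torch.randn(768, 768)
ft = pre.clone()
ft[torch.rand_like(ft) < 0.1] = 0
print(closeness_stats(pre, ft))
```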
Also, note that randomly generated high-dimensional masks can also support multi-task learning, e.g.. A large body of literature is concerned with sparsification of large networks for efficient inference. Here, we employed iterative pruning during fine-tuning to produce highperformance fine-grain sparse models. To enforce parameter sparsity differentiably in combination with the L 0 -closeness constraint, instead of principled approaches to imposing L 0 -regularization , we used simpler straight-through estimator, much like binary quantization techniques ; note that this is also used by and. Consider a pre-trained network F θ: x → F (x; θ), parameterized by θ, noted as subscript for convenience. The fine-tuning procedure to perform a task t ∈ T can be described as a supervised training procedure of model G task-specific last layer unique to task t, and • denotes function composition. In the case of BERT, we have a stack of modules among which E is the embedding layer, P a final pooling layer and each B is a transformer block where collects all the learnable parameter matrices in the block. A(·, ·, ·) represents scaled dot-product attention , DO(·) dropout, LN (·) layer normalization and GeLU(·) the Gaussian error linear unit activation function . We experimented with the BERT BASE model , for which L = 12, with total parameter count of 109M 2. See Table 1 for additional taskspecific parameter counts, all 5 orders of magnitude smaller than the total parameter count. Optimization of them alone fails to fine-tune (see Appendix A). The GLUE benchmark is a collection of diverse natural language understanding tasks . We fine-tune on these tasks and report the evaluation performances. We exclude the problematic WNLI set 3. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for all other tasks. For all fine-tuning procedures, we use the exact hyperparameters as described in the original paper unless specified otherwise, with additional constraints described as follows. No constraints are imposed on task-specific last layers G (t). L 0 -close fine-tuning To search for fine-tuned solutions that are L 0 -close to the pre-trained parameters, we selectively fix certain parameter matrices at pre-trained values and perform fine-tuning optimization on a lower-dimensional parameter space. Sparse (L 0 -small) fine-tuning We use iterative pruning during fine-tuning to produce sparse models. Pruning is based on weight magnitudes in each layer and is performed periodically during fine-tuning with sparsity gradually increasing from 0% to a final level according to a cubic schedule. Iterative pruning successfully ensures that parameters are L 0 -small (see Appendix B). Supermask training as fine-tuning (sparse and L 0 -close) In order to search for fine-tuned networks that are both sparse and L 0 -close to the pre-trained one, we reparameterize the model by a multiplicative binary mask θ =θ µ, whereθ is the pre-trained parameters, and µ ∈ {0, 1} N the mask, N being the dimensionality of the parameter space and the Hadamard product. If learning is purely through optimizing the mask µ while holdingθ constant, the mask is called a supermask . Since µ is discrete-valued and thus not differentiable, we reparame- where Bern(p) denotes an element-wise independent Bernoulli sampler with probability p, and σ(·) the sigmoid function, applied element-wise on ν ∈ R N, the continuous mask parameter. 
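A minimal sketch of this reparameterization: the binary mask is sampled element-wise as µ = Bern(σ(ν)), and gradients flow to ν via the straight-through estimator discussed in the next paragraph. The tensor shapes are illustrative.

```python
import torch

class BernoulliSTE(torch.autograd.Function):
    """Forward: sample a binary mask mu ~ Bern(p) element-wise.
    Backward: pass the gradient straight through to p (straight-through estimator)."""
    @staticmethod
    def forward(ctx, probs):
        return torch.bernoulli(probs)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

def sample_supermask(nu):
    """mu = Bern(sigmoid(nu)): binary mask sampled from the continuous mask parameter nu."""
    return BernoulliSTE.apply(torch.sigmoid(nu))

# Masked weights used in place of the frozen pre-trained ones during the forward pass
theta_pre = torch.randn(768, 768)                       # stand-in for a pre-trained matrix
nu = torch.full_like(theta_pre, 5.0, requires_grad=True)
theta_masked = theta_pre * sample_supermask(nu)
```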
We treat gradient backpropagation through µ as a straight-through estimator, similar to the techniques used in;. Same fine-tuning hyperparameters were used except for the learning rate (see Appendix C). Control over the final sparsity is exerted by initialization of µ for fine-tuning. We initialize ν according to a soft magnitude-based pruning mask: a fraction of small-magnitude values are initialized to ν = −5 and the rest to ν = 5. We found that the initial sparsity directly controls the final sparsity (see Appendix D), allowing us to produce masks with sparsity levels ranging from 1% to 89%. We observe that the original fine-tuning procedures for GLUE tasks all take 10 2 to 10 4 parameter update steps (Table 1), negligible compared to the dimensionality of the parameter space, viz. 10 8. Thus, we first questioned whether fine-tuned parameters are indeed close to the pre-trained ones in parameter space. We measured the L 1 -distances, i.e. L 1 -norm of parameter difference, and angular distances (Table 2). Specifically, we inspect the weight matrices in self-attention layers, of size 768× 768 where 768 is the hidden state dimension. We report the minimum and maximum values across GLUE tasks: MNLI showed the largest values, and RTE showed the smallest values. Evidently, we see a significantly higher L 1 -and angular-closeness between fine-tuned and pre-trained parameters as compared to the expected distance between two independent random initializations. This suggests that, during the course of fine-tuning, the very few model parameter updates traversed a very short distance in the parameter space. Comparing the parameter distance across GLUE tasks, we find that it scales with the number of fine-tuning iterations (see Appendix E). Further, we inspect the closeness in parameter subspaces for each layer. We found that, though all layers change very little during fine-tuning, there is nevertheless a high degree of variability across different parameter matrices (Figure 2). Blocks deeper in the encoder stack are less L 1 -close but more angular-close than shallower ones. In all self-attention modules, value and output projection Next, inspired by the high degree of variability in each layer's parameter change during fine-tuning, we ask whether effective fine-tuning can be achieved by optimizing only a fraction of layers while having others fixed at pre-trained values, ing in fine-tuned models L 0 -close in parameter space. Our suggest this is indeed feasible (Table 3). Informed by different layers' sensitivity to finetuning, we performed fine-tuning experiments by progressively excluding: key projection layers in self-attention across all encoder stack layers, the penultimate and ultimate encoder stacks, and the word embedding layer. Each of these exclusions independently or all three combined do not significantly degrade performance, while reducing the number of parameters to fine-tune by up to 40% (from 109M to 66M). Encouraged by these , we ask whether more aggressive constraints can be imposed to the finetuning process to further cut computational cost. Though L 0 -close fine-tuning obviates optimization of a large fraction of parameters, avoiding full storage of all parameters for each fine-tuned task, all operations still need to be performed at inference time. In order to reduce operations, we seek to sparsify parameters. Combined with strict L 0 -closeness, this amounts to a search over a binary mask in a high-dimensional parameter space. 
We adopt supermask training (see Section 3) to this end. Figure 3 shows fine-tuned model performance across GLUE tasks obtained by supermask training. Final sparsity level of the supermask is controlled by its initialization (see Section 3 and Appendix D). We note that there is little task performance degradation between 1% and 40% of parameter sparsity. Layer-wise sparsity levels of supermasks also demonstrate systematic trends (Figure 4): across GLUE tasks, W Q, W K and W I tend to be sparser than W V, W D and W O, and shallower encoder stack layers are sparser than deeper ones. Moreover, we show that supermask fine-tuning of only a fraction of sensitive layers could also achieve high performance (Table 3). Figure 4: Supermask sparsity levels across layers. Shown is the low-sparsity MNLI supermask with a global sparsity level of 12.9%, but similar patterns are observed across all GLUE tasks. How does the learning of a supermask actually work? Does a supermask trivially learn to prune away the weights with smallest magnitudes? To address these questions, we inspect the magnitudes of the pre-trained weights zeroed by the supermasks (Figure 5, Table 4). These weights turn out to have remarkably higher magnitudes than the next smallest entries, suggesting the learning of supermaks is mechanistically distinct from trivial magnitude-based pruning. Table 4: Comparison between supermask pruned weights and magnitude-based pruned weights. Specifically, we compare the weights pruned with low-sparsity supermasks (initialized at 0% sparsity) and weights pruned with one-shot magnitude-based pruning at the same final sparsity. We report the maximum and mean magnitude of the pruned weights. The last row shows percentages of the overlap between the supermask and the magnitude-based pruning mask, i.e. the percentages of weights zeroed by the supermask that are also the smallest weights. One surprising finding of this study is the many occurrences of good fine-tuned parameters among the 2 N configurations in the set θ: θ =θ µ, µ ∈ {0, 1} N, viz. vertices of an N -dimensional hypercube. First, there exist supermasks up to 40% sparse without significant performance degrada- Table 5: Low-sparsity supermask performance. We report the sparsity levels achieved when the supermasks were initialized at 0% sparsity. For several tasks, fine-tuning is achieved with less than 3% of pre-trained weights pruned. For the supermask evaluation , we include the mean and standard deviation of 10 Bernoulli samplings of a single run. tion for all GLUE tasks, for some tasks even sparser (Figure 3). It is natural that performance drops as the mask becomes sparser; however, it is rather counterintuitive that there exist good supermasks at the dense end (Figure 3), because we know that the pre-trained model with only the task-specific last layer fine-tuned utterly fails to learn any task (Appendix A). Figure 6: Low-sparsity supermask performance, i.e. task performance of super-masks initialized at 0% sparsity, compared against baseline. To shed more light on this phenomenon, we study the supermasks trained with all-dense initialization (Figure 6). Surprisingly, these low-sparsity supermasks successfully learned to perform all the tasks without significant degradation from baseline. 
Essentially, complicated tasks like MNLI and QQP can be learned by clamping 12 − 13% of the pre-trained weights to zero, whereas for simple tasks like MRPC and RTE, setting only 1 − 2% of the pre-trained weight entries to zero suffices to learn the task (Table 5). Fine-tuning can indeed be very fine, suggesting relative frequent occurrences of good solutions within a sparse L 0 -neighborhood of the pre-trained parameters. Finally, we question whether the supermasks learned to perform different tasks share commonalities. Specifically, we quantify the amount of overlapping zeros in learned supermasks across different tasks (Figure 7). It seems the overlaps are not substantially larger than what would have been caused by chance, suggesting that, even though there seem to be many good supermasks for each task, these masks are largely distinct from each other, each unique to the task it learns. Figure 7: Fractions of overlap of zero elements in supermasks across GLUE tasks, compared to randomly generated masks. Each value in the grid shows the fraction of pruned elements in one task (horizontal axis) that are also pruned in the other (vertical axis). Here, we show low-sparsity supermasks (initialized at 0% sparsity) and compare the masks in the value layer of the first encoder, which is one of the most sparse layers in the entire model. We show that, due to surprisingly frequent occurrences of good parameter configurations in the sparse L 0 -vicinity of large pre-trained language models, two techniques are highly effective in producing efficient fine-tuned networks to perform specific language understanding tasks: optimizing only the most sensitive layers and learning to sparsify parameters. In contrast to commonly employed post-training compression methods that have to trade off with performance degradation, our procedure of generating sparse networks is by itself an optimization process that learns specific tasks. Optimization of only task-specific layers does not lead to successful fine-tuning. For instance, for the MRPC task, freezing parameter weights in the pre-trained model and optimizing the task-specific last layer alone yields a low-performing model. Across 10 independent runs, the model consistently predicts all 1's for the paraphrase classification task, yielding an F1 score of 81.2. This is a significant degradation compared to the baseline performance of 89.4 ± 0.7 across multiple runs (Table 3). Thus, it is critical to fine-tune layers in the pre-trained model and not just the task-specific layers alone. Iterative pruning during fine-tuning (Figure 8) outperforms supermask training (Figure 3) at higher sparsity levels. While supermask training remains successful up to 40% sparsity, iterative pruning produces binary masks up to 50% sparse and for some tasks even sparser without significant performance degradation. However, while iterative pruning saves compute on appropriate hardware by performing sparse operations with model parameters, supermask training further saves memory by storing only binary masks of certain layers for each task. Figure 8: Iterative pruning during fine-tuning. We plot the evaluation performance at sparsity levels from 10% to 90% across GLUE tasks. Note the baseline performance for each task marked by the leftmost end of each curve (0% sparsity). Supermask training requires a much larger learning rate compared to typical training . While a learning rate of 2 × 10 −5 is used for optimizing weights, a learning rate of 2 × 10 −1 is used for optimizing masks. 
We notice a degradation in performance at smaller learning rates for supermask training (Table 6). Table 6: MRPC low-sparsity supermask performance at learning rates from 2 × 10 −5 and 2 × 10 −1. We note that supermask training requires a much higher learning rate than typical parameter training. At low learning rates, the model significantly degrades in performance and predicts all 0's for the paraphrase classification task, yielding an F1 score of 0.0. This pattern holds true across GLUE tasks. There is no straightforward to control the amount of weights pruned when training supermasks . We find that setting the initial sparsity through a soft magnitude-based pruning mask controls the final sparsity level. Figure 9 shows this correlation between initial and final sparsities of supermasks across GLUE tasks. The supermasks are initialized to a soft magnitude-based pruning mask and the sparsity level shifts during supermask training. This figure shows the initial sparsity level plotted against the final sparsity level. We note that, at lower initial sparsity levels, the supermask is pushed to a greater sparsity level, whereas at higher sparsity levels, the supermask is pushed to a lower sparsity level. This pattern is similar across GLUE tasks but is most prominent in the MNLI task, scaling with the number of fine-tuning steps (Table 1). We find that parameter distance scales with the number of fine-tuning steps (Figure 10). Figure 10: Scaling of parameter distance with the number of fine-tuning iterations. We find that angular distance correlates with the amount of fine-tuning (Table 1). Each data point corresponds to a different GLUE task.
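To make the initialization and learning-rate details above concrete, here is one possible setup for the continuous mask parameters: ν starts at −5 for the smallest-magnitude pre-trained weights and at +5 elsewhere (so the initial sparsity tracks the chosen fraction), and only ν is optimized, with the much larger learning rate of 2 × 10⁻¹. The optimizer choice (Adam) and shapes are assumptions for illustration.

```python
import torch

def init_nu_from_magnitude(weight, init_sparsity):
    """Soft magnitude-based initialization of nu: the init_sparsity fraction of
    smallest-|w| entries start at -5 (sigmoid ~ 0.007), the rest at +5 (sigmoid ~ 0.993)."""
    nu = torch.full_like(weight, 5.0)
    k = int(init_sparsity * weight.numel())
    if k > 0:
        smallest = weight.abs().flatten().argsort()[:k]
        nu.view(-1)[smallest] = -5.0
    return torch.nn.Parameter(nu)

# Only the mask parameters are optimized, with a much larger learning rate
# than ordinary weight fine-tuning (2e-1 vs 2e-5).
weight = torch.randn(768, 768)                      # a frozen pre-trained matrix (stand-in)
nu = init_nu_from_magnitude(weight, init_sparsity=0.3)
optimizer = torch.optim.Adam([nu], lr=2e-1)
```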
Sparsification as fine-tuning of language models
560
scitldr
We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning. Training artificial agents to perform complex tasks is essential for many applications in robotics, video games and dialogue. If success on the task can be accurately described using a reward or cost function, reinforcement learning (RL) methods offer an approach to learning policies which has been shown to be successful in a wide variety of applications However, in other cases the desired behavior may only be roughly specified and it is unclear how to design a reward function to characterize it. For example, training a video game agent to adopt more human-like behavior using RL would require designing a reward function which characterizes behaviors as more or less human-like, which is difficult. Imitation learning (IL) offers an elegant approach whereby agents are trained to mimic the demonstrations of an expert rather than optimizing a reward function. Its simplest form consists of training a policy to predict the expert's actions from states in the demonstration data using supervised learning. While appealingly simple, this approach suffers from the fact that the distribution over states observed at execution time can differ from the distribution observed during training. Minor errors which initially produce small deviations from the expert trajectories become magnified as the policy encounters states further and further from its training distribution. This phenomenon, initially noted in the early work of , was formalized in the work of who proved a quadratic O(T 2) bound on the regret and showed that this bound is tight. The subsequent work of showed that if the policy is allowed to further interact with the environment and make queries to the expert policy, it is possible to obtain a linear bound on the regret. However, the ability to query an expert can often be a strong assumption. In this work, we propose a new and simple algorithm called DRIL (Disagreement-Regularized Imitation Learning) to address the covariate shift problem in imitation learning, in the setting where the agent is allowed to interact with its environment. Importantly, the algorithm does not require any additional interaction with the expert. It operates by training an ensemble of policies on the demonstration data, and using the disagreement in their predictions as a cost which is optimized through RL together with a supervised behavioral cloning cost. The motivation is that the policies in the ensemble will tend to agree on the set of states covered by the expert, leading to low cost, but are more likely to disagree on states not covered by the expert, leading to high cost. 
The RL cost thus pushes the agent back towards the distribution of the expert, while the supervised cost ensures that it mimics the expert within the expert's distribution. Our theoretical show that, subject to realizability and optimization oracle assumptions, our algorithm obtains a O(κ T) regret bound for tabular MDPs, where κ is a measure which quantifies a tradeoff between the concentration of the demonstration data and the diversity of the ensemble outside the demonstration data. We evaluate DRIL empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning, often recovering expert performance with only a few trajectories. Denote by S the state space, A the action space, and Π the class of policies the learner is considering. Let T denote the task horizon and π the expert policy whose behavior the learner is trying to mimic. For any policy π, let d π denote the distribution over states induced by following π. Denote C(s, a) the expected immediate cost of performing action a in state s, which we assume is bounded in. In the imitation learning setting, we do not necessarily know the true costs C(s, a), instead we observe expert demonstrations. Our goal is to find a policy π which minimizes an observed surrogate loss between its actions and the actions of the expert under the induced distribution of states, i.e.π For the following, we will assume is the total variation distance (denoted by ·), which is an upper bound on the 0 − 1 loss. Our goal is thus to minimize the following quantity, which represents the distance between the actions taken by our policy π and the expert policy π: The following shows that if represents an upper bound on the 0 − 1 loss and C satisfies certain smoothness conditions, then minimizing this loss within translates into an O(T) regret bound on the task cost Theorem 1. Let π be such that J exp (π) =, and Unfortunately, it is often not possible to optimize J exp directly, since it requires evaluating the expert policy on the states induced by following the current policy. The supervised behavioral cloning cost J BC, which is computed on states induced by the expert, is often used instead: Minimizing this loss within yields a quadratic regret bound on regret: Furthermore, this bound is tight: as we will discuss later, there exist simple problems which match the worst-case lower bound. Our algorithm is motivated by two criteria: i) the policy should perform similarly to the expert on the expert's data distribution, and ii) the policy should move towards the expert's data distribution if it is away from it. These two criteria are addressed by combining two losses: a standard behavior cloning loss, and an additional loss which represents the variance over actions induced by sampling different policies from the posterior given the demonstration data D. We call this the uncertainty cost, which is defined as: 2: Initialize policy π and policy ensemble E = {π e} 3: for e = 1, E do 4: Sample D e ∼ D with replacement, with |D e | = |D|. Train π e to minimize J BC (π e) on D e to convergence. 6: end for 7: for i = 1,... do Perform one gradient update to minimize J BC (π) using a minibatch from D. Perform one step of policy gradient to minimize E s∼dπ,a∼π(·|s) [C clip U (s, a)]. 
10: end for The motivation is that the variance over plausible policies is high outside the expert's distribution, since the data is sparse, but it is low inside the expert's distribution, since the data there is dense. Minimizing this cost encourages the policy to return to regions of dense coverage by the expert. Intuitively, this is what we would expect the expert policy π to do as well. The total cost which the algorithm optimizes is given by: The first term is a behavior cloning loss and is computed over states generated by the expert policy, of which the demonstration data D is a representative sample. The second term is computed over the distribution of states generated by the current policy and can be optimized using policy gradient. A number of methods can be used to approximate the posterior p(π|D). In our experiments, we use an ensemble E = {π e} |E| e=1 of models with different initializations which are trained on different bootstrap samples of the demonstration data. Note that the demonstration data is fixed, and this ensemble can be trained once offline. We then interleave the supervised behavioral cloning updates and the policy gradient updates which minimize the variance of the posterior. The full algorithm is shown in Algorithm 1. Other methods for approximating the posterior include Bayesian neural networks using diagonal Gaussian approximations or MCdropout , which we also found to work well (see Appendix D.2). In practice, for the supervised loss we optimize the KL divergence between the actions predicted by the policy and the expert actions, which is an upper bound on the total variation distance due to Pinsker's inequality. We also found it helpful to use a clipped uncertainty cost: where the threshold q is a top quantile of the raw uncertainty costs computed over the demonstration data. The threshold q defines a normal range of uncertainty based on the demonstration data, and values above this range incur a positive cost (or negative reward). The RL cost can be optimized using any policy gradient method. In our experiments we used advantage actor-critic (A2C) , which estimates the expected cost using rollouts from multiple parallel actors all sharing the same policy (see Appendix C for details). We note that model-based RL methods could in principle be used as well if sample efficiency is a constraint. We now analyze DRIL for tabular MDPs. We will show that, subject to assumptions that the policy class contains an optimal policy and that we are able to optimize costs within of their global minimum, our algorithm obtains a regret bound which is linear in κ T, where κ is a quantity specific to the environment and d π. Intuitively, κ represents a tradeoff between how concentrated the demonstration data is and how high the variance of the posterior is outside the expert distribution. Assumption 1. (Realizability) π ∈ Π. Assumption 2. (Optimization Oracle) For any given cost function J, our minimization procedure returns a policyπ ∈ Π such that J(π) ≤ arg min π∈Π J(π) +. The motivation behind our algorithm is that the policies in the ensemble agree inside the expert's distribution and disagree outside of it. This defines a reward function which pushes the learner back towards the expert's distribution if it strays away. However, what constitutes inside and outside the distribution, or sufficient agreement or disagreement, is ambiguous. Below we define quantities which makes these ideas precise. Definition 1. 
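For concreteness, a minimal sketch of the procedure summarized in Algorithm 1 above: an ensemble of behavioral-cloning policies is trained on bootstrap samples of the demonstrations, their prediction variance defines the uncertainty cost, and the clipped cost in {−1, +1} uses a top-quantile threshold computed on the demonstration data. The discrete-action policy interface and hyperparameters are assumptions, and the policy-gradient step that minimizes the clipped cost (A2C in the paper) is not shown.

```python
import numpy as np
import torch
import torch.nn.functional as F

def fit_bc(policy, demos, epochs=500, lr=2.5e-4):
    """Behavioral cloning: minimize NLL of expert actions. demos = list of (state, action)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    states = torch.stack([s for s, _ in demos])
    actions = torch.tensor([a for _, a in demos])
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(policy(states), actions)
        loss.backward()
        opt.step()

def train_ensemble(policy_ctor, demos, n_members=5):
    """Train an ensemble of BC policies on bootstrap samples of the demonstrations."""
    ensemble = []
    for _ in range(n_members):
        pi = policy_ctor()
        idx = np.random.randint(len(demos), size=len(demos))   # sample with replacement
        fit_bc(pi, [demos[i] for i in idx])
        ensemble.append(pi)
    return ensemble

def uncertainty_cost(ensemble, states, actions):
    """C_U(s, a): variance across ensemble members of pi(a|s) for the taken actions."""
    with torch.no_grad():
        probs = torch.stack([pi(states).softmax(-1) for pi in ensemble])  # (E, B, A)
    var = probs.var(dim=0, unbiased=False)                                # (B, A)
    return var.gather(1, actions.unsqueeze(1)).squeeze(1)                 # (B,)

def clipped_cost(raw_cost, q_threshold):
    """Clipped cost in {-1, +1}: +1 if above the demonstration-quantile threshold q, else -1."""
    return torch.where(raw_cost > q_threshold,
                       torch.ones_like(raw_cost), -torch.ones_like(raw_cost))
```

During training, behavioral cloning updates on the demonstrations are interleaved with policy-gradient updates that treat the negative clipped cost as a reward.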
For any set U ⊆ S, define the maximum probability ratio between the state distributions induced by the expert policy and by policies in the policy class inside of U as α(U) = max π∈Π s∈U. For a set U, α(U) will be low if the expert distribution has high mass inside of U, and the states in U is reachable by policies in the policy class. Definition 2. Define the minimum variance of the posterior outside of U as We now define the κ coefficient as the minimum ratio of these two quantities over all possible subsets of S. Definition 3. We define κ(U) = α(U) β(U), and κ = min U ⊆S κ(U). We can view minimizing κ(U) over different U ⊆ S as minimizing a tradeoff between coverage by the expert policy inside of U, and variance of the posterior outside of U. Note that by making U very small, it may be easy to make α(U) small, but doing so may also make β(U) small and κ(U) large. Conversely, making U large may make β(U) large but may also make α(U) large as a . We now establish a relationship between the κ coefficient just defined, the cost our algorithm optimizes, and J exp defined in Equation which we would ideally like to minimize and which translates into a regret bound. All proofs can be found in Appendix A. This shows that if κ is not too large, and we are able to make our cost function J alg (π) small, then we can ensure J exp (π) is also be small. This is only useful if our cost function can indeed achieve a small minimum. The next lemma shows that this is the case. Here is the threshold specified in Assumption 2. Combining these two lemmas with the previous of , we get a regret bound which is linear in κ T. Theorem 3. Letπ be the of minimizing J alg using our optimization oracle, and assume that Our bound is an improvement over that of behavior cloning if κ is less than O(T). Note that DRIL does not require knowledge of κ. The quantity κ is problem-dependent and depends on the environment dynamics, the expert policy and the class of policies available to the learner. We next compute κ exactly for a problem for which behavior cloning is known to perform poorly, and show that it is independent of T. Example 1. Consider the tabular MDP given in as an example of a problem where behavioral cloning incurs quadratic regret, shown in Figure 1. There are 3 states S = (s 0, s 1, s 2) and two actions (a 1, a 2). Each policy π can be represented as a set of probabilities π(a 1 |s) for each state s ∈ S 1. The posterior p(π(a 1 |s)|D) is given by a Beta distribution with parameters Beta(n 1 + 1, n 2 + 1) where n 1, n 2 are the number of times the pairs (s, a 1) and (s, a 2) occur, respectively, in the demonstration data D. The agent always starts in s 0 and the expert's policy is given by π (a 1 |s 0) = 1, π (a 1 |s 1) = 0, π (a 1 |s 2) = 1. due to the dynamics of the MDP, so dπ(s) d π (s) ≤ 1 for s ∈ {s 0, s 1}. Furthermore, since s 2 is never visited in the demonstration data, p(π(a 1 |s 2)|D) = Beta = U nif orm, which also implies that p(π(a 1 |s 2)|D) = U nif orm. It follows that Var π∼p(π|D) (π(a|s 2)) is equal to the variance of a uniform distribution over, i.e. 1 12. Therefore: Applying our from Theorem 3, we see that our algorithm obtains an O(T) regret bound on this problem, in contrast to the O(T 2) regret of behavioral cloning 2. The idea of learning through imitation dates back at least to the work of , who trained a neural network to imitate the steering actions of a human driver using images as input. 
The problem of covariate shift was already observed, as the author notes: "when driving for itself, the network may occasionally stray from the center of the road and so must be prepared to recover by steering the vehicle back to the center of the road". This issue was formalized in the work of , who on one hand proved an O(T 2) regret bound, and on the other hand provided an example showing this bound is tight. The subsequent work proposed the DAGGER algorithm which obtains linear regret, provided the agent can both interact with the environment, and query the expert policy. Our approach also requires environment interaction, but importantly does not require the ability to query the expert. Also of note is the work of , which extended DAGGER to time series prediction problems by using the true targets as expert corrections. Imitation learning has been used within the context of modern RL to help improve sample efficiency or overcome exploration . These settings assume the reward is known and that the policies can then be fine-tuned with reinforcement learning. In this case, covariate shift is less of an issue since it can be corrected using the reinforcement signal. The work of also proposed a method to address the covariate shift problem when learning from demonstrations when the reward is known, by conservatively extrapolating the value function outside the training distribution using negative sampling. This addresses a different setting from ours, and requires generating plausible states which are off the manifold of training data, which may be challenging when the states are high dimensional such as images. The work of proposed to treat imitation learning within the Q-learning framework, setting a positive reward for all transitions inside the demonstration data and zero reward for all other transitions in the replay buffer. This rewards the agent for repeating (or returning to) the expert's transitions. The work of also incorporates a mechanism for reducing covariate shift by fitting a Q-function that classifies whether the demonstration states are reachable from the current state. Random Expert Distillation uses Random Network Distillation (RND) to estimate the support of the expert's distribution in state-action space, and minimizes an RL cost designed to guide the agent towards the expert's support. This is related to our method, but differs in that it minimizes the RND prediction error rather than the posterior variance and does not include a behavior cloning cost. The behavior cloning cost is essential to our theoretical and avoids certain failure modes, see Appendix B for more discusion. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm which addresses the same setting as ours. It operates by training a discriminator network to distinguish expert states from states generated by the current policy, and the negative output of the discriminator is used as a reward signal to train the policy. The motivation is that states which are outside the training distribution will be assigned a low reward while states which are close to it will be assigned a high reward. This encourages the policy to return to the expert distribution if is strays away from it. However, the adversarial training procedure means that the reward function is changing over time, which can make the algorithm unstable or difficult to tune. In contrast, our approach uses a simple fixed reward function. We include comparisons to GAIL in our experiments. 
Using disagreement between models in an ensemble to represent uncertainty has recently been explored in several contexts. The works of used disagreement between different dynamics models to drive exploration in the context of model-based RL. Conversely, used variance across different dropout masks to prevent policies from exploiting error in dynamics models. Ensembles have also been used to represent uncertainty over Q-values in model-free RL in order to encourage exploration . Within the context of imitation learning, the work of used the variance of the ensemble together with the DAGGER algorithm to decide when to query the expert demonstrator to minimize unsafe situations. Here, we use disagreement between different policies sampled from the posterior to address covariate shift in the context of imitation learning. As a first experiment, we applied DRIL to the tabular MDP of shown in Figure 1. We computed the posterior over the policy parameters given the demonstration data using . Shaded region represents range between 5 th and 95 th quantiles, computed across 500 trials. Behavior cloning exhibits poor worstcase regret, whereas DRIL has low regret across all trials. a separate Beta distribution for each state s with parameters determined by the number of times each action was performed in s. For behavior cloning, we sampled a single policy from this posterior. For our method, we sampled 5 policies and used their negative variance to define an additional reward function. We combined this with a reward which was the probability density function of a given state-action pair under the posterior distribution, which corresponds to the supervised learning loss, and used tabular Q-learning to optimize the sum of these two reward functions. This experiment was repeated 500 times for time horizon lengths up to 500 and N = 1, 5, 10 expert demonstration trajectories. Figure 2 shows plots of the regret over the 500 different trials across different time horizons. Although the average performance of BC improves with more expert demonstrations, it exhibits poor worst-case performance with some trials incurring very high regret, especially when using fewer demonstrations. Our method has low regret across all trials, which stays close to constant independantly of the time horizon, even with a single demonstration. This performance is better than that suggested by our analysis, which showed a worst-case linear bound with respect to time horizon. We next evaluated our approach on six different Atari environments. We used pretrained PPO agents from the stable baselines repository to generate N = {1, 3, 5, 10, 15, 20} expert trajectories. We compared against two other methods: standard behavioral cloning (BC) and Generative Adversarial Imitation Learning (GAIL). Results are shown in Figure 3a. DRIL outperforms behavioral cloning across most environments and numbers of demonstrations, often by a substantial margin. In the worst case its performance matches that of behavior cloning. In many cases, our method is able to match the expert's performance using a small number of trajectories. Figure 3b shows the evolution of the uncertainty cost and the policy reward throughout training. In all cases, the test reward improves while the uncertainty cost decreases. Interestingly, there is correspondence between the change in the uncertainty cost during training and the gap in performance between behavior cloning and DRIL. 
For example, in MsPacman there is both a small improvement in uncertainty cost over time and a small gap between behavior cloning and our method, whereas in Breakout there is a large improvement in uncertainty cost and a large gap between behavior cloning and our method. This suggests that the gains from our method comes from redirecting the policy back towards the expert manifold, which is manifested as a decrease in the uncertainty cost. We were not able to obtain meaningful performance for GAIL on these domains, despite performing a hyperparameter search across learning rates for the policy and discriminator, and across different numbers of discriminator updates. We additionally experimented with clipping rewards in an effort to stabilize performance. These are consistent with those of , who also reported negative when running GAIL on images. While improved performance might be possible with more sophisticated adversarial training techniques, we note that this contrasts with our method which uses a fixed reward function obtained through simple supervised learning. In Appendix D we provide ablation experiments examining the effects of different cost function choices and the role of the BC loss. We also compare the ensemble approach to a dropout-based approach for posterior approximation and show that DRIL works well in both cases. We next report of running our method on a 6 different continuous control tasks from the PyBullet 3 and OpenAI Gym environments. We again used pretrained agents to generate expert demonstrations. Results are shown in Figure 4. In these environments we found behavior cloning to be a much stronger baseline than for the Atari environments, and in many tasks it was able to match expert performance using as little as 3 trajectories. Our method exhibits a modest improvement on Walker2D and BipedalWalkerHardcore when a single trajectory is used, and otherwise has similar performance to behavior cloning. The fact that our method does not perform worse than behavior cloning on tasks where covariate shift is likely less of an issue provides evidence of its robustness. Addressing covariate shift has been a long-standing challenge in imitation learning. In this work, we have proposed a new method to address this problem by penalizing the disagreement between an ensemble of different policies sampled from the posterior. Importantly, our method requires no additional labeling by an expert. Our experimental demonstrate that DRIL can often match expert performance while using only a small number of trajectories across a wide array of tasks, ranging from tabular MDPs to pixel-based Atari games and continuous control tasks. On the theoretical side, we have shown that our algorithm can provably obtain a low regret bound for tabular problems in which the κ parameter is low. There are multiple directions for future work. On the theoretical side, extending our analysis to continuous state spaces and characterizing the κ parameter on a larger array of problems would help to better understand the settings where our method can expect to do well. Empirically, there are many other settings in structured prediction (Daumé et al., 2009) where covariate shift is an issue and where our method could be applied. For example, in dialogue and language modeling it is common for generated text to become progressively less coherent as errors push the model off the manifold it was trained on. 
Our method could potentially be used to fine-tune language or translation models after training by applying our uncertainty-based cost function to the generated text. A PROOFS Proof. We will first show that for any π ∈ Π and U ⊆ S, we have. We can rewrite this as: We begin by bounding the first term: We next bound the second term: Now observe we can decompose the RL cost as follows: Putting these together, we get the following: Here we have used the fact that β(U) ≤ 1 since 0 ≤ π(a|s) ≤ 1 and α(U) ≥ s∈U Taking the minimum over subsets U ⊆ S, we get J exp (π) ≤ κJ alg (π). Proof. Plugging the optimal policy into J alg, we get: We will first bound Term 1: We will next bound Term 2: The last step follows from our optimization oracle assumption: Combining the bounds on the two terms, we get J alg (π) ≤ 2. Since π ∈ Π, the follows. Theorem 1. Letπ be the of minimizing J alg using our optimization oracle, and assume that Proof. By our optimization oracle and Lemma 2, we have Combining with Lemma 1, we get: Applying Theorem 1 from , we get J(π) ≤ J(π) + 3uκ T. The following example shows how minimizing the uncertainty cost alone without the BC cost can lead to highly sub-optimal policies if the demonstration data is generated by a stochastic policy which is only slightly suboptimal. Consider the following deterministic chain MDP: The agent always starts in s 1, and gets a reward of 1 in s 3 and 0 elsewhere. The optimal policy is given by: Assume the demonstration data is generated by the following policy, which is only slightly suboptimal: Let us assume realizability and perfect optimization for simplicity. If both transitions (s 2, a 0) and (s 2, a 1) appear in the demonstration data, then Random Expert Distillation (RED) will assign zero cost to both transitions. If we do not use bootstrapped samples to train the ensemble, then DRIL without the BC cost (we will call this UO-DRIL for Uncertainty-Only DRIL) will also assign zero cost to both transitions since all models in the ensemble would recover the Bayes optimal solution given the demonstration data. If we are using bootstrapped samples, then the Bayes optimal solution for each bootstrapped sample may differ and thus the different policies in the ensemble might disagree in their predictions, although given enough demonstration data we would expect these differences (and thus the uncertainty cost) to be small. Note also that since no samples at the state s 0 occur in the demonstration data, both RED and UO-DRIL will likely assign high uncertainty costs to state-action pairs at (s 0, a 0), (s 0, a 1) and thus avoid highly suboptimal policies which get stuck at s 0. Now consider policiesπ 1,π 2 given by:π Both of these policies only visit state-action pairs which are visited by the demonstration policy. In the case described above, both RED and UO-DRIL will assignπ 1 andπ 2 similarly low costs. However,π 1 will cycle forever between s 1 and s 2, never collecting reward, whileπ 2 will with high probability reach s 3 and stay there, thus achieving high reward. This shows that minimizing the uncertainty cost alone does not necessarily distinguish between good and bad policies. However,π 1 will incur a higher BC cost thanπ 2, sinceπ 2 more closely matches the demonstration data at s 2. This shows that including the BC cost can be important for further disambiguating between policies which all stay within the distribution of the demonstration data, but have different behavior within that distribution. 
C EXPERIMENTAL DETAILS C.1 ATARI ENVIRONMENTS All behavior cloning and ensemble models were trained to minimize the negative log-likelihood classification loss on the demonstration data for 500 epochs using Adam and a learning rate of 2.5 · 10 −4. For our method, we initially performed a hyperparameter search on Space Invaders over the following values: We then chose the best values and kept those hyperparameters fixed for all other environments. All other A2C hyperparameters follow the default values in the repo : policy networks consisted of 3-layer convolutional networks with 8−32−64 feature maps followed by a single-layer MLP with 512 hidden units. For GAIL, we used the implementation in and replaced the MLP discriminator by a CNN discriminator with the same architecture as the policy network. We initially performed a hyperparameter search on Breakout with 10 demonstrations over the values shown in Table 2. However, we did not find any hyperparameter configuration which performed better than behavioral cloning. All behavior cloning and ensemble models were trained to minimize the mean-squared error regression loss on the demonstration data for 500 epochs using Adam and a learning rate of 2.5 · 10 −4. Policy networks were 2-layer fully-connected MLPs with tanh activations and 64 hidden units. • DRIL This is the regular DRIL agent, which optimizes both the BC cost and the clipped cost: • DRIL (clipped cost 0/1) This is the same as the regular DRIL agent, except that we use the following clipped cost function: • DRIL (raw cost) This is the same as the regular DRIL agent, except that we use the raw cost C U (s, a) rather than the clipped cost. • DRIL (no BC cost) This is the same as the regular DRIL agent, except that we remove the BC updates and only optimize C clip U (s, a). Results are shown in Figure 5. First, switching from the clipped cost in {−1, +1} to the clipped cost in {0, 1} or the raw cost causes a large drop in performance across most environments. One explanation may be that since the costs are always positive for both variants (which corresponds to a reward which is always negative), the agent may learn to terminate the episode early in order to minimize the total cost incurred. Using a cost/reward which has both positive and negative values avoids this behavior. Second, optimizing the pure BC cost performs better than the pure uncertainty cost for some environments (MsPacman, SpaceInvaders, BeamRider) while optimizing the pure uncertainty cost performs better than BC for others (Breakout, Qbert). DRIL, which optimizes both, has robust performance and performs the best out of the different variants over most environments and numbers of trajectories. We provide additional comparing the ensembling and MC-dropout approaches to posterior estimation. For MC-Dropout we trained a single policy network with a dropout rate of 0.1 applied to all layers except the last, and estimated the variance for each state-action pair using 10 different dropout masks. Similarly to the ensemble approach, we computed the 98 th quantile of the variance on the demonstration data and used this value in our clipped cost. Results are shown for three environments in Figure 6. MC-dropout performs similarly to the ensembling approach, which shows that our method can be paired with different approaches to posterior estimation.
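As a rough sketch of the MC-dropout alternative just mentioned: dropout is kept active at inference time and the variance over several dropout masks replaces the ensemble variance, with the clipping threshold taken as the 98th percentile of the cost on the demonstration data. The policy is assumed to output action logits; everything else follows the description above.

```python
import torch

def mc_dropout_variance(policy, states, actions, n_masks=10):
    """Estimate C_U(s, a) by averaging over several dropout masks instead of an explicit ensemble."""
    policy.train()                       # keep dropout layers stochastic at inference
    with torch.no_grad():
        probs = torch.stack([policy(states).softmax(-1) for _ in range(n_masks)])  # (M, B, A)
    var = probs.var(dim=0, unbiased=False)                                          # (B, A)
    return var.gather(1, actions.unsqueeze(1)).squeeze(1)

def quantile_threshold(policy, demo_states, demo_actions, q=0.98):
    """Threshold q: top quantile of the raw uncertainty cost on the demonstration data."""
    costs = mc_dropout_variance(policy, demo_states, demo_actions)
    return torch.quantile(costs, q)
```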
Method for addressing covariate shift in imitation learning using ensemble uncertainty
561
scitldr
We present and discuss a simple image preprocessing method for learning disentangled latent factors. In particular, we utilize the implicit inductive bias contained in features from networks pretrained on the ImageNet database. We enhance this bias by explicitly fine-tuning such pretrained networks on tasks useful for the NeurIPS2019 disentanglement challenge, such as angle and position estimation or color classification. Furthermore, we train a VAE on regionally aggregated feature maps, and discuss its disentanglement performance using metrics proposed in recent literature. Fully unsupervised methods, that is, without any human supervision, are doomed to fail for tasks such as learning disentangled representations. In this contribution, we utilize the implicit inductive bias contained in models pretrained on the ImageNet database, and enhance it by finetuning such models on challenge-relevant tasks such as angle and position estimation or color classification. In particular, our submission for challenge stage 2 builds on our submission from stage 1, in which we employed pretrained CNNs to extract convolutional feature maps as a preprocessing step before training a VAE. Although this approach already resulted in partial disentanglement, we identified two issues with the feature vectors extracted this way. Firstly, the feature extraction network is trained on ImageNet, which is rather dissimilar to the MPI3d dataset used in the challenge. Secondly, the feature aggregation mechanism was chosen ad hoc and likely does not retain all information needed for disentanglement. We attempt to fix these issues by finetuning the feature extraction network as well as learning the aggregation of feature maps from data, using the labels of the simulation datasets MPI3d-toy and MPI3d-realistic. Our method consists of the following three steps: supervised finetuning of the feature extraction CNN (section 2.1), extracting a feature vector from each image in the dataset using the finetuned network (section 2.2), and training a VAE to reconstruct the feature vectors and disentangle the latent factors of variation (section 2.3). In this step, we finetune the feature extraction network offline (before submission to the evaluation server). The goal is to adapt the network such that it produces aggregated feature vectors that capture the latent variables well. In particular, the network is finetuned by learning to predict the value of each latent factor from the aggregated feature vector of an image. To this end, we use the simulation datasets MPI3d-toy and MPI3d-realistic 2, namely the images as inputs and the labels as supervised classification targets. For the feature extraction network, we use the VGG19-BN architecture of the torchvision package. The input images are standardized using the per-channel mean and variance computed from the ImageNet dataset. We use the output feature maps of the last layer before the final average pooling (dimensionality 512 × 2 × 2) as the input to a feature aggregation module which reduces the feature map to a 512-dimensional vector 3. This aggregation module consists of three convolution layers with 1024, 2048 and 512 feature maps and kernel sizes 1, 2 and 1, respectively. Each layer is followed by batch normalization and ReLU activation. We also employ layerwise dropout with rate 0.1 before each convolution layer. Finally, the aggregated feature vector is ℓ2-normalized, which was empirically found to be important for the resulting disentanglement performance.
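A hedged PyTorch sketch of the feature extractor just described: the VGG19-BN convolutional trunk followed by the aggregation module (1024, 2048, 512 feature maps with kernel sizes 1, 2, 1; batch norm, ReLU, dropout 0.1 before each convolution; ℓ2-normalization). The 64 × 64 input resolution, the use of plain element-wise dropout, and the torchvision weights argument shown are assumptions that may differ from the authors' setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19_bn

class AggregationModule(nn.Module):
    """Reduce the 512x2x2 VGG feature map to an L2-normalized 512-d vector."""
    def __init__(self, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p_drop), nn.Conv2d(512, 1024, kernel_size=1), nn.BatchNorm2d(1024), nn.ReLU(),
            nn.Dropout(p_drop), nn.Conv2d(1024, 2048, kernel_size=2), nn.BatchNorm2d(2048), nn.ReLU(),
            nn.Dropout(p_drop), nn.Conv2d(2048, 512, kernel_size=1), nn.BatchNorm2d(512), nn.ReLU(),
        )

    def forward(self, feat):                    # feat: (B, 512, 2, 2)
        z = self.net(feat).flatten(1)           # (B, 512) once the 2x2 kernel collapses space
        return F.normalize(z, p=2, dim=1)       # L2-normalization of the aggregated vector

class FeatureExtractor(nn.Module):
    """VGG19-BN convolutional trunk (output before the final average pooling) + aggregation."""
    def __init__(self):
        super().__init__()
        self.trunk = vgg19_bn(weights="IMAGENET1K_V1").features
        self.agg = AggregationModule()

    def forward(self, x):                       # x: (B, 3, 64, 64), ImageNet-standardized
        return self.agg(self.trunk(x))          # (B, 512)
```

During finetuning, one linear classification head per latent factor is attached to this 512-d vector, as described next.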
Then, for each latent factor, we add a linear classification layer computing the logits of each class from the aggregated feature vector. These linear layers are discarded after this step. We use both MPI3d-toy and MPI3d-realistic for training to push the network to learn features that identify the latent factors in a robust way, regardless of details such as reflections or specific textures. In particular, we use a random split of 80% of each dataset as the training set and the remaining samples as a validation set. VGG19-BN is initialized with weights resulting from ImageNet training (https://download.pytorch.org/models/vgg19_bn-c79401a0.pth), and the aggregation module and linear layers are randomly initialized using uniform He initialization. The network is trained for 5 epochs using the RAdam optimizer with learning rate 0.001, β0 = 0.999, β1 = 0.9, a batch size of 512, and a weight decay of 0.01. We use a multi-task classification loss consisting of the sum of cross entropies between the prediction and the ground truth of each latent factor. After training, the classification accuracy on the validation set is around 98% for the two degrees of freedom of the robot arm, and around 99.9% for the remaining latent factors. In the second step, we use the finetuned feature extraction network to produce a set of aggregated feature vectors. We simply run the network detailed in the previous step on each image of the dataset and store the aggregated 512-dimensional vectors in memory. Again, inputs to the feature extractor are standardized such that the mean and variance across each channel correspond to the respective ones from the ImageNet dataset. Finally, we train a standard β-VAE on the set of aggregated feature vectors resulting from the previous step. The encoder network consists of 4 fully-connected layers with 1024 neurons each, followed by two fully-connected layers parametrizing the mean and log variance of the factorized Gaussian approximate posterior q(z | x) = N(µ, σ²) with C = 16 latent factors. The number of latent factors was experimentally determined. The decoder network consists of 4 fully-connected layers with 1024 neurons each, followed by a fully-connected layer parametrizing the mean of the factorized Gaussian conditional distribution p(x | z) = N(µ̂, I). The mean is constrained to the range (0, 1) using the sigmoid activation. All fully-connected layers but the final ones use batch normalization and are followed by ReLU activation functions. We use orthogonal initialization for all layers and assume a factorized Gaussian prior p(z) = N(0, I) on the latent variables. For optimization, we use the RAdam optimizer with a learning rate of 0.001, β0 = 0.999, β1 = 0.9, and a batch size of B = 256. The VAE is trained for N = 100 epochs by minimizing L = ||x − x̂||² + (β / C) · D_KL(q(z | x) || p(z)), where β is a hyperparameter that balances the MSE reconstruction term and the KLD penalty term. As the scale of the KLD term depends on the number of latent factors C, we normalize it by C such that β can be varied independently of C. It can be harmful to start training with too much weight on the KLD term.
Therefore, we use a cosine schedule to smoothly anneal β from β_start = 0.001 to β_end = 0.4 over the course of training, where β(t) denotes the value of β in training epoch t ∈ {0, ..., N−1} and annealing runs from epoch t_start = 10 to epoch t_end = 49. This schedule lets the model initially learn to reconstruct the data well and only then puts pressure on the latent variables to be factorized, which we found to considerably improve performance. On the public leaderboard (i.e. on MPI3D-real), our best submission achieves the first rank on the FactorVAE and DCI metrics, with a large gap to the second-placed entry. See appendix A for a discussion of the results. Unsurprisingly, introducing prior knowledge simplifies the disentanglement task considerably, which is reflected in improved scores. To do so, our approach makes use of task-specific supervision obtained from simulation, which restricts its applicability. Nevertheless, it constitutes a demonstration that this type of supervision can transfer to better disentanglement on real world data, which was one of the goals of the challenge.
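For concreteness, here is a minimal sketch of the training objective and the β annealing schedule described above. The closed form of the cosine schedule is an assumption (any smooth cosine interpolation from β_start to β_end between t_start and t_end matches the description), and the function names are illustrative.

```python
import math
import torch

def beta_schedule(epoch, beta_start=0.001, beta_end=0.4, t_start=10, t_end=49):
    """Cosine annealing of beta over training epochs (assumed closed form)."""
    if epoch <= t_start:
        return beta_start
    if epoch >= t_end:
        return beta_end
    progress = (epoch - t_start) / (t_end - t_start)          # in (0, 1)
    return beta_start + 0.5 * (beta_end - beta_start) * (1.0 - math.cos(math.pi * progress))

def beta_vae_loss(x, x_rec, mu, logvar, beta, num_latents=16):
    """MSE reconstruction plus (beta / C) * KL(q(z|x) || N(0, I))."""
    rec = torch.sum((x - x_rec) ** 2, dim=1).mean()
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return rec + beta * kld / num_latents
```

At the start of epoch t one would set beta = beta_schedule(t) and use that value in beta_vae_loss for every batch of the epoch.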
We use supervised finetuning of feature vectors to improve transfer from simulation to the real world
562
scitldr
A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency. In a traditional reinforcement learning setting, an agent interacts with an environment in a sequence of episodes, observing states and acting according to a policy that ideally maximizes expected cumulative reward. If an agent is required to pursue different goals across episodes, its goal-conditional policy may be represented by a probability distribution over actions for every combination of state and goal. This distinction between states and goals is particularly useful when the probability of a state transition given an action is independent of the goal pursued by the agent. Learning such goal-conditional behavior has received significant attention in machine learning and robotics, especially because a goal-conditional policy may generalize desirable behavior to goals that were never encountered by the agent BID17 BID3;; BID16 BID29;;; BID11. Consequently, developing goal-based curricula to facilitate learning has also attracted considerable interest (; ; BID20 BID19 . In hierarchical reinforcement learning, goal-conditional policies may enable agents to plan using subgoals, which abstracts the details involved in lower-level decisions BID10 BID26 ;).In a typical sparse-reward environment, an agent receives a non-zero reward only upon reaching a goal state. Besides being natural, this task formulation avoids the potentially difficult problem of reward shaping, which often biases the learning process towards suboptimal behavior BID9. Unfortunately, sparse-reward environments remain particularly challenging for traditional reinforcement learning algorithms BID0 ). For example, consider an agent tasked with traveling between cities. In a sparse-reward formulation, if reaching a desired destination by chance is unlikely, a learning agent will rarely obtain reward signals. At the same time, it seems natural to expect that an agent will learn how to reach the cities it visited regardless of its desired destinations. In this context, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended is called hindsight. This capacity was recently introduced by BID0 to off-policy reinforcement learning algorithms that rely on experience replay . In earlier work, introduced hindsight to policy search based on Bayesian optimization BID5.In this paper, we demonstrate how hindsight can be introduced to policy gradient methods BID27 BID28 BID22, generalizing this idea to a successful class of reinforcement learning algorithms BID13 ).In contrast to previous work on hindsight, our approach relies on importance sampling BID2. 
In reinforcement learning, importance sampling has been traditionally employed in order to efficiently reuse information obtained by earlier policies during learning BID15 BID12; BID7. In comparison, our approach attempts to efficiently learn about different goals using information obtained by the current policy for a specific goal. This approach leads to multiple formulations of a hindsight policy gradient that relate to well-known policy gradient . In comparison to conventional (goal-conditional) policy gradient estimators, our proposed estimators lead to remarkable sample efficiency on a diverse selection of sparse-reward environments. We denote random variables by upper case letters and assignments to these variables by corresponding lower case letters. We let Val(X) denote the set of valid assignments to a random variable X. We also omit the subscript that typically relates a probability function to random variables when there is no risk of ambiguity. For instance, we may use p(x) to denote p X (x) and p(y) to denote p Y (y).Consider an agent that interacts with its environment in a sequence of episodes, each of which lasts for exactly T time steps. The agent receives a goal g ∈ Val(G) at the beginning of each episode. At every time step t, the agent observes a state s t ∈ Val(S t), receives a reward r(s t, g) ∈ R, and chooses an action a t ∈ Val(A t). For simplicity of notation, suppose that Val(G), Val(S t), and Val(A t) are finite for every t. In our setting, a goal-conditional policy defines a probability distribution over actions for every combination of state and goal. The same policy is used to make decisions at every time step. Let τ = s 1, a 1, s 2, a 2,..., s T −1, a T −1, s T denote a trajectory. We assume that the probability p(τ | g, θ) of trajectory τ given goal g and a policy parameterized by θ ∈ Val(Θ) is given by p(τ | g, θ) = p(s 1) T −1 t=1 p(a t | s t, g, θ)p(s t+1 | s t, a t).(In contrast to a Markov decision process, this formulation allows the probability of a state transition given an action to change across time steps within an episode. More importantly, it implicitly states that the probability of a state transition given an action is independent of the goal pursued by the agent, which we denote by S t+1 ⊥ ⊥ G | S t, A t . For every τ, g, and θ, we also assume that p(τ | g, θ) is non-zero and differentiable with respect to θ. Assuming that G ⊥ ⊥ Θ, the expected return η(θ) of a policy parameterized by θ is given by DISPLAYFORM0 T t=1 r(s t, g).The action-value function is given by Q This section presents for goal-conditional policies that are analogous to well-known for conventional policies BID13. They establish the foundation for the presented in the next section. The corresponding proofs are included in Appendix A for completeness. The objective of policy gradient methods is finding policy parameters that achieve maximum expected return. When combined with Monte Carlo techniques BID2, the following allows pursuing this objective using gradient-based optimization. Theorem 3.1 (Goal-conditional policy gradient). The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM0 The following allows employing a baseline to reduce the variance of the gradient estimator. Theorem 3.2 (Goal-conditional policy gradient, baseline formulation). 
For every t, θ, and associated real-valued (baseline) function b θ t, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM1 Appendix A.7 presents the constant baselines that minimize the (elementwise) variance of the corresponding estimator. However, such baselines are usually impractical to compute (or estimate), and the variance of the estimator may be reduced further by a baseline function that depends on state and goal. Although generally suboptimal, it is typical to let the baseline function b θ t approximate the value function V θ t . Lastly, actor-critic methods may rely on the following for goal-conditional policies. Theorem 3.3 (Goal-conditional policy gradient, advantage formulation). The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM2 This section presents the novel ideas that introduce hindsight to policy gradient methods. The corresponding proofs can be found in Appendix B.Suppose that the reward r(s, g) is known for every combination of state s and goal g, as in previous work on hindsight BID0 ). In that case, it is possible to evaluate a trajectory obtained while trying to achieve an original goal g for an alternative goal g. Using importance sampling, this information can be exploited using the following central . Theorem 4.1 (Every-decision hindsight policy gradient). For an arbitrary (original) goal g, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM0 In the formulation presented above, every reward is multiplied by the ratio between the likelihood of the corresponding trajectory under an alternative goal and the likelihood under the original goal (see Eq. 1). Intuitively, every reward should instead be multiplied by a likelihood ratio that only considers the corresponding trajectory up to the previous action. This intuition underlies the following important , named after an analogous for action-value functions by BID15.Theorem 4.2 (Per-decision hindsight policy gradient). For an arbitrary (original) goal g, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM1 This section details gradient estimation based on the presented in the previous section. The corresponding proofs can be found in Appendix C. DISPLAYFORM0 where each trajectory τ (i) is obtained using a policy parameterized by θ in an attempt to achieve a goal g (i) chosen by the environment. The following points to a straightforward estimator based on Theorem 4.2. Theorem 5.1. The per-decision hindsight policy gradient estimator, given by DISPLAYFORM1 is a consistent and unbiased estimator of the gradient ∇η(θ) of the expected return. In preliminary experiments, we found that this estimator leads to unstable learning progress, which is probably due to its potential high variance. The following , inspired by weighted importance sampling BID2, represents our attempt to trade variance for bias. Theorem 5.2. The weighted per-decision hindsight policy gradient estimator, given by DISPLAYFORM2 is a consistent estimator of the gradient ∇η(θ) of the expected return. In simple terms, the likelihood ratio for every combination of trajectory, (alternative) goal, and time step is normalized across trajectories by this estimator. In Appendix C.3, we present a that enables the corresponding consistency-preserving weighted baseline. t, g) = 0} composed of so-called active goals during the i-th episode. 
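To make the estimator concrete, the following PyTorch sketch shows one way the weighted per-decision hindsight policy gradient (Theorem 5.2) could be computed as a surrogate loss. It is a reconstruction from the verbal description above; the tensor layout, the detaching of likelihood ratios, the surrogate-loss formulation, and the normalization details are assumptions rather than the authors' implementation.

```python
import torch

def weighted_hpg_surrogate(logp, logp_orig, reward, p_goal):
    """Sketch of the weighted per-decision hindsight policy gradient.

    logp:      [N, T-1, G] log pi(a_t | s_t, g, theta) under every candidate goal g
    logp_orig: [N, T-1]    log pi(a_t | s_t, g_i, theta) under each trajectory's own goal
    reward:    [N, T, G]   r(s_t, g), assumed known for every state/goal pair
    p_goal:    [G]         goal distribution p(g)

    Differentiating (and ascending) the returned scalar follows the estimator:
    likelihood ratios are detached so that gradients only flow through the
    log pi(a_t | s_t, g) score-function terms.
    """
    # Cumulative log likelihood ratios up to each action step:
    # log prod_{k <= t} pi(a_k | s_k, g) / pi(a_k | s_k, g_i).
    log_ratio = (logp - logp_orig.unsqueeze(-1)).detach()             # [N, T-1, G]
    cum_ratio = torch.cumsum(log_ratio, dim=1).exp()                  # [N, T-1, G]

    # Weighted importance sampling: normalize each ratio across trajectories
    # for every (goal, time step), replacing the usual 1/N average.
    w = cum_ratio / cum_ratio.sum(dim=0, keepdim=True).clamp_min(1e-12)

    # Per-decision terms: the ratio up to step t'-1 weights the reward r(s_t', g).
    weighted_r = w * reward[:, 1:, :]                                 # [N, T-1, G]

    # For each action step t, sum the weighted rewards over all later steps t' > t.
    future = torch.flip(torch.cumsum(torch.flip(weighted_r, dims=[1]), dim=1), dims=[1])

    # Score-function surrogate, summed over steps and trajectories, then
    # averaged over goals according to p(g).
    per_goal = (logp * future).sum(dim=(0, 1))                        # [G]
    return (p_goal * per_goal).sum()
```

In practice one would only evaluate the sum over goals on the active goals of each batch, since all other goals contribute zero reward terms.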
The feasibility of the proposed estimators relies on the fact that only active goals correspond to non-zero terms inside the expectation over goals in Expressions 11 and 12. In many natural sparse-reward environments, active goals will correspond directly to states visited during episodes (for instance, the cities visited while trying to reach other cities), which enables computing said expectation exactly when the goal distribution is known. The proposed estimators have remarkable properties that differentiate them from previous (weighted) importance sampling estimators for off-policy learning. For instance, although a trajectory is often more likely under the original goal than under an alternative goal, in policies with strong optimal substructure, a high probability of a trajectory between the state a and the goal (state) c that goes through the state b may naturally allow for a high probability of the corresponding (sub)trajectory between the state a and the goal (state) b. In other cases, the (unnormalized) likelihood ratios may become very small for some (alternative) goals after a few time steps across all trajectories. After normalization, in the worst case, this may even lead to equivalent ratios for such goals for a given time step across all trajectories. In any case, it is important to note that only likelihood ratios associated to active goals for a given episode will affect the gradient estimate. Additionally, an original goal will always have (unnormalized) likelihood ratios equal to one for the corresponding episode. Under mild additional assumptions, the proposed estimators also allow using a dataset containing goals chosen arbitrarily (instead of goals drawn from the goal distribution). Although this feature is not required by our experiments, we believe that it may be useful to circumvent catastrophic forgetting during curriculum learning BID4 ). This section reports of an empirical comparison between goal-conditional policy gradient estimators and hindsight policy gradient estimators.1 Because there are no well-established sparsereward environments intended to test agents under multiple goals, this comparison focuses on our own selection of environments. These environments are diverse in terms of stochasticity, state space dimensionality and size, relationship between goals and states, and number of actions. In every one of these environments, the agent receives the remaining number of time steps plus one as a reward for reaching the goal state, which also ends the episode. In every other situation, the agent receives no reward. Importantly, the weighted per-decision hindsight policy gradient estimator used in our experiments (HPG) does not precisely correspond to Expression 12. Firstly, the original estimator requires a constant number of time steps T, which would often require the agent to act after the end of an episode in the environments that we consider. Secondly, although it is feasible to compute Expression 12 exactly when the goal distribution is known (as explained in Sec. 5), we sometimes subsample the sets of active goals per episode. Furthermore, when including a baseline that approximates the value function, we again consider only active goals, which by itself generally in an inconsistent estimator (HPG+B). As will become evident in the following sections, these compromised estimators still lead to remarkable sample efficiency. We assess sample efficiency through learning curves and average performance scores, which are obtained as follows. 
After collecting a number of batches (composed of trajectories and goals), each of which enables one step of gradient ascent, an agent undergoes evaluation. During evaluation, the agent interacts with the environment for a number of episodes, selecting actions with maximum probability according to its policy. A learning curve shows the average return obtained during each evaluation step, averaged across multiple runs (independent learning procedures). The curves presented in this text also include a 95% bootstrapped confidence interval. The average performance is given by the average return across evaluation steps, averaged across runs. During both training and evaluation, goals are drawn uniformly at random. Note that there is no held-out set of goals for evaluation, since we are interested in evaluating sample efficiency instead of generalization. For every combination of environment and batch size, grid search is used to select hyperparameters for each estimator according to average performance scores (after the corresponding standard deviation across runs is subtracted, as suggested by). Definitive are obtained by using the best hyperparameters found for each estimator in additional runs. In this section, we discuss definitive for small and medium batch sizes. More details about our experiments can be found in Appendices E.1 and E.2. Appendix E.3 contains unabridged , a supplementary empirical study of likelihood ratios (Appendix E.3.6), and an empirical comparison with hindsight experience replay (Appendix E.3.7). In a bit flipping environment, the agent starts every episode in the same state (0, represented by k bits), and its goal is to reach a randomly chosen state. The actions allow the agent to toggle (flip) each bit individually. The maximum number of time steps is k + 1. Despite its apparent simplicity, this environment is an ideal testbed for reinforcement learning algorithms intended to deal with sparse rewards, since obtaining a reward by chance is unlikely even for a relatively small k. BID0 employed a similar environment to evaluate their hindsight approach. Figure 4 presents the learning curves for k = 8. Goal-conditional policy gradient estimators with and without an approximate value function baseline (GCPG+B and GCPG, respectively) obtain excellent policies and lead to comparable sample efficiency. HPG+B obtains excellent policies more than 400 batches earlier than these estimators, but its policies degrade upon additional training. Additional experiments strongly suggest that the main cause of this issue is the fact that the value function baseline is still very poorly fit by the time that the policy exhibits desirable behavior. In comparison, HPG obtains excellent policies as early as HPG+B, but its policies remain remarkably stable upon additional training. The learning curves for k = 16 are presented in Figure 5. Clearly, both GCPG and GCPG+B are unable to obtain policies that perform better than chance, which is explained by the fact that they rarely incorporate reward signals during training. Confirming the importance of hindsight, HPG leads to stable and sample efficient learning. Although HPG+B also obtains excellent policies, they deteriorate upon additional training. Similar can be observed for a small batch size (see App. E.3.3). The average performance documented in Appendix E.3.5 confirm that HPG leads to remarkable sample efficiency. 
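For reference, the bit flipping environment described at the start of this section is simple enough to sketch in a few lines. The following implementation is illustrative only; the interface and class name are assumptions, as the paper does not specify an implementation.

```python
import numpy as np

class BitFlipEnv:
    """Bit flipping environment: start at all zeros, reach a random goal bitstring.

    Action i toggles bit i. Reaching the goal yields the remaining number of
    time steps plus one as reward and ends the episode; otherwise the reward
    is zero. The episode lasts at most k + 1 steps.
    """

    def __init__(self, k=8, seed=None):
        self.k = k
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.state = np.zeros(self.k, dtype=np.int8)
        self.goal = self.rng.integers(0, 2, size=self.k).astype(np.int8)
        self.t = 0
        return self.state.copy(), self.goal.copy()

    def step(self, action):
        self.state[action] ^= 1                     # toggle the chosen bit
        self.t += 1
        reached = np.array_equal(self.state, self.goal)
        timeout = self.t >= self.k + 1
        reward = (self.k + 1 - self.t) + 1 if reached else 0.0
        done = reached or timeout
        return self.state.copy(), reward, done
```

Note that under this reward definition the alternative goals that become active for a trajectory are exactly the bitstrings visited along it, which is the information the hindsight estimators exploit.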
Importantly, Appendices E.3.1 and E.3.2 present hyperparameter sensitivity graphs suggesting that HPG is less sensitive to hyperparameter settings than the other estimators. The same two appendices also document an ablation study where the likelihood ratios are removed from HPG, which notably promotes increased hyperparameter sensitivity. This study confirms the usefulness of the correction prescribed by importance sampling. In the grid world environments that we consider, the agent starts every episode in a (possibly random) position on an 11 × 11 grid, and its goal is to reach a randomly chosen (non-initial) position. Some of the positions on the grid may contain impassable obstacles (walls). The actions allow the agent to move in the four cardinal directions. Moving towards walls causes the agent to remain in its current position. A state or goal is represented by a pair of integers between 0 and 10. The maximum number of time steps is 32. In the empty room environment, the agent starts every episode in the upper left corner of the grid, and there are no walls. In the four rooms environment BID23, the agent starts every episode in one of the four corners of the grid (see FIG0). There are walls that partition the grid into four rooms, such that each room provides access to two other rooms through single openings (doors). With probability 0.2, the action chosen by the agent is ignored and replaced by a random action. Figure 6 shows the learning curves for the empty room environment. Clearly, every estimator obtains excellent policies, although HPG and HPG+B improve sample efficiency by at least 200 batches. The learning curves for the four rooms environment are presented in Figure 7. In this surprisingly challenging environment, every estimator obtains unsatisfactory policies. However, it is still clear that HPG and HPG+B improve sample efficiency. In contrast to the experiments presented in the previous section, HPG+B does not give rise to instability, which we attribute to easier value function estimation. Similar can be observed for a small batch size (see App. E.3.3). HPG achieves the best average performance in every grid world experiment except for a single case, where the best average performance is achieved by HPG+B (see App. E.3.5). The hyperparameter sensitivity graphs presented in Appendices E.3.1 and E.3.2 once again suggest that HPG is less sensitive to hyperparameter choices, and that ignoring likelihood ratios promotes increased sensitivity (at least in the four rooms environment). The Ms. Pac-man environment is a variant of the homonymous game for ATARI 2600 (see FIG1). The agent starts every episode close to the center of the map, and its goal is to reach a randomly chosen (non-initial) position on a 14 × 19 grid defined on the game screen. The actions allow the agent to move in the four cardinal directions for 13 game ticks. A state is represented by the of preprocessing a sequence of game screens (images) as described in Appendix E.1. A goal is represented by a pair of integers. The maximum number of time steps is 28, although an episode will also end if the agent is captured by an enemy. In comparison to the grid world environments considered in the previous section, this environment is additionally challenging due to its high-dimensional states and the presence of enemies. Figure 8 presents the learning curves for a medium batch size. Approximate value function baselines are excluded from this experiment due to the significant cost of systematic hyperparameter search. 
Although HPG obtains better policies during early training, GCPG obtains better final policies. However, for such a medium batch size, only 3 active goals per episode (out of potentially 28) are subsampled for HPG. Although this harsh subsampling brings computational efficiency, it also appears to handicap the estimator. This hypothesis is supported by the fact that HPG outperforms GCPG for a small batch size, when all active goals are used (see Apps. E.3.3 and E.3.5). Policies obtained using each estimator are illustrated by videos included on the project website. The FetchPush environment is a variant of the environment recently proposed by BID14 to assess goal-conditional policy learning algorithms in a challenging task of practical interest (see FIG2). In a simulation, a robotic arm with seven degrees of freedom is required to push a randomly placed object (block) towards a randomly chosen position. The arm starts every episode in the same configuration. In contrast to the original environment, the actions in our variant allow increasing the desired velocity of the gripper along each of two orthogonal directions by ±0.1 or ±1, leading to a total of eight actions. A state is represented by a 28-dimensional real vector that contains the following information: positions of the gripper and block; rotational and positional velocities of the gripper and block; relative position of the block with respect to the gripper; state of the gripper; and current desired velocity of the gripper along each direction. A goal is represented by three coordinates. The maximum number of time steps is 50. Figure 9 presents the learning curves for a medium batch size. HPG obtains good policies after a reasonable number of batches, in sharp contrast to GCPG. For such a medium batch size, only 3 active goals per episode (out of potentially 50) are subsampled for HPG, showing that subsampling is a viable alternative to reduce the computational cost of hindsight. Similar are observed for a small batch size, when all active goals are used (see Apps. E.3.3 and E.3.5). Policies obtained using each estimator are illustrated by videos included on the project website. We introduced techniques that enable learning goal-conditional policies using hindsight. In this context, hindsight refers to the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended. Prior to our work, hindsight has been limited to off-policy reinforcement learning algorithms that rely on experience replay BID0 and policy search based on Bayesian optimization .In addition to the fundamental hindsight policy gradient, our technical include its baseline and advantage formulations. These are based on a self-contained goal-conditional policy framework that is also introduced in this text. Besides the straightforward estimator built upon the per-decision hindsight policy gradient, we also presented a consistent estimator inspired by weighted importance sampling, together with the corresponding baseline formulation. A variant of this estimator leads to remarkable comparative sample efficiency on a diverse selection of sparsereward environments, especially in cases where direct reward signals are extremely difficult to obtain. This crucial feature allows natural task formulations that require just trivial reward shaping. The main drawback of hindsight policy gradient estimators appears to be their computational cost, which is directly related to the number of active goals in a batch. 
This issue may be mitigated by subsampling active goals, which generally leads to inconsistent estimators. Fortunately, our experiments suggest that this is a viable alternative. Note that the success of hindsight experience replay also depends on an active goal subsampling heuristic (, Sec. 4.5).The inconsistent hindsight policy gradient estimator with a value function baseline employed in our experiments sometimes leads to unstable learning, which is likely related to the difficulty of fitting such a value function without hindsight. This hypothesis is consistent with the fact that such instability is observed only in the most extreme examples of sparse-reward environments. Although our preliminary experiments in using hindsight to fit a value function baseline have been successful, this may be accomplished in several ways, and requires a careful study of its own. Further experiments are also required to evaluate hindsight on dense-reward environments. There are many possibilities for future work besides integrating hindsight policy gradients into systems that rely on goal-conditional policies: deriving additional estimators; implementing and evaluating hindsight (advantage) actor-critic methods; assessing whether hindsight policy gradients can successfully circumvent catastrophic forgetting during curriculum learning of goal-conditional policies; approximating the reward function to reduce required supervision; analysing the variance of the proposed estimators; studying the impact of active goal subsampling; and evaluating every technique on continuous action spaces. Theorem A.1. The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM0 Proof. The partial derivative ∂η(θ)/∂θ j of the expected return η(θ) with respect to θ j is given by DISPLAYFORM1 The likelihood-ratio trick allows rewriting the previous equation as DISPLAYFORM2 Note that DISPLAYFORM3 Therefore, DISPLAYFORM4 A.2 THEOREM 3.1Theorem 3.1 (Goal-conditional policy gradient). The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM5 Proof. Starting from Eq. 17, the partial derivative ∂η(θ)/∂θ j of η(θ) with respect to θ j is given by DISPLAYFORM6 The previous equation can be rewritten as DISPLAYFORM7 Let c denote an expectation inside Eq. 19 for t ≥ t. In that case, A t ⊥ ⊥ S t | S t, G, Θ, and so DISPLAYFORM8 Reversing the likelihood-ratio trick, DISPLAYFORM9 Therefore, the terms where t ≥ t can be dismissed from Eq. 19, leading to DISPLAYFORM10 The previous equation can be conveniently rewritten as DISPLAYFORM11 A.3 LEMMA A.1Lemma A.1. For every j, t, θ, and associated real-valued (baseline) function b DISPLAYFORM12 Proof. Letting c denote an expectation inside Eq. 24, DISPLAYFORM13 Reversing the likelihood-ratio trick, DISPLAYFORM14 A.4 THEOREM 3.2 Theorem 3.2 (Goal-conditional policy gradient, baseline formulation). For every t, θ, and associated real-valued (baseline) function b θ t, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM15 Proof. The is obtained by subtracting Eq. 24 from Eq. 23. Importantly, for every combination of θ and t, it would also be possible to have a distinct baseline function for each parameter in θ. A.5 LEMMA A.2 Lemma A.2. The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM16 Proof. Starting from Eq. 
23 and rearranging terms, DISPLAYFORM17 By the definition of action-value function, DISPLAYFORM18 A.6 THEOREM 3.3Theorem 3.3 (Goal-conditional policy gradient, advantage formulation). The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM19 Proof. The is obtained by choosing b θ t = V θ t and subtracting Eq. 24 from Eq. 29.A.7 THEOREM A.2For arbitrary j and θ, consider the following definitions of f and h. DISPLAYFORM20 DISPLAYFORM21 For every b j ∈ R, using Theorem 3.1 and the fact that DISPLAYFORM22 Proof. The is an application of Lemma D.4. The following theorem relies on importance sampling, a traditional technique used to obtain estimates related to a random variable X ∼ p using samples from an arbitrary positive distribution q. This technique relies on the following equalities: Theorem 4.1 (Every-decision hindsight policy gradient). For an arbitrary (original) goal g, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM0 Proof. Starting from Theorem 3.1, importance sampling allows rewriting the partial derivative ∂η(θ)/∂θ j as DISPLAYFORM1 B.2 THEOREM 4.2 Theorem 4.2 (Per-decision hindsight policy gradient). For an arbitrary (original) goal g, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM2 Proof. Starting from Eq. 36, the partial derivative ∂η(θ)/∂θ j can be rewritten as DISPLAYFORM3 If we split every trajectory into states and actions before and after t, then ∂η(θ)/∂θ j is given by g p(g) DISPLAYFORM4 where z is defined by DISPLAYFORM5 Using Lemma D.2 and canceling terms, DISPLAYFORM0 Using Lemma D.2 once again, DISPLAYFORM1 Using the fact that DISPLAYFORM2 Substituting z into Expression 38 and returning to an expectation over trajectories, DISPLAYFORM3 B.3 LEMMA 4.1Lemma 4.1. For every g, t, θ, and associated real-valued (baseline) function b DISPLAYFORM4 Proof. Let c denote the j-th element of the vector in the left-hand side of Eq. 8, such that DISPLAYFORM5 Using Lemma D.1 and writing the expectations explicitly, DISPLAYFORM6 Canceling terms, using Lemma D.1 once again, and reversing the likelihood-ratio trick, DISPLAYFORM7 Pushing constants outside the summation over actions at time step t, DISPLAYFORM8 Theorem B.1 (Hindsight policy gradient, baseline formulation). For every g, t, θ, and associated real-valued (baseline) function b θ t, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM9 where DISPLAYFORM10 Proof. The is obtained by subtracting Eq. 8 from Eq. 7. Importantly, for every combination of θ and t, it would also be possible to have a distinct baseline function for each parameter in θ. Lemma B.1 (Hindsight policy gradient, action-value formulation). For an arbitrary goal g, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM0 Proof. Starting from Eq. 29, the partial derivative ∂η(θ)/∂θ j can be written as DISPLAYFORM1 Using importance sampling, for an arbitrary goal g, DISPLAYFORM2 Using Lemma D.1 and rewriting the previous equation using expectations, DISPLAYFORM3 B.6 THEOREM 4.3 Theorem 4.3 (Hindsight policy gradient, advantage formulation). For an arbitrary (original) goal g, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM4 Proof. The is obtained by choosing b θ t = V θ t and subtracting Eq. 44 from Eq. 53. For arbitrary g, j, and θ, consider the following definitions of f and h. 
DISPLAYFORM0 DISPLAYFORM1 For every b j ∈ R, using Theorem 4.2 and the fact that E [h(T) | g, θ] = 0 by Lemma 4.1, DISPLAYFORM2 is given by DISPLAYFORM3.Proof. The is an application of Lemma D.4. This appendix contains proofs related to the estimators presented in Section 5: Theorem 5.1 (App. C.1) and Theorem 5.2 (App. C.2). Appendix C.3 presents a that enables a consistency-preserving weighted baseline. In this appendix, we will consider a dataset DISPLAYFORM0 where each trajectory τ (i) is obtained using a policy parameterized by θ in an attempt to achieve a goal g (i) chosen by the environment. Because D is an iid dataset given Θ, DISPLAYFORM1 C.1 THEOREM 5.1Theorem 5.1. The per-decision hindsight policy gradient estimator, given by DISPLAYFORM2 is a consistent and unbiased estimator of the gradient ∇η(θ) of the expected return. Proof. Let I (N) j denote the j-th element of the estimator, which can be written as DISPLAYFORM3 where DISPLAYFORM4 Using Theorem 4.2, the expected value E I (N) j | θ is given by DISPLAYFORM5 Therefore, I(N) j is an unbiased estimator of ∂η(θ)/∂θ j.Conditionally on Θ, the random variable DISPLAYFORM6 is an average of iid random variables with expected value ∂η(θ)/∂θ j (see Eq. 61). By the strong law of large numbers BID18, Theorem 2.3.13), DISPLAYFORM7 Therefore, I(N) j is a consistent estimator of ∂η(θ)/∂θ j. Theorem 5.2. The weighted per-decision hindsight policy gradient estimator, given by DISPLAYFORM0, FORMULA2 is a consistent estimator of the gradient ∇η(θ) of the expected return. Proof. Let W (N) j denote the j-th element of the estimator, which can be written as DISPLAYFORM1 where X(g, t, t) DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 Consider the expected value DISPLAYFORM6 Using the fact that t > t, Lemma D.1, and canceling terms, E Xi can be written as DISPLAYFORM0 Because S t ⊥ ⊥ G | S 1:t −1, A 1:t −1, Θ, DISPLAYFORM1 Conditionally on Θ, the variable X(g, t, t) DISPLAYFORM2 is an average of iid random variables with expected value E Xi. By the strong law of large numbers BID18, Theorem 2.3.13), X(g, t, t) DISPLAYFORM3 Conditionally on Θ, the variable Y (g, t, t) DISPLAYFORM4 is an average of iid random variables with expected value 1. By the strong law of large numbers, Y (g, t, t) (N) j a.s.− − → 1. and Y (g, t, t) (N) j converge almost surely to real numbers (, Ch. 3, Property 2), DISPLAYFORM0 By Theorem 3.1 and the fact that W (N) j is a linear combination of terms X(g, t, t) DISPLAYFORM1 C.3 THEOREM C.1Theorem C.1. The weighted baseline estimator, given by DISPLAYFORM2, converges almost surely to zero. Proof. Let B (N) j denote the j-th element of the estimator, which can be written as DISPLAYFORM3 where DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 DISPLAYFORM7 Using Eqs. 44 and 47, the expected value DISPLAYFORM8 Conditionally on Θ, the variable X(g, t) DISPLAYFORM9 is an average of iid random variables with expected value zero. By the strong law of large numbers BID18, Theorem 2.3.13), X(g, t) DISPLAYFORM10 The fact that Y (g, t)(N) j a.s.− − → 1 is already established in the proof of Theorem 5.2. Because both DISPLAYFORM11 and Y (g, t)(N) j converge almost surely to real numbers (, Ch. 3, Property 2), DISPLAYFORM12 Because B(N) j is a linear combination of terms X(g, t) DISPLAYFORM13 is a consistent estimator of a some quantity given θ, then so is E (N) − B DISPLAYFORM14 Proof. In order to employ backward induction, consider the case t = T − 1. By marginalization, DISPLAYFORM15 which completes the proof of the base case. 
Assuming the inductive hypothesis is true for a given 2 ≤ t ≤ T − 1 and considering the case t − 1, DISPLAYFORM16 Lemma D.2. For every τ, g, θ, and 1 ≤ t ≤ T, DISPLAYFORM17 Proof. The case t = 1 can be inspected easily. Consider 2 ≤ t ≤ T. By definition, DISPLAYFORM18 Using Lemma D.1, DISPLAYFORM19 DISPLAYFORM20 DISPLAYFORM21 Proof. From the definition of action-value function and using the fact that DISPLAYFORM22 Let z denote the second term in the right-hand side of the previous equation, which can also be written as DISPLAYFORM23 Consider the following three independence properties: DISPLAYFORM24 DISPLAYFORM25 DISPLAYFORM26 Together, these properties can be used to demonstrate that DISPLAYFORM27 From the definition of value function, DISPLAYFORM28 Theorem 4.4. For every t and θ, the advantage function DISPLAYFORM29 Proof. The follows from the definition of advantage function and Lemma D.3. DISPLAYFORM30 Consider a discrete random variable X and real-valued functions f and h. Suppose also that DISPLAYFORM31 Proof. Let v = Var [f (X) − bh(X)]. Using our assumptions and the definition of variance, DISPLAYFORM32 The first and second derivatives of v with respect to b are given by dv DISPLAYFORM33 Our assumptions guarantee that E h(X) 2 > 0. Therefore, by Fermat's theorem, if b is a local minimum, then dv/db = 0, leading to the desired equality. By the second derivative test, b must be a local minimum. This appendix contains additional information about the experiments introduced in Section 6. Appendix E.1 details policy and baseline representations. Appendix E.2 documents experimental settings. Appendix E.3 presents unabridged . In every experiment, a policy is represented by a feedforward neural network with a softmax output layer. The input to such a policy is a pair composed of state and goal. A baseline function is represented by a feedforward neural network with a single (linear) output neuron. The input to such a baseline function is a triple composed of state, goal, and time step. The baseline function is trained to approximate the value function using the mean squared (one-step) temporal difference error BID21. Parameters are updated using Adam . The networks are given by the following. Bit flipping environments and grid world environments. Both policy and baseline networks have two hidden layers, each with 256 hyperbolic tangent units. Every weight is initially drawn from a Gaussian distribution with mean 0 and standard deviation 0.01 (and redrawn if far from the mean by two standard deviations), and every bias is initially zero. Ms. Pac-man environment. The policy network is represented by a convolutional neural network. The network architecture is given by a convolutional layer with 32 filters (8×8, stride 4); convolutional layer with 64 filters (4 × 4, stride 2); convolutional layer with 64 filters (3 × 3, stride 1); and three fully-connected layers, each with 256 units. Every unit uses a hyperbolic tangent activation function. Every weight is initially set using variance scaling , and every bias is initially zero. These design decisions are similar to the ones made by BID6.A sequence of images obtained from the Arcade Learning Environment BID1 ) is preprocessed as follows. Individually for each color channel, an elementwise maximum operation is employed between two consecutive images to reduce rendering artifacts. Such 210 × 160 × 3 preprocessed image is converted to grayscale, cropped, and rescaled into an 84 × 84 image x t. 
A sequence of images x t−12, x t−8, x t−4, x t obtained in this way is stacked into an 84 × 84 × 4 image, which is an input to the policy network (recall that each action is repeated for 13 game ticks). The goal information is concatenated with the flattened output of the last convolutional layer. FetchPush environment. The policy network has three hidden layers, each with 256 hyperbolic tangent units. Every weight is initially set using variance scaling , and every bias is initially zero. Tables 1 and 2 document the experimental settings. The number of runs, training batches, and batches between evaluations are reported separately for hyperparameter search and definitive runs. The number of training batches is adapted according to how soon each estimator leads to apparent convergence. Note that it is very difficult to establish this setting before hyperparameter search. The number of batches between evaluations is adapted so that there are 100 evaluation steps in total. Other settings include the sets of policy and baseline learning rates under consideration for hyperparameter search, and the number of active goals subsampled per episode. In Tables 1 and 2, R 1 = {α×10 −k | α ∈ {1, 5} and k ∈ {2, 3, 4, 5}} and R 2 = {β ×10 −5 | β ∈ {1, 2.5, 5, 7.5, 10}}.As already mentioned in Section 6, the definitive runs use the best combination of hyperparameters (learning rates) found for each estimator. Every setting was carefully chosen during preliminary experiments to ensure that the best for each estimator is representative. In particular, the best performing learning rates rarely lie on the extrema of the corresponding search range. In the single case where the best performing learning rate found by hyperparameter search for a goal-conditional policy gradient estimator was such an extreme value (FetchPush, for a small batch size), evaluating one additional learning rate lead to decreased average performance. This appendix contains unabridged experimental . Appendices E.3.1 and E.3.2 present hyperparameter sensitivity plots for every combination of environment and batch size. A hyperparameter sensitivity plot displays the average performance achieved by each hyperparameter setting (sorted from best to worst along the horizontal axis). Appendices E.3.3 and E.3.4 present learning curves for every combination of environment and batch size. Appendix E.3.5 presents average performance . Appendix E.3.6 presents an empirical study of likelihood ratios. Appendix E.3.7 presents an empirical comparison with hindsight experience replay BID0 ). hyperparameter setting (best to worst) This study is conveyed through plots that encode the distribution of active likelihood ratios computed during training, individually for each time step within an episode. Each plot corresponds to an agent that employs HPG and obtains the highest definitive average performance for a given environment FIG2. Note that the length of the largest bar for a given time step is fixed to aid visualization. The most important insight provided by these plots is that likelihood ratios behave very differently across environments, even for equivalent time steps (for instance, compare bit flipping environments to grid world environments). In contrast, after the first time step, the behavior of likelihood ratios changes slowly across time steps within the same environment. In any case, alternative goals have a significant effect on gradient estimates, which agrees with the presented in Section 6. 
This appendix documents an empirical comparison between goal-conditional policy gradients (GCPG), hindsight policy gradients (HPG), deep Q-networks (, DQN), and a combination of DQN and hindsight experience replay (, DQN+HER).Experience replay. Our implementations of both DQN and DQN+HER are based on OpenAI Baselines , and use mostly the same hyperparameters that BID0 used in their experiments on environments with discrete action spaces, all of which resemble our bit flipping environments. The only notable differences in our implementations are the lack of both Polyak-averaging and temporal difference target clipping. Concretely, a cycle begins when an agent collects a number of episodes FORMULA6 by following an -greedy policy derived from its deep Q-network (= 0.2). The corresponding transitions are included in a replay buffer, which contains at most 10 6 transitions. In the case of DQN+HER, hindsight transitions derived from a final strategy are also included in this replay buffer. Completing the cycle, for a total of 40 different batches, a batch composed of 128 transitions chosen at random from the replay buffer is used to define a loss function and allow one step of gradient-based minimization. The targets required to define these loss functions are computed using a copy of the deep Q-network from the start of the corresponding cycle. Parameters are updated using Adam . A discount factor of γ = 0.98 is used, and seems necessary to improve the stability of both DQN and DQN+HER.Network architectures. In every experiment, the deep Q-network is implemented by a feedforward neural network with a linear output neuron corresponding to each action. The input to such a network is a triple composed of state, goal, and time step. The network architectures are the same as those described in Appendix E.1, except that every weight is initially set using variance scaling , and all hidden layers use rectified linear units BID8. For the Ms. Pac-man environment, the time step information is concatenated with the flattened output of the last convolutional layer (together with the goal information). In comparison to the architecture employed by BID0 for environments with discrete action spaces, our architectures have one or two additional hidden layers (besides the convolutional architecture used for Ms. Pac-man).Experimental protocol. The experimental protocol employed in our comparison is very similar to the one described in Section 6. Each agent is evaluated periodically, after a number of cycles that depends on the environment. During this evaluation, the agent collects a number of episodes by following a greedy policy derived from its deep Q-network. For each environment, grid search is used to select the learning rates for both DQN and DQN+HER according to average performance scores (after the corresponding standard deviation across runs is subtracted, as in Section 6). The candidate sets of learning rates are the following. Bit flipping and grid world environments: {α × 10 −k | α ∈ {1, 5} and k ∈ {2, 3, 4, 5}}, FetchPush: {10 −2, 5 × 10 −3, 10 −3, 5 × 10 −4, 10 −4}, Ms. Pac-man: {10 −3, 5 × 10 −4, 10 −4, 5 × 10 −5, 10 −5}. These sets were carefully chosen such that the best performing learning rates do not lie on their extrema. Definitive for a given environment are obtained by using the best hyperparameters found for each method in additional runs. 
These definitive are directly comparable to our previous for GCPG and HPG (batch size 16), since every method will have interacted with the environment for the same number of episodes before each evaluation step. For each environment, the number of runs, the number of training batches (cycles), the number of batches (cycles) between evaluations, and the number of episodes per evaluation step are the same as those listed in Tables 1 and 2.Results. The definitive for the different environments are represented by learning curves Pg. 38). In the bit flipping environment for k = 8 (Figure 40), HPG and DQN+HER lead to equivalent sample efficiency, while GCPG lags far behind and DQN is completely unable to learn. In the bit flipping environment for k = 16 FIG0 ), HPG surpasses DQN+HER in sample efficiency by a small margin, while both GCPG and DQN are completely unable to learn. In the empty room environment FIG1 ), HPG is arguably the most sample efficient method, although DQN+HER is more stable upon obtaining a good policy. GCPG eventually obtains a good policy, whereas DQN exhibits instability. In the four rooms environment FIG2 ), DQN+HER outperforms all other methods by a large margin. Although DQN takes much longer to obtain good policies, it would likely surpass both HPG and GCPG given additional training cycles. In the Ms. Pac-man environment (Figure 44), DQN+HER once again outperforms all other methods, which achieve equivalent sample efficiency (although DQN appears unstable by the end of training). In the FetchPush environment FIG3, HPG dramatically outperforms all other methods. Both DQN+HER and DQN are completely unable to learn, while GCPG appears to start learning by the end of the training process. Note that active goals are harshly subsampled to increase the computational efficiency of HPG for both Ms. Pac-man and FetchPush (see Sec. 6.3 and Sec. 6.4).Discussion. Our suggest that the decision between applying HPG or DQN+HER in a particular sparse-reward environment requires experimentation. In contrast, the decision to apply hindsight was always successful. Note that we have not employed heuristics that are known to sometimes increase the performance of policy gradient methods (such as entropy bonuses, reward scaling, learning rate annealing, and simple statistical baselines) to avoid introducing confounding factors. We believe that such heuristics would allow both GCPG and HPG to achieve good in both the four rooms environment and Ms. Pac-man. Furthermore, whereas hindsight experience replay is directly applicable to state-of-the-art techniques, our work can probably benefit from being extended to state-of-the-art policy gradient approaches, which we intend to explore in future work. Similarly, we believe that additional heuristics and careful hyperparameter settings would allow DQN+HER to achieve good in the FetchPush environment. This is evidenced by the fact that BID0 achieve good using the deep deterministic policy gradient (, DDPG) in a similar environment (with a continuous action space and a different reward function). The empirical comparisons between either GCPG and HPG or DQN and DQN+HER are comparatively more conclusive, since the similarities between the methods minimize confounding factors. Regardless of these empirical , policy gradient approaches constitute one of the most important classes of model-free reinforcement learning methods, which by itself warrants studying how they can benefit from hindsight. 
Our approach is also complementary to previous work, since it is entirely possible to combine a critic trained by hindsight experience replay with an actor that employs hindsight policy gradients. Although hindsight experience replay does not require a correction analogous to importance sampling, indiscriminately adding hindsight transitions to the replay buffer is problematic, which has mostly been tackled by heuristics (, Sec. 4.5). In contrast, our approach seems to benefit from incorporating all available information about goals at every update, which also avoids the need for a replay buffer.
We introduce to policy gradient methods the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended.
563
scitldr
Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets. However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state of the art deep video architectures. While deep features have revolutionized static image analysis, video descriptors have struggled to outperform classic hand-crafted descriptors . Though recent works have shown improvements by merging image and video models by inflating 2D models to 3D , simpler 2D models (b) still routinely appear among top performers in video benchmarks such as the Kinetics Challenge at CVPR'17. This raises the natural question: are videos trivially understandable by simply averaging the predictions over a sampled set of frames? At some level, the answer must be no. Reasoning about high-level cognitive concepts such as intentions, goals, and causal relations requires reasoning over long-term temporal structure and order . Consider, for example, the movie clip in Fig. 1 (a), where an actor leaves the table, grabs a firearm from another room, and returns. Even though no gun is visible in the final frames, an observer can easily infer that the actor is surreptitiously carrying the gun. Needless to say, any single frame from the video seems incapable of supporting that inference, and one needs to reason over space and time in order to reach that . As a simpler instance of the problem, consider the cup-and-balls magic routine 1, or the gamblingbased shell game 2, as shown in Fig. 1 (b). In these games, an operator puts a target object (ball) under one of multiple container objects (cups), and moves them about, possibly revealing the target at various times and recursively containing cups within other cups. The task at the end is to tell which of the cups is covering the ball. Even in its simplest instantiation, one can expect any human or computer system that solves this task to require the ability to model state of the world over long temporal horizons, reason about occlusion, understand the spatiotemporal implications of containment, etc. An important aspect of both our motivating examples is the adversarial nature of the task, where the operator in control is trying to make the observer fail. Needless to say, a frame by frame prediction model would be incapable of solving such tasks. Figure 1: Real world video understanding. 
Consider this iconic movie scene from The Godfather in (a), where the protagonist leaves the table, goes to the bathroom to extract a hidden firearm, and returns to the table presumably with the intentions of shooting a person. While the gun itself is visible in only a few frames of the whole clip, it is trivial for us to realize that the protagonist has it in the last frame. An even simpler instantiation of such a reasoning task could be the cup-and-ball shell game in (b), where the task is to determine which of the cups contain the ball at the end of the trick. Can we design similarly hard tasks for computers? Given these motivating examples, why don't spatiotemporal models dramatically outperform their static counterparts for video understanding? We posit that this is due to limitations of existing video benchmarks. Even though video datasets have evolved from the small regime with tens of labels (; ;) to large with hundreds of labels , tasks have remained highly correlated to the scene and object context. For example, it is trivial to recognize a swimming action given a swimming pool in the (b). This is further reinforced by the fact that state of the art pose-based action recognition models are outperformed by simpler frame-level models (b) on the Kinetics benchmark, with a difference of nearly 45% in accuracy! Sigurdsson et al. also found similar for their Charades benchmark, where adding ground truth object information gave the largest boosts to action recognition performance . In this work, we take an alternate approach to developing a video understanding dataset. Inspired by the recent CLEVR dataset (that explores spatial reasoning in tabletop scenes) and inspired by the adversarial parlor games above (that require temporal reasoning), we introduce CATER, a diagnostic dataset for Compositional Actions and TEmporal Reasoning in dynamic tabletop scenes. We define three tasks on the dataset, each with an increasingly higher level of complexity, but set up as classification problems in order to be comparable to existing benchmarks for easy transfer of existing models and approaches. Specifically, we consider primitive action recognition, compositional action recognition, and adversarial target tracking under occlusion and containment. However, note that this does not limit the usability of our dataset to these tasks, and we provide full metadata with the rendered videos that can be used for more complex, structured prediction tasks like detection, tracking, forecasting, and so on. Our dataset does not model an operator (or hand) moving the tabletop objects, though this could be simulated as well in future variants, as in . Being synthetic, CATER can easily be scaled up in size and complexity. It also allows for detailed model diagnostics by controlling various dataset generation parameters. We use CATER to benchmark state-of-the-art video understanding models ), and show even the best models struggle on our dataset. We also uncover some insights into the behavior of these models by changing parameters such as the temporal duration of an occlusion, the degree of camera motion, etc., which are difficult to both tune and label in real-world video data. Spatiotemporal networks: Video understanding for action recognition has evolved from iconic hand-designed models (; ;) to sophisticated spatiotemporal deep networks (; ; ; ; ; . 
While similar developments in the image domain have led to large improvements on tasks like classification (; a;) and localization, video models have struggled to out-perform previous hand-crafted descriptors. [Table 1, dataset comparison; columns: Dataset, Size, Len, Task, #cls, TO, STR, LTR, CSB. Rows: UCF101 13K 7s cls 101; HMDB51 5K 4s cls 51; Kinetics 300K 10s cls 400; AVA 430 15m det 80; VLOGs 114K 10s cls 30; DAHLIA 51 39m det 7; TACoS (remaining rows and checkmark columns not recovered).] Even within the set of deep video architectures, models capable of temporal modeling, such as RNNs and 3D convolutions (; a), have not shown significantly better performance than much simpler, per-frame prediction models, such as variants of two-stream architectures (b;). Though some recent works have shown improvements by merging image and video models by inflating 2D models to 3D , simple 2D models (b) were still among the top performers in the Kinetics Challenge at CVPR'17. Video action understanding datasets: There has been significant effort put forth to collect video benchmarks. One line of attack employs human actors to perform scripted actions. This is typically done in controlled environments (; ;), but recent work has pursued online crowd sourcing . Another direction collects videos from movies and online sharing platforms. Many popular video benchmarks follow this route for diverse, in-the-wild videos, such as UCF-101 , HMDB-51 and more recently Kinetics and VLOGs . As discussed earlier, such datasets struggle with the strong bias of actions with scenes and objects. Our underlying thesis is that the field of video understanding is hampered by such biases because they favor image-based baselines. One might argue that since such biases are common in the visual world, video benchmarks should reflect them. We take the view that a diverse set of benchmarks is needed to enable comprehensive diagnostics and validation of the state-of-affairs in video understanding. Table 1 shows that CATER fills a missing gap in the benchmark landscape, most notably because of its size, label distribution, and relative resilience to object and scene bias. Synthetic data in computer vision: Our work, being synthetically generated, is also closely related to other works using synthetic data for computer vision applications. There has been a large body of work in this direction, with the major focus on using synthetic training data for real world applications. This includes semantic scene understanding (; ;), 3D scene understanding (; ; ;), human understanding (b;), optical flow and navigation, RL or embodied learning (;;). Our work, on the other hand, attempts to develop a benchmark for video-based action understanding. Similar attempts have been made for scene understanding through abstract scenes , with more recent work focusing on building a complex reasoning benchmark, CLEVR . In the video domain, Long et al. proposed Flash-MNIST, with the task of recognizing all the digits that appear. We build upon CLEVR and extend it for spatiotemporal reasoning in videos, defining tasks that truly require spatiotemporal reasoning to be solved. Figure 2: CATER dataset and tasks. Sampled frames from a random video from CATER. We show some of the actions afforded by objects in this video, as labeled on the top using arrows. We define three tasks on these videos. Task 1 requires identifying all active actions in the video. Task 2 requires identifying all active compositional actions. Task 3 requires quantized spatial localization of the snitch object at the end of the video.
Note that, as in this case, the snitch may be occluded or 'contained' by another object, and hence models would require spatiotemporal understanding to complete the task. Please refer to the supplementary video for more example videos. Object tracking: Detecting and tracking objects has typically been used as an initial representation for long-term video and activity understanding (; ;). Extensions include adversarial tracking, where the objects are designed to be hidden from plain view. It has typically been used for tasks such as determining if humans are carrying an object or abandoned / exchanging objects . We embrace this direction of work and include state-of-the-art deep trackers in our benchmark evaluation. CATER provides a video understanding dataset that requires long term temporal reasoning to be solved. Additionally, it provides diagnostic tools that can evaluate video models in specific scenarios, such as with or without camera motion, with varying number of objects and so on. This control over the dataset parameters is achieved by synthetically rendering the data. These videos come with a ground truth structure that can be used to design various different video understanding tasks, including but not limited to object localization and spatiotemporal action composition. Unlike existing video understanding benchmarks, this dataset is free of object or scene bias, as the same set of simple objects are used to render the videos. Fig. 2 describes the dataset and the associated tasks. We provide sample videos from the dataset in the supplementary video. The CATER universe is built upon CLEVR , inheriting most of the standard object shapes, sizes, colors and materials present in it. This includes three object shapes (cube, sphere, cylinder), in three sizes (small, medium, large), two materials (shiny metal and matte rubber) and eight colors, as well as a large "table" plane on which all objects are placed. In addition to these objects, we add two new object shapes: inverted cones and a special object called a 'snitch'. Cones also come in the same set of sizes, materials and colors. The 'snitch' is a special object shaped like three intertwined toruses in metallic gold color. Actions: We define four atomic actions: 'rotate', 'pick-place', 'slide' and 'contain'; a subset of which is afforded by each object. The 'rotate' action means that the object rotates by 90° about its Y (or horizontal) axis, and is afforded by cubes, cylinders and the snitch. The 'pick-place' action means the object is picked up into the air along the Y axis, moved to a new position, and placed down. This is afforded by all objects. The 'slide' action means the object is moved to a new location by sliding along the bottom surface, and is also afforded by all objects. Finally, 'contain' is a special operation, only afforded by the cones, in which a cone is pick-placed on top of another object, which may be a sphere, a snitch or even a smaller cone. This allows for recursive containment, as a cone can contain a smaller cone that contains another object. (Figure 3: Allen's temporal algebra. Exhaustive list of temporal relations between intervals, as defined by Allen's algebra . For simplicity, we group them into three broad relations to define classes for composite actions, although in principle we could use all thirteen. Figure courtesy of Alspaugh.) Once a cone 'contains' an object, it is constrained to only 'slide' actions and effectively slides all objects contained within the cone. This
holds until the top-most cone is pick-placed to another location, effectively ending the containment for that top-most cone. Animation process: We start with an initial setup similar to CLEVR. A random number (N) of objects with random parameters are spawned at random locations at the beginning of the video. They exist on a 6 × 6 portion of a 2D plane with the global origin in the center. In addition to the random objects, we ensure that every video has a snitch and a cone. For the purposes of this work, we render 300-frame 320x240px videos, at 24 FPS, making it comparable to standard benchmarks (; ;). We split the video into 30-frame slots, and each action is contained within these slots. At the beginning of each slot, we iterate through up to K objects in a random order and attempt to add an action afforded by that object one by one without colliding with another object. As we describe later, we use K = 2 for our initial tasks and K = N for the final task. For each action, we pick a random start and end time from within the 30-frame slot. To further add to the diagnostic ability of this dataset, we render an additional set of videos with camera motion, with all other aspects of the data similarly distributed as the static camera case. For this, the camera is always kept pointed towards the global origin, and moved randomly between a predefined set of 3D coordinates. These coordinates include X and Y ∈ {−10, 10} and Z ∈ {8, 10, 12}. Every 30 frames, we randomly pick a new location from the Cartesian product of X, Y, Z, and move the camera to that location over the next 30 frames. However, we do constrain the camera to not change both X and Y coordinates at the same time, as that causes a jarring viewpoint shift as the camera passes over the (0, 0, Z) point. Also, we ensure all the camera motion videos start from the same viewpoint, to make it easy to register the axes locations for localization task. Spatiotemporal compositions: We wish to label our animations with the atomic actions present, as well as their compositions. Atomic actions have a well-defined spatiotemporal footprint, and so we can define composites using spatial relations ("a cylinder is rotating behind a sliding red ball"), similar to CLEVR. Unique to CATER is the ability to designate temporal relationships ("a cylinder rotates before a ball is picked-and-placed"). Because atomic actions occupy a well-defined temporal extent, we need temporal logic that reasons about relations between intervals rather than instantaneous events. While the latter can be dealt with timestamps, the former can be described with Allen's interval algebra with thirteen basic relations (Figure 3) along with composition operations. For simplicity, we group those into three broad relations. However, our dataset contains examples of all such interval relations and can be used to explore fine-grained temporal relationships. Given this CATER universe with videos, ground truth objects and their actions at any time point, we can define arbitrarily complex tasks for a video understanding system. Our choice of tasks is informed by two of the main goals of video understanding: 1) Recognizing the states of the actor, including spatiotemporal compositions of those atomic actions. For example, a spatiotemporal composition of atomic human body movements can be described as an exercise or dance routine. And 2) Recognizing the effect of those actions on the state of the world. 
For example, an action involving picking and placing a cup would change the position of the cup and any constituent objects contained within it, and understanding this change in the world state would implicitly require understanding the action itself. Given these two goals, we define three tasks on CATER. Each has progressively higher complexity, and tests for a higher level reasoning ability. To be consistent with existing popular benchmarks (; ; ;), we stick to standard single or multi-label classification setup, with standard evaluation metrics, as described next. For each of these tasks, we start by rendering 5500 total videos, to be comparable in size with existing popular benchmarks . Since tasks 1 and 2 (defined next) explicitly require recognizing individual actions, we use K = 2 for the videos rendered to keep the number of actions happening in any given video small. For task 3, we set K = N as the task is to recognize the end effect of actions, and not necessarily the actions themselves. We split the data randomly in 70:30 ratio into a training and test set. We similarly render a same size dataset with camera motion, and define tasks and splits in the same way as for the static camera. Task 1: Atomic action recognition. This first task on CATER is primarily designed as a simple debugging task, which should be easy for contemporary models to solve. Given the combinations of object shapes and actions afforded by them, we define 14 classes such as'slide(cone)','rotate(cube)' and so on. Since each video can have multiple actions, we define it as a multi-label classification problem. The task is to produce 14 probability values, denoting the likelihood of that action happening in the video. The performance is evaluated using average precision per-class. Final dataset-level performance is computed by mean over all classes, to get mean average precision (mAP). This is a popular metric used in other multi-label action classification datasets . Task 2: Compositional action recognition. While recognizing individual objects and motions is important, it is clearly not enough. Real world actions tend to be composite in nature, and humans have no difficulty recognizing them in whole or in parts. To that end, we construct a compositional action recognition task through spatiotemporal composition of the basic actions used in Task 1. For simplicity, we limit composites to pairs of 14 atomic actions, where the temporal relation is grouped into broad categories of'before','during' and'after' as shown in Figure 3. Combining all possible atomic actions with the three possible relations, we get a total of 14 × 14 × 3 = 588 classes, and removing duplicates (such as 'X after Y' is a duplicate of 'Y before X'), leaves 301 classes. Similar to task 1, multiple compositions can be active in any given video, so we set it up as a multi-label classification problem, evaluated using mAP. If certain compositions never occur in the dataset, those are ignored for the final evaluation. Task 3: Snitch localization. The final, and the flagship task in CATER, tests models' ability to recognize the effect of actions on the environment. Just as in the case of cup-and-ball trick, the ability of a model to recognize location of objects after some activity can be thought of as an implicit evaluation of its ability to understand the activity itself. The task now is to predict the location of the special object introduced above, the Snitch. 
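The Task 2 label space described above is small enough to enumerate directly. The sketch below (Python) derives the grouped temporal relation between two action intervals and counts the resulting composite classes; the grouping and canonicalisation rules are inferred from the quoted numbers (14 × 14 × 3 = 588 ordered triples collapsing to 301 classes after removing symmetric duplicates), not taken from released code, and the atomic-action names are placeholders.

```python
from itertools import product

def grouped_relation(a, b):
    """Collapse Allen-style interval relations into 'before' / 'during' / 'after'.
    a, b are (start, end) frame indices of two atomic actions."""
    if a[1] <= b[0]:
        return 'before'   # a finishes before b starts
    if b[1] <= a[0]:
        return 'after'    # a starts after b finishes
    return 'during'       # any overlap counts as 'during'

# Hypothetical names for the 14 atomic action classes (placeholders only).
atomic = [f"action_{i}" for i in range(14)]

composites = set()
for x, y, rel in product(atomic, atomic, ('before', 'during', 'after')):
    if rel == 'after':            # 'X after Y' is the same composite as 'Y before X'
        x, y, rel = y, x, 'before'
    elif rel == 'during':         # overlap is symmetric, so the pair is unordered
        x, y = sorted((x, y))
    composites.add((x, y, rel))

print(len(composites))            # 301, matching the class count quoted above
```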
While it may seem trivial to localize it from the last frame, it may not always be possible to do that due to occlusions and recursive containments. The snitch can be contained by other objects (cones), which can further be contained by other larger cones. All objects move together until'uncontained', so the final location of the snitch would require long range reasoning about these interactions. For simplicity, we pose this as a classification problem by quantizing the 6 × 6 grid into 36 cells and asking which cell the snitch is in, at the end of the video. We ablate the grid size in experiments. Since the snitch can only be at a single location at the end of the video, we setup the problem as a single label classification, and evaluate it using standard percentage accuracy metrics such as top-1 and top-5 accuracy. However, one issue with this metric is that is would penalize predictions where the snitch is slightly over the cell boundaries. While the top-5 metric is somewhat robust to this issue, we also report mean L 1 distance of predicted grid cell from the ground truth, as a metric that is congnizant of the grid structure in this task. Hence, it would penalize confusion between adjacent cells less than those between distant cells. The data is also amenable to a purely regression-style evaluation, though we leave that to future work. We now experiment with CATER using recently introduced state of the art video understanding and temporal reasoning models . I3D , called R3D when implemented using a ResNet (a) in, brings the best of image models to video domain by inflating it into 3D for spatiotemporal feature learning. Non-local networks further build upon that to add a spatiotemporal interaction layer that gives strong improvements and out-performs many multi-stream architectures (that use audio, flow etc) on Kinetics and Charades benchmarks. For our main task, snitch localization, we also experiment with a 2D-conv based approach, Temporal Segment Networks (TSN) (b), which another top performing method on standard benchmarks . This approaches uses both RGB and flow modalities. All these architectures learn a model for individual frames or short clips, and at test time aggregate the predictions by averaging over those clips. While simple averaging works well enough on most recent datasets (; ;), it clearly loses all temporal information and may not be well suited to our set of tasks. Hence, we also experiment with a learned aggregation strategy: specifically using an LSTM for aggregation, which is the tool of choice for temporal modelling in various domains including language and audio. We use a common LSTM implementation for aggregating either (b) or ) that operates on the last layer features (before logits). We extract these features for subclips from train and test videos, and train a 2-layer LSTM with 512 hidden units in each layer on the train subclips. The LSTM produces an output at each clip it sees, and we enforce a classification loss at the end, once the model has seen all the clips. At test time we take the prediction from the last clip as the aggregated prediction. We report the LSTM performance averaged over three runs to control for random variation. It is worth noting that LSTMs have been previously used for action recognition in videos , however with only marginal success over simple average pooling. As we show later, LSTMs actually perform significantly better on CATER, indicating the importance of temporal reasoning. 
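A minimal sketch of the learned aggregation just described: a 2-layer LSTM with 512 hidden units that consumes per-clip, pre-logit backbone features and is supervised only on its output after the last clip. The feature dimension and class count below are placeholders, and the backbone is assumed to be trained separately.

```python
import torch
import torch.nn as nn

class ClipLSTMAggregator(nn.Module):
    """Aggregate per-clip video features with an LSTM; classify from the last step."""
    def __init__(self, feat_dim=2048, num_classes=36, hidden=512, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip_feats):           # clip_feats: (batch, num_clips, feat_dim)
        out, _ = self.lstm(clip_feats)       # out: (batch, num_clips, hidden)
        return self.fc(out[:, -1])           # classification loss only on the last clip

# Example: 10 clips per video, 2048-d features extracted from the video backbone.
feats = torch.randn(4, 10, 2048)
logits = ClipLSTMAggregator()(feats)         # (4, 36) class scores
```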
For task 3, we also experiment with a state-of-the-art visual tracking method . We start by using the GT information of the starting position of the snitch, and project it to screen coordinates using the render camera parameters. We define a fixed size box around it to initialize the tracker, and run it until the end of the video. At the last frame, we project the center point of the tracked box to the 3D plane (and eventually, the class label) by using a homography transformation between the image and the 3D plane. This provides a more traditional, symbolic reasoning baseline for our dataset, and as we show in our results, is also not enough to solve the task. Finally, we do note that many other video models have been proposed in the literature involving 2.5D convolutions , VLAD-style aggregation and other multi-modal architectures (a;). We focus on the most popular and best performing models, and leave a more comprehensive study to future work. A random baseline is also provided for all tasks, computed as the average performance of random scores passed into the evaluation functions. Implementation details for all baselines are provided in the supplementary and code will be released. Task 1: Atomic action recognition: Table 2 (a) shows the performance of R3D with and without the non-local (NL) blocks, using different numbers of frames in the clips. We use a fixed sampling rate of 8, but experiment with different clip sizes. Adding more frames helps significantly in this case. Given the ease of the task, R3D obtains fairly strong performance for static camera, but not so much for moving camera, suggesting potential future work in building models agnostic to camera motion. Task 2: Compositional action recognition: Next we experiment with the compositional action recognition task. The training and testing are done in the same way as Task 1, except this predicts confidence over 301 classes. As evident from Table 2 (b), this task is harder for the existing models, presumably as recognizing objects and simple motions would no longer solve it, and models need to reason about spatiotemporal compositions as well. It is interesting to note that non-local blocks now add to the final performance, which was not the case for Task 1, suggesting modeling spatiotemporal relations is more useful for this task. LSTM aggregation also helps quite a bit as the model can learn to reason about long-range temporal compositions. As expected, moving camera makes the problem harder. Task 3: Snitch localization: Finally we turn to the localization task. Since this is set up as a single label classification, we use softmax cross entropy loss to train and classification accuracy for evaluation. For tracking, no training is required as we use the pre-trained model from and run it on the validation videos. Models are evaluated at different clip lengths and frame rates. (Table 3: Long term reasoning. Comparing the best reported performance of standard models on existing datasets and CATER (task 3). Unlike previous benchmarks, temporal modeling using LSTM helps and local temporal cues (flow) are not effective by themselves on CATER. 2S here refers to 'Two Stream'. TSN performance from (; 2016).) For this task we also experiment with TSN (b), though it ends up performing significantly worse than R3D. Note that this contrasts with standard video datasets , where it tends to perform similar to R3D .
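The tracker-based baseline above reduces the task to projecting the last-frame box centre onto the ground plane and quantising it into the 6 × 6 grid. A rough sketch follows; the image-to-plane homography H and the assumption that the plane spans [−3, 3] in both axes are illustrative choices, not values taken from the paper.

```python
import numpy as np

def snitch_cell_from_track(box, H, extent=3.0, grid=6):
    """Map a tracked box (x1, y1, x2, y2) in the last frame to a grid-cell class id.
    H is an assumed 3x3 image -> ground-plane homography."""
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    X, Y, W = H @ np.array([cx, cy, 1.0])
    px, py = X / W, Y / W                                   # point on the ground plane
    col = int(np.clip((px + extent) / (2 * extent) * grid, 0, grid - 1))
    row = int(np.clip((py + extent) / (2 * extent) * grid, 0, grid - 1))
    return row * grid + col                                 # class id in {0, ..., 35}

def grid_l1(pred_cell, gt_cell, grid=6):
    """L1 distance between predicted and ground-truth grid cells (the grid-aware metric)."""
    pr, pc = divmod(pred_cell, grid)
    gr, gc = divmod(gt_cell, grid)
    return abs(pr - gr) + abs(pc - gc)
```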
We also experiment with the flow modality and observe it obtains even lower performance, which is expected as this task requires recognizing objects which is much harder from flow. Again, note that flow models obtain similar if not better performance as RGB on standard datasets . We also note higher performance on considering longer clips with higher sample rate. This is not surprising as a task like this would require long term temporal reasoning, which is aided by looking at longer videos. This is also reinforced by the observation that using LSTM for aggregation leads to a major improvement in performance for most models. Finally, the tracking approach also only solves about a third of the videos, as even the state of the art tracker ends up drifting due to occlusions and contain operations. In Table 4, we ablate the performance with respect to the underlying grid granularity, with 6 × 6 being the default used in Table 2 (c). We observe tracking is a stronger baseline as the localization task gets more fine-grained. Finally in Table 3 we compare performance of some of these models on existing benchmarks and CATER. Analysis: Having close control over the dataset generation process enables us to perform diagnostics impossible with any previous dataset. We use the R3D+NL, 32-frame, static camera model with average (or LSTM, when specified) pooling for all following visualizations. We first analyze aggregate performance of our model over multiple bins in Figure 4, and observe some interesting phenomena. (a) Performance drops if the snitch keeps moving until the end. This makes sense: if the snitch reaches its final position early in the video, models have a lot more frames to reinforce their hypothesis of its final location. Between LSTM and avg-pooling, LSTM is much better able to handle the motion of the snitch, as expected. Perhaps not surprisingly, the tracker is much less effected by snitch movement, indicating the power of such classic computational pipelines for longterm spatiotemporal understanding. (b) Drops if the snitch is contained in the end. Being contained in the final frame makes the snitch harder to spot and track (just like the cups and ball game!), hence the lower performance.: Diagnostic analysis of localization performance. We bin the test set using certain parameters. For each, we show the test set distribution with the bar graph, the performance over that bin using the line plot, and performance of that model on the full val set with the dotted line. We find that localization performance, (a) Drops significantly if the snitch is kept moving till the end. This is possibly because for cases when snitch only moves in the beginning and is static after, the models have a lot more evidence to predict the correct location from. Interestingly the tracker is much less affected by this, as it tracks the snitch until the very end; (b) Drops if the snitch is'contain'-ed by another object in the end, and the tracker is the worst affected by it; (c) Drops initially with increasing displacement of the snitch from its start position, but is stable after that; and (d) Is relatively stable with different number of objects in the scene. Next, we visualize the videos that our models gets right or wrong. We sort all validation videos based on the softmax confidence score for the ground truth class, and visualize the top and bottom six in Figure 5 (full video in supplementary). We find that the easiest videos for avg-pooled model tend to be ones with little snitch motion, i.e. 
the object stays at the position it starts off in. On the other hand, the LSTM-aggregated model fares better with snitch motion, as long as it happens early in the video. The hardest videos for both tend to be ones with sudden motion of the snitch towards the end of the video, as shown by the bright golden trail denoting the motion towards the end (better viewed in supplementary video). These observations are supported by the quantitative plots in Figure 4 (a) and (c). We use CATER to analyze several leading network designs on hard spatiotemporal tasks. We find most models struggle on our proposed dataset, especially on the snitch localization task which requires long term reasoning. Interestingly, average pooling clip predictions or short temporal cues (optical flow) perform rather poorly on CATER, unlike most previous benchmarks. Such temporal reasoning challenges are common in the real world (eg. Fig. 1 (a) ), and solving those would be the cornerstone of the next improvements in machine video understanding. We believe CATER would serve as an intermediary in building systems that will reason over space and time to understand actions. That said, CATER is, by no means, a complete solution to the video understanding problem. Like any other synthetic or simulated dataset, it should be considered in addition to real world benchmarks. While we have focused on classification tasks for simplicity, our fully-annotated dataset can be used for much richer parsing tasks such as spacetime action localization. One of our findings is that while high-level semantic tasks such as activity recognition may be addressable with current architectures given a richly labeled dataset, "mid-level" tasks such as tracking still pose tremendous challenges, particularly under long-term occlusions and containment. We believe addressing such challenges will enable broader temporal reasoning tasks that capture intentions, goals, and causal behavior. We analyze the top most confident a) correct and b) incorrect predictions on the test videos for localization task. For each video, we show the last frame, followed by a top-down view of the 6 × 6 grid. The grid is further overlayed with: 1) the ground truth positions of the snitch over time, shown as the golden trail, which fades in color over time =⇒ brighter yellow depicts later positions; and 2) the softmax prediction confidence scores for each location (black is low, white is high). The model has easiest time classifying the location when the snitch does not move much or moves early on in the video. Full video in supplementary. We use the provided implementation for ResNet-3D (R3D) and non-local (NL) block from, and temporal segment networks (TSN) from (b) for all our experiments. For, all the models are based on ResNet-50 base architecture, and trained with hyperparameters scaled down from Kinetics as per CATER size. For non-local (NL) experiments, we replace the conv3 and conv4 blocks in ResNet with the NL blocks. All models are trained with classification loss implemented using sigmoid cross-entropy for Task 1 and 2 (multi-label classification task), and softmax cross-entropy for task 3. At test time, we split the video into 10 temporal clips and 3 spatial clips. When aggregating using average pooling, we average the predictions from all 30-clips. For LSTM, we train and test on the 10 center clips. We experiment with varying the number of frames (#frames) and sampling rate (SR). 
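The test-time protocol above (10 temporal × 3 spatial clips, averaged) can be sketched in a few lines; whether softmax probabilities or raw logits are averaged is an assumption here.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def average_pool_inference(model, video_clips):
    """video_clips: (30, C, T, H, W) holding the 10 temporal x 3 spatial crops of one video.
    Returns the video-level class distribution as the mean of per-clip softmax outputs."""
    model.eval()
    probs = F.softmax(model(video_clips), dim=1)   # (30, num_classes)
    return probs.mean(dim=0)
```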
For TSN (b), the model is based on BN-inception , with hyperparameters following their implementation on HMDB given its similar size and setup to our dataset. For optical flow we use the TVL1 (Sánchez Pérez et al., 2013) implementation. At test time we aggregate the predictions over 250 frames uniformly sampled from the video, either by averaging or using LSTM. While CATER videos look different from the real world, we found the networks much easier to optimize with the ImageNet initialization for both approaches. This is consistent with prior work (b) that finds ImageNet initialization is useful even when training diverse modalities (such as optical flow). We will make the code, generated data and models available for more implementation details. Figure 6 shows the data distribution over classes for each of the tasks. The supplementary video visualizes: 1. Sample videos from the dataset (with and without camera motion). 2. Easiest and hardest videos for task 3. We rank all validation videos for task 3 based on their softmax probability for the correct class. We show the top-6 (easiest) and bottom-6 (hardest) for the 32-frame stride-8 non-local + LSTM model. We observe the hardest ones involve sudden motion towards the end of the video. This reinforces the observation made in Figure 5 (a) in the main paper, that videos where the snitch keeps moving till the end are the hardest. If the snitch stops moving earlier, models have more evidence for the final location of the snitch, making the task easier. 3. Tracking . We visualize the results of tracking the snitch over the video as one approach to solving task 3. We observe that while it works in the simple scenarios, it fails when there is a lot of occlusion or complex contain operations. 4. Model bottom-up attention. We visualize where the model looks for Task 3. As suggested in , we visualize the l2-norm of the last layer features from our 32-frame stride-8 non-local model on the center video crop. The deep red color denotes a large norm value at that spatiotemporal location. We find that the model automatically learns to focus on the snitch towards the end of clips, which makes sense as that is the most important object for solving the localization task. (Figure 6: Histograms of training and validation data distribution for different tasks we define on the dataset. (a) requires the model to recognize atomic actions, such as 'a sphere slides'. We defined 13 such classes. (b) requires recognizing spatiotemporal compositions of actions, such as 'a sphere slides while a cube rotates'. Since there are a total of 588 combinations, we omit the labels here for ease of visualization. Finally (c) evaluates the snitch localization task, where the model needs to answer where the snitch is on the board, quantized into a 6 × 6 grid, at the end of the video. This is defined as a 36-way classification problem.)
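The bottom-up attention visualisation mentioned in item 4 is just the channel-wise l2-norm of the final convolutional features; a small sketch (feature shapes assumed):

```python
import torch

def feature_norm_saliency(feats):
    """feats: (C, T, H, W) last-layer activations of the video backbone.
    Returns a (T, H, W) map; large values mark where the model attends."""
    sal = feats.norm(p=2, dim=0)                               # l2-norm over channels
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)  # normalised for display
```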
We propose a new video understanding benchmark, with tasks that by-design require temporal reasoning to be solved, unlike most existing video datasets.
564
scitldr
We address the efficiency issues caused by the straggler effect in the recently emerged federated learning, which collaboratively trains a model on decentralized non-i.i.d. (non-independent and identically distributed) data across massive worker devices without exchanging training data in the unreliable and heterogeneous networks. We propose a novel two-stage analysis on the error bounds of general federated learning, which provides practical insights into optimization. As a , we propose a novel easy-to-implement federated learning algorithm that uses asynchronous settings and strategies to control discrepancies between the global model and delayed models and adjust the number of local epochs with the estimation of staleness to accelerate convergence and resist performance deterioration caused by stragglers. Experiment show that our algorithm converges fast and robust on the existence of massive stragglers. Distributed machine learning has received increasing attention in recent years, e.g., distributed stochastic gradient descent (DSGD) approaches and the well-known parameter server paradigm (; ; 2014). However, these approaches always suffer from communication overhead and privacy risk . Federated learning (FL) (Konečnỳ et al., 2016) is proposed to alleviate the above issues, where a subset of devices are randomly selected, and training data in devices are locally kept when training a global model, thus reducing communication and protecting user privacy. Furthermore, FL approaches are dedicated to a more complex context with 1) non-i.i.d. (Non-independent and identically distributed), unbalanced and heterogeneous data in devices, 2) constrained computing resources with unreliable connections and unstable environments (; Konečnỳ et al., 2016). Typically, FL approaches apply weight averaging methods for model aggregation, e.g., FedAvg and its variants (; ; ; ;). Such methods are similar to the synchronous distributed optimization domain. However, synchronous optimization methods are costly in synchronization , and they are potentially inefficient due to the synchrony even when collecting model updates from a much smaller subset of devices (b). Besides, waiting time for slow devices (i.e., stragglers or stale workers) is inevitable due to the heterogeneity and unreliability as mentioned above. The existence of such devices is proved to affect the convergence of FL . To address this problem, scholars propose asynchronous federated learning (AFL) methods (a; ;) that allow model aggregation without waiting for slow devices. However, asynchrony magnifies the straggler effect because 1) when the server node receives models uploaded by the slow workers, it probably has already updated the global model for many times, and 2) real-world data are usually heavy-tailed in distributed heterogeneous devices, where the rich get richer, i.e., the straggler effect accumulates when no adjustment operations in stale workers, and eventually it affects the convergence of the global model. Furthermore, dynamics in AFL brings more challenges in parameter tuning and speed-accuracy trade-off, and the guidelines for designing efficient and stale-robust algorithms in this context are still missing. Contributions Our main contributions are summarized as follows. We first establish a new twostage analysis on federated learning, namely training error decomposition and convergence analysis. 
To the best of our knowledge, it is the first analysis based on the above two stages that address the optimization roadmap for the general federated learning entirely. Such analysis provides insight into designing efficient and stale-robust federated learning algorithms. By following the guidelines of the above two stages, we propose a novel FL algorithm with asynchronous settings and a set of easy-to-implement training strategies. Specifically, the algorithm controls model training by estimating the model consistency and dynamically adjusting the number of local epochs on straggle workers to reduce the impact of staleness on the convergence of the global model. We conduct experiments to evaluate the efficiency and robustness of our algorithm on imbalanced and balanced data partitions with different proportions of straggle worker nodes. Results show that our approach converges fast and robust on the existence of straggle worker nodes compared to the state-of-the-art solutions. Related Work Our work is targeting the AFL and staleness resilience approaches in this context. Straggler effect (also called staleness) is one of the main problems in the similar asynchronous gradient descent (Async-SGD) approaches, which has been discussed by various studies and its remedies have been proposed (; ; ; ; ; ; ;). However, these works are mainly targeting the distributed Async-SGD scenarios, which is different from FL as discussed in the previous section. Existing FL solutions that address the straggler effect are mainly consensus-based. Consensus mechanisms are introduced where a threshold metric (i.e., control variable) is computed, and only the workers who satisfy this threshold are permitted to upload their model (; ;). Thus it significantly reduces the number of communications and updates model without waiting for straggle workers. However, current approaches are mainly focusing on synchronized FL. Xie et al. (2019a) propose an AFL algorithm which uses a mixing hyperparameter to adaptively control the trade-off between the convergence speed and error reduction on staleness. However, this work and above mentioned FL solutions only consider the staleness caused by network delay instead of imbalanced data size in each worker and only evaluate on equal size of local data, which is inconsistent with the real-world cases. Our approach is similar to (a), but instead we adaptively control the number of local epochs combined with the approximation of staleness and model discrepancy, and prove the performance guarantee on imbalanced data partitions. We illustrate our approach in the rest of this paper. We first summarize the general form of FL. Generally, an FL system consists of M distributed worker nodes (e.g., mobile phones) and a server node. The goal is training a global model across these worker nodes without uploading local data. Each worker node employs the same machine learning model, and an optimizer (e.g., stochastic gradient descent) to iteratively optimize the loss function of the local model. At t-th communication round, the server node uses an aggregation operator (e.g., averaging) to aggregate the local models uploaded by worker nodes, and broadcasts the aggregated global model to workers. We use mi to present local data points in worker node i, where m i is the size of data points in this worker. The whole dataset χ = i X (i), where i ∈ {1, 2, 3, ..., M}. We assume that X (i) X (j) = ∅ for i = j, and apparently, the total size of data m = M i=1 m i. 
We denote the model in worker node i by ω i ∈ R d, and the objective function of worker node i by where g(·) is the user-defined aggregation function, and ξ t is a vector which describes the settings of activated workers, such as worker ID, number of local epochs, and the learning rate. Here and thereafter, we use g(ω t) to represent g(ω t, ξ t) for convenience. We denote update term as h(·), a userdefined function which represents the model parameter differences between the collected models from activated worker nodes and previous global model, and Here τ i is the time when worker node i received the global model In this section, we aim to design an efficient and robust FL algorithm. To do so, we first establish a twostage analysis, and finally, propose our new FL algorithm by combining the insights provided by the two stages. Stage 1: Traning Error Decomposition. We first discuss the main errors of the general FL. We assume that each worker node has a local optimal model ω * i = arg min F i (ω). Then at the communication round t, we define the global error as where · is L 2 norm. For worker node i, two terms in the right-hand side of inequality 7 respectively represent 1) initialization and local error: the error between the local model at communication round t and the optimal local model (the well known empirical risk). Here, the initialization error (i.e., the error between the initial model and local model at communication round t) partially contributes to the first term. 2) local-global error: the error between optimal local models and optimal global solution, which is a constant given a specific learning task. Figure 1 illustrates these errors. Usually, the error between the initial model and the optimal global model is greater than the local-global error, and thus at the early stage of training, the first term is greater than the second term in the right-hand side of inequality 7. Therefore, reducing the initialization error and the local error at the beginning of model training can reduce the global error ω t − ω *. Afterward, when initialization and local error is minimized, the local-global error dominates the global error. However, as we mentioned previously, the local-global error is a constant that can not be solved directly. Therefore, we need a more sophisticated analysis to reach ω * since 7 is no longer appropriate to guide the optimization other than the early stage of FL training. Following the above analysis, we analyze the convergence bounds of the general FL (Eq. 6) on the rest of the training stages other than the early stage. Stage 2: Convergence Analysis. First, we make the following assumptions on the objective functions: Assumption 1. Smoothness. For all i in {1, 2, 3, ..., M} and given constant β, the objective function F (ω) and F i (ω) are β-smooth, i.e., Assumption 2. The first and second moment conditions. The objective function F (ω) and the aggregation operation g(ω t) satisfy the following: 2. E(·) is abbreviation of E ξt (·) which denotes the expected value w.r.t. the distribution of the random variable ξ t given t. Assumption 3. Strong convexity. For all i in {1, 2, 3, ..., M} and given constant c, the objective function F (ω) and F i (ω) are c-strong convex, i.e., Theorem 1. Convergence for strongly-convex problems. When c and β in assumption 1 and 3 satisfy c ≤ β, we can set the step size η t =η, where, and L G = 1 + δ 2 G. Withη, the upper error bound of global model satisfies: The proof of theorem 1 is provided in appendix A.1. 
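As a concrete (standard) instance of the aggregation operator g(·) introduced above, the sketch below performs FedAvg-style averaging of the received worker models, weighted by their local data sizes m_i. The paper's own g is defined by its numbered equations, which are not reproduced here, so treat this as illustrative rather than the paper's exact rule.

```python
import numpy as np

def weighted_aggregate(worker_models, worker_sizes):
    """One aggregation round: a data-size-weighted average of worker parameters.
    worker_models: list of dicts {param_name: np.ndarray} returned by activated workers.
    worker_sizes:  list of local data counts m_i for those workers."""
    total = float(sum(worker_sizes))
    names = worker_models[0].keys()
    return {
        name: sum((m / total) * w[name] for w, m in zip(worker_models, worker_sizes))
        for name in names
    }
```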
Theorem 1 gives an error bound for the general form of model aggregation without assuming that g(ω t) should come from ∇F (ω t). Note that the scalars δ and δ G are equal to 1 when g(ω t) is the unbiased estimation of ∇F (ω t). However, current convergence bound in theorem 1 is too loose, and it can be further optimized by introducing controlled local epoch settings. We assume that ∇F i (ω Then we can extend theorem 1 with the local epochĒ. Theorem 2. Convergence with selected local epoch. We use δ 0 and L 0 to represent the scalar δ and L in assumption 2 whenĒ = 1, and all worker nodes are assumed to participate in model training, then under 9, 8 can be rewritten as The theorem 2 gives us the error bound of FL with the selected number of local epochs for stronglyconvex problems. The proof of theorem 2 is provided in appendix A.2. The right-hand side of theorem 2 implies the dynamics of hyper-parameters in local models for efficiency and robustness trade-off. In a general FL algorithm, e.g., FedAvg, the model settings in worker nodes are always predefined and kept fixed, which may lead to a higher variance. We now discuss such dynamics and practical insights on designing efficient and robust FL algorithms. Selection of local epochs. We discuss how to reduce the global error and communication round simultaneously for general FL. From the second term of the right-hand side of 10, we can see that theorem 2 yields to linear convergence when E( . In this condition, to quickly reduce the global error, we can reduce the second term of the right-hand side of 10 by increasing the local epochĒ while reducing the communication round t. Therefore, we can dynamically assign each worker with a bigger number of local epoch while reducing the communication round. Asynchronous acceleration with stragglers. We discuss why asynchronous strategies are needed in FL. We rearrange 10 as: When t increases, and we fixĒ andη, the global error only depends on L 0 . L 0 can be controlled by sampling more worker nodes within a communication round. Specifically, we compare n-worker participation with M -worker participation for model aggregation at the server node. When we select n workers out of M, L 0 increases according to assumption 2(c) since the variance increases. Thus, to get the same precision, we decreaseη, while it significantly slows the convergence speed. However, in practice, waiting for all the workers to be ready is time-consuming. Thus, we can introduce asynchronous mechanisms in model aggregation without waiting for all workers. Robust training on straggle workers. We discuss how to reduce the global error for FL on the existence of stragglers. As we mentioned above, asynchronous strategies can accelerate model training by reducing the waiting time at each communication round. However, the straggler effect is magnified by asynchrony, as discussed in section 1. Stale workers accumulate their staleness, which increases the variances and affects the convergence of the global model. A practical strategy to tame such effect is increasing the number of local epoch under the considerations that when the distributions of local data are far away from the global data, we use more epochs to train from the local data. However, the divergence of these local epoch numbers between stale and non-stale workers may affect variance adversely, and we can adjust the number of local epoch with the normalized epochs from all workers to reduce such variance. Theorem 3. 
Convergence for non-convex problems With the Assumption 1 and 2, we can select a step size The expected error bound satisfies: Theorem 3 is similar to theorem 2 that the first term of the right-hand side of does not decrease by iterative training. Note that the above remarks are also applicable to theorem 3. We provide proof of theorem 3 in appendix A.3. Proposed Algorithm. Under the guidance of the above analysis and the practical insights discussed above, we propose a fast and stale-robust AFL algorithm. Algorithm 1 and 2 illustrate the processes in worker nodes and the server node, respectively. H(t) is a predefined function at communication round t which determines how long should the server node waiting for the updated models from workers. H(t) can be used to control the accuracy-speed trade-off. The training processes on the server node can be divided into two stages, i.e., the initial stage and the converging stage. We switch the stages by estimating the consistency of model updates. Set t ← t + 1. During ∆t time, receive the triplet (ω i, τ i, E i) from any worker i. Update ω t with ω i with τ i = t − 1 using 5. Broadcast (ω t, t) to each worker. Calculate U using 13. 9: until U ≤ 0.1 10: Broadcast start flag to each worker. 11: repeat // the converging stage. Set t ← t + 1, ∆t ← H(t). During ∆t time, receive the triplet (ω i, τ i, E i) from any worker i. Update ω t by 6 and 14. Set E i ← mean(E) * s. 5: end if 6: Set τ i ← t, E i ← 0. 7: for e in 1, 2, 3,..., E i do 8: Randomly divide X (i) with batch size B i. Update ω i by using opt for each batch. Send triplet (ω i, τ i, E i) to the server. 14: e ← e − 1. end if 18: end for Definition 1. Update consistency. The model update consistency of n worker nodes is the similarities between worker models at communication round t, i.e., U is consistent with the global error in inequality 7, and in algorithm 1 we empirically set 0.1 as the threshold to switch from the initial stage and the converging stage of the global model training. At the initial stage of global model training, we use a bigger local epoch to accelerate training time as discussed above, and repeat this process until U ≤ 0.1. After the initial stage, we define the update term as ϕi ϕ is the above mentioned normalized local epochĒ with, and ϕ is the regularization term where ϕ = n i=1 ϕ i. Finally, we define a stale-related penalty function of ϕ i as: Here, ω t+1 f resh is the average model of worker nodes with τ i = t. The key processes of worker nodes are 1) estimating its staleness level, and 2) assign the number of local epoch using mean(E) in the received triplet from the server node and the previously estimated staleness level. In the next section, we evaluate the performance of our algorithms. We evaluate the performance of our approach on both imbalanced and balanced data partitions with the existence of stale worker nodes. Experiment Settings. We conduct experiments on Fashion-MNIST and CIFAR-10 to test the accuracy of our approach on 100 simulated workers, where 60 workers are stale. We use 55,000 on Fashion-MNIST and 50,000 on CIFAR-10 for training and 10,000 for testing. normalization is used in the data preprocessing. We conduct all experiments on CPU devices. We use a light-weight convolutional neural network (CNN) model, which is suitable for mobile edge devices. It has 4 convolutional layers that use 3 × 3 kernels with the size of 32, 64, 64, 128. 
Rectified linear unit (ReLU) is used in each convolutional layer, and every two convolutional layers are followed by a 2 × 2 max-pooling layer and a dropout of 50%. Finally, we use a 512-unit dense layer with ReLU and a dropout of 50%, and an output layer with softmax. We use an SGD optimizer with a learning rate of 0.01. We set the batch size as 50, and the initial number of local epochs Ē as 50. We randomly split the data size in each worker node, ranging from 2 to 2122 with a standard deviation of 480 on CIFAR-10, and 9 to 2157 with a standard deviation of 540 on Fashion-MNIST. For the balanced cases we randomly assign each worker 500 samples. The communication speed of nodes is divided into ten levels ranging from 100 milliseconds to 1 second, and the 60 stale workers are assigned the bigger levels. Finally, we set H(t) = 0.4s. We compare the performance of our proposed method with four approaches: 1) FedAvg (synchronized), for which we set the sampling rate C = 0.1; 2) FedProx (synchronized), for which we set C = 0.1 and µ = 1, the best parameters provided in their paper. Results and Analysis. Figure 2 shows the performance of our proposed algorithm and four baselines. Our method converges faster compared to all the baselines, and the convergence is preserved with 60% stale workers. Furthermore, the total upload times of our method do not increase at the same level of accuracy. From the experiment on Fashion-MNIST, we can see that our method has the same accuracy level on test data compared with a synchronized approach such as FedAvg. We can also see that on imbalanced data partitions (i.e., more realistic FL scenarios), our method is faster and more stable compared to other baselines. Finally, we can clearly see the stage transition from the initial training stage to the converging stage (e.g., the transitions in the imbalanced cases in figure 2(b) and (d)), which validates the efficiency of our approach. Figure 3 shows the performance of our method with different proportions of stale nodes in 1,000 global communication rounds. (Figure 3: We respectively test the performance with 20%, 60%, 80%, and 90% of stale workers. The green dotted line is FedAvg, which waits for all selected workers.) Our method outperforms the AFL baseline (i.e., FedAsync) in both accuracy and loss, and when the proportion of stale workers is less than 80%, our method outperforms the synchronized FL baseline (i.e., FedAvg). In this paper, we propose a new two-stage analysis on federated learning, and inspired by such analysis, we propose a novel AFL algorithm that accelerates convergence and resists performance deterioration caused by stragglers simultaneously. Experimental results show that our approach converges two times faster than baselines, and it can resist the straggler effect without sacrificing accuracy and communication. As a byproduct, our approach improves the generalization ability of neural network models. We will theoretically analyze it in future work. Besides, while not the focus of our work, security and privacy are essential concerns in federated learning, and as future work, we can apply various security methods to our approach. Furthermore, besides the stale-resistance ability, the discrepancy estimation in our method also has the potential to resist malicious attacks on the worker nodes such as massive Byzantine attacks, which have been addressed in (; ; Muñoz-González et al., 2019). We will analyze and evaluate such ability in future work. A.1 PROOF OF THEOREM 1 Lemma 1.
Under the assumption 1, we can get: Proof: Under the assumption 1, for any ω and ω, we have: Then using 18, we have: Taking expectations in 19 w.r.t the distribution of ξ t, we complete the proof. Lemma 2. Under the assumption 1 and 2, we can get: Proof: Using assumption 2(b) and 2(c), we have: Then using 21, assumption 2(b) and lemma 1, we have: We can easily get Lemma 2 by rearranging 22. Then we prove Theorem 1 under the assumption 1, 2, 3. First, we define Function F is a quadratic model relevant toω. Then it has the minimal value when all the partial derivatives are 0. That is Then, when we selectω =ω for 23, we get the minimal of 23, which is From assumption 3, we have which is equivalent to Then from Lemma 2, when a fixedη ≤ δ βL G is selected, we have: And using 26, we have Subtracting F (ω *) from both sides and moving F (ω t) from left to right, we get Taking the whole expectations and rearranging 29, we obtain Substracting the constantη βL 2cδ from both sides of 30, we have The left hand side of 31 is a geometric series with common ratio 1 − η t δc, then we complete the proof. We first prove 9. Assume ∇F i (ω Let ∇F i (ω t i) = a and ν∇F i (ω Since a+b a > 1 and ν ≤ 1, we have h(ν) min = h = 0, and a ≈ b when η t is small. We know that the smaller ν is, the smaller Res t i is. Then we consider the situation that E i = E = 1, and define ∇F (ω t) E(g E=1 (ω t)) ≥ δ 0 ∇F (ω t) 2 and Then sum all the form of 38 from 1 to t. We have Besides, we can easily understand that F inf ≤ E(F (ω t+1)), because F inf is the minimal value of F. Then we have By rearrange 40, we have Dividing t from both sides of 41, we get Then using equation 9, we complete the proof. We conduct additional experiments to evaluate stale-robustness of our algorithm on CIFAR-10 based on the settings in section 4. We visualize the impact of different staleness levels at different communication rounds with cosine angles (i.e., discrepancies) between the update terms (i.e., update directions of local models) of stale workers and fresh workers in figure 4. The show that our method (in the first row) effectively adjusts the update direction of the reversed stale nodes while angles of stale nodes reverse with FedAvg compared to our algorithm, which shows the robustness of our method. Figure 4: Impact visualization of different levels of staleness using cosine angles between the update terms defined in section 2 of fresh nodes (40 out of 100) and stale nodes (60 out of 100 worker nodes) on CIFAR-10 at different communication round. The blue numbers represent the staleness levels by using the differences of version numbers of models between the stale nodes and the fresh nodes. E.g., the staleness level is 10 at this communication round means that the fresh nodes has updated 10 more versions compared to the stale nodes.
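The two quantities driving the algorithm above — the update consistency U used to switch from the initial to the converging stage, and the staleness-scaled local epoch count E_i ← mean(E) · s — can be prototyped compactly. Since the exact Eq. 13/14 are not reproduced in the text, the sketch below uses mean pairwise cosine distance between worker updates as a stand-in for U; treat it as an assumed form.

```python
import numpy as np

def update_consistency(worker_updates):
    """Proxy for U: mean pairwise cosine distance between flattened worker updates.
    Small values mean the workers agree; the algorithm switches stages once U <= 0.1."""
    vecs = [u / (np.linalg.norm(u) + 1e-12) for u in worker_updates]
    dists = [1.0 - float(vecs[i] @ vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return float(np.mean(dists)) if dists else 0.0

def adjust_local_epochs(mean_epochs, staleness_scale):
    """Worker-side rule E_i <- mean(E) * s; s grows with the estimated staleness."""
    return max(1, int(round(mean_epochs * staleness_scale)))

# Example: three workers with random updates (illustration only).
updates = [np.random.randn(1000) for _ in range(3)]
print(update_consistency(updates))
print(adjust_local_epochs(50, 1.4))   # a stale worker runs more local epochs
```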
We propose an efficient and robust asynchronous federated learning algorithm in the presence of stragglers
565
scitldr
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases of performance on some of the tasks from the bAbI dataset. In the recent years, Recurrent Neural Networks (RNNs) have been successfully used to tackle problems with data that can be represented in the shape of time series. Application domains include Natural Language Processing (NLP) (translation BID12, summarisation BID9, question answering and more), speech recogition BID5 BID3 ), text to speech systems BID0, computer vision tasks BID13 BID16, and differentiable programming language interpreters BID10 BID11 ).An intuitive explanation for the success of RNNs in fields such as natural language understanding is that they allow words at the beginning of a sentence or paragraph to be memorised. This can be crucial to understanding the semantic content. Thus in the phrase "The cat ate the fish" it is important to memorise the subject (cat). However, often later words can change the meaning of a senstence in subtle ways. For example, "The cat ate the fish, didn't it" changes a simple statement into a question. In this paper, we study a mechanism to enhance a standard RNN to enable it to modify its memory, with the hope that this will allow it to capture in the memory cells sequence information using a shorter and more robust representation. One of the most used RNN units is the Long Short-Term Memory (LSTM) BID7. The core of the LSTM is that each unit has a cell state that is modified in a gated fashion at every time step. At a high level, the cell state has the role of providing the neural network with memory to hold long-term relationships between inputs. There are many small variations of LSTM units in the literature and most of them yield similar performance BID4.The memory (cell state) is expected to encode information necessary to make the next prediction. Currently the ability of the LSTMs to rotate and swap memory positions is limited to what can be achieved using the available gates. In this work we introduce a new operation on the memory that explicitly enables rotations and swaps of pairwise memory elements. Our preliminary tests show performance improvements on some of the bAbI tasks compared with LSTM based architectures. In this section we introduce the idea of adding a new set of parameters for the RNN cell that enable rotation of the cell state. The following subsection shows how this is implemented in the LSTM unit. One of the key innovations of LSTMs was the introduction of gated modified states so that if the gate neuron i is saturated then the memory c i (t − 1) would be unaltered. That is, c i (t − 1) ≈ c i (t) with high accuracy. The fact that the amplification factor is very close to 1 prevents the memory vanishing or exploding over many epochs. To modify the memory, but retain an amplification factor of 1 we take the output after appling the forget and add gates (we call it d t), and apply a rotation matrix U to obtain a modified memory c t = Ud t. Note that, for a rotation matrix U T U = I so that d t = c t.We parametrise the rotation by a vector of angles DISPLAYFORM0 where W rot is a weight matrix and b rot is a bias vector which we learn along with the other parameters. 
x is the vector of our concatenated inputs (in LSTMs given by concatenating the input for the current timestep with the output from the previous time step).A full rotation matrix is parametrisable by n(n − 1)/2 parameters (angles). Using all of these would introduce a huge number of weights, which is likely to over-fit. Instead, we have limited ourselves to considering rotations between pairs of inputs d i (t) and d i+1 (t). Exploring more powerful sets of rotations is currently being investigated. Our rotation matrix is a block-diagonal matrix of 2D rotations DISPLAYFORM1 where the cell state is of size n. Our choice of rotations only needs n/2 angles. In this section we show how to add memory rotation to the LSTM unit. The rotation is applied after the forget and add gates and before using the current cell state to produce an output. The RotLSTM equations are as follows: DISPLAYFORM0 DISPLAYFORM1 where W {f,i,o,rot,c} are weight matrices, b {f,i,o,rot,c} are biases (Ws and bs learned during training), h t−1 is the previous cell output, h t is the output the cell produces for the current timestep, similarly c t−1 and c t are the cell states for the previous and current timestep, • is element-wise multiplication and [·, ·] is concatenation. U as defined in Equation 2, parametrised by u t. Figure 1 shows a RotLSTM unit in detail. Assuming cell state size n, input size m, the RotLSTM has n(n + m)/2 extra parameters, a 12.5% increase (ignoring biases). Our expectation is that we can decrease n without harming performance and the rotations will enforce a better representation for the cell state. To empirically evaluate the performance of adding the rotation gate to LSTMs we use the toy NLP dataset bAbI with 1000 samples per task. The bAbI dataset is composed of 20 different tasks of various difficulties, starting from easy questions based on a single supporting fact (for example: DISPLAYFORM0 x is the concatenation of h t−1 and x t in the diagram (green and blue lines). Note that this differs from a regular LSTM by the introduction of the network producing angles u t and the rotation module marked U. In the diagram input size is 4 and cell state size is 3.John is in the kitchen. Where is John? A: Kitchen) and going to more difficult tasks of reasoning about size (example: The football fits in the suitcase. The box is smaller than the football. Will the box fit in the suitcase? A: yes) and path finding (example: The bathroom is south of the office. The bathroom is north of the hallway. How do you go from the hallway to the office? A: north, north). A summary of all tasks is available in Table 2. We are interested in evaluating the behaviour and performance of rotations on RNN units rather than beating state of the art. We compare a model based on RotLSTM with the same model based on the traditional LSTM. All models are trained with the same hyperparameters and we do not perform any hyperparameter tuning apart from using the sensible defaults provided in the Keras library and example code BID2.For the first experiment we train a LSTM and RotLSTM based model 10 times using a fixed cell state size of 50. In the second experiment we train the same models but vary the cell state size from 6 to 50 to assess whether the rotations help our models achieve good performance with smaller state sizes. We only choose even numbers for the cell state size to make all units go through rotations. The model architecture, illustrated in FIG0, is based on the Keras example implementation 1. 
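As a concrete illustration of the rotation step above, the sketch below applies the block-diagonal matrix U(u_t) to the gated state d_t. The plain linear parametrisation of u_t is an assumption (the exact equation is not reproduced in the text), and the function names are ours.

```python
import numpy as np

def rotation_angles(x_concat, W_rot, b_rot):
    # u_t from the concatenated [h_{t-1}, x_t]; one angle per pair of state units (n/2 angles).
    return W_rot @ x_concat + b_rot

def rotate_cell_state(d_t, u_t):
    # c_t = U(u_t) d_t with 2x2 rotations on pairs (d_0, d_1), (d_2, d_3), ...
    cos, sin = np.cos(u_t), np.sin(u_t)
    c_t = np.empty_like(d_t)
    c_t[0::2] = cos * d_t[0::2] - sin * d_t[1::2]
    c_t[1::2] = sin * d_t[0::2] + cos * d_t[1::2]
    return c_t  # norm preserved: the amplification factor stays 1
```

Because U is orthogonal, ‖c_t‖ = ‖d_t‖, which is exactly the property used above to argue that the memory neither vanishes nor explodes over many steps.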
This model architecture, empirically, shows better performance than the LSTM baseline published in. The input question and sentences are passed thorugh a word embedding layer (not shared, embeddings are different for questions and sentences). The question is fed into an RNN which produces a representation of the question. This representation is concatenated to every word vector from the story, which is then used as input to the second RNN. Intuitively, this helps the second RNN (Query) to focus on the important words to answer the question. The output of the second RNN is passed to a fully connected layer with a softmax activation of the size of the dictionary. The answer is the word with the highest activation. The categorical cross-entropy loss function was used for training. All dropout layers are dropping 30% of the nodes. The train-validation dataset split used was 95%-5%. The optimizer used was Adam with learning rate 0.001, no decay, β 1 = 0.9, β 2 = 0.999, = 10 −8. The training set was randomly shuffled before every epoch. All models were trained for 40 epochs. After every epoch the model performance was evaluated on the validation and training sets, and every 10 epochs on the test set. We set the random seeds to the same number for reproducibility and ran the experiments 10 times with 10 different random seeds. The source code is available at https://goo.gl/ Eopz2C 2. In this subsection we compare the the performance of models based on the LSTM and RotLSTM units on the bAbI dataset. Applying rotations on the unit memory of the LSTM cell gives a slight improvement in performance overall, and significant improvements on specific tasks. Results are shown in TAB0. The most significant improvements are faster convergence, as shown in Figure 3, and requiring smaller state sizes, illustrated in FIG2.On tasks 1 (basic factoid), 11 (basic coreference), 12 (conjunction) and 13 (compound coreference) the RotLSTM model reaches top performance a couple of epochs before the LSTM model consistently. The RotLSTM model also needs a smaller cell state size, reaching top performance at state size 10 to 20 where the LSTM needs 20 to 30. The top performance is, however, similar for both models, with RotLSTM improving the accuracy with up to 2.5%.The effect is observed on task 18 (reasoning about size) at a greater magnitude where the RotLSTM reaches top performance before epoch 20, after which it plateaus, while the LSTM model takes 40 epochs to fit the data. The training is more stable for RotLSTM and the final accuracy is improved by 20%. The RotLSTM reaches top performance using cell state 10 and the LSTM needs size 40. Similar performance increase for the RotLSTM (22.1%) is observed in task 5 (three argument relations), reaching top performance around epoch 25 and using a cell state of 50. Task 7 (counting) shows a similar behaviour with an accuracy increase of 14% for RotLSTM.Tasks 4 (two argument relations) and 20 (agent motivation) show quicker learning (better performance in the early epochs) for the RotLSTM model but both models reach their top performance after the same amount of traning. 
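For reference, a hedged Keras sketch of the architecture described above (separate embeddings for story and question, a question encoder whose output is concatenated to every story word vector, a second recurrent layer, and a softmax over the vocabulary) is given below. The embedding size and other unspecified details are assumptions, and the standard LSTM layer stands in for the RotLSTM variant.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_babi_model(story_len, query_len, vocab_size, embed_dim=50, state_size=50):
    story_in = keras.Input(shape=(story_len,))
    query_in = keras.Input(shape=(query_len,))

    story_emb = layers.Dropout(0.3)(layers.Embedding(vocab_size, embed_dim)(story_in))
    query_emb = layers.Dropout(0.3)(layers.Embedding(vocab_size, embed_dim)(query_in))

    # Encode the question, then concatenate its representation to every story word vector.
    q_enc = layers.LSTM(state_size)(query_emb)        # RotLSTM in the paper's variant
    q_rep = layers.RepeatVector(story_len)(q_enc)
    merged = layers.concatenate([story_emb, q_rep])

    answer = layers.LSTM(state_size)(merged)
    out = layers.Dense(vocab_size, activation="softmax")(layers.Dropout(0.3)(answer))

    model = keras.Model([story_in, query_in], out)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```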
On task 20 the RotLSTM performance reaches top accuracy using state size 10 while the LSTM incremetally improves until using state size 40 to 50.Signs of overfitting for the RotLSTM model can be observed more prominently than for the LSTM model on tasks 15 (basic deduction) and 17 (positional reasoning).Our models, both LSTM and RotLSTM, perform poorly on tasks 2 and 3 (factoid questions with 2 and 3 supporting facts, respectively) and 14 (time manipulation). These problem classes are solved very well using models that look over the input data many times and use an attention mechanism that allows the model to focus on the relevant input sentences to answer a question BID14 BID8. Our models only look at the input data once and we do not filter out irrelevant information. A limitation of the models in our experiments is only applying pairwise 2D rotations. Representations of past input can be larger groups of the cell state vector, thus 2D rotations might not fully exploit the benefits of transformations. In the future we hope to explore rotating groups of elements and multi-dimensional rotations. Rotating groups of elements of the cell state could potentially also force the models to learn a more structured representation of the world, similar to how forcing a model to learn specific representations of scenes, as presented in BID6, yields semantic representations of the scene. Rotations also need not be fully flexible. Introducing hard constraints on the rotations and what groups of parameters can be rotated might lead the model to learn richer memory representations. Future work could explore how adding such constraints impacts learning times and final performance on different datasets, but also look at what constraints can qualitatively improve the representation of long-term dependencies. In this work we presented prelimiary tests for adding rotations to simple models but we only used a toy dataset. The bAbI dataset has certain advantages such as being small thus easy to train many models on a single machine, not having noise as it is generated from a simulation, and having a wide range of tasks of various difficulties. However it is a toy dataset that has a very limited vocabulary and lacks the complexity of real world datasets (noise, inconsistencies, larger vocabularies, more complex language constructs, and so on). Another limitation of our evaluation is only using text, specifically question answering. To fully evaluate the idea of adding rotations to memory cells, in the future, we aim to look into incorporating our rotations on different domains and tasks including speech to text, translation, language generation, stock prices, and other common problems using real world datasets. Tuning the hyperparameters of the rotation models might give better insights and performance increases and is something we aim to incorporate in our training pipeline in the future. A brief exploration of the angles produced by u and the weight matrix W rot show that u does not saturate, thus rotations are in fact applied to our cell states and do not converge to 0 (or 360 degress). A more in-depth qualitative analysis of the rotation gate is planned for future work. Peeking into the activations of our rotation gates could help understand the behaviour of rotations and to what extent they help better represent long-term memory. A very successful and popular mutation of the LSTM is the Gated Recurrent Unit (GRU) unit BID1. 
The GRU only has an output, as opposed to both a cell state and an output, and uses fewer gates. In the future we hope to explore adding rotations to GRU units and whether we can obtain similar results. We have introduced a novel gating mechanism for RNN units that enables applying a parametrised transformation matrix to the cell state. We picked pairwise 2D rotations as the transformation and showed how this can be added to the popular LSTM units to create what we call RotLSTM. Figure 3: Accuracy comparison on training, validation (val) and test sets over 40 epochs for LSTM and RotLSTM models. The models were trained 10 times; shown are the average accuracy and, in faded colour, the standard deviation. Test set accuracy was computed every 10 epochs. We trained a simple model using RotLSTM units and compared it with the same model based on LSTM units. We show that for the LSTM-based architectures adding rotations has a positive impact on most bAbI tasks, making the training require fewer epochs to achieve similar or higher accuracy. On some tasks the RotLSTM model can use a lower-dimensional cell state vector and maintain its performance. Significant accuracy improvements of approximately 20% for the RotLSTM model over the LSTM model are visible on bAbI tasks 5 (three argument relations) and 18 (reasoning about size).
Adding a new set of weights to the LSTM that rotate the cell memory improves performance on some bAbI tasks.
566
scitldr
We address the problem of marginal inference for an exponential family defined over the set of permutation matrices. This problem is known to quickly become intractable as the size of the permutation increases, since it involves the computation of the permanent of a matrix, a #P-hard problem. We introduce Sinkhorn variational marginal inference as a scalable alternative, a method whose validity is ultimately justified by the so-called Sinkhorn approximation of the permanent. We demonstrate the effectiveness of our method on the problem of probabilistic identification of neurons in the worm C.elegans. Let P ∈ R n×n be a binary matrix representing a permutation of n elements (i.e. each row and column of P contains a unique 1). We consider the distribution over P defined as (1.1) p(P) = exp(⟨P, log L⟩_F) / Z_L, where ⟨A, B⟩_F is the Frobenius matrix inner product, log L is a parameter matrix and Z_L is the normalizing constant. Here we address the problem of marginal inference, i.e. computing the matrix of expectations ρ := E(P). This problem is known to be intractable since it requires access to Z_L, also known as the permanent of L, whose computation is a #P-hard problem. To overcome this difficulty we introduce Sinkhorn variational marginal inference, which can be computed efficiently and is straightforward to implement. Specifically, we approximate ρ as S(L), the Sinkhorn operator applied to L. S(L) is defined as the (infinite) successive row and column normalization of L (;, a limit that is known to result in a doubly stochastic matrix . In section 2 we argue that the Sinkhorn approximation is sensible, and in section 3 we describe the problem of probabilistic inference of neural identity in C.elegans and demonstrate that the Sinkhorn approximation produces the best results. Our argument is based on the well-known relation between marginal inference and the normalizing constant , valid for exponential families. Specifically, (1.1) defines an exponential family with sufficient statistic P and parameter log L. By virtue of Theorem 3.4 in, (2.1) log Z_L = sup_{µ∈M} {⟨µ, log L⟩_F − A*(µ)}, where M is the marginal polytope (here, the Birkhoff polytope, the set of doubly stochastic matrices) and A*(µ) is the dual (conjugate) function of log Z_L. Moreover, for a given L, the µ(L) achieving the supremum in (2.1) is exactly the matrix of marginals, µ(L) = ρ_L, and the dual function A*(µ(L)) coincides with the negative entropy of (1.1). Then, marginal inference of ρ_L and computation of the permanent Z_L = perm(L) are linked by the optimization problem in (2.2). As in any generic variational inference scheme , we obtain an approximate ρ by replacing the variational representation of Z_L in (2.1) by a different, more tractable optimization problem. Typically, the quality of the approximated ρ depends on how tight the approximation to Z_L is. Our approximation is based on replacing the intractable dual function A*(µ) by a component-wise entropy. In detail, the resulting variational representation (2.3) has a solution that is exactly S(L). By using the component-wise entropy in (2.1) we obtain an approximation of the normalizing constant, which we call the Sinkhorn permanent, perm_S(L). In the following proposition we provide bounds for this approximation. We note that the Sinkhorn approximation has recently been proposed independently . However, there the approximation is proposed rather heuristically, without any appeal to a theoretical framework.
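A minimal sketch of the Sinkhorn operator described above is given below; in practice a log-space implementation (as used in our experiments) is preferable to avoid numerical under/overflow, and the fixed iteration count stands in for running the normalization to convergence.

```python
import numpy as np

def sinkhorn_operator(L, n_iters=200):
    # Alternately normalise rows and columns of a positive matrix L until it is
    # (approximately) doubly stochastic; the result is used as the marginal estimate rho.
    P = np.array(L, dtype=float)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # row normalisation
        P /= P.sum(axis=0, keepdims=True)  # column normalisation
    return P
```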
Additionally, the so-called Bethe variational inference method is a rather general rationale for obtaining variational approximations in graphical models, where the dual function A * (µ) is approximated by the value it would take if the underlying Markov random field had a tree structure . This approximation has successfully been applied to permutations (; ; ;), where the corresponding approximate marginal B(L) is computed through belief propagation , enjoying also better theoretical guarantees than the Sinkhorn approximation. Indeed, for the Bethe approximation of the permanent, perm B (·) the following bounds are known (; However, there are also important computational differences. A single iteration of the Sinkhorn algorithms corresponds to a row and column normalization, but the message computations in the belief propagation-like routine for the Bethe approximation are more complex. Explicit formulae of such Sinkhorn and Bethe iterations are available in Appendix C. Fig 1 (b) shows that in practice the Bethe approximation also produces better permanent approximations, confirming theoretical predictions. We considered the simple case where n = 8 and the permanent and marginal can be computed by enumeration, so comparisons with ground truth are possible. However, and quite interestingly, in many cases (see Figs 1(a) and A.1(a) in the Appendix) the Sinkhorn approximation produced qualitatively better marginals, putting more mass on more non-zero entries than the Bethe approximation, regardless of possibly worse permanents. Additionally, we observed that for moderate n the Sinkhorn approximation scaled better. For example, if n = 710, each Bethe iteration took on average 0.035 seconds, while each Sinkhorn iteration took only 0.0027 seconds (see Fig A. 2 in the Appendix for details). Comparison of Bethe and Sinkhorn approximations. 1,000 submatrices of size n = 8 were randomly sampled from the C.elegans dataset described in section 3. (a) Examples of a (log) true marginal matrix ρ along with Sinkhorn and Bette approximation. The rightmost plot is a histogram of the log permanent across the samples. (b) Differences between approximate and true log permanent (left) and mean absolute errors of log marginals (right) for our two approximations. We considered additional 1,000'random' submatrices made by uniformly sampling entries between the minimum and maximum values of each C.elegans submatrix Finally, we note that sampling-based methods may be also used for marginal inference. Indeed, quite sophisticated samplers have been proposed to show polynomial approximability of the permanent ; however, their practical appeal is limited. In section 3 we show that an elementary MCMC sampler failed to produce sensible marginal inferences at reasonable time. The worm C.elegans is a unique species since their nervous system is stereotypical; i.e., the number of neurons (roughly, 300) and the connections between those neurons remain unchanged from animal to animal. Recent advances in neurotechnology have enabled whole brain imaging so that the long-standing fundamental question about how the activity in the worm brain relates to its behavior in the world can be now studied and settled. However, before that, a technical problem has to be solved: given volumetric images of the worm neurons have to be identified; that is, canonical labels (names) must be assigned to each. 
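The n = 8 comparison above can be reproduced in a few lines. The Sinkhorn log-permanent below evaluates the component-wise entropy at S(L), i.e. log perm_S(L) = Σ_ij S(L)_ij log(L_ij / S(L)_ij); this particular form is our reading of the elided display, and the brute-force permanent is only feasible at this small size.

```python
import numpy as np
from itertools import permutations

def exact_log_permanent(L):
    # Brute force over all n! permutations; only feasible for small n (here n = 8).
    n = L.shape[0]
    rows = np.arange(n)
    return float(np.log(sum(np.prod(L[rows, list(p)]) for p in permutations(range(n)))))

def sinkhorn_log_permanent(L, n_iters=200):
    S = sinkhorn_operator(L, n_iters)  # from the sketch in the previous section
    return float(np.sum(S * (np.log(L) - np.log(S))))

rng = np.random.default_rng(0)
L = rng.uniform(0.1, 1.0, size=(8, 8))   # stand-in for an 8x8 C.elegans submatrix
print(exact_log_permanent(L), sinkhorn_log_permanent(L))
```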
We applied our methodology for such probabilistic neural identification in the context of NeuroPAL , a multicolor C.elegans transgene where neuron colors were designed to facilitate neural identification (see Fig 1 for an example). Specifically, given n observed neurons represented as vectors in R 6 (position and color), we aim to estimate the matrix of marginal ρ such that ρ k,i is the probability that observed neuron k is identified with the canonical identity i. These probabilities are relevant as they provide uncertainty estimates for model predictions, giving a much more complete picture than point estimates (e.g. a permutation found via maximum likelihood). We consider a gaussian model for each canonical neuron, whose parameters (µ k, Σ k) are inferred beforehand from previously annotated worms (see for details). Let π denote the permutation so that π(k) is the canonical index of the k − th observed neuron. Then, the likelihood of observing data Y = (y k) writes as: (3.1) Suppose a flat prior is assumed over P. Then, it is plain to verify that equation (3.1) induces a posterior over P that has the form of (1.1), with L defined as Figure 2: A worm's head displaying the deterministic coloring scheme identical across all NeuroPAL worms, with neuron names (determined by a human) over each neuron. In the context of NeuroPAL we consider a downstream task involving the computation of the approximate probabilistic neural identifies ρ. Specifically, in this task a human is asked to manually label the neurons for which the model estimates are the most uncertain; i.e., the rows of ρ that are closest to the uniform distribution. As the human progressively annotates neurons this uncertainty resolves and the corresponding model update lead to an increases in identification accuracy for the remaining neurons. Ideally the human will only require a few annotations to reach a high accuracy, and therefore, as a proxy for approximation quality we measure how much faster accuracy increases in comparison to simple baselines; e.g., where at each time a neuron is randomly chosen. Results are shown in Fig 3, and further details are described in the Appendix. We considered several alternatives: i) Sinkhorn approximation, ii) Bethe approximation, iii) MCMC, iv) the random baseline described above, v) a naive baseline where uncertainty estimates are made by scaling only the rows of the likelihood matrix, i.e., without imposing any one-to-one assignment structure, and vi) a'ground truth', the protocol where the labels that are chosen are the ones where the model makes a wrong prediction (this oracle cannot be realized in practice). Results of Sinkhorn and Bethe approximations are similar but the former slightly better, presumably a consequence of more accurate estimates of low probability marginals (see Figs 1(a) and A.1(a)). They both are substantially better than any baseline other than the oracle. Contrarily, we see MCMC does not provide better than the naive baseline, suggesting lack of convergence for chain lengths leading to computational times comparable to the ones of approximated methods. We have introduced the Sinkhorn approximation for marginal inference, and our it is a sensible alternative to sampling, and it may provide faster, simpler and more accurate approximate marginals than the Bethe approximation, despite typically leading to worse permanent approximations. We leave for future work a thorough analysis of the relation between quality of permanent approximation and corresponding marginals. 
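A sketch of the identification pipeline described in this section is given below: Gaussian log-likelihoods of each observed neuron under each canonical model give the matrix log L, the Sinkhorn operator gives the approximate marginals ρ, and the most uncertain row is the one handed to the human annotator. The helper names are ours, and exponentiating log L directly (rather than working fully in log space) is a simplification.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood_matrix(Y, mus, Sigmas):
    # L[k, i] = log N(y_k; mu_i, Sigma_i): observed neuron k vs canonical identity i.
    return np.array([[multivariate_normal.logpdf(y, m, S) for m, S in zip(mus, Sigmas)]
                     for y in Y])

def approximate_marginals(logL, n_iters=200):
    # rho[k, i] ~ P(observed neuron k has canonical identity i), via the Sinkhorn operator.
    return sinkhorn_operator(np.exp(logL - logL.max()), n_iters)

def most_uncertain_neuron(rho):
    # Row whose marginal is closest to uniform (largest entropy) -> ask the human to label it.
    ent = -(rho * np.log(rho + 1e-12)).sum(axis=1)
    return int(np.argmax(ent))
```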
Also, it can be verified that S(L) = diag(x) L diag(y), where diag(x), diag(y) are some positive vectors x, y turned into diagonal matrices (Peyré et al., 2019). Additionally, we obtain the (log) Sinkhorn approximation of the permanent of L, perm_S(L), by evaluating S(L) in the problem it solves, (2.3). By simple algebra, and using the fact that S(L) is a doubly stochastic matrix, we see that, by combining the last three displays, the result follows. We used the dataset described in . It consists of ten NeuroPAL worm heads with available human labels, with the number of neurons n ranging from 180 to 195. Each of these worms is summarized through an n × n log-likelihood matrix L computed with the methods described in (, Supplemental Information). For both the Sinkhorn and Bethe approximations we used 200 iterations. These values led to the computation times described in Fig 1, and preliminary results showed they were sufficient to ensure convergence (that is, none of the results would change dramatically for a larger number of iterations). For the MCMC sampler we used the method described in . We used 100 chains of length 1000, and for each of them we kept as samples the iterates at multiples of 10, starting from iteration 500. All results were obtained on a desktop computer with an Intel Xeon W-2125 processor. Bethe approximation: the following is an efficient log-space implementation of the message passing algorithm described in (, Lemma 29), which was subsequently simplified by . The parameter eps is introduced for numerical stability. For each n up to 710, 1,000 submatrices of size n were randomly drawn from the ten available log-likelihood C.elegans matrices (see text in Appendix B; indexes were drawn with replacement). Error bars are omitted because they were too small to be noticed.
New methodology for variational marginal inference of permutations based on Sinkhorn algorithm, applied to probabilistic identification of neurons
567
scitldr
The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks. Experimental on various networks, including ResNet, Inception-v3 and MobileNet, show that (i) CLEVER is aligned with the robustness indication measured by the $\ell_2$ and $\ell_\infty$ norms of adversarial examples from powerful attacks, and (ii) defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores. To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifiers. Recent studies have highlighted the lack of robustness in state-of-the-art neural network models, e.g., a visually imperceptible adversarial image can be easily crafted to mislead a well-trained network BID28 BID9 BID3. Even worse, researchers have identified that these adversarial examples are not only valid in the digital space but also plausible in the physical world BID17 BID8. The vulnerability to adversarial examples calls into question safety-critical applications and services deployed by neural networks, including autonomous driving systems and malware detection protocols, among others. In the literature, studying adversarial examples of neural networks has twofold purposes: (i) security implications: devising effective attack algorithms for crafting adversarial examples, and (ii) robustness analysis: evaluating the intrinsic model robustness to adversarial perturbations to normal examples. Although in principle the means of tackling these two problems are expected to be independent, that is, the evaluation of a neural network's intrinsic robustness should be agnostic to attack methods, and vice versa, existing approaches extensively use different attack as a measure of robustness of a target neural network. Specifically, given a set of normal examples, the attack success rate and distortion of the corresponding adversarial examples crafted from a particular attack algorithm are treated as robustness metrics. Consequently, the network robustness is entangled with the attack algorithms used for evaluation and the analysis is limited by the attack capabilities. More importantly, the dependency between robustness evaluation and attack approaches can cause biased analysis. For example, adversarial training is a commonly used technique for improving the robustness of a neural network, accomplished by generating adversarial examples and retraining the network with corrected labels. However, while such an adversarially trained network is made robust to attacks used to craft adversarial examples for training, it can still be vulnerable to unseen attacks. 
Motivated by the evaluation criterion for assessing the quality of text and image generation that is completely independent of the underlying generative processes, such as the BLEU score for texts BID25 and the INCEPTION score for images BID27, we aim to propose a comprehensive and attack-agnostic robustness metric for neural networks. Stemming from a perturbation analysis of an arbitrary neural network classifier, we derive a universal lower bound on the minimal distortion required to craft an adversarial example from an original one, where the lower bound applies to any attack algorithm and any p norm for p ≥ 1. We show that this lower bound associates with the maximum norm of the local gradients with respect to the original example, and therefore robustness evaluation becomes a local Lipschitz constant estimation problem. To efficiently and reliably estimate the local Lipschitz constant, we propose to use extreme value theory BID6 for robustness evaluation. In this context, the extreme value corresponds to the local Lipschitz constant of our interest, which can be inferred by a set of independently and identically sampled local gradients. With the aid of extreme value theory, we propose a robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. We note that CLEVER is an attack-independent robustness metric that applies to any neural network classifier. In contrast, the robustness metric proposed in BID11, albeit attack-agnostic, only applies to a neural network classifier with one hidden layer. We highlight the main contributions of this paper as follows:• We propose a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. To the best of our knowledge, CLEVER is the first robustness metric that is attack-independent and can be applied to any arbitrary neural network classifier and scales to large networks for ImageNet.• The proposed CLEVER score is well supported by our theoretical analysis on formal robustness guarantees and the use of extreme value theory. Our robustness analysis extends the in BID11 from continuously differentiable functions to a special class of non-differentiable functions -neural+ networks with ReLU activations.• We corroborate the effectiveness of CLEVER by conducting experiments on state-of-theart models for ImageNet, including ResNet BID10, Inception-v3 BID29 and MobileNet . We also use CLEVER to investigate defended networks against adversarial examples, including the use of defensive distillation BID23 and bounded ReLU BID34. Experimental show that our CLEVER score well aligns with the attack-specific robustness indicated by the 2 and ∞ distortions of adversarial examples. One of the most popular formulations found in literature for crafting adversarial examples to mislead a neural network is to formulate it as a minimization problem, where the variable δ ∈ R d to be optimized refers to the perturbation to the original example, and the objective function takes into account unsuccessful adversarial perturbations as well as a specific norm on δ for assuring similarity. For instance, the success of adversarial examples can be evaluated by their cross-entropy loss BID28 BID9 or model prediction BID2. The norm constraint on δ can be implemented in a clipping manner BID18 or treated as a penalty function BID2. The p norm of δ, defined as δ p = (DISPLAYFORM0 for any p ≥ 1, is often used for crafting adversarial examples. 
In particular, when p = ∞, δ ∞ = max i∈{1, ...,d} |δ i | measures the maximal variation among all dimensions in δ. When p = 2, δ 2 becomes the Euclidean norm of δ. When p = 1, δ 1 = p i=1 |δ i | measures the total variation of δ. The state-of-the-art attack methods for ∞, 2 and 1 norms are the iterative fast gradient sign method (I-FGSM) BID9 BID18, Carlini and Wagner's attack (CW attack) BID2, and elastic-net attacks to deep neural networks (EAD) BID4, respectively. These attacks fall into the category of white-box attacks since the network model is assumed to be transparent to an attacker. Adversarial examples can also be crafted from a black-box network model using an ensemble approach BID20, training a substitute model, or employing zeroth-order optimization based attacks BID5. Since the discovery of vulnerability to adversarial examples BID28, various defense methods have been proposed to improve the robustness of neural networks. The rationale for defense is to make a neural network more resilient to adversarial perturbations, while ensuring the ing defended model still attains similar test accuracy as the original undefended network. Papernot et al. proposed defensive distillation BID23, which uses the distillation technique BID12 and a modified softmax function at the final layer to retrain the network parameters with the prediction probabilities (i.e., soft labels) from the original network. BID34 showed that by changing the ReLU function to a bounded ReLU function, a neural network can be made more resilient. Another popular defense approach is adversarial training, which generates and augments adversarial examples with the original training data during the network training stage. On MNIST, the adversarially trained model proposed by BID21 can successfully defend a majority of adversarial examples at the price of increased network capacity. Model ensemble has also been discussed to increase the robustness to adversarial examples BID30 BID19. In addition, detection methods such as feature squeezing BID33 and example reforming BID22 can also be used to identify adversarial examples. However, the CW attack is shown to be able to bypass 10 different detection methods BID1. In this paper, we focus on evaluating the intrinsic robustness of a neural network model to adversarial examples. The effect of detection methods is beyond our scope. BID28 compute global Lipschitz constant for each layer and use their product to explain the robustness issue in neural networks, but the global Lipschitz constant often gives a very loose bound. BID11 gave a robustness lower bound using a local Lipschitz continuous condition and derived a closed-form bound for a multi-layer perceptron (MLP) with a single hidden layer and softplus activation. Nevertheless, a closed-form bound is hard to derive for a neural network with more than one hidden layer. BID31 utilized terminologies from topology to study robustness. However, no robustness bounds or estimates were provided for neural networks. On the other hand, works done by BID7; BID15 b); BID14 focus on formally verifying the viability of certain properties in neural networks for any possible input, and transform this formal verification problem into satisfiability modulo theory (SMT) and large-scale linear programming (LP) problems. These SMT or LP based approaches have high computational complexity and are only plausible for very small networks. 
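For concreteness, the following is a minimal PyTorch sketch of the untargeted I-FGSM attack referenced above (repeated signed-gradient steps projected back onto an ℓ∞ ball); the hyperparameters and the [0, 1] image range are illustrative assumptions rather than the exact settings of any cited implementation.

```python
import torch
import torch.nn.functional as F

def i_fgsm(model, x, label, eps=0.1, alpha=0.01, iters=50):
    # Untargeted iterative FGSM in the L_inf norm.
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project to the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)              # stay a valid image
    return x_adv.detach()
```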
Intuitively, we can use the distortion of adversarial examples found by a certain attack algorithm as a robustness metric. For example, BID0 proposed a linear programming (LP) formulation to find adversarial examples and use the distortions as the robustness metric. They observe that the LP formulation can find adversarial examples with smaller distortions than other gradient-based attacks like L-BFGS BID28. However, the distortion found by these algorithms is an upper bound of the true minimum distortion and depends on specific attack algorithms. These methods differ from our proposed robustness measure CLEVER, because CLEVER is an estimation of the lower bound of the minimum distortion and is independent of attack algorithms. Additionally, unlike LP-based approaches which are impractical for large networks, CLEVER is computationally feasible for large networks like Inception-v3. The concept of minimum distortion and upper/lower bound will be formally defined in Section 3. In this section, we provide formal robustness guarantees of a classifier in Theorem 3.2. Our robustness guarantees are general since they only require a mild assumption on Lipschitz continuity of the classification function. For differentiable classification functions, our are consistent with the main theorem in BID11 but are obtained by a much simpler and more BID11 ) is in fact a special case of our analysis. We start our analysis by defining the notion of adversarial examples, minimum p distortions, and lower/upper bounds. All the notations are summarized in TAB0. Definition 3.1 (perturbed example and adversarial example). Let DISPLAYFORM0 DISPLAYFORM1 An adversarial example is a perturbed example x a that changes c(x 0). A successful untargeted attack is to find a x a such that c(x a) = c(x 0) while a successful targeted attack is to find a x a such that c(x a) = t given a target class t = c(x 0). Definition 3.2 (minimum adversarial distortion ∆ p,min). Given an input vector x 0 of a classifier f, the minimum p adversarial distortion of x 0, denoted as ∆ p,min, is defined as the smallest ∆ p over all adversarial examples of x 0. Definition 3.3 (lower bound of ∆ p,min). Suppose ∆ p,min is the minimum adversarial distortion of DISPLAYFORM2, is defined such that any perturbed examples of x 0 with δ p ≤ β L are not adversarial examples. Definition 3.4 (upper bound of ∆ p,min). Suppose ∆ p,min is the minimum adversarial distortion of x 0. An upper bound of ∆ p,min, denoted by β U where β U ≥ ∆ p,min, is defined such that there exists an adversarial example of x 0 with δ p ≥ β U.The lower and upper bounds are instance-specific because they depend on the input x 0. While β U can be easily given by finding an adversarial example of x 0 using any attack method, β L is not easy to find. β L guarantees that the classifier is robust to any perturbations with δ p ≤ β L, certifying the robustness of the classifier. Below we show how to derive a formal robustness guarantee of a classifier with Lipschitz continuity assumption. Specifically, our analysis obtains a lower bound of DISPLAYFORM3 Lemma 3.1 (Lipschitz continuity and its relationship with gradient norm (Paulavičius &Žilinskas, 2006) ). Let S ⊂ R d be a convex bounded closed set and let h(x): S → R be a continuously differentiable function on an open set containing S. 
Then, h(x) is a Lipschitz function with Lipschitz constant L q if the following inequality holds for any x, y ∈ S: DISPLAYFORM4 where DISPLAYFORM5 ) is the gradient of h(x), and DISPLAYFORM6 Given Lemma 3.1, we then provide a formal guarantee to the lower bound β L. Theorem 3.2 (Formal guarantee on lower bound β L for untargeted attack). Let x 0 ∈ R d and f: R d → R K be a multi-class classifier with continuously differentiable components f i and let c = argmax 1≤i≤K f i (x 0) be the class which f predicts for x 0. For all δ ∈ R d with DISPLAYFORM7 argmax 1≤i≤K f i (x 0 + δ) = c holds with DISPLAYFORM8 is a lower bound of minimum distortion. The intuitions behind Theorem 3.2 is shown in FIG0 with an one-dimensional example. The function value g(x) = f c (x) − f j (x) near point x 0 is inside a double cone formed by two lines passing (x 0, g(x 0)) and with slopes equal to ±L q, where L q is the (local) Lipschitz constant of g(x) near x 0. In other words, the function value of g(x) around x 0, i.e. g(x 0 + δ) can be bounded by g(x 0), δ and the Lipschitz constant L q. When g(x 0 + δ) is decreased to 0, an adversarial example is found and the minimal change of δ is DISPLAYFORM9 Lq. The complete proof is deferred to Appendix A. DISPLAYFORM10 is the Lipschitz constant of the function involving cross terms: f c (x) − f j (x), hence we also call it cross Lipschitz constant following BID11.To distinguish our analysis from BID11, we show in Corollary 3.2.1 that we can obtain the same in BID11 by Theorem 3.2. In fact, the analysis in BID11 ) is a special case of our analysis because the authors implicitly assume Lipschitz continuity on f i (x) when requiring f i (x) to be continuously differentiable. They use local Lipschitz constant (L q,x0) instead of global Lipschitz constant (L q) to obtain a tighter bound in the adversarial perturbation δ. DISPLAYFORM11. By Theorem 3.2, we obtain the bound in BID11 ): DISPLAYFORM12 An important use case of Theorem 3.2 and Corollary 3.2.1 is the bound for targeted attack: Corollary 3.2.2 (Formal guarantee on β L for targeted attack). Assume the same notation as in Theorem 3.2 and Corollary 3.2.1. For a specified target class j, we have δ p ≤ min DISPLAYFORM13 In addition, we further extend Theorem 3.2 to a special case of non-differentiable functions -neural networks with ReLU activations. In this case the Lipchitz constant used in Lemma 3.1 can be replaced by the maximum norm of directional derivative, and our analysis above will go through. Lemma 3.3 (Formal guarantee on β L for ReLU networks).3 Let h(·) be a l-layer ReLU neural network with W i as the weights for layer i. We ignore bias terms as they don't contribute to gradient. DISPLAYFORM14 is the one-sided directional direvative, then Theorem 3.2, Corollary 3.2.1 and Corollary 3.2.2 still hold. In this section, we provide an algorithm to compute the robustness metric CLEVER with the aid of extreme value theory, where CLEVER can be viewed as an efficient estimator of the lower bound β L and is the first attack-agnostic score that applies to any neural network classifiers. Recall in Section 3 2 proof deferred to Appendix B 3 proof deferred to Appendix C we show that the lower bound of network robustness is associated with g(x 0) and its cross Lipschitz constant L j q,x0, where g(DISPLAYFORM0 is readily available at the output of a classifier and L j q,x0 is defined as max x∈Bp(x0,R) ∇g(x) q. 
Although ∇g(x) can be calculated easily via back propagation, computing L j q,x0 is more involved because it requires to obtain the maximum value of ∇g(x) q in a ball. Exhaustive search on low dimensional x in B p (x 0, R) seems already infeasible, not to mention the image classifiers with large feature dimensions of our interest. For instance, the feature dimension d = 784, 3072, 150528 for MNIST, CIFAR and ImageNet respectively. One approach to compute L j q,x0 is through sampling a set of points x (i) in a ball B p (x 0, R) around x 0 and taking the maximum value of ∇g(x (i) ) q. However, a significant amount of samples might be needed to obtain a good estimate of max ∇g(x) q and it is unknown how good the estimate is compared to the true maximum. Fortunately, Extreme Value Theory ensures that the maximum value of random variables can only follow one of the three extreme value distributions, which is useful to estimate max ∇g(x) q with only a tractable number of samples. It is worth noting that although BID32 also applied extreme value theory to estimate the Lipschitz constant. However, there are two main differences between their work and this paper. First of all, the sampling methodology is entirely different. BID32 calculates the slopes between pairs of sample points whereas we directly take samples on the norm of gradient as in Lemma 3.1. Secondly, the functions considered in BID32 are only one-dimensional as opposed to the high-dimensional classification functions considered in this paper. For comparison, we show in our experiment that the approach in BID32, denoted as SLOPE in Table 3 and FIG3, perform poorly for high-dimensional classifiers such as deep neural networks.4.1 ESTIMATE L j q,x0 VIA EXTREME VALUE THEORY When sampling a point x uniformly in B p (x 0, R), ∇g(x) q can be viewed as a random variable characterized by a cumulative distribution function (CDF). For the purpose of illustration, we derived the CDF for a 2-layer neural network in Theorem D.1. 4 For any neural networks, suppose we have n samples {∇g(x (i) ) q }, and denote them as a sequence of independent and identically distributed (iid) random variables Y 1, Y 2, · · ·, Y n, each with CDF F Y (y). The CDF of max{Y 1, · · ·, Y n}, denoted as F n Y (y), is called the limit distribution of F Y (y). Fisher-TippettGnedenko theorem says that F n Y (y), if exists, can only be one of the three family of extreme value distributions -the Gumbel class, the Fréchet class and the reverse Weibull class. Theorem 4.1 (Fisher-Tippett-Gnedenko Theorem). If there exists a sequence of pairs of real numbers (a n, b n) such that a n > 0 and lim n→∞ F n Y (a n y + b n) = G(y), where G is a non-degenerate distribution function, then G belongs to either the Gumbel class (Type I), the Fréchet class (Type II) or the Reverse Weibull class (Type III) with their CDFs as follows: DISPLAYFORM1 Fréchet class (Type II): DISPLAYFORM2 if y ≥ a W, where a W ∈ R, b W > 0 and c W > 0 are the location, scale and shape parameters, respectively. Theorem 4.1 implies that the maximum values of the samples follow one of the three families of distributions. If g(x) has a bounded Lipschitz constant, ∇g(x (i) ) q is also bounded, thus its limit distribution must have a finite right end-point. We are particularly interested in the reverse Weibull class, as its CDF has a finite right end-point (denoted as a W). The right end-point reveals the upper limit of the distribution, known as the extreme value. 
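A sketch of the sampling step described above follows: points are drawn around x0, and the q-norm of ∇g(x) = ∇(f_c − f_j)(x) is recorded at each point; per Theorem 3.2 the norm order q is the dual of the perturbation norm p (q = 1 for p = ∞, q = 2 for p = 2). Sampling uniformly from an ℓ∞ box is a simplification of sampling from the ball B_p(x0, R).

```python
import torch

def sample_grad_norms(model, x0, c, j, R=5.0, n_samples=1024, q=2):
    # ||grad_x (f_c(x) - f_j(x))||_q at points drawn around x0.
    norms = []
    for _ in range(n_samples):
        delta = (torch.rand_like(x0) * 2.0 - 1.0) * R     # uniform in an L_inf box of radius R
        x = (x0 + delta).detach().requires_grad_(True)
        out = model(x.unsqueeze(0)).squeeze(0)            # class scores f(x)
        g = out[c] - out[j]
        grad = torch.autograd.grad(g, x)[0]
        norms.append(grad.flatten().norm(p=q).item())
    return norms
```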
The extreme value is exactly the unknown local cross Lipschitz constant L j q,x0 we would like to estimate in this paper. To estimate L j q,x0, we first generate N s samples of x (i) over a fixed ball B p (x 0, R) uniformly and independently in each batch with a total of N b batches. We then compute ∇g(x (i) ) q and store the maximum values of each batch in set S. Next, with samples in S, we perform a maximum likelihood estimation of reverse Weibull distribution parameters, and the location estimateâ W is used as an estimate of L j q,x0. Given an instance x 0, its classifier f (x 0) and a target class j, a targeted CLEVER score of the classifier's robustness can be computed via g(x 0) and L j q,x0. Similarly, untargeted CLEVER scores can be computed. With the proposed procedure of estimating L j q,x0 described in Section 4.1, we summarize the flow of computing CLEVER score for both targeted attacks and un-targeted attacks in Algorithm 1 and 2, respectively. Algorithm 1: CLEVER-t, compute CLEVER score for targeted attack Input: a K-class classifier f (x), data example x 0 with predicted class c, target class j, batch size N b, number of samples per batch N s, perturbation norm p, maximum perturbation R Result: CLEVER Score µ ∈ R + for target class DISPLAYFORM0 Algorithm 2: CLEVER-u, compute CLEVER score for un-targeted attack Input: Same as Algorithm 1, but without a target class j Result: CLEVER score ν ∈ R + for un-targeted attack DISPLAYFORM1 We conduct experiments on CIFAR-10 (CIFAR for short), MNIST, and ImageNet data sets. For the former two smaller datasets CIFAR and MNIST, we evaluate CLEVER scores on four relatively small networks: a single hidden layer MLP with softplus activation (with the same number of hidden units as in BID11), a 7-layer AlexNet-like CNN (with the same structure as in BID2), and the 7-layer CNN with defensive distillation BID23 ) (DD) and bounded ReLU BID34 ) (BReLU) defense techniques employed. For ImageNet data set, we use three popular deep network architectures: a 50-layer Residual Network BID10 ) (ResNet-50), Inception-v3 BID29 and MobileNet . They were chosen for the following reasons: (i) they all yield (close to) state-of-theart performance among equal-sized networks; and (ii) their architectures are significantly different with unique building blocks, i.e., residual block in ResNet, inception module in Inception net, and depthwise separable convolution in MobileNet. Therefore, their diversity in network architectures is appropriate to test our robustness metric. For MobileNet, we set the width multiplier to 1.0, achieving a 70.6% accuracy on ImageNet. We used public pretrained weights for all ImageNet models 5.In all our experiments, we set the sampling parameters N b = 500, N s = 1024 and R = 5. For targeted attacks, we use 500 test-set images for CIFAR and MNIST and use 100 test-set images for ImageNet; for each image, we evaluate its targeted CLEVER score for three targets: a random target class, a least likely class (the class with lowest probability when predicting the original example), and the top-2 class (the class with largest probability except for the true class, which is usually the easiest target to attack). We also conduct untargeted attacks on MNIST and CIFAR for 100 test-set images, and evaluate their untargeted CLEVER scores. Our experiment code is publicly available 6. Figure 3. 
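The core of Algorithm 1 can be summarised in a short sketch: fit a reverse Weibull distribution (SciPy's weibull_max, whose support is bounded above by its location parameter) to the per-batch maxima of the sampled gradient norms, and use the fitted location as the estimate of the local cross Lipschitz constant. The helper below assumes the batch maxima and the margin g(x0) = f_c(x0) − f_j(x0) have already been computed, e.g. with the sampling sketch above.

```python
import numpy as np
from scipy.stats import weibull_max

def clever_score(batch_maxima, g_x0, R=5.0):
    # MLE fit of a reverse Weibull to the N_b batch maxima; the location parameter a_W
    # (finite right end-point) estimates the local cross Lipschitz constant.
    c_hat, loc_hat, scale_hat = weibull_max.fit(batch_maxima)
    return min(g_x0 / loc_hat, R)

# Targeted score: one call per target class j.  Untargeted score: the minimum of the
# targeted scores over all classes j != c, as in Algorithm 2.
```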
If the p-value is greater than 0.05, the null hypothesis cannot be rejected, meaning that the underlying data samples fit a reverse Weibull distribution well. Figure 3 shows that all numbers are close to 100%, validating the use of reverse Weibull distribution as an underlying distribution of gradient norm samples empirically. Therefore, the fitted location parameter of reverse Weibull distribution (i.e., the extreme value),â W, can be used as a good estimation of local cross Lipschitz constant to calculate the CLEVER score. The exact numbers are shown in TAB5 in Appendix E. All numbers for each model are close to 100%, indicating S fits reverse Weibull distributions well. We apply the state-of-the-art white-box attack methods, iterative fast gradient sign method (I-FGSM) BID9 BID18 and Carlini and Wagner's attack (CW) BID2, to find adversarial examples for 11 networks, including 4 networks trained on CIFAR, 4 networks trained on MNIST, and 3 networks trained on ImageNet. For CW attack, we run 1000 iterations for ImageNet and CIFAR, and 2000 iterations for MNIST, as MNIST has shown to be more difficult to attack BID4. Attack learning rate is individually tuned for each model: 0.001 for Inception-v3 and ResNet-50, 0.0005 for MobileNet and 0.01 for all other networks. For I-FGSM, we run 50 iterations and choose the optimal ∈ {0.01, 0.025, 0.05, 0.1, 0.3, 0.5, 0.8, 1.0} to achieve the smallest ∞ distortion for each individual image. For defensively distilled (DD) networks, 50 iterations of I-FGSM are not sufficient; we use 250 iterations for CIFAR-DD and 500 iterations for MNIST-DD to achieve a 100% success rate. For the problem to be non-trivial, images that are classified incorrectly are skipped. We report 100% attack success rates for all the networks, and thus the average distortion of adversarial examples can indicate the attack-specific robustness of each network. For comparison, we compute the CLEVER scores for the same set of images and attack targets. To the best of our knowledge, CLEVER is the first attack-independent robustness score that is capable of handling the large networks studied in this paper, so we directly compare it with the attack-induced distortion metrics in our study. We evaluate the effectiveness of our CLEVER score by comparing the upper bound β U (found by attacks) and CLEVER score, where CLEVER serves as an estimated lower bound, β L. Table 3 compares the average 2 and ∞ distortions of adversarial examples found by targeted CW and I-FGSM attacks and the corresponding average targeted CLEVER scores for 2 and ∞ norms, and FIG3 visualizes the for ∞ norm. Similarly, Table 2 compares untargeted CW and I-FGSM attacks with untargeted CLEVER scores. As expected, CLEVER is smaller than the distortions of adversarial images in most cases. More importantly, since CLEVER is independent of attack algorithms, the reported CLEVER scores can roughly indicate the distortion of the best possible attack in terms of a specific p distortion. The average 2 distortion found by CW attack is close to the 2 CLEVER score, indicating CW is a strong 2 attack. In addition, when a defense mechanism (Defensive Distillation or Bounded ReLU) is used, the corresponding CLEVER scores are consistently increased (except for CIFAR-BReLU), indicating that the network is indeed made more resilient to adversarial perturbations. For CIFAR-BReLU, both CLEVER scores and p norm of adversarial examples found by CW attack decrease, implying that bound ReLU is an ineffective defense for CIFAR. 
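The goodness-of-fit check behind the p-values above can be reproduced with a Kolmogorov–Smirnov test against the fitted reverse Weibull; the snippet below is a sketch of that check, not the exact evaluation script.

```python
from scipy.stats import kstest, weibull_max

def fits_reverse_weibull(batch_maxima, alpha=0.05):
    # If p > alpha, the null hypothesis (samples follow the fitted reverse Weibull)
    # cannot be rejected.
    params = weibull_max.fit(batch_maxima)
    _, p_value = kstest(batch_maxima, "weibull_max", args=params)
    return p_value > alpha, p_value
```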
CLEVER scores can be seen as a security checkpoint for unseen attacks. For example, if there is a substantial gap in distortion between the CLEVER score and the considered attack algorithms, it may suggest the existence of a more effective attack that can close the gap. Since CLEVER score is derived from an estimation of the robustness lower bound, we further verify the viability of CLEVER per each example, i.e., whether it is usually smaller than the upper bound found by attacks. TAB4 shows the percentage of inaccurate estimations where the CLEVER score is larger than the distortion of adversarial examples found by CW and I-FGSM attacks in three ImageNet networks. We found that CLEVER score provides an accurate estimation for most of the examples. For MobileNet and Resnet-50, our CLEVER score is a strict lower bound of these two attacks for more than 96% of tested examples. For Inception-v3, the condition of strict lower bound Table 2: Comparison between the average untargeted CLEVER score and distortion found by CW and I-FGSM untargeted attacks. DD and BReLU represent Defensive Distillation and Bounded ReLU defending methods applied to the baseline CNN network. Table 3: Comparison of the average targeted CLEVER scores with average ∞ and 2 distortions found by CW, I-FSGM attacks, and the average scores calculated by using the algorithm in BID32 (denoted as SLOPE) to estimate Lipschitz constant. DD and BReLU denote Defensive Distillation and Bounded ReLU defending methods applied to the CNN network. We did not include SLOPE in ImageNet networks because it has been shown to be ineffective even for smaller networks.(a) avergage ∞ distortion of CW and I-FGSM targeted attacks, and CLEVER and SLOPE estimation. Some very large SLOPE estimates (in parentheses) exceeding the maximum possible ∞ distortion are reported as 1. Random Target Top-2 Target CW I-FGSM CLEVER SLOPE CW I-FGSM CLEVER BID32. SLOPE significantly exceeds the distortions found by attacks, thus it is an inappropriate estimation of lower bound β L.is worse (still more than 75%), but we found that in these cases the attack distortion only differs from our CLEVER score by a fairly small amount. In Figure 5 we show the empirical CDF of the gap between CLEVER score and the 2 norm of adversarial distortion generated by CW attack for the same set of images in TAB4. In Figure 6, we plot the 2 distortion and CLEVER scores for each DISPLAYFORM0 0% 0% 0% 2% 0% 0% 0% 0% 0% 0% 0% Resnet-50 4% 0% 0% 0% 2% 0% 0% 0% 1% 0% 0% 0% Inception-v3 25% 0% 0% 0% 23% 0% 0% 0% 15% 0% 0% 0% (a individual image. A positive gap indicates that CLEVER (estimated lower bound) is indeed less than the upper bound found by CW attack. Most images have a small positive gap, which signifies the near-optimality of CW attack in terms of 2 distortion, as CLEVER suffices for an estimated capacity of the best possible attack. In Figure 7, we vary the number of samples (N b = 50, 100, 250, 500) and compute the 2 CLEVER scores for three large ImageNet models, Inception-v3, ResNet-50 and MobileNet. We observe that 50 or 100 samples are usually sufficient to obtain a reasonably accurate robustness estimation despite using a smaller number of samples. On a single GTX 1080 Ti GPU, the cost of 1 sample (with N s = 1024) is measured as 2.9 s for MobileNet, 5.0 s for ResNet-50 and 8.9 s for Inception-v3, thus the computational cost of CLEVER is feasible for state-of-the-art large-scale deep neural networks. Additional figures for MNIST and CIFAR datasets are given in Appendix E. 
In this paper, we propose the CLEVER score, a novel and generic metric to evaluate the robustness of a target neural network classifier to adversarial examples. Compared to the existing robustness evaluation approaches, our metric has the following advantages: (i) attack-agnostic; (ii) applicable to any neural network classifier; (iii) comes with strong theoretical guarantees; and (iv) is computationally feasible for large neural networks. Our extensive experiments show that the CLEVER score well matches the practical robustness indication of a wide range of natural and defended networks. A PROOF OF THEOREM 3.2Proof. According to Lemma 3.1, the assumption that g(DISPLAYFORM0 Let x = x 0 + δ and y = x 0 in, we get DISPLAYFORM1 When g(x 0 + δ) = 0, an adversarial example is found. As indicated by, DISPLAYFORM2 no adversarial examples can be found: DISPLAYFORM3 Finally, to achieve argmax 1≤i≤K f i (x 0 + δ) = c, we take the minimum of the bound on δ p in (A) over j = c. I.e. if DISPLAYFORM4, the classifier decision can never be changed and the attack will never succeed. B PROOF OF COROLLARY 3.2.1Proof. By Lemma 3.1 and let g = f c − f j, we get L j q,x0 = max y∈Bp(x0,R) ∇g(y) q = max y∈Bp(x0,R) ∇f j (y) − ∇f c (y) q, which then gives the bound in Theorem 2.1 of BID11. DISPLAYFORM5 Proof. For any x, y, let d = y−x y−x p be the unit vector pointing from x to y and r = y − x p. Define uni-variate function u(z) = h(x + zd), then u = h(x) and u(r) = h(y) and observe that D + h(x + zd; d) and D + h(x + zd; −d) are the right-hand and left-hand derivatives of u(z), we have DISPLAYFORM6 For ReLU network, there can be at most finite number of points in z ∈ (0, r) such that g (z) does not exist. This can be shown because each discontinuous z is caused by some ReLU activation, and there are only finite combinations. Let 0 = z 0 < z 1 < · · · < z k−1 < z k = 1 be those points. Then, using the fundamental theorem of calculus on each interval separately, there existsz i ∈ (z i, z i−1) for each i such that Proof. The j th output of a one-hidden-layer neural network can be written as DISPLAYFORM7 DISPLAYFORM8 where σ(z) = max(z, 0) is ReLU activation function, W and V are the weight matrices of the first and second layer respectively, and w r is the r th row of W. Thus, we can compute g(x) and ∇g(x) q below:. The red dash line encloses the ball B 2 (x 0, R 1) and the blue dash line encloses a larger ball B 2 (x 0, R 2). If we draw samples uniformly within the balls, the probability of ∇g(x) 2 = y is proportional to the intersected volumes of the ball and the regions with ∇g(x) 2 = y. DISPLAYFORM9 As illustrated in FIG7, the hyperplanes w r x + b r = 0, r ∈ {1, . . ., U} divide the d dimensional spaces R d into different regions, with the interior of each region satisfying a different set of inequality constraints, e.g. w r+ x + b r+ > 0 and w r− x + b r− < 0. Given x, we can identify which region it belongs to by checking the sign of w r x + b r for each r. Notice that the gradient norm is the same for all the points in the same region, i.e. for any x 1, x 2 satisfying I(w r x 1 + b r) = I(w r x 2 + b r) ∀r, we have ∇g(x 1) q = ∇g(x 2) q. Since there can be at most M = d i=0 U i different regions for a d-dimensional space with U hyperplanes, ∇g(x) q can take at most M different values. 
Therefore, if we perform uniform sampling in a ball B p (x 0, R) centered at x 0 with radius R and denote ∇g(x) q as a random variable Y, the probability distribution of Y is discrete and its CDF is piece-wise constant with at most M pieces. Without loss of generality, assume there are M 0 ≤ M distinct values for Y and denote them as m, m,..., m (M0) in an increasing order, the CDF of Y, denoted as F Y (y), is the following: E.1 PERCENTAGE OF EXAMPLES HAVING P VALUE > 0.05 TAB5 shows the percentage of examples where the null hypothesis cannot be rejected by K-S test, indicating that the maximum gradient norm samples fit reverse Weibull distribution well. Figure 3. FIG10 shows the 2 CLEVER score with different number of samples (N b = 50, 100, 250, 500) for MNIST and CIFAR models. For most models except MNIST-BReLU, reducing the number of samples only change CLEVER scores very slightly. For MNIST-BReLU, increasing the number of samples improves the estimated lower bound, suggesting that a larger number of samples is preferred. In practice, we can start with a relatively small N b = a, and also try 2a, 4a, · · · samples to see if CLEVER scores change significantly. If CLEVER scores stay roughly the same despite increasing N b, we can conclude that using N b = a is sufficient. DISPLAYFORM10
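To close this appendix, here is a small numerical illustration of the guarantee proved above, namely that perturbations with norm below g(x0)/L_q cannot change the prediction of a one-hidden-layer ReLU network. The Monte-Carlo maximum only approximates the true local cross-Lipschitz constant, so this is an illustration rather than a certificate, and all sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, U, K = 4, 16, 3                               # input dim, hidden units, classes (arbitrary)
W, b = rng.normal(size=(U, d)), rng.normal(size=U)
V = rng.normal(size=(K, U))

def f(x):                                        # one-hidden-layer ReLU network
    return V @ np.maximum(W @ x + b, 0.0)

def grad_g(x, c, j):                             # gradient of g = f_c - f_j
    mask = (W @ x + b > 0).astype(float)         # constant within each activation region
    return ((V[c] - V[j]) * mask) @ W

x0 = rng.normal(size=d)
c = int(np.argmax(f(x0)))
j = (c + 1) % K
R = 0.5
# Monte-Carlo estimate of the local cross-Lipschitz constant over the L2 ball B(x0, R).
dirs = rng.normal(size=(5000, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
samples = x0 + dirs * (R * rng.uniform(size=(5000, 1)) ** (1.0 / d))
L_hat = max(np.linalg.norm(grad_g(x, c, j)) for x in samples)
print("estimated safe radius for class pair (c, j):", (f(x0)[c] - f(x0)[j]) / L_hat)
```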
We propose the first attack-independent robustness metric, a.k.a. CLEVER, that can be applied to any neural network classifier.
568
scitldr
Multi-agent collaboration is required by numerous real-world problems. Although distributed setting is usually adopted by practical systems, local range communication and information aggregation still matter in fulfilling complex tasks. For multi-agent reinforcement learning, many previous studies have been dedicated to design an effective communication architecture. However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity. Such design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available. Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme. By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way. Particularly, it enables each agent to spontaneously decide when and who to send messages based on its observed states. In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner. The agents also learn how to adaptively aggregate the received messages and its own hidden states to execute actions. Various experiments have been conducted to demonstrate that SSoC really learns intelligent message passing among agents located far apart. With such agile communications, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines. Many real-world applications involve participation of multiple agents, for example, multi-robot control BID12, network packet delivery BID20 and autonomous vehicles planning BID0, etc.. Learning such systems is ideally required to be autonomous (e.g., using reinforcement learning). Recently, with the rise of deep learning, deep reinforcement learning (RL) has demonstrated many exciting in several challenging scenarios e.g. robotic manipulation BID3, visual navigation BID22 BID10, as well as the well-known application in game playing BID13 etc.. However, unlike its success in solving single-agent tasks, deep RL still faces many challenges in solving multi-agent learning scenarios. Modeling multiple agents has two extreme solutions: one is treating all agents as an unity to apply a single centralized framework, the other is modelling the agents as completely independent learners. Studies following the former design are often known as "centralized approach", for example BID18 BID14 etc. The obvious advantage of this class of approaches is a good guarantee of optimality since it is equivalent to the single agent Markov decision process (MDP) essentially. However, it is usually unfeasible to assume a global controller that knows everything about the environment in practice. The other class of methods can be marked as "independent multi-agent reinforcement learning". These approaches assumes a totally independent setting in which the agents treat all others as a part of the observed environment. BID2 has pointed out that such a setup will suffer from the problem of non-stationarity, which renders it hard to learn an optimal joint policy. In essence, there are three key factors that determine a communication. That is when, where and how the participants initiate the communication. 
Most of existing approaches, including the abovementioned Meanfield and Commnet, try to predefine each ingredient and thus lead to an inflexible communication architecture. Recently, VAIN BID4 and ATOC BID6 incorporate attentional communication for collaborative multi-agent reinforcement learning. Compared with Meanfield and Commnet, VAIN and ATOC have made one step further towards more flexible communication. However, the step is still limited. Take ATOC as an example, although it learns a dynamic attention to diversify agent messages, the message flow is only limited to the local range. This is unfavorable for learning complex and long range communications. The communication time is also specified manually (every ten steps). Hence it is requisite to find a new method that allows more flexible communication on both learnable time and scopes. In this regard, we propose a new solution with learnable spontaneous communication behaviours and self-organizing message flow among agents. The proposed architecture is named as "Spontaneous and Self-Organizing Communication" (SSoC) network. The key to such a spontaneous communication lies in the design that the communication is treated as an action to be learned in a reinforcement manner. The corresponding action is called "Speak". Each agent is eligible to take such an action based on its current observation. Once an agent decides to "Speak", it sends a message to partners within the communication scope. In the next step, agents receiving this message will decide whether to pass the message forward to more distant agents or keep silence. This is exactly how SSoC distinguishes itself from existing approaches. Instead of predestining when and who will participate in the communication, SSoC agents start communication only when necessary and stop transferring received messages if they are useless. A self-organizing communication policy is learned via maximizing the total collaborative reward. The communication process of SSoC is depicted in Fig.1. It shows an example of the message flow among four communicating agents. Specifically, agent 3 sends a message to ask for help for remote partners. Due to agent 3's communication range, the message can be seen only by agent 1. Then agent 1 decides to transfer the collected message to its neighbors. Finally agent 2 and agent 4 read the messages from agent 3. These two agents are directly unreachable from agent 3. In this way, each agent learns to send or transfer messages spontaneously and finally form a communication route. Compared with the communication channels predefined in previous works, the communication here is dynamically changing according to real needs of the participating agents. Hence the communication manner forms a self-organizing mechanism. We instantiate SSoC with a policy network with four functional units as shown in FIG0. Besides the agent's original action, an extra "Speak" action is output based on the current observation and hidden states. Here we simply design "Speak" as a binary {0, 1} output. Hence it works as a "switch" to control whether to send or transfer a message. The "Speak" action determines when and who to communicate in a fully spontaneous manner. A communication structure will naturally emerge after several steps of message propagation. Here in our SSoC method, the "Speak" policy is learned by a reward-driven reinforcement learning algorithm. The assumption is that a better message propagation strategy should also lead to a higher accumulated reward. 
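A minimal sketch of such an agent, assuming a PyTorch implementation, is given below; the module and argument names are hypothetical, and the real model additionally contains the LWU, MCU and ACU units described later, but it shows how the binary "Speak" head gates the outgoing message.

```python
import torch
import torch.nn as nn

class SSoCAgent(nn.Module):
    # Besides the task action, the agent emits a binary "Speak" action that works
    # as a switch: when it is 0, the outgoing message is zeroed and thus hidden
    # from the neighbouring agents.
    def __init__(self, obs_dim, hidden_dim, n_actions, msg_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.action_head = nn.Linear(hidden_dim + msg_dim, n_actions)
        self.speak_head = nn.Linear(hidden_dim, 2)        # {0: silent, 1: speak}
        self.msg_head = nn.Linear(hidden_dim + msg_dim, msg_dim)

    def forward(self, obs, incoming_msg):
        h = self.encoder(obs)                             # the agent's own "thought"
        joint = torch.cat([h, incoming_msg], dim=-1)
        action_logits = self.action_head(joint)
        speak = torch.distributions.Categorical(logits=self.speak_head(h)).sample()
        out_msg = self.msg_head(joint) * speak.unsqueeze(-1).float()
        return action_logits, speak, out_msg
```

Because the "Speak" decision is sampled rather than thresholded, it can be trained with the separate policy-gradient term described below, alongside the agent's ordinary action.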
We evaluate SSoC on several representative benchmarks. As we have observed, the learned policy does demonstrate novel clear message propagation patterns which enable complex collaborative strategies, for example, remote partners can be requested to help the current agent to get over hard times. We also show the high efficiency of communication by visualizing a heat map showing how often the agents "speak". The communication turns out to be much sparser than existing predefined communication channels which produce excessive messages. With such emerged collaborations enabled by SSoC's intelligent communication manner, it is also expected to see clear performance gains compared with existing methods on the tested tasks. Recently, several studies have concentrated on learning multiple agent communication for deep RL networks. Among them, one class of work tries to build pairwise structure for multi-agent communication. BID1 and BID8 are among the first to propose learnable communications via back-propagation between individual deep Q-networks. However, due to their motivating tasks, both works rely on a peer-to-peer (P2P) communication strategy and usually apply to only a limited number of agents. VAIN BID4 also builds a pair-wise communication structure whose interaction weights are learned. BID15 proposes a pair-wise MARL approach for multi-object tracking task. In such models, the essential state-action space of multiple agents grows geometrically with the number of agents, and so is the communication load. Hence the pair-wise models suffer from scale issues. Another branch of study attempts to establish a global communication channel for all agents. For example, BID14 proposes a bidirectional communication channel among all agents to facilitate effective communication. BID18 applies a centralized method with zero-order optimization to control all agents. BID7 uses a master-slave architecture to control the communication of local agents by a master. Commnet BID17 builds a broadcasting communication channel among all agents and mean hidden states are directly used as messages. Such architectures are practical communication ways but still limited since 1) such a predefined structure forces every agent to send messages at every step. This will produce lots of redundant information; 2) They all use a single net to model all agents, which makes it difficult to extend to large-scale tasks. To address the large-scale multi-agent problem in MARL, BID19 proposes Meanfield algorithm. It models a local communication between each agent and an approximating "mean-agent" of all its neighbors. This encourages information sharing within a local range, however, its capability is still limited since the average of action-value function may not be rich enough to induce complex collaborative policies for complicated tasks. Several recent studies exploit a "centralized/communicated training + decentralized/independent execution" mode. For example, in COMA BID5, a global critic was proposed, which could potentially work at a centralized level, however since critics are basically value networks, they do not provide explicit policy guidance. BID11 also presents an adaptation of actor-critic method which uses centralized action-value function that takes as input the states and inferred actions of all agents. 
However, this mainly fosters a better training of each independent model instead of establishing an explicit communication and information processing module at testing time which is essential for effective collaborations In order to learn more effective communication, BID6 uses a recurrent attention model to help dynamically perform communication and have better local communication. However, its message flow is only limited to the local range. This is unfavorable for learning complex and long range communications. The communication time is also specified manually (every ten steps). Compared with their work, SSoC learns both when and who to communicate with a spontaneous communication action. It also allows long range communication via multi-step message propagation. These characteristics are critical for learning better collaborative policies. Here we introduce the details of the proposed SSoC network. We assume a distributed partially observable MARL environment for SSoC. As illustrated by Fig.1, each agent can only see a neighboring circular area. The goal is to achieve a certain winning state via taking a sequence of actions. At each time t, an agent takes an action a t based on its own observed state s t as well as information sent by partner agents within a local range around it. In this paper we call the shared information m i t as "messages". The messages flowing among agents determine which agents are sharing information with others at a certain time step. Unlike previous methods which usually predestine a static communication structure, SSoC adopts a more flexible option. The key is to enable each agent to learn an extra "Speak" action to determine whether to send the message to neighboring agents. Hence only those agents "Speak" currently are able to be "heard" by its partners within its communication range. Then these partner agents are able to exploit the received messages to determine their own actions. They are also eligible to pass the received messages forward to its neighbors if they also take a "Speak" action. And their own "thought" can also be merged into the output message. Like ordinary actions, "Speak" action also takes one time step. A communication path will form naturally among all agents through multiple steps of communication. SSoC manages to establish a "global" communication channel in such a self-organizing manner. Unlike previous communication architectures, this "global" communication channel of SSoC is also applicable for a distributed MARL scenario. The "Speak" action is generated by an independent output of the network, which is parallel to the original action output of each agent. We design "Speak" action as a binary signal to control whether a message will be passed. Hence the action space of "Speak" contains 2 actions: "1" represents "Speak", "0" stands for "keep silent". This binary "Speak" signal will be multiplied with the output message of the agent. If the signal is "1", the output message is preserved and will be accessible by the neighbor agents' LWU module. Otherwise, it is hidden from other agents by a 0 signal. Since the "Speak" action is determined by the agent itself based on its own observation, the subsequent communication is started in a spontaneous way. It may happen anytime, anywhere which only depends on the agents' own policy. It is neither predetermined nor manually designed with human intuitions. Therefore, we name such kind of communication as "self-organizing". 
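One propagation step of this self-organizing message flow can be sketched as follows, where the learnable weighting of the LWU module is replaced by a plain sum for brevity; array shapes and names are assumptions.

```python
import numpy as np

def propagate_messages(messages, speak, adjacency):
    # messages:  [n_agents, msg_dim] outgoing messages from the previous step
    # speak:     [n_agents] binary "Speak" decisions (1 = send, 0 = keep silent)
    # adjacency: [n_agents, n_agents] with adjacency[i, j] = 1 if agent j lies
    #            inside agent i's communication range
    exposed = messages * speak[:, None]   # silent agents contribute a zero vector
    return adjacency @ exposed            # plain sum aggregation; the LWU module learns weights
```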
Here we re-check the three key factors of multi-agent communication mentioned in section 1. Obviously, such a "self-organizing" communication of SSoC is capable of learning when and who to communicate simultaneously. As for "how to communicate", each agent is able to combine the received message with its own "thought" to output actions. Furthermore, since SSoC implicitly builds a "global" communication channel based on multi-step of "Speak" actions, the communication is not only limited to the local area. Therefore, SSoC's self-organizing communication really addresses the key challenges of multi-agent communication in a new and more flexible way. The whole structure of SSoC is displayed as FIG0. The network mainly consists of two streams. The top "agent stream" of FIG0 shows the original policy network of each agent. By taking the input observation, a policy network will generate a hidden state "thought" h t. Then the "thought" h t will be merged via an ACU (action composition unit) module with messages m i t from the bottom "message stream" to obtain the action a t. And an extra binary "Speak" action will also be generated from h t to decide whether to send the agent's message to others. The message stream works as a message receiver & sender for the agent. At each time step, it aggregates messages from other agents within the communication range via a LWU (learnable weighting unit). Only messages from agents who "Speak" at last time step will get collected by LWU. To incorporate the agent's own "thought" and send it to other agents, the aggregated messages m i t will be combined with h t through a MCU (message composition unit) module as the output message m o t. Finally, the binary "Speak" action from agent stream will be applied to determine the sending of message m o t to other agents. FIG0 gives detailed implementation of the four functional units of SSoC. Specifically, the policy network consists of 3 fully-connected layers The LWU module works as a self-weighted aggregator of the collected messages {m o1 t, m o2 t, m o3 t, . . .} from neighbor agents. For each input message, it will predict a weight which will be multiplied with the original signal before a sum operation with other messages. MCU merges the agent's own "thought" and input message as a new message that will be spread to other agents. It adopts a learnable gating mechanism which enables the agent to decide the proportion of its own "thought" and other agents' information in its output message. Like MCU, ACU also combines the input message with its own "thought". However, its goal is to output the agent's action. It is implemented like a basic LSTM cell without temporal recurrence. The agent's own action will be updated by a regular policy gradient with baseline. And the "Speak" action takes an additional "speak" policy gradient as equation 1: DISPLAYFORM0 where T is the total time step of the episode. These two gradients will be fed to SSoC network respectively for back-propagation (as shown by the green arrows in FIG0). Note that the output messages will be re-used as the input of other agents in next time step. Hence we create a buffer to store the messages m o t at each time step. During training, messages will be taken out by each agent from this buffer for both feed-forward sampling and backward updates. The detailed training procedure is given in the appendix. Experiments are conducted on multi-camera intelligent surveillance task which is firstly proposed by this paper and the large-scale battle task (64 vs. 
64) from the MAgent benchmark BID21. Both tasks assume distributed collaborative MARL environments. Agents only have local observations and take actions independently. A shared reward will be given to all agents for model training. For comparison, we select two state-of-the-art methods BID17 as well as the independent multi-agent policy gradient algorithm (denoted as Independent) as our baselines. In real life there are many occasions that need multiple agents cooperation to achieve their common goal. One interesting case is to make a set of intelligent cameras track target people in airport or railway station by actively changing their pose. Each camera is manipulated by an agent. Nearby agents can share their information with each other. Inspired by this scenario, we design a simulation task which we name as multi-camera intelligent surveillance task (as illustrated in FIG1 . In this task, each target person (simplified as the small red balls) has his/her own walking route. They will appear in the entrance area randomly at the beginning and walk to the destination. There are three self-controlling cameras (the blue sector represents the camera's current visual field) trying to actively capture the moving target balls as long as possible. The camera can choose to sway its angle left, right (by 15 degree) or keep static. Each camera can share its own "thought" only with its neighbor camera. Hence this is a distributed MARL setting. The general objective of the cameras is to capture the moving balls as long as possible by changing their angles cooperatively. We formulate the multi-camera intelligent surveillance task as a multiagent reinforcement learning problem:• Overall objective: tracking the target red balls as long as it can during the whole episode;• State: the sector visual field's position and the ball's position (given only if it is within the visual field, otherwise is empty);• Action: turning left by 15 degree, turning right by 15 degree or stay static;• Reward: if the camera changes its angle and captures the target ball, it receives a positive reward; if the ball is in the camera's possible visual field but the camera doesn't capture it, the agent receives a negative reward; Otherwise, it receives a zero reward. (The details of reward definition is given in appendix.)Next we will show how our SSoC works in this environment. We argue that the camera agent knows nothing about ball's information, it needs to learn the useful information based its local observation by itself. In our algorithm, the agent will adjust and optimize it's policy with training and learn to dynamically send necessary information to its local partners. As shown in FIG2, when the ball is captured by the camera, it chooses to speak "1" meaning that it has collected the ball's information and send the integrated message to the next camera to help the next camera learning better. When the ball is outside of all the cameras' field, the cameras choose to speak "0" meaning that there is no useful and necessary message to be delivered between cameras. We compare SSoC on the multi-camera intelligent surveillance task with Meanfield, Commnet and Independent MARL. In our environment setting, each episode contains 30 time steps And we train every algorithm for 5000 episodes in total. We show the of SSoC with baseline algorithms table 1. We find that the agents learn a smart strategy that once the agent captures the ball, it can capture the ball all the time until the ball moves outside its visual field. 
In addition, once an agent captures a ball, its neighbor agent will search around by turning back and forth to capture the coming ball earlier. This collaborative strategy is enabled by useful messages passing among the agents. When necessary, the agent learns to speak "1" meaning that it decides to transfer the useful message to next agent. When there is no effective message, the agent chooses to be silent. For better illustration, we visualize SSoC's learned policy in the video against other algorithms. The video is in demo. We recommend readers to have a look at the video for better understanding of the policy learned by SSoC. The large-scale battle task is one MARL scenario included in the MAgent environment BID21. In the task our algorithm controls a group of agents to eliminate the opponents. Each agent can move by one cell or attack one nearby enemy at one time step. This task is challenging due to a large number of participating agents (here 64 in our experiments). The difficulty is even larger considering the environment's distributed setting and agents' partial visibility. We adopt a self-play training for all the methods following BID21. All methods are trained for 2000 episodes. Each episode contains 400 steps. Learning rate is set to 0.0001 for all. SSoC outperforms the compared baselines on mean rewards and mean kill (average number of killed enemies in an episode) with a clear margin as shown by table 2. We also let SSoC play against Commnet, Meanfield and Independent in this task. The are shown in 6 (b). As we can see, SSoC obtains a higher win rate compared with the baselines. The demonstrate SSoC is an competitive architecture for distributed MARL tasks. To verify if it is SSoC's spontaneous and self-organizing communication that brings such a big improvement, we draw the heatmap of communication in Fig.5. The heat value here is computed as an accumulation of the "Speak" signal (1 for "Speak" and 0 for silence) of each cell in recent 5 frames. In this way, we discover several cases with meaningful message flow which help our agents win the battle. One such case is shown in Fig.5. Here we display both the heatmap and corresponding situation in 10 th, 15 th and 20 th frame here to show the learned agent policies and message flow. In this case, frontier agents (marked as group a) encounter enemies first and needs support of the agents of group b. Hence they start communication by taking "Speak" actions at 10 th frame. The messages sent by these agents transfer to group b through several steps. At 15 th frame, group b agents receive the message and "Speak" to gather nearby agents to move right and help agents on the right. At 20 th step, group b agents begin to confront enemies and join the attack. After a while, two groups of agents join force together to eliminate most of the enemies. This example shows that the learned self-organizing communication does help our agents to learn a better collaborative policy. Since the proposed scheme enables multi-step message transferring, an agent's message can be spread to agents on a much further area. Hence such a long-range reinforcement policy can be learned in this case. In addition, we can see for most agents, "Speak" action is not taken. This shows that agents only start a communication when necessary, instead of keeping sending messages all the time like in BID17 BID14. This phenomenon shows the high efficiency of SSoC communication. 
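The heat value used in this visualization can be computed with a short sketch like the one below; the five-frame window follows the description above, and the function name is an assumption.

```python
import numpy as np

def speak_heat(speak_history, window=5):
    # speak_history: [T, H, W] binary per-cell "Speak" signals on the battle grid.
    # The heat value of a cell accumulates the Speak signal over the last `window` frames.
    return np.asarray(speak_history)[-window:].sum(axis=0)
```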
Table 2: Mean rewards on the large-scale battle task. In this paper, we propose an SSoC network for MARL tasks. Unlike previous methods, which often assume a predestined communication structure, the SSoC agent learns when to start a communication or transfer its received message via a novel "Speak" action. Similar to the agent's original action, this "Speak" action can also be learned in a reinforcement manner. With such a spontaneous communication action, SSoC is able to establish a dynamic, self-organizing communication structure according to the current state. Experiments demonstrate the better collaborative policies and the improved communication efficiency brought by such a design. In future work, we will continue to enhance the learning of the "Speak" action, e.g., by encoding a temporal abstraction to make the communication flow more stable or by developing a specific reward for the "Speak" action.
This paper proposes a spontaneous and self-organizing communication (SSoC) learning scheme for multi-agent RL tasks.
569
scitldr
We study the BERT language representation model and the sequence generation model with BERT encoder for multi-label text classification task. We experiment with both models and explore their special qualities for this setting. We also introduce and examine experimentally a mixed model, which is an ensemble of multi-label BERT and sequence generating BERT models. Our experiments demonstrated that BERT-based models and the mixed model, in particular, outperform current baselines in several metrics achieving state-of-the-art on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts. Multi-label text classification (MLTC) is an important natural language processing task with many applications, such as document categorization, automatic text annotation, protein function prediction , intent detection in dialogue systems, and tickets tagging in client support systems . In this task, text samples are assigned to multiple labels from a finite label set. In recent years, it became clear that deep learning approaches can go a long way toward solving text classification tasks. However, most of the widely used approaches in MLTC tend to neglect correlation between labels. One of the promising yet fairly less studied methods to tackle this problem is using sequence-to-sequence modeling. In this approach, a model treats an input text as a sequence of tokens and predict labels in a sequential way taking into account previously predicted labels. used Seq2Seq architecture with GRU encoder and attention-based GRU decoder, achieving an improvement over a standard GRU model on several datasets and metrics. Yang et al. (2018b) continued this idea by introducing Sequence Generation Model (SGM) consisting of BiLSTM-based encoder and LSTM decoder coupled with additive attention mechanism. In this paper, we argue that the encoder part of SGM can be successfully replaced with a heavy language representation model such as BERT . We propose Sequence Generating BERT model (BERT+SGM) and a mixed model which is an ensemble of vanilla BERT and BERT+SGM models. We show that BERT+SGM model achieves decent after less than a half of an epoch of training, while the standard BERT model needs to be trained for 5-6 epochs just to achieve the same accuracy and several dozens epochs more to converge. On public datasets, we obtain 0.4%, 0.8%, and 1.6% average improvement in miF 1, maF 1, and accuracy respectively in comparison with BERT. On datasets with hierarchically structured classes, we achieve 2.8% and 1.5% average improvement in maF 1 and accuracy. Our main contributions are as follows: 1. We present the of BERT as an encoder in the sequence-to-sequence framework for MLTC datasets with and without a given hierarchical tree structure over classes. 2. We introduce and examine experimentally a novel mixed model for MLTC. 3. We fine-tune the vanilla BERT model to perform multi-label text classification. To the best of our knowledge, this is the first work to experiment with BERT and explore its particular properties for the multi-label setting and hierarchical text classification. 4. We demonstrate state-of-the-art on three well-studied MLTC datasets with English texts and two private Yandex Taxi datasets with Russian texts. Let us consider a set D = {(x n, y n)} N n=1 ⊆ X × Y consisting of N samples that are assumed to be identically and independently distributed following an unknown distribution P (X, Y). 
Multiclass classification task aims to learn a function that maps inputs to the elements of a label set L = {1, 2, . . ., L}, i.e. Y = L. In multi-label classification, the aim is to learn a function that maps inputs to the subsets of L, i.e. Y = 2 L. In text classification tasks, X is a space of natural language texts. A standard pipeline in deep learning is to use a base model that converts a raw text to its fixedsize vector representation and then pass it to a classification algorithm. Typical architectures for base models include different types of recurrent neural networks (;, convolutional neural networks , hierarchical attention networks, and other more sophisticated approaches. These models consider each instance x as a sequence of tokens x = [w 1, w 2, . . ., w T]. Each token w i is then mapped to a vector representation u i ∈ R H thus forming an embedding matrix U T ×H which can be initialized with pre-trained word embeddings . Moreover, recent works show that it is possible to pre-train entire language representation models on large corpora of texts in a selfsupervised way. Newly introduced models providing context-dependent text embeddings, such as ELMo , ULMFiT , OpenAI GPT , and BERT significantly improved previous state-of-the-art on various NLP tasks. Among the most recent works, XLNet and RoBERTa models improve these further after overcoming some limitations of original BERT. A novel approach to take account of dependencies between labels is using Seq2Seq modeling. In this framework that first appeared in the neural machine translation field , we generally have source input X and target output Y in the form of sequences. We also assume there is a hidden dependence between X and Y, which can be captured by probabilistic model P (Y|X, θ). Therefore, the problem consists of three parts: modeling the distribution P (Y|X, θ), learning the parameters θ, and performing the inference stage where we need to findŶ = arg Y max P (Y|X, θ). have shown that after introducing a total order relation on the set of classes L, the MLTC problem can be treated as sequence-to-sequence task with Y being the ordered set of relevant labels {l 1, l 2, . . ., l M} ⊆ L of an instance X = [w 1, w 2, . . ., w T]. The primary approach to model sequences is decomposing the joint probability P (Y|X, θ) into M separate conditional probabilities. Traditionally, the left-to-right (L2R) order decomposition is: demonstrated that the label ordering in effects on the model accuracy, and the order with descending label frequencies in a decent performance on image datasets. Alternatively, if an additional prior knowledge about the relationship between classes is provided in the form of a tree hierarchy, the labels can also be sorted in topological order with a depth-first search performed on the hierarchical tree. argued that both orderings work similarly well on text classification datasets. A given hierarchical structure over labels forms a particular case of text classification task known as hierarchical text classification (HTC). Such an underlying structure over the set of labels can help to discover similar classes and transfer knowledge between them improving the accuracy of the model for the labels with only a few training examples . Most of the researchers' efforts to study HTC were dedicated to computer vision applications (; ; ;), but many of these studies potentially can be or have already been adapted to the field of natural language texts. 
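As a side note on the sequence formulation above, the frequency-based label ordering can be sketched as follows; for the hierarchical case one would instead emit labels in the order of a depth-first traversal of the label tree. The helper name is an assumption.

```python
from collections import Counter

def to_label_sequences(train_label_sets):
    # Sort each target label set by descending global label frequency (ties broken
    # alphabetically) and append an end-of-sequence token, so that the decoder
    # predicts frequent labels first.
    freq = Counter(label for label_set in train_label_sets for label in label_set)
    return [sorted(s, key=lambda l: (-freq[l], l)) + ["<EOS>"] for s in train_label_sets]
```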
Among the most recent works, proposed a Graph-based CNN architecture with a hierarchical regularizer, and argued that mixing an output from a global classifier and the outputs from all layers of a local classifier can be beneficial to learn hierarchical dependencies. It was also shown that reinforcement learning models with special award functions can be applied to learn non-trivial losses (a; . BERT (Bidirectional Encoder Representations from Transformers) is a recently proposed language representation model for obtaining text embeddings. BERT was pre-trained on unlabelled texts for masked word prediction and next sentence prediction tasks, providing deep bidirectional representations. For classification tasks, a special token [CLS] is put to the beginning of the text and the output vector of the token [CLS] is designed to correspond to the final text embedding. The pretrained BERT model has proven to be very useful for transfer learning in multi-class and pairwise text classification. Fine-tuning the model followed by one additional feedforward layer and softmax activation function was shown to be enough for providing state-of-the-art on a downstream task . For examining BERT on the multi-label setting, we change activation function after the last layer to sigmoid so that for each label we predict their probabilities independently. The loss to be optimized will be adjusted accordingly from cross-entropy loss to binary cross-entropy loss. In sequence generation model (b), the authors use BiLSTM as an encoder with pre-trained word embeddings of dimension d = 512. For a raw text x = [w 1, w 2, . . ., w T] each word w i is mapped to its embedding u i ∈ R d, and contextual word representations are computed as follows: After that, the decoder's zeroth hidden state is initialized as We propose to use the outputs of the last transformer block in BERT model as vector representations of words and the embedding of the token [CLS] produced by BERT as the initial hidden state of the decoder. We also use a simple dot-product attention mechanism which in our setting showed similar performance as additive attention, but ed in less number of parameters to learn. The process we follow to calculate decoder's hidden states α t and the attention scores α t is described in Algorithm 1 and illustrated in Figure 1. The weight matrices It is also worth mentioning that we do not freeze BERT parameters so that they can also be fine-tuned in the training process. In order to maximize the total likelihood of the produced sequence, we train the final model to minimize the cross-entropy objective loss for a given x and ground-truth labels {l * In the inference stage, we can compute the objective 3 replacing ground-truth labels with predicted labels. To produce the final sequence of labels, we perform a beam search following the work to find candidate sequences that have the minimal objective scores among the paths ending with the <EOS> token. In further experiments, we mainly test standard BERT and sequence generating BERT models. From our experimental that will be demonstrated later on, we concluded that BERT and BERT+SGM may each have their advantages and drawbacks on different datasets. Therefore, to make the models alleviate each other's weaknesses, it might be reasonable to combine them. 
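For reference, the multi-label adaptation of vanilla BERT described earlier in this section reduces to a sigmoid output layer trained with binary cross-entropy; a minimal PyTorch sketch (class and argument names are assumptions) is:

```python
import torch
import torch.nn as nn

class MultiLabelBertHead(nn.Module):
    # The [CLS] embedding is mapped to L logits; a sigmoid gives each label's
    # probability independently, and training minimizes binary cross-entropy.
    def __init__(self, num_labels, hidden_size=768):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, cls_embedding, targets=None):
        logits = self.classifier(cls_embedding)          # [batch, L]
        loss = self.loss_fn(logits, targets.float()) if targets is not None else None
        return torch.sigmoid(logits), loss
```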
Our error analysis on a number of examples has shown that in some cases, BERT can predict excess labels while BERT+SGM tends to be more restrained, which suggests that the two approaches can potentially complement each other well. Another argument in favor of using a hybrid method is that in contrast to the multi-label BERT model, BERT+SGM exploits the information about the underlying structure of labels. in their work propose HMCN model in which they suggest to jointly optimize both local (hierarchical) and global classifiers and combine their final probability predictions as a weighted average. Inspired by this idea, we propose to use a mixed model which is an ensemble of multi-label BERT and sequence generating BERT models. A main challenge in creating a mixed model is that the outputs of the two models are quite different. Typically, we do not have access to a probability distribution over the labels in classic Seq2Seq framework. We suggest to tackle this problem by computing the probability distributions produced by the decoder at each stage and then perform element-wise max-pooling operation on them following the idea of the recent paper . We should emphasize that using these probabilities to produce final label sets will not necessarily in the same predictions as the original BERT + SGM model. However, in our experiments, we found that the probability distributions obtained in that way are quite meaningful and with proper prob- Table 1: Summary of the datasets. N is the number of documents, L is the number of labels, W denotes the average number of words per sample ± SD, and C denotes the average number of labels per sample ± SD. ability threshold (around 0.4-0.45 for the considered datasets) can yield predictions with accuracy comparable to the accuracy of BERT+SGM model's predictions from the inference stage. After obtaining probability distributions of both models, we can compute their weighed average to create the final probability distribution vector, as follows: This probability vector is then used to make final predictions of labels with 0.5 probability threshold. The value of α ∈ is a trade-off parameter that is optimized on validation set. The final procedure is presented in Algorithm 2. pBERT ← BERT(x) [y1, y2, . . ., yn] ← BERT+SGM(x) for l ∈ {1, 2, . . ., L} do p (l) BERT+SGM ← max{y 1l, y 2l, . . ., y nl} pmixed ← αpBERT+SGM + (1 − α)pBERT L pred ← {l | p (l) mixed We train and evaluate all the models on three public datasets with English texts and two private datasets with Russian texts. The summary of the datasets' statistics is provided in the Table 1. Preprocessing of the datasets included lower casing the texts and removing punctuation. For the baseline TextCNN and SGM models, we used the same preprocessing techniques as in (b). Reuters Corpus Volume I (RCV1-v2) is a collection of manually categorized 804 410 news stories (after dropping four empty samples from the testing set). There are 103 categories organized in a tree hierarchy, and each text sample is assigned to labels from one or multiple paths in the tree. Since there was practically no difference between topological sorting order and order by frequency in multi-path case, we chose to sort the labels from the most common ones to the rarest ones. The training/testing split for this dataset is originally 23,149 in the training set and 781,261 in the testing set . 
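For completeness, the mixing procedure of Algorithm 2 described above can be written compactly as in the following sketch; array shapes and the default alpha are assumptions, while the 0.5 prediction threshold follows the text.

```python
import numpy as np

def mixed_prediction(p_bert, decoder_step_probs, alpha=0.5, threshold=0.5):
    # p_bert:             [L] label probabilities from multi-label BERT
    # decoder_step_probs: [n_steps, L] per-step label distributions from the BERT+SGM decoder
    # Element-wise max-pooling over decoder steps yields one probability vector,
    # which is mixed with BERT's output; alpha is a trade-off tuned on validation data.
    p_sgm = np.asarray(decoder_step_probs).max(axis=0)
    p_mixed = alpha * p_sgm + (1.0 - alpha) * np.asarray(p_bert)
    return {label for label, p in enumerate(p_mixed) if p > threshold}
```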
While this training/testing split is still used in modern research works (;, in some other works authors have (implicitly) shifted towards using reverse training/testing split , and several other recent research works (; a ;b) started using 802,414 samples for the training set and 1,000 samples for the validation and testing sets. This change of the split might be reasonable due to the inadequate original proportion of the sets in modern realities, yet it makes it difficult to perform an apple-to-apple comparison of different models without their reimplementation. To avoid confusion, we decided to be consistent with the original training/testing split. We also used 10% of the training data for validation. Reuters-21578 is one of the most commonly used MLTC benchmark datasets with 10,787 articles from Reuters newswire collected in 1987 and tagged with 90 labels. We use the standard ApteMod split of the dataset following the work . Arxiv Academic Paper Dataset (AAPD) is a recently collected dataset (b) consisting of abstracts of 55,840 research papers from arXiv.org. Each paper belongs to one or several academic subjects, and the task is to predict those subjects for a paper based on its abstract. The number of categories is 54. We refer the reader to Appendix B for visualization of multi-label BERT embeddings for some of the labels from this dataset. Riders Tickets from Yandex Taxi Client Support (Y.Taxi Riders) is a private dataset obtained in Yandex Taxi client support system consisting of 174,590 tickets from riders. Initially, the dataset was labeled by Yandex Taxi reviewers with one tag per each ticket sample with an estimated accuracy of labeling around 75-78%. However, using additional information about a tree hierarchical structure over labels, we substituted each label with the corresponding label set with all the parent classes lying in the path between the root node and the label node. After this procedure, we ended up with 426 labels. Since in this task there is only one path in the tree to be predicted, we will explore a natural topological label ordering for this dataset. An example of a subtree of the tree hierarchy is provided in Figure 2. Drivers Tickets from Yandex Taxi Client Support (Y.Taxi Drivers) is also a private dataset obtained in Yandex Taxi drivers support system which has similar properties with the Y.Taxi Riders dataset. In the drivers' version, there are 163,633 tickets labeled with 374 tags. We implemented all the experiments in PyTorch 1.0 and ran the computations on a GeForce GTX 1080Ti GPU. Our implementation is relied on pytorch-transformers library 1. In the experiments, we used the base-uncased versions of BERT for English texts and the base-casedmultilingual version for Russian texts. Models of both versions output 768-dimensional hidden representation vector. We set batch size to 16. For optimization, we used Adam optimizer with β 1 = 0.9, β 2 = 0.99 and learning rate 2 · 10 −5. For the multi-label BERT, we also used the same scheduling of the learning rate as in the original work by. Reuters Table 2: Results on the five considered datasets. Metrics are marked in bold if they contain the highest metrics for the dataset in their ±SD interval. Following previous research works , we used hamming accuracy, set accuracy, micro-averaged f 1, and macro-averaged f 1 to evaluate the performance of the models. 
To be specific, the former two metrics can be computed as ACC(y,ŷ) = 1(y =ŷ) and HA(1(y j =ŷ j) and are designed to determine the accuracy of the predicted sets as whole. The latter ones are label-based metrics and can be calculated as follows: where tp j, f n j, and f p j denote the number of true positive, false positive and false negative predictions for the label j, respectively. We use a classic convolutional neural network TextCNN as a baseline for our experiments. We implemented a two-layer CNN with each layer followed by max pooling and two feedforward fully-connected layers followed by dropout and batch normalization at the end. Our second baseline model is Sequence Generation Model SGM (b), for which we reused the implementation of the authors 2. For the sake of comparison, we also provide the of HMCN and HiLAP (Mao et al.) models for hierarchical text classification on RCV1-v2 dataset adopted from the work (Mao et al.). For Reuters-21578 dataset, we also included the of the EncDec model from the original paper on sequence-to-sequence approach to MLTC. We present the of the suggested models and baselines on the five considered datasets in Table 2. First, we can see that both BERT and BERT+SGM show favorable on multi-label classification datasets mostly outperforming other baselines by a significant margin. On RCV1-v2 dataset, it is clear that the BERT-based models perform the best in micro-F 1 metrics. The methods dealing with the class structure (tree hierarchy in HMCN and HiLAP, label frequency in BERT+SGM) also have the highest macro-F 1 score. In some cases, BERT performs better than the sequence-to-sequence version, which is especially evident on the Reuters-21578 dataset. Since BERT+SGM has more learnable parameters, a possible reason might be a fewer number of samples provided on the dataset. However, sometimes BERT+SGM might be a more preferable option: on RCV1-v2 dataset the macro-F 1 metrics of BERT + SGM is much larger while other metrics are still comparable with the BERT's . Also, for both Yandex Taxi datasets on the Russian language, we can see that the hamming accuracy and the set accuracy of the BERT+SGM model is higher compared to other models. On Y.Taxi Riders there is also an improvement in terms of macro-F 1 metrics. In most cases, better performance can be achieved after mixing BERT and BERT+SGM. On public datasets, we see 0.4%, 0.8%, and 1.6% average improvement in miF 1, maF 1, and accuracy respectively in comparison with BERT. On datasets with tree hierarchy over classes, we observe 2.8% and 1.5% average improvement in maF 1 and accuracy. Metrics of interest for the mixed model depending on α on RCV1-v2 validation set are shown in Figure 4. Visualization of feature importance for BERT and sequence generating BERT models is provided in Appendix A. In our experiments, we also found that BERT for multi-label text classification tasks takes far more epochs to converge compared to 3-4 epochs needed for multi-class datasets . For AAPD, we performed 20 epochs of training; for RCV1-v2 and Reuters-21578 -around 30 epochs; for Russian datasets -45-50 epochs. BERT + SGM achieves decent accuracy much faster than multi-label BERT and converges after 8-12 epochs. The behavior of performance of both models on the validation set of Reuters-21578 during the training process is shown in Figure 3. Another finding of our experiments is that the beam size in the inference stage does not appear to influence much on the performance. 
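For completeness, the four metrics defined at the beginning of this section can be computed from binary indicator matrices as in the following sketch; the function name is an assumption.

```python
import numpy as np

def multilabel_metrics(y_true, y_pred):
    # y_true, y_pred: [N, L] binary indicator matrices of gold and predicted label sets.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    set_acc = np.mean(np.all(y_true == y_pred, axis=1))        # exact set match
    ham_acc = np.mean(y_true == y_pred)                        # hamming accuracy
    tp = (y_true * y_pred).sum(axis=0).astype(float)
    fp = ((1 - y_true) * y_pred).sum(axis=0).astype(float)
    fn = (y_true * (1 - y_pred)).sum(axis=0).astype(float)
    micro_f1 = 2 * tp.sum() / max(2 * tp.sum() + fp.sum() + fn.sum(), 1e-12)
    macro_f1 = float(np.mean(2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)))
    return {"ACC": set_acc, "HA": ham_acc, "miF1": micro_f1, "maF1": macro_f1}
```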
We obtained optimal with the beam size in the range from 5 to 9. However, a greedy approach with the beam size 1 still gives similar with less than 1.5% difference in the metrics. A possible explanation for this might be that, while in neural machine translation (NMT) the word ordering in the output sequence matters a lot and there might be confusing options, label set generation task is much simpler and we do not have any problems with ordering. Also, due to a quite limited'vocabulary' size |L|, we may not have as many options here to perform a beam search as in NMT or another natural sequence generation task. In this research work, we examine BERT and sequence generating BERT on the multi-label setting. We experiment with both models and explore their particular properties for this task. We also introduce and examine experimentally a mixed model which is an ensemble of vanilla BERT and sequence-to-sequence BERT models. Our experimental studies showed that BERT-based models and the mixed model, in particular, outperform current baselines by several metrics achieving state-of-the-art on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts. We established that multi-label BERT typically needs several dozens of epochs to converge, unlike to BERT+SGM model which demonstrates decent just after a few hundreds of iterations (less than a half of an epoch). A natural question arises as to whether the success of the mixed model is the of two models having different views on text features. To have a rough idea of how the networks make their prediction, we visualized the word importance scores for each model using the leave-one-out method in Figure 5. It can be seen from this example that BERT+SGM seems to be slightly more selective in terms of features to which it pays attention. Also, in this particular case, the predictions of sequence generating BERT are more accurate. BERT multi-label Figure 5: Visualization of feature importance for multi-label BERT and BERT+SGM models trained on AAPD and applied to BERT paper abstract (cs. LG -machine learning; cs. CL -computation & linguistics; cs. NE -neural and evolutionary computing). We extracted and projected to 2D-plane the label embeddings obtained from the fully connected classification layer of multi-label BERT fine-tuned on AAPD dataset. Visualization of some labels is shown in Figure 6. From this plot, we can see some clusters of labels that are close in terms of word. Figure 6: Projection of label embeddings obtained from the fully connected classification layer of multi-label BERT fine-tuned on AAPD dataset.
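The leave-one-out word-importance scores visualized in Figure 5 can be obtained with a sketch like the following, where predict_probs is a hypothetical callable wrapping the trained model.

```python
def leave_one_out_importance(predict_probs, tokens, label_index):
    # predict_probs: callable mapping a list of tokens to a vector of label probabilities.
    # The importance of token i is the drop in the target label's probability when
    # that token is removed from the input.
    base = predict_probs(tokens)[label_index]
    return [base - predict_probs(tokens[:i] + tokens[i + 1:])[label_index]
            for i in range(len(tokens))]
```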
On using BERT as an encoder for sequential prediction of labels in the multi-label text classification task
570
scitldr
Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications. It is challenging to find a proper way to automatically discover effective cross features in CTR tasks. We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM). Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with clues from the other fields. The embeddings generated by the encoder are beneficial for the subsequent feature interactions. In particular, DeepEnFM utilizes a bilinear approach to generate a different similarity function for each field pair. Furthermore, a max-pooling method enables DeepEnFM to capture both the supplementary and the suppressing information among different attention heads. Our model is validated on the Criteo and Avazu datasets, and achieves state-of-the-art performance. This paper studies the problem of predicting the Click Through Rate (CTR), which is an essential task in industrial applications such as online advertising and e-commerce. To be exact, the advertisements of a cost-per-click (CPC) advertising system are normally ranked by the eCPM (effective cost per mille), which is computed as the product of the bid price and the CTR (click-through rate). To predict CTR precisely, feature representation is an important step in extracting good, interpretable patterns from training data. For example, the co-occurrence of "Valentine's Day", "chocolate" and "male" can be viewed as one meaningful indicator/feature for recommendation. Such handcrafted features were predominant in CTR prediction until the renaissance of Deep Neural Networks (DNNs). Recently, a more effective manner, i.e., representation learning, has been investigated in CTR prediction by several works (; ; ; ;), which implicitly or explicitly learn embeddings of high-order feature interactions among neurons or input elements through the expressive power of DNNs or FM. Despite their noticeable performance improvements, DNNs and explicit high-order feature-based methods (; ;) seek better feature interactions merely on top of naive feature embeddings. Few efforts have been made toward holistically understanding and learning representations of the inputs. This leads to practical problems such as "polysemy" in the learned feature embeddings of previous works. For example, the input feature 'chocolate' is much closer to 'snack' than to 'gift' in normal cases, while we believe 'chocolate' should be better paired with 'gift' given the co-occurring input "Valentine's Day". This is one common polysemy problem in CTR prediction. Towards fully understanding the inputs, we re-introduce to CTR the idea of the Transformer encoder, which originated in Natural Language Processing (NLP). Such an encoder can efficiently accumulate and extract patterns from contextual word embeddings in NLP, and thus is potentially very useful for holistic representation learning in CTR. Critically, the Transformer encoder has seldom been applied to CTR prediction, with the only exception being the arXiv paper AutoInt, which, however, simply implements the multi-head self-attention (MHSA) mechanism of encoders to directly extract high-order feature interactions.
We argue that the output of the MHSA/encoder should still be considered a first-order embedding influenced by the other fields, rather than a high-order interaction feature. To this end, our main idea is to apply the encoder to learn a context-aware feature embedding, which contains the clues from the content of other features. Thus the "polysemy" problem can be solved naturally, and the second-order interaction of such features can represent more meaning. In contrast to AutoInt, which feeds the output of the encoder directly to the prediction layer or a DNN, our work not only improves the encoder to be more suitable for the CTR task, but also feeds the encoder output to FM, since both our encoder and FM are based on a vector-wise learning mechanism. And we adopt a DNN to learn the bit-wise high-order feature interactions in a parallel way, which avoids interweaving the vector-wise and bit-wise interactions in a stacked way. Formally, we propose a novel framework - Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM). DeepEnFM focuses on generating better contextually aligned vectors for FM and uses a DNN as a bit-wise information supplement. The architecture adopting both a Deep and an FM part is inspired by DeepFM. The encoder is endowed with bilinear attention and max-pooling power. First, we observed that unlike the random order of words in a sentence, the features in a transaction are in a fixed order of fields. For example, the fields of features are arranged in an order of {Gender, Age, Price ...}. When the features are embedded in dense vectors, the first and second vectors in a transaction always represent the fields "Gender" and "Age". To make use of this advantage, we add a bilinear mechanism to the Transformer encoder. We use bilinear functions to replace the simple dot product in attention. In this way, feature similarity of different field pairs is modeled with different functions. The embedding size in CTR tasks is usually around 10, which allows the application of bilinear functions without unbearable computing complexity. Second, the original multi-head outputs are merged by concatenation, which considers the outputs to be complementary to each other. We argue that there is also suppressing information between different heads. We apply a max-pooling merge mechanism to extract both complementary and suppressing information from the multi-head outputs. Experimental results on the Criteo and Avazu datasets have demonstrated the efficacy of our proposed model. CTR prediction is a critical task in Recommendation Systems, which aims to predict the probability of a user click behavior based on the given item. The key challenge of CTR is to automatically learn combinatory/cross features (e.g., a 2-way feature: "Valentine's Day" and "chocolate"; a 3-way feature: "Valentine's Day", "chocolate" and "male") from the inputs. Traditional learning methods, such as Logistic Regression (LR), learn weights of processed features in a supervised way for prediction. To gain better performance, manually designed combinatory features are fed into the LR model, which is laborious and unable to cover the numerous combinations of features. To learn the combinatory features automatically in an explicit way, Rendle proposes the Factorization Machine model, which treats the weight of a feature pair as the dot product of two latent vectors. Moreover, HOFM and CIN are proposed to capture interactions of arbitrary order in an explicit manner. 
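Because FM is the core interaction module reused throughout this paper, a minimal sketch of the standard FM score may be useful. It follows the usual O(nk) reformulation of the pairwise term; the variable names are illustrative and not taken from the paper.

```python
import numpy as np

def fm_score(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Factorization Machine score for one instance.

    x: (n,) feature vector; w0, w: global bias and first-order weights;
    V: (n, k) latent vectors, so the weight of a pair (i, j) is <v_i, v_j>.
    The pairwise term sum_{i<j} <v_i, v_j> x_i x_j is computed with the
    identity 0.5 * sum_f [ (sum_i v_{if} x_i)^2 - sum_i v_{if}^2 x_i^2 ].
    """
    linear = w0 + float(w @ x)
    xv = x @ V                      # (k,)
    x2v2 = (x ** 2) @ (V ** 2)      # (k,)
    pairwise = 0.5 * float(np.sum(xv ** 2 - x2v2))
    return linear + pairwise
```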
However, the primary/first-order field embeddings are utilized without considering the contextual information, which leads to the'polysemy' problem in field embedding unsolved. Deep Neural Networks (DNN) have made remarkable progresses in CV and NLP tasks by its extraordinary representation learning power. Several DNN based CTR models are developed on the basis of primary field embeddings;;; ) or product layers with a plain DNN. One of the major drawbacks of the plain DNN is that it tackles the field embeddings as a large flat layer, which means there are only interactions between neurons. Thus, DNN can be utilized as a bit-wise interaction feature learner to assist the vector-wise learner. Recently, Attention mechanism, which has been widely used in Machine Translation and Image Caption tasks, has been adopted in CTR models . Specially, Transformer encoder, empowered with multi-head self-attention, is demonstrated to be capable of generating context-aware word embeddings in the work BERT . Therefore, it seems promising to use Transformer encoder to solve the'polysemy' problem in CTR. Thus, we propose our model DeepEnFM to employ the encoder for context-aware field embedding. A related work to our model is AutoInt . However, AutoInt only utilizes the multi-head self-attention of encoder for high-order feature extraction rather than for field embedding. Task Formulation. The CTR task is to predict the click behavior from a record of multiple fields. Assume we have a training record as {x, y}, where x represents the fields of instance and y ∈ {0, 1} represents the click behavior. The instance fields x may contain both categorical fields and numerical fields. We denote y = 1 when the click behavior is'click'; otherwise, y = 0. Given a unseen test record x *, the goal is then to learn a mapping function y * = Ψ (x *) using all available training information and predict the probability of user click behavior y *, where the y * ∈ represents the probability of click behavior. Overview. Our framework DeepEnFM is illustrated in Fig. 1(a), which is composed of five components: the embedding layer, encoder, FM, DNN and prediction layer. The workflow is depicted in Alg. 1. Firstly, the embedding layer projects the input fields into a low dimensional space. Then the encoder adaptively updates the embedding with respect to the clues from other fields through the multi-head self-attention module (MHSA). Next, FM calculates a score which integrates the explicit interactions between encoder outputs. Simultaneously, the original from the embedding layer are fed into DNN to learn implicit feature interactions at bit-wise level. Finally, we apply the prediction layer to map the intermediate outputs of the FM and DNN into the final score with sigmoid function. Embedding. The embedding layer converts the instance input x from a high dimensional space to a low dimensional, dense vector. As shown in Fig. 1(b), we denote the record input fields as x = [x 1, x 2, . . ., x M], where x i can be either a one-hot vector (categorical field) or a scalar (numerical field). We assume that the field embedding size is d, and the output of embedding layer can be arranged as a matrix E = [e 1 ; e 2 ; . . . ; e M]. For a categorical field, the field value is used to look up the corresponding embedding vector as the token index in word embedding. For a numerical field, the field index is used to look up the related embedding vector, and the field value impacts the embedding as a factor. Encoder. 
The encoder is a stack of L encoder layers with residual connections, which is designed to align the field embeddings accumulatively according to the contents of the other fields. The output of the L-th layer can be denoted as E (L). As shown in Fig. 1(c), the encoder layer contains two modules: a multi-head self-attention module and a position-wise feed-forward module. At the end of each module, there is a residual connection followed by layer normalization. We will further explain these two modules in Sec. 3.2. FM. FM is leveraged to explicitly capture the first-order and second-order feature interactions, as a routine in practice. In our model, FM calculates the score based on the encoder outputs E (L), which are the context-aware field embeddings. DNN. The DNN aims to capture the implicit high-order interactions between different fields at the bit-wise level. The output of the embedding layer is flattened into a large vector. Then the large vector is fed into a multi-layer feed-forward neural network to extract features. We do not use the output of the encoder as the input of the DNN, because both the encoder and FM learn at the vector-wise level, while the DNN works at the bit-wise level. Sharing the encoder input may jeopardize the performance of the encoder. Prediction. The prediction layer is a fully connected layer, using a simple logistic regression function based on the outputs of FM and DNN to predict the final result. Multi-head Self-attention Module (MHSA). The procedure of MHSA is described in Alg. 2 and Fig. 2(a). Suppose the number of heads is h; each head of MHSA has the following steps. The input vector set E is mapped into Query Q, Key K and Value V. The similarity matrix S is calculated for each pair of query q and key k vectors. The output is calculated as a weighted sum of value vectors v according to S. Finally, the outputs of different attention heads are merged into the final output. As shown in Alg. 2, the similarity and merge functions are the key parts of MHSA. The design principle of the similarity and merge functions mainly depends on the generalization, efficiency and architecture requirements. We will discuss the different implementations of MHSA in the Transformer encoder and in the DeepEnFM encoder. As for MHSA in the Transformer encoder, we first introduce its similarity function as shown in Eq. 4. The similarity function is a scaled dot-product function, which computes the similarity between different feature pairs equally. It merges the output vectors by a concatenation and a linear transformation with W_O ∈ R^{Hd_v×d}. MHSA in DeepEnFM Encoder. To be clear, we argue that the similarity function between different field pairs could be personalized, since the field order is fixed and the field embedding size is very low in CTR. Thus we customize the similarity function for each field pair of query q^(h)_i and key k^(h)_j by performing a bilinear function with a specified weight matrix W^(h)_(i,j), as shown in Fig. 2(b). Second, we believe that different heads extract features from different viewpoints, so their outputs contain information that is both complementary and suppressing to each other. Instead of concatenating all of the head outputs, we keep V ∈ R^{m×d} with the same dimensions as E, and use max-pooling to get the salient output O. Further, we take the output O of MHSA as the residual, as shown in Fig. 1(c).
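A minimal sketch of the bilinear attention with max-pooling merge described above is given below. It is an illustrative reading of Alg. 2 and Fig. 2(b), not the released implementation: the tensor shapes (in particular keeping the value vectors at dimension d) and the per-pair bilinear weights Wb are assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilinear_mhsa_maxpool(E, Wq, Wk, Wv, Wb):
    """Bilinear multi-head self-attention with max-pooling merge (sketch).

    E:  (M, d)               field embeddings
    Wq, Wk: (h, d, d_h)      per-head query/key projections, d_h = d // h
    Wv: (h, d, d)            per-head value projections (kept at dimension d)
    Wb: (h, M, M, d_h, d_h)  bilinear weight for every head and field pair
    Returns O: (M, d), the element-wise max over the h head outputs.
    """
    h, M = Wq.shape[0], E.shape[0]
    heads = []
    for a in range(h):
        Q, K, V = E @ Wq[a], E @ Wk[a], E @ Wv[a]
        S = np.empty((M, M))
        for i in range(M):
            for j in range(M):
                # personalized similarity q_i^T W_(i,j) k_j for field pair (i, j)
                S[i, j] = Q[i] @ Wb[a, i, j] @ K[j]
        A = softmax(S, axis=-1)        # attention weights over fields
        heads.append(A @ V)            # (M, d)
    O = np.max(np.stack(heads, axis=0), axis=0)   # max-pool over heads
    return O                           # used as the residual added to E
```

Position-wise Feed-forward Module.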
The position-wise/field-wise feed-forward module is a one-hidden-layer neural network that firstly maps the field embedding to a higher dimensional embedding space, then back to the original dimensional space. The encoder, FM and DNN in our DeepEnFM can be combined in parallel or stacked way, we explore different architectures for ablation study. We consider that the encoder can serve as a em- bedding layer for the downstream task, as the BERT in NLP tasks . Specially, we stack the encoder and FM together as the left branch as shown in Fig. 1(a), since both of them learn at vector-wise level. Three different architectures are described in Fig. 3, which will be further discussed in ablation study. We conduct experiments on two datasets. Criteo dataset. It contains 45 million click records. Each record includes 13 dense features and 26 categorical ones. For a fair comparison with the other methods, we follow the standard split here: 80%, 10% and 10% of the data as the train, validation and test data respectively. Avazu dataset. It contains 40 million click records, 23 fields totally. We also randomly select 80%, 10% and 10% data for train, validation and test respectively. The categorical features with the occurrence more than 10 and 5 times are preserved for Criteo and Avazu respectively, others are set to'Unknown' as the data processing in AutoInt . Evaluation. The validation data is used to search for hyper-parameter setting. When the hyperparameters determined, both the the train and validation data are used for training, and the performance is evaluated on the test data. The experiment is evaluated on AUC (Area Under ROC) and Logloss (Cross Entropy). Setting. We implement our approach by TensorFlow. We choose Adam as optimizer and the learning rate is 0.001. We set the L2 regularization λ as 0.0001 and dropout rate as 0.5. The batch size, embedding size, and DNN layer size is set to 2048, 10 and 400 respectively. We use Xavier initialization for dense layers and random normal initialization for bilinear attention weights. The DeepEnFM model is trained on V100 for 1 hours per training epoch. Competitors. We compare our model against several state-of-the-art methods: LR . LR simply learns weights of first-order features. FM . FM learns a latent vector for each feature and models feature interactions by dot products of these vectors. This model learns both first and second-order features. DNN. DNN takes embedded dense vectors as its input and learns high-order features in an implicit way. IPNN . IPNN feeds the dense embeddings to a inner product layer firstly. Then the output is fed to a DNN. OPNN . OPNN replaces IPNN's inner product layer with an outer product layer. This replacement reduce the complexity of the model.PNN* . PNN* use the product layer to concatenate the inner and outer product. DeepFM . DeepFM introduces an embedding layer to get dense latent vectors for FM. Then these dense vectors are also fed to a DNN in parallel. AutoInt . AutoInt feeds the output of its embedding layer to a interacting layer. Then MHSA based interacting layer can learn high-order features, and a sigmoid output layer or DNN is applied after the interacting layer. DeepEnFM. Our model employs a encoder-enhanced FM with a DNN. We evaluate the CTR with the standard settings on Criteo and Avazu dataset, as shown in Tab. 1. We highlight the following observations. Our model achieves the best performance of both AUC and LogLoss metrics on Criteo. 
Specifically, on Criteo our DeepEnFM outperforms AutoInt by 0.56% and DeepFM by 1.49% on AUC, respectively. DeepEnFM also reduces the Logloss by 0.0067 and 0.0140 compared to AutoInt and DeepFM, respectively. These results strongly demonstrate the efficacy of our bilinear encoder. On the Avazu dataset, our model achieves the second place in AUC and the first place in Logloss. Still, DeepEnFM outperforms AutoInt and DeepFM on both AUC and Logloss, which indicates the efficacy of our bilinear attention and encoder. The improvement of DeepFM over FM shows the contribution of the implicit high-order information in the DNN part. The gap between FM and LR reflects that the explicit second-order interaction information is important for the CTR task. In summary, all of the implicit high-order information, explicit second-order interaction information, and attentional information are critical for the CTR task. We conduct an extensive ablation study of DeepEnFM as follows. The results in the second part of Tab. 2 show that 'Encoder + Deep' is better than 'Encoder + FM'. The gap to the full model demonstrates the effect of each component. Encoder Position. To evaluate the influence of the encoder position, we build several variants with the encoder in different positions, as shown in Fig. 3. 'Encoder for DNN': we feed the encoder output to the DNN instead of FM; 'Encoder for Both': we feed the encoder output to both the DNN and FM; 'Encoder in Parallel': the original embedding output is fed into the DNN, FM and encoder simultaneously. Our DeepEnFM can be viewed as 'Encoder for FM', which uses the encoder to align the embeddings for better explicit feature interaction learning. As shown in the third part of Tab. 2, the results show that our DeepEnFM model 'Encoder for FM' achieves the best results, which indicates that the encoder is more compatible with FM, since both the encoder and FM extract features at the vector-wise level. The 'Encoder in Parallel' model achieves the second place, since the modules can learn in an independent way. The 'Encoder for Both' model is less effective than DeepEnFM, which indicates that serving simultaneously for both the vector-wise and bit-wise learning modules may harm the power of the encoder. Depth of encoder layers. We conduct experiments to evaluate the impact of the encoder layers in DeepEnFM, compared with the depth of MHSA layers in AutoInt. Table 3 shows that the performance of DeepEnFM improves as the depth of layers grows, and the gain of the first encoder layer is larger than that of the second layer. Particularly, we can see that the improvement of an encoder layer is larger than that of a MHSA layer. Number of Heads. We implement DeepEnFM with head numbers of 1, 2 and 5. Table 4 shows that the model achieves the best results when the number of heads is 1 for DeepEnFM. This is reasonable: the query and key vector dimension reaches its maximum when the number of heads is 1, since the query vector dimension in MHSA is calculated as d/h. With a larger query and key dimension, the bilinear function can gain more generalization ability at the cost of more weights and computation. The model achieves the best balance between performance and cost when the number of heads is 2 for DeepEnFM. The results of AutoInt show that the influence of the head number is smaller than for DeepEnFM, because it uses a scaled dot-product function, which is less sensitive to the dimension of the query and key vectors.
In this paper, we propose a novel framework named Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM), which aims to learn better aligned vector embeddings through the encoder. The encoder combines bilinear attention and a max-pooling method to gather both the complementary and suppressing information from the content of other fields. The extensive experiments demonstrate that our approach achieves state-of-the-art performance on the Criteo and Avazu datasets.
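As a rough illustration of how the pieces fit together at prediction time, the sketch below combines the FM branch (over context-aware embeddings) and the DNN branch into a final click probability. It is a simplification under stated assumptions: the first-order term, the way the two logits are summed before the sigmoid, and the encoder/dnn callables are placeholders rather than the authors' exact prediction layer.

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def deepenfm_predict(E, w_first, encoder, dnn):
    """Sketch of DeepEnFM scoring for one record.

    E:       (M, d) field embeddings from the embedding layer
    w_first: (M,)   per-field first-order weights (simplifying assumption)
    encoder: callable E -> context-aware embeddings (e.g. stacked layers of
             bilinear_mhsa_maxpool + feed-forward, as sketched earlier)
    dnn:     callable flattened E -> scalar logit (bit-wise branch)
    """
    E_ctx = encoder(E)                                  # vector-wise branch
    first_order = float(w_first.sum())
    # FM second-order term over context-aware embeddings: sum_{i<j} <e_i, e_j>
    s = E_ctx.sum(axis=0)
    pairwise = 0.5 * float(s @ s - (E_ctx * E_ctx).sum())
    logit_fm = first_order + pairwise
    logit_dnn = float(dnn(E.reshape(-1)))               # DNN sees the raw embeddings
    return sigmoid(logit_fm + logit_dnn)
```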
DNN and Encoder enhanced FM with bilinear attention and max-pooling for CTR
571
scitldr
For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence. In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons. Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations -- are well calibrated. However, it turns out that such approaches fall short to capture complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches. This is because the used log-likelihood estimate discourages diversity. In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states. We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting. The ability to anticipate future scene states which involves mapping one scene state to likely future states under uncertainty is key for autonomous agents to successfully operate in the real world e.g., to anticipate the movements of pedestrians and vehicles for autonomous vehicles. The future states of street scenes are inherently uncertain and the distribution of outcomes is often multi-modal. This is especially true for important classes like pedestrians. Recent works on anticipating street scenes BID13 BID9 BID23 do not systematically consider uncertainty. Bayesian inference provides a theoretically well founded approach to capture both model and observation uncertainty but with considerable computational overhead. A recently proposed approach BID6 BID10 uses dropout to represent the posterior distribution of models and capture model uncertainty. This approach has enabled Bayesian inference with deep neural networks without additional computational overhead. Moreover, it allows the use of any existing deep neural network architecture with minor changes. However, when the underlying data distribution is multimodal and the model set under consideration do not have explicit latent state/variables (as most popular deep deep neural network architectures), the approach of BID6; BID10 is unable to recover the true model uncertainty (see FIG0 and BID19). This is because this approach is known to conflate risk and uncertainty BID19. This limits the accuracy of the models over a plain deterministic (non-Bayesian) approach. The main cause is the data log-likelihood maximization step during optimization -for every data point the average likelihood assigned by all models is maximized. This forces every model to explain every data point well, pushing every model in the distribution to the mean. We address this problem through an objective leveraging synthetic likelihoods BID26 BID21 which relaxes the constraint on every model to explain every data point, thus encouraging diversity in the learned models to deal with multi-modality. In this work: 1. 
We develop the first Bayesian approach to anticipate the multi-modal future of street scenes and demonstrate state-of-the-art accuracy on the diverse Cityscapes dataset without compromising on calibrated probabilities, 2. We propose a novel optimization scheme for dropout based Bayesian inference using synthetic likelihoods to encourage diversity and accurately capture model uncertainty, 3. Finally, we show that our approach is not limited to street scenes and generalizes across diverse tasks such as digit generation and precipitation forecasting. Bayesian deep learning. Most popular deep learning models do not model uncertainty, only a mean model is learned. Bayesian methods BID15 BID18 on the other hand learn the posterior distribution of likely models. However, inference of the model posterior is computationally expensive. In BID6 this problem is tackled using variational inference with an approximate Bernoulli distribution on the weights and the equivalence to dropout training is shown. This method is further extended to convolutional neural networks in BID5. In BID10 this method is extended to tackle both model and observation uncertainty through heteroscedastic regression. The proposed method achieves state of the art on segmentation estimation and depth regression tasks. This framework is used in BID2 to estimate future pedestrian trajectories. In contrast, BID22 propose a (unconditional) Bayesian GAN framework for image generation using Hamiltonian Monte-Carlo based optimization with limited success. Moreover, conditional variants of GANs BID17 are known to be especially prone to mode collapse. Therefore, we choose a dropout based Bayesian scheme and improve upon it through the use of synthetic likelihoods to tackle the issues with model uncertainty mentioned in the introduction. Structured output prediction. Stochastic feedforward neural networks (SFNN) and conditional variational autoencoders (CVAE) have also shown success in modeling multimodal conditional distributions. SFNNs are difficult to optimize on large datasets BID25 due to the binary stochastic variables. Although there has been significant effort in improving training efficiency BID20 BID7, success has been partial. In contrast, CVAEs BID24 assume Gaussian stochastic variables, which are easier to optimize on large datasets using the re-parameterization trick. CVAEs have been successfully applied on a large variety of tasks, include conditional image generation BID1, next frame synthesis BID28, video generation BID0 BID4, trajectory prediction BID12 among others. The basic CVAE framework is improved upon in BID3 through the use of a multiple-sample objective. However, in comparison to Bayesian methods, careful architecture selection is required and experimental evidence of uncertainty calibration is missing. Calibrated uncertainties are important for autonomous/assisted driving, as users need to be able to express trust in the predictions for effective decision making. Therefore, we also adopt a Bayesian approach over SFNN or CVAE approaches. Anticipation future scene scenes. In BID13 ) the first method for predicting future scene segmentations has been proposed. Their model is fully convolutional with prediction at multiple scales and is trained auto-regressively. BID9 improves upon this through the joint prediction of future scene segmentation and optical flow. Similar to BID13 a fully convolutional model is proposed, but the proposed model is based on the Resnet-101 BID8 and has a single prediction scale. 
More recently, BID14 has extended the model of BID13 to the related task of future instance segmentation prediction. These methods achieve promising and establish the competence of fully convolutional models. In BID23 a Convolutional LSTM based model is proposed, further improving short-term over BID9. However, fully convolutional architectures have performed well at a variety of related tasks, including segmentation estimation BID29 BID30, RGB frame prediction BID16 BID0 among others. Therefore, we adopt a standard ResNet based fully-convolutional architecture, while providing a full Bayesian treatment. We phrase our models in a Bayesian framework, to jointly capture model (epistemic) and observation (aleatoric) uncertainty BID10. We begin with model uncertainty. Let x ∈ X be the input (past) and y ∈ Y be the corresponding outcomes. Consider f: x → y, we capture model uncertainty by learning the distribution p(f |X, Y) of generative models f, likely to have generated our data {X, Y}. The complete predictive distribution of outcomes y is obtained by marginalizing over the posterior distribution, DISPLAYFORM0 However, the integral in FORMULA0 is intractable. But, we can approximate it in two steps BID6. First, we assume that our models can be described by a finite set of variables ω. Thus, we constrain the set of possible models to ones that can be described with ω. Now, is equivalently, DISPLAYFORM1 Second, we assume an approximating variational distribution q(ω) of models which allows for efficient sampling. This in the approximate distribution, DISPLAYFORM2 For convolutional models, BID5 proposed a Bernoulli variational distribution defined over each convolutional patch. The number of possible models is exponential in the number of patches. This number could be very large, making it difficult optimize over this very large set of models. In contrast, in our approach, the number possible models is exponential in the number of weight parameters, a much smaller number. In detail, we choose the set of convolutional kernels and the biases {( DISPLAYFORM3 of our model as the set of variables ω. Then, we define the following novel approximating Bernoulli variational distribution q(ω) independently over each element w i,j k,k (correspondingly b k) of the kernels and the biases at spatial locations {i, j}, DISPLAYFORM4 Note, denotes the hadamard product, M k are tuneable variational parameters, z i,j k,k ∈ Z K are the independent Bernoulli variables, p K is a probability tensor equal to the size of the (bias) layer, |K| (|K |) is the number of kernels in the current (previous) layer. Here, p K is chosen manually. Moreover, in contrast to BID5, the same (sampled) kernel is applied at each spatial location leading to the detection of the same features at varying spatial locations. Next, we describe how we capture observation uncertainty. Observation uncertainty can be captured by assuming an appropriate distribution of observation noise and predicting the sufficient statistics of the distribution BID10. Here, we assume a Gaussian distribution with diagonal covariance matrix at each pixel and predict the mean vector µ i,j and co-variance matrix σ i,j of the distribution. In detail, the predictive distribution of a generative model draw fromω ∼ q(ω) at a pixel position {i, j} is, DISPLAYFORM0 We can sample from the predictive distribution p(y|x) by first sampling the weight matrices ω from and then sampling from the Gaussian distribution in. 
We perform the last step by the linear transformation of a zero mean unit diagonal variance Gaussian, ensuring differentiability, DISPLAYFORM1 where,ŷ i,j is the sample drawn at a pixel position {i, j} through the liner transformation of z (a vector) with the predicted mean µ i,j and variance σ i,j. In case of street scenes, y i,j is a class-confidence vector and sample of final class probabilities is obtained by pushingŷ i,j through a softmax. For a good variational approximation, our approximating variational distribution of generative models q(ω) should be close to the true posterior p(ω|X, Y). Therefore, we minimize the KL divergence between these two distributions. As shown in BID6 a); BID10 the KL divergence is given by (over i.i.d data points), DISPLAYFORM0 The log-likelihood term at the right of FORMULA7 considers every model for every data point. This imposes the constraint that every data point must be explained well by every model. However, if the data distribution (x, y) is multi-modal, this would push every model to the mean of the multi-modal distribution (as in FIG0 where only way for models to explain both modes is to converge to the mean). This discourages diversity in the learned modes. In case of multi-modal data, we would not be able to recover all likely models, thus hindering our ability to fully capture model uncertainty. The models would be forced to explain the data variation as observation noise BID19, thus conflating model and observation uncertainty. We propose to mitigate this problem through the use of an approximate objective using synthetic likelihoods BID26 BID21 ) -obtained from a classifier. The classifier estimates the likelihood based on whether the modelsω ∼ q(ω) explain (generate) data samples likely under the true data distribution p(y|x). This removes the constraint on models to explain every data point -it only requires the explained (generated) data points to be likely under the data distribution. Thus, this allows modelsω ∼ q(ω) to be diverse and deal with multi-modality. Next, we reformulate the KL divergence estimate of to a likelihood ratio form which allows us to use a classifier to estimate (synthetic) likelihoods, (also see Appendix), DISPLAYFORM1 In the second step of FORMULA8, we divide and multiply the probability assigned to a data sample by a model p(y|x, ω) by the true conditional probability p(y|x) to obtain a likelihood ratio. We can estimate the KL divergence by equivalently estimating this ratio rather than the true likelihood. In order to (synthetically) estimate this likelihood ratio, let us introduce the variable θ to denote, p(y|x, θ = 1) the probability assigned by our model ω to a data sample (x, y) and p(y|x, θ = 0) the true probability of the sample. Therefore, the ratio in the last term of FORMULA8 is, DISPLAYFORM2 In the last step of FORMULA9 we use the fact that the events θ = 1 and θ = 0 are mutually exclusive. 
We can approximate the ratio p(θ=1|x,y)1−p(θ=1|x,y) by jointly learning a discriminator D(x,ŷ) that can distinguish between samples of the true data distribution and samples (x,ŷ) generated by the model ω, which provides a synthetic estimate of the likelihood, and equivalently integrating directly over (x,ŷ), DISPLAYFORM3 Note that the synthetic likelihood DISPLAYFORM4 is independent of any specific pair (x, y) of the true data distribution (unlike the log-likelihood term in FORMULA7), its value depends only upon whether the generated data point (x,ŷ) by the model ω is likely under the true data distribution p(y|x). Therefore, the models ω have to only generate samples (x,ŷ) likely under the true data distribution. The models need not explain every data point equally well. Therefore, we do not push the models ω to the mean, thus allowing them to be diverse and allowing us to better capture uncertainty. Empirically, we observe that a hybrid log-likelihood term using both the log-likelihood terms of FORMULA0 and FORMULA7 with regularization parameters α and β (with α ≥ β) stabilizes the training process, DISPLAYFORM5 Note that, although we do not explicitly require the posterior model distribution to explain all data points, due to the exponential number of models afforded by dropout and the joint optimization (min-max game) of the discriminator, empirically we see very diverse models explaining most data points. Moreover, empirically we also see that predicted probabilities remain calibrated. Next, we describe the architecture details of our generative models ω and the discriminator D(x,ŷ). The architecture of our ResNet based generative models in our model distribution q(ω) is shown in FIG1. The generative model takes as input a sequence of past segmentation class-confidences s p, the past and future vehicle odometry o p, o f (x = {s p, o p, o f}) and produces the class-confidences at the next time-step as output. The additional conditioning on vehicle odometry is because the sequences are recorded in frame of reference of a moving vehicle and therefore the future observed sequence is dependent upon the vehicle trajectory. We use recursion to efficiently predict a sequence of future scene segmentations y = {s f}. The discriminator takes as input s f and classifies whether it was produced by our model or is from the true data distribution. In detail, generative model architecture consists of a fully convolutional encoder-decoder pair. This architecture builds upon prior work of BID13 BID9, however with key differences. In BID13, each of the two levels of the model architecture consists of only five convolutional layers. In contrast, our model consists of one level with five convolutaional blocks. The encoder contains three residual blocks with max-pooling in between and the decoder consists of a residual and a convoluational block with up-sampling in between. We double the size of the blocks following max-pooling in order to preserve resolution. This leads to a much deeper model with fifteen convolutional layers, with constant spatial convolutional kernel sizes. This deep model with pooling creates a wide receptive field and helps better capture spatio-temporal dependencies. The residual connections help in the optimization of such a deep model. Computational resources allowing, it is possible to add more levels to our model. In BID9 a model is considered which uses a Res101-FCN as an encoder. Although this model has significantly more layers, it also introduces a large amount of pooling. 
This leads to loss of resolution and spatial information, hence degrading performance. Our discriminator model consists of six convolutional layers with max-pooling layers in-between, followed by two fully connected layers. Finally, in Appendix E we provide layer-wise details and discuss the reduction of number of models in q(ω) through the use of Weight Dropout for our architecture of generators. Next, we evaluate our approach on MNIST digit generation and street scene anticipation on Cityscapes. We further evaluate our model on 2D data FIG0 ) and precipitation forecasting in the Appendix. Here, we aim to generate the full MNIST digit given only the lower left quarter of the digit. This task serves as an ideal starting point as in many cases there are multiple likely completions given the lower left quarter digit, e.g. 5 and 3. Therefore, the learned model distribution q(ω) should contain likely models corresponding to these completions. We use a fully connected generator with 6000-4000-2000 hidden units with 50% dropout probability. The discriminator has 1000-1000 hidden units with leaky ReLU non-linearities. We set β = 10 −4 for the first 4 epochs and then reduce it to 0, to provide stability during the initial epochs. We compare our synthetic likelihood based approach (Bayes-SL) with, 1. A non-Bayesian mean model, 2. A standard Bayesian approach (Bayes-S), 3. A Conditional Variational Autoencoder (CVAE) (architecture as in BID24). As evaluation metric we consider (oracle) Top-k% accuracy BID12. We use a standard Alex-Net based classifier to measure if the best prediction corresponds to the ground-truth class -identifies the correct mode -in TAB1 (right) over 10 splits of the MNIST test-set. We sample 10 models from our learned distribution and consider the best model. We see that our Bayes-SL performs best, even outperforming the CVAE model. In the qualitative examples in TAB1 (left), we see that generations from modelsω ∼ q(ω) sampled from our learned model distribution corresponds to clearly defined digits (also in comparision to FIG2 in BID24). In contrast, we see that the Bayes-S model produces blurry digits. All sampled models have been pushed to the mean and shows little advantage over a mean model. Next, we evaluate our apporach on the Cityscapes dataset -anticipating scenes more than 0.5 seconds into the future. The street scenes already display considerable multi-modality at this time-horizon. Evaluation metrics and baselines. We use PSPNet BID30 to segment the full training sequences as only the 20 th frame has groundtruth annotations. We always use the annotated 20 th frame of the validation sequences for evaluation using the standard mean Intersection-over-Union (mIoU) and the per-pixel (negative) conditional log-likelihood (CLL) metrics. We consider the following baselines for comparison to our Resnet based (architecture in FIG1) Bayesian (Bayes-WD-SL) model with weight dropout and trained using synthetic likelihoods: 1. Copying the last seen input; 2. A non-Bayesian (ResG-Mean) version; 3. A Bayesian version with standard patch dropout (Bayes-S); 4. A Bayesian version with our weight dropout (Bayes-WD). Note that, combination of ResG-Mean with an adversarial loss did not lead to improved (similar observations made in BID13). We use grid search to set the dropout rate (in) to 0.15 for the Bayes-S and 0.20 for Bayes-WD(-SL) models. We set α, β = 1 for our Bayes-WD-SL model. We train all models using Adam BID11 for 50 epochs with batch size 8. 
We use one sample to train the Bayesian methods as in BID5 and use 100 samples during evaluation. Comparison to state of the art. We begin by comparing our Bayesian models to state-of-the-art methods BID13; BID23 in TAB0. We use the mIoU metric and, for a fair comparison, consider the mean (of all samples) prediction of our Bayesian models. We always compare to the groundtruth segmentations of the validation set. However, as all three methods use a slightly different semantic segmentation algorithm (Table 2) to generate training and input test data, we include the mIoU achieved by the Last Input of all three methods (see Appendix C for results using Dilation 10). Similar to Luc et al., we fine-tune (ft) to predict at 3 frame intervals for better performance at +0.54sec. Our Bayes-WD-SL model outperforms the baselines and improves on prior work by 2.8 mIoU at +0.06sec and 4.8 mIoU/3.4 mIoU at +0.18sec/+0.54sec respectively. Our Bayes-WD-SL model also obtains higher relative gains in comparison to BID13 with respect to the Last Input baseline. These results validate our choice of model architecture and show that our novel approach clearly outperforms the state-of-the-art. The performance advantage of Bayes-WD-SL over Bayes-S shows that the ability to better model uncertainty does not come at the cost of lower mean performance. However, at larger time-steps, as the future becomes increasingly uncertain, mean predictions (mean of all likely futures) drift further from the ground-truth. Therefore, next we evaluate the models on their (more important) ability to capture the uncertainty of the future. Evaluation of predicted uncertainty. Next, we evaluate whether our Bayesian models are able to accurately capture uncertainty and deal with multi-modal futures, up to t + 10 frames (0.6 seconds), in TAB1. We consider the mean of the (oracle) best 5% of predictions BID12 of our Bayesian models to evaluate whether the learned model distribution q(ω) contains likely models corresponding to the groundtruth. We see that the best predictions considerably improve over the mean predictions, showing that our Bayesian models learn to capture uncertainty and deal with multi-modal futures. Quantitatively, we see that the Bayes-S model performs worst, demonstrating again that standard dropout BID10 struggles to recover the true model uncertainty. The use of weight dropout improves the performance to the level of the ResG-Mean model. Finally, we see that our Bayes-WD-SL model performs best. In fact, it is the only Bayesian model whose (best) performance exceeds that of the ResG-Mean model (also outperforming the state-of-the-art), demonstrating the effectiveness of synthetic likelihoods during training. In FIG3 we show examples comparing the best prediction of our Bayes-WD-SL model and ResG-Mean at t + 9. The last row highlights the differences between the predictions - cyan shows areas where our Bayes-WD-SL is correct and ResG-Mean is wrong, red shows the opposite. We see that our Bayes-WD-SL performs better at classes like cars and pedestrians which are harder to predict (also in comparison to TAB3 in BID13). In FIG4, we show samples from randomly sampled models ω ∼ q(ω), which correspond to the range of possible movements of bicyclists/pedestrians. Next, we further evaluate the models with the CLL metric in TAB1. We consider the mean predictive distributions up to t + 10 frames. We see that the Bayesian models outperform the ResG-Mean model significantly.
In particular, we see that our Bayes-WD-SL model performs the best, demonstrating that the learned model and observation uncertainty corresponds to the variation in the data. Comparison to a CVAE baseline. As there exists no CVAE BID24 based model for future segmentation prediction, we construct a baseline as close as possible to our Bayesian models Groundtruth, t + 9 ResG-Mean, t + 9 Bayes-WD-SL, t + 9 Comparison Sample #1, t + 9 Sample #2, t + 9 Sample #3, t + 9 Sample #4, t + 9 based on existing CVAE based models for related tasks BID0 BID28.Existing CVAE based models BID0 BID28 ) contain a few layers with Gaussian input noise. Therefore, for a fair comparison we first conduct a study in TAB2 to find the layers which are most effective at capturing data variation. We consider Gaussian input noise applied in the first, middle or last convolutional blocks. The noise is input dependent during training, sampled from a recognition network (see Appendix). We observe that noise in the last layers can better capture data variation. This is because the last layers capture semantically higher level scene features. Overall, our Bayesian approach (Bayes-WD-SL) performs the best. This shows that the CVAE model is not able to effectively leverage Gaussian noise to match the data variation. Uncertainty calibration. We further evaluate predicted uncertainties by measuring their calibration -the correspondence between the predicted probability of a class and the frequency of its occurrence in the data. As in BID10, we discretize the output probabilities of the mean predicted distribution into bins and measure the frequency of correct predictions for each bin. We report the at t + 10 frames in FIG5. We observe that all Bayesian approaches outperform the ResG-Mean and CVAE versions. This again demonstrates the effectiveness of the Bayesian approaches in capturing uncertainty. We propose a novel approach for predicting real-world semantic segmentations into the future that casts a convolutional deep learning approach into a Bayesian formulation. One of the key contributions is a novel optimization scheme that uses synthetic likelihoods to encourage diversity and deal with multi-modal futures. Our proposed method shows state of the art performance in challenging street scenes. More importantly, we show that the probabilistic output of our deep learning architecture captures uncertainty and multi-modality inherent to this task. Furthermore, we show that the developed methodology goes beyond just street scene anticipation and creates new opportunities to enhance high performance deep learning architectures with principled formulations of Bayesian inference. KL divergence estimate. Here, we provide a detailed derivation of. Starting from, we have, DISPLAYFORM0 Multiplying and dividing by p(y|x), the true probability of occurance, DISPLAYFORM1 Using q(ω) dω = 1, DISPLAYFORM2 As log p(y|x)d(x, y) is independent of ω, the variables we are optmizing over, we have, DISPLAYFORM3 APPENDIX B. ON SIMPLE MULTI-MODAL 2D DATA. We show on simple multi-modal 2d data as in the motivating example in the introduction. The data consists of two parts: x ∈ [−10, 0] we have y = 0 and x ∈ we have y = (−0.3, 0.3).The set of models under consideration is a two hidden layer neural network with 256-128 neurons with 50% dropout. We show 10 randomly sampled models fromω ∼ q(ω) learned by the Bayes-S approach in FIG6 and our Bayes-SL approach in FIG7 (with α = 1, β = 0). We assume constant observation uncertainty (=1). 
We clearly see that our Bayes-SL learns models which cover both modes, while all the models learned by Bayes-S fit to the mean. Clearly showing that our approach can better capture model uncertainty. First, we provide additional training details of our Bayes-WD-SL in TAB3. Generator learning rate 1 × 10 DISPLAYFORM0 Discriminator learning rate 1 × 10 DISPLAYFORM1 # Generator updates per iteration 1 # Discriminator updates per iteration 1 Table 6: Additional Comparison to BID13 using the same Dialation 10 approach to generate training segmentations. Note: Fine Tuned (ft) means both approaches are trained to predict at intervals of three frames (0.18 seconds).Second, we provide additional evaluation on street scenes. In Section 4.2 TAB0 we use a PSPNet to generate training segmentations for our Bayes-WD-SL model to ensure fair comparison with the state-of-the-art BID23. However, the method of BID13 uses a weaker Dialation 10 approach to generate training segmentations. Note that our Bayes-WD-SL model already obtains higher gains in comparison to BID13 with respect the Last Input Baseline, e.g. at +0.54sec, 47.8 -36.9 = 10.9 mIoU translating to 29.5% gain over the Last Input Baseline of BID13 versus 51.2 -38.3 = 12.9 mIoU translating to 33.6% gain over the Last Input Baseline of our Bayes-WD-SL model in TAB0. But for fairness, here we additionally include in Table 6 using the same Dialation 10 approach to generate training segmentations. We observe that our Bayes-WD-SL model beats the model of BID13 in both short-term (+0.18 sec) and long-term predictions (+0.54 sec). Furthermore, we see that the mean of the Top 5% of the predictions of Bayes-WD-SL leads to much improved over mean predictions. This again confirms the ability of our Bayes-WD-SL model to capture uncertainty and deal with multi-modal futures. APPENDIX D. ON HKO PRECIPITATION FORECASTING DATA.The HKO radar echo dataset consists of weather radar intensity images. We use the train/test split used in Xingjian et al. FORMULA0; BID3. Each sequence consists of 20 frames. We use 5 frames as input and 15 for prediction. Each frame is recorded at an interval of 6 minutes. Therefore, they display considerable uncertainty. We use the same network architecture as used for street scene segmentation Bayes-WD-SL FIG1 and with α = 5, β = 1), but with half the convolutional filters at each level. We compare to the following baselines: 1. A deterministic model (ResG-Mean), 2. A Bayesian model with weight dropout. We report the (oracle) Top-10% scores (best 1 of 10), over the following metrics BID27 BID3 ), 1. Rainfall-MSE: Rainfall mean squared error, 2. CSI: Critical success index, 3. FAR: False alarm rate, 4. POD: Probability of detection, and 5. Correlation, in Table 7, Note, that; BID3 reports only scores over mean of all samples. Our ResG-Mean model outperforms these state of the art methods, showing the versatility of our model architecture. Our Bayes-WD-SL can outperform the strong ResG-Mean baseline again showing that it learns to capture uncertainty (see FIG0). In comparison, the Bayes-WD baseline struggles to outperform the ResG-Mean baseline. We further compare the calibration our Bayes-SL model to the ResG-Mean model in FIG8. We plot the predicted intensity to the true mean observed intensity. Table 7: Evaluation on HKO radar image sequences. is stark in the high intensity region. The RegG-Mean model deviates strongly from the diagonal in this region -it overestimates the radar intensity. 
In comparison, we see that our Bayes-WD-SL approach stays closer to the diagonal. These results again show that our synthetic likelihood based approach leads to more accurate predictions while not compromising on calibration. (Figure: qualitative precipitation forecasting example with observations from t − 5 to t and predictions from t + 2 to t + 14.) Here, we provide layer-wise details of our generative and discriminative models in TAB6. We provide layer-wise details of the recognition network of the CVAE baseline used in TAB2 (in the main paper). Finally, in TAB0 we show the difference in the number of possible models using our weight based variational distribution (weight dropout) versus the patch based variational distribution (patch dropout) proposed in BID5. The number of patches is calculated using the formula, DISPLAYFORM0 because we use convolutional stride 1, padding to ensure the same output resolution, and each patch is dropped out (in BID5) independently for each convolutional filter. The number of weight parameters is given by the formula: Filter size × # Input Convolutional Filters × # Output Convolutional Filters + # Bias. TAB0 shows that our weight dropout scheme results in a significantly lower number of parameters compared to patch dropout BID5. Details of our generative model. We show the layer-wise details in TAB6. Details of our discriminator model. We show the layer-wise details in TAB8. Details of the recognition model used in the CVAE baseline. We show the layer-wise details in TAB0. Table 11: The difference in the number of possible models using our weight dropout scheme versus patch dropout BID5 (Appendix D). TAB0: Overview of the variational parameters using our weight dropout scheme versus patch dropout BID5 of both architectures for street scene and precipitation forecasting.
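For concreteness, the sketch below illustrates the weight dropout sampling described in Sec. 3.1 and summarized in the tables above: an independent Bernoulli variable per kernel/bias element, with the same sampled kernel reused at every spatial location, followed by the reparameterized draw from the per-pixel Gaussian observation model. It is a toy NumPy illustration under assumed shapes and names, not the actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weight_dropout(M_k, b_k, p_keep=0.8):
    """Draw one model from q(w): one Bernoulli mask element per weight of a
    convolutional kernel M_k and bias b_k; the masked kernel is then applied
    at every spatial location (unlike per-patch dropout)."""
    z_w = rng.binomial(1, p_keep, size=M_k.shape)
    z_b = rng.binomial(1, p_keep, size=b_k.shape)
    return M_k * z_w, b_k * z_b

def sample_output(mu, sigma):
    """Reparameterized draw from the per-pixel Gaussian observation model:
    y_hat = mu + sigma * z with z ~ N(0, I), keeping sampling differentiable."""
    return mu + sigma * rng.standard_normal(mu.shape)

# Monte-Carlo prediction (sketch): draw T sets of masked weights, run the
# generator with each draw, and aggregate the per-pixel predictions.
# preds = [sample_output(*generator(x, sample_weight_dropout(M_k, b_k)))
#          for _ in range(T)]
```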
Dropout based Bayesian inference is extended to deal with multi-modality and is evaluated on scene anticipation tasks.
572
scitldr
Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision. The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise. The regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue. Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise. We prove that RoCGAN share similar theoretical properties as GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces. Image-to-image translation and more generally conditional image generation lie at the heart of computer vision. Conditional Generative Adversarial Networks (cGAN) have become a dominant approach in the field, e.g. in dense 1 regression (; ; ; BID1 ; ; ;). They accept a source signal as input, e.g. prior information in the form of an image or text, and map it to the target signal (image). The mapping of cGAN does not constrain the output to the target manifold, thus the output can be arbitrarily off the target manifold . This is a critical problem both for academic and commercial applications. To utilize cGAN or similar methods as a production technology, we need to study their generalization even in the face of intense noise. Similarly to regression, classification also suffers from sensitivity to noise and lack of output constraints. One notable line of research consists in complementing supervision with unsupervised learning modules. The unsupervised module forms a new pathway that is trained with the same, or different data samples. The unsupervised pathway enables the network to explore the structure that is not present in the labelled training set, while implicitly constraining the output. The addition of the unsupervised module is only required during the training stage and in no additional computational cost during inference. and modified the original bottom-up (encoder) network to include top-down (decoder) modules during training. However, in dense regression both bottom-up and top-down modules exist by default, and such methods are thus not trivial to extend to regression tasks. Motivated by the combination of supervised and unsupervised pathways, we propose a novel conditional GAN which includes implicit constraints in the latent subspaces. We coin this new model'Robust Conditional GAN' (RoCGAN). In the original cGAN the generator accepts a source signal and maps it to the target domain. In our work, we (implicitly) constrain the decoder to generate samples that span only the target manifold. We replace the original generator, i.e. encoder-decoder, with a two pathway module (see FIG0). The first pathway, similarly to the cGAN generator, performs regression while the second is an autoencoder in the target domain (unsupervised pathway). The two pathways share a similar network structure, i.e. each one includes an encoder-decoder network. The weights of the two decoders are shared which promotes the latent representations of the two pathways to be semantically similar. 
Intuitively, this can be thought of as constraining the output of our dense regression to span the target subspace. The unsupervised pathway enables the utilization of all the samples in the target domain even in the absence of a corresponding input sample. During inference, the unsupervised pathway is no longer required, therefore the testing complexity remains the same as in cGAN. Figure caption: (a) The source signal is embedded into a low-dimensional, latent subspace, which is then mapped to the target subspace. The lack of constraints might result in outcomes that are arbitrarily off the target manifold. (b) On the other hand, in RoCGAN, steps 1b and 2b learn an autoencoder in the target manifold and, by sharing the weights of the decoder, we restrict the output of the regression (step 2a). All figures in this work are best viewed in color. In the following sections, we introduce our novel RoCGAN and study their (theoretical) properties. We prove that RoCGAN share similar theoretical properties with the original GAN, i.e. convergence and optimal discriminator. An experiment with synthetic data is designed to visualize the target subspaces and assess our intuition. We experimentally scrutinize the sensitivity of the hyper-parameters and evaluate our model in the face of intense noise. Moreover, thorough experimentation with both images from natural scenes and human faces is conducted in two different tasks. We compare our model with both the state-of-the-art cGAN and the recent method of. The experimental results demonstrate that RoCGAN outperform the baseline by a large margin in all cases. Our contributions can be summarized as follows:
• We introduce RoCGAN, which leverage structure in the target space. The goal is to promote robustness in dense regression tasks.
• We scrutinize the model performance under (extreme) noise and adversarial perturbations. To the authors' knowledge, this robustness analysis has not been studied previously for dense regression.
• We conduct a thorough experimental analysis for two different tasks. We outline how RoCGAN can be used in a semi-supervised learning task and how it performs with lateral connections from encoder to decoder.
Notation: Given a set of N samples, s^(n) denotes the n-th conditional label, e.g. a prior image; y^(n) denotes the respective target image. Unless explicitly mentioned otherwise, || · || denotes an ℓ1 norm. The symbols L_* define loss terms, while λ_* denote regularization hyper-parameters optimized on the validation set.
The most frequent regularization term is feature matching, e.g. perceptual loss , or embeddings for faces . Feature matching minimizes the distance between the projection of generated and ground-truth signals. However, the pre-defined feature space is restrictive. The method introduced by performs feature matching in the discriminator; the motivation lies in matching the low-dimensional distributions created by the discriminator layers. Matching the discriminator's features has demonstrated empirical success. However, this does not affect the generator and its latent subspaces directly. A new line of research that correlates with our goals is that of adversarial attacks BID6; ). It is observed that perturbing input samples with a small amount of noise, often imperceptible to the human eye, can lead to severe classification errors. There are several techniques to'defend' against such attacks. A recent example is the Fortified networks of which uses Denoising Autoencoders to ensure that the input samples do not fall off the target manifold. estimate the tangent space to the target manifold and use that to insert invariances to the discriminator for classification purposes. Even though RoCGAN share similarities with those methods, the scope is different since a) the output of our method is high-dimensional 2 and b) adversarial examples are not extended to dense regression 3.Except for the study of adversarial attacks, combining supervised and unsupervised learning has been used for enhancing the classification performance. In the Ladder network, modify a typical bottom-up network for classification by adding a decoder and lateral connections between the encoder and the decoder. During training they utilize the augmented network as two pathways: i) labelled input samples are fed to the initial bottom-up module, ii) input samples are corrupted with noise and fed to the encoder-decoder with the lateral connections. The latter pathway is an autoencoder; the idea is that it can strengthen the resilience of the network to samples outside the input manifold, while it improves the classification performance. Our core goal consists in constraining the model's output. Aside from deep learning approaches, such constraints in manifolds were typically tackled with component analysis. Canonical correlation analysis has been extensively used for finding common subspaces that maximally correlate the data . The recent work of combines the expressiveness of neural networks with the theoretical guarantees of classic component analysis. In this section, we elucidate our proposed RoCGAN. To make the paper self-contained we first review the original conditional GAN model (sec. 3.1), before introducing RoCGAN (sec. 3.2). Sequentially, we pose the modifications required in case of shortcut connections from the encoder to the decoder (sec. 3.3). In sec. 3.4 we assess the intuition behind our model with synthetic data. The core idea in RoCGAN is to leverage structure in the output space of the model. We achieve that by replacing the single pathway in the generator with two pathways. In the appendix, we study the theoretical properties of our method and prove that RoCGAN share the same properties as the original GAN. GAN consist of a generator and a discriminator module commonly optimized with alternating gradient descent methods. The generator samples z from a prior distribution p z, e.g. 
uniform, and tries to model the target distribution p d; the discriminator D tries to distinguish between the samples generated from the model and the target (ground-truth) distributions. Conditional GAN (cGAN) extend the formulation by providing the generator with additional labels. In cGAN the generator G typically takes the form of an encoder-decoder network, where the encoder projects the label into a low-dimensional latent subspace and the decoder performs the opposite mapping, i.e. from low-dimensional to high-dimensional subspace. If we denote s the conditioning label and y a sample from the target distribution, the adversarial loss is expressed as: DISPLAYFORM0 by solving the following min-max problem: DISPLAYFORM1 where w G, w D denote the generator's and the discriminator's parameters respectively. To simplify the notation, we drop the dependencies on the parameters and the noise z in the rest of the paper. The works of and demonstrate that auxiliary loss terms, i.e. feature matching and content loss, improve the final outcome, hence we consider those as part of the vanilla cGAN. The feature matching loss is: DISPLAYFORM2 where π extracts the features from the penultimate layer of the discriminator. The final loss function for the cGAN is the following: DISPLAYFORM3 where λ c, λ π are hyper-parameters to balance the loss terms. Just like cGAN, RoCGAN consist of a generator and a discriminator. The generator of RoCGAN includes two pathways instead of the single pathway of the original cGAN. The first pathway, referred as reg pathway henceforth, performs a similar regression as its counterpart in cGAN; it accepts a sample from the source domain and maps it to the target domain. We introduce an additional unsupervised pathway, named AE pathway. AE pathway works as an autoencoder in the target domain. Both pathways consist of similar encoder-decoder networks 4. By sharing the weights of their decoders, we promote the regression outputs to span the target manifold and not induce arbitrarily large errors. A schematic of the generator is illustrated in FIG1. The discriminator can remain the same as the cGAN: it accepts the reg pathway's output along with the corresponding target sample as input. To simplify the notation below, the superscript'AE' abbreviates modules of the AE pathway and'G' modules of the reg pathway. We denote G(s DISPLAYFORM0) the output of the reg pathway and DISPLAYFORM1 ) the output of the AE pathway. The unsupervised module (autoencoder in the target domain) contributes the following loss term: DISPLAYFORM2 where f d denotes a divergence metric (in this work an 1 loss).Despite sharing the weights of the decoders, we cannot ensure that the latent representations of the two pathways span the same space. To further reduce the distance of the two representations in the latent space, we introduce the latent loss term L lat. This term minimizes the distance between the encoders' outputs, i.e. the two representations are spatially close (in the subspace spanned by the encoders). The latent loss term is: DISPLAYFORM3 The final loss function of RoCGAN combines the loss terms of the original cGAN (eq. 3) with the additional two terms for the AE pathway: DISPLAYFORM4 As a future step we intend to replace the latent loss term L lat with a kernel-based method or a learnable metric for matching the distributions . The RoCGAN model of sec. 3.2 describes a family of networks and not a predefined set of layers. 
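To make the two-pathway objective above concrete, the following is a minimal PyTorch sketch of the generator-side losses. The module structure (two encoders plus one shared decoder), the non-saturating adversarial term and the value of lambda_c are assumptions for illustration; the ℓ1 divergences, the shared decoder weights and the values λ_ae = 100, λ_l = 25 follow the text, and the feature-matching term is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPathwayGenerator(nn.Module):
    """Reg pathway: source -> target.  AE pathway: target -> target.
    Both pathways decode with the *same* module, i.e. the decoder weights are shared."""
    def __init__(self, enc_reg, enc_ae, shared_dec):
        super().__init__()
        self.enc_reg, self.enc_ae, self.dec = enc_reg, enc_ae, shared_dec

    def forward(self, s, y):
        z_reg = self.enc_reg(s)              # latent code of the source sample
        z_ae = self.enc_ae(y)                # latent code of the target sample
        return self.dec(z_reg), self.dec(z_ae), z_reg, z_ae

def rocgan_generator_loss(gen, disc, s, y,
                          lambda_c=100.0, lambda_ae=100.0, lambda_l=25.0):
    y_reg, y_ae, z_reg, z_ae = gen(s, y)
    d_out = disc(y_reg, s)                   # discriminator logits for the reg output
    # Non-saturating adversarial term (a common choice; the paper writes the classic
    # min-max objective, so treat this particular form as an assumption).
    l_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    l_content = F.l1_loss(y_reg, y)          # ell-1 content loss on the reg pathway
    l_ae = F.l1_loss(y_ae, y)                # AE-pathway reconstruction term
    l_lat = F.l1_loss(z_reg, z_ae)           # latent loss between the two encoders' outputs
    # lambda_ae = 100 and lambda_l = 25 follow the text; lambda_c is a placeholder.
    return l_adv + lambda_c * l_content + lambda_ae * l_ae + lambda_l * l_lat
```

Reusing self.dec for both pathways is the mechanism that implicitly constrains the regression output to the subspace spanned by the target-domain autoencoder; at inference time only enc_reg and dec are needed.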
A special case of RoCGAN emerges when skip connections are included in the generator. In the next few paragraphs, we study the modification required, i.e. an additional loss term. Skip connections are frequently used as they enable deeper layers to capture more abstract representations without the need of memorizing all the information. Nevertheless, the effects of the skip connections in the representation space have not been thoroughly studied. The lower-level representations are propagated directly to the decoder through the shortcut, which makes it harder to train the longer path , i.e. the network excluding the skip connections. This challenge can be implicitly tackled by maximizing the variance captured by the longer path representations. To that end, we add a loss term that penalizes the correlations in the representations (of a layer) and thus implicitly encourage the representations to capture diverse and useful information. We implement the decov loss introduced by BID2: DISPLAYFORM0 where diag computes the diagonal elements of a matrix and C is the covariance matrix of the layer's representations. The loss is minimized when the covariance matrix is diagonal, i.e. it imposes a cost to minimize the covariance of hidden units without restricting the diagonal elements that include the variance of the hidden representations. A similar loss is explored by , where the decorrelation loss is applied in every layer. Their loss term has stronger constraints: i) it favors an identity covariance matrix but also ii) penalizes the smaller eigenvalues of the covariance more. We have not explored this alternative loss term, as the decov loss worked in our case without the additional assumptions of the. We design an experiment on synthetic data to explore the differences between the original generator and our novel two pathway generator. Specifically, we design a network where each encoder/decoder consists of two fully connected layers; each layer followed by a RELU. We optimize the generators only, to avoid adding extra learned parameters. The inputs/outputs of this network span a low-dimensional space, which depends on two independent variables x, y ∈ [−1, 1]. We've experimented with several arbitrary functions in the input and output vectors and they perform in a similar way. We showcase here the case with input vector [x, y, e 2x] and output vector [x + 2y + 4, e x + 1, x + y + 3, x + 2]. The reg pathway accepts the three inputs, projects it into a two-dimensional space and the decoder maps it to the target four-dimensional space. We train the baseline and the autoencoder modules separately and use their pre-trained weights to initialize the two pathway network. The loss function of the two pathway network consists of the L lat (eq. 5) and 2 content losses in the two pathways. The networks are trained either till convergence or till 100, 000 iterations (batch size 128) are completed. During testing, 6, 400 new points are sampled and the overlaid are depicted in FIG2; the individual figures for each output can be found in the appendix. The 1 errors for the two cases are: 9, 843 for the baseline and 1, 520 for the two pathway generator. We notice that the two pathway generator approximates the target manifold better with the same number of parameters during inference. Implementation details: To provide a fair comparison to previous cGAN works, our implementation is largely based on the conventions of Isola et al. FORMULA0;;. 
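Before the implementation details continue, here is a minimal sketch of the decov term described above, following the DeCov idea of penalizing off-diagonal covariance entries of a layer's representations; the handling of convolutional feature maps (flattening spatial positions into the batch) is an assumption.

```python
import torch

def decov_loss(h: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal entries of the covariance of hidden activations.

    h: activations of shape (N, D).  For conv feature maps of shape (N, C, H, W)
    one possible choice (an assumption) is to treat every spatial position as a sample.
    """
    if h.dim() > 2:
        h = h.permute(0, 2, 3, 1).reshape(-1, h.shape[1])
    h = h - h.mean(dim=0, keepdim=True)            # center every feature
    cov = h.t() @ h / h.shape[0]                   # covariance matrix C, shape (D, D)
    off_diag = cov.pow(2).sum() - torch.diagonal(cov).pow(2).sum()
    return 0.5 * off_diag                          # zero exactly when C is diagonal
```

The diagonal of C (the per-unit variances) is deliberately left out of the penalty, which matches the description above: covariances between hidden units are discouraged without shrinking the variance of each unit.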
A'layer' refers to a block of three units: a convolutional unit with a 4 × 4 kernel size, followed by Leaky RELU and batch normalization . To obtain RoCGAN, we augment a vanilla cGAN model as follows: i) we duplicate the encoder/decoder; ii) we share the decoder's weights in Each plot corresponds to the respective manifolds in the output vector; the first and third depend on both x, y (xyz plot), while the rest on x (xz plot). The green color visualizes the target manifold, the red the baseline and the blue ours. Even though the two models include the same parameters during inference, the baseline does not approximate the target manifold as well as our method.the two pathways; iii) we add the additional loss terms. The values of the additional hyper-parameters are λ l = 25, λ ae = 100 and λ decov = 1; the common hyper-parameters with the vanilla cGAN, e.g. λ c, λ π, remain the same. The decov loss is applied in the output of the encoder, which in our experimentation did minimize the correlations in the longer path. The rest hyper-parameters remain the same as in the baseline. We conduct a number of auxiliary experiments in the appendix. Specifically, an ablation study on the significance and the sensitivity of the hyper-parameters is conducted; additional architectures are implemented, while we evaluate our model under more intense noise. In addition, we extend the concept of adversarial examples in regression and verify that our model is more resilient to them than the baseline. The demonstrate that our model accepts a range of hyper-parameter values, while it is robust to additional sources of noise. We experiment with two categories of images with significant applications: images from i) natural scenes and ii) faces. In the natural scenes case, we constrain the number of training images to few thousand since frequently that is the scale of the labelled examples available. The network used in the experiments below, dumped'4layer', consists of four layers in the decoder, while the decoder followed by four layers in the decoder. Two inverse tasks, i.e. denoising and sparse inpainting, are selected for our quantitative evaluation. During training, the images are corrupted, for the two tasks, in the following way: for denoising 25% of the pixels in each channel are uniformly dropped; for sparse inpainting 50% of the pixels are converted to black. During testing, we evaluate the methods in two settings: i) similar corruption as they were trained, ii) more intense corruption, i.e. we drop 35% of the pixels in the denoising case and 75% of the pixels in the sparse inpainting case. The widely used image quality loss (SSIM) is used as a quantitative metric. We train and test our method against the i) baseline cGAN, ii) the recent strong-performing OneNet . OneNet uses an ADMM learned prior, i.e. it projects the corrupted prior images into the subspace of natural images to guide the ADMM solver. In addition, we train an Adversarial Autoencoder (AAE) as an established method capable of learning compressed representations. Each module of the AAE shares the same architecture as its cGAN counterpart, while the AAE is trained with images in the target space. During testing, we provide the ground-truth images as input and use the reconstruction for the evaluation. In our experimental setting, AAE can be thought of as an upper performance limit of RoCGAN/cGAN for a given capacity (number of parameters). We train the'4layer' baseline/RoCGAN with images from natural scenes, both indoors and outdoors. 
The 4, 900 samples of the VOC 2007 Challenge BID4 form the training set, while the 10, 000 samples of tiny ImageNet BID3 ) consist the testing set. The quantitative evaluation with SSIM is presented in Tab. 1. OneNet does not perform as well as the baseline or our model. From our experimentation this can be attributed to the projection to the manifold of natural images that is not trivial, however it is more resilient to additional noise than the baseline. In both inverse tasks RoCGAN improve the baseline cGAN by a margin of 0.05 (10 − 13% relative improvement). When we apply additional corruption in the testing images, RoCGAN are more robust with a considerable improvement over the baseline. This DISPLAYFORM0 Figure 4: Qualitative (best viewed in color). The first row depicts the target image, the second row the corrupted one (used as input to the methods). The third row depicts the output of the baseline cGAN, while the outcome of our method is illustrated in the fourth row. There are different evaluations visualized for faces: (a) denoising, (b) denoising with additional noise at test time, (c) sparse inpainting, (d) sparse inpainting with 75% black pixels. For natural scenes the columns (e) and (f) denote the denoising and sparse inpainting respectively.can be attributed to the implicit constraints of the AE pathway, i.e. the decoder is more resilient to approximating the target manifold samples.`````````M Table 1: Quantitative in the'4layer' network in both faces and natural scenes cases. For both'objects' we compute the SSIM. In both denoising and sparse inpainting, the leftmost evaluation is the one with corruptions similar to the training, while the one on the right consists of samples with additional corruptions, e.g. in denoising 35% of the pixels are dropped. In this experiment we utilize the MS-Celeb as the training set (3, 4 million samples), and the whole Celeb-A as the testing set (202, 500 samples). The large datasets enable us to validate our model extensively in a wide range of faces. We use the whole training set to train the two compared methods (Baseline-4layer and Ours-4layer) and the 4-layer AAE. The of the quantitative evaluation exist in table 1. Our method outperforms both the baseline and OneNet by a significant margin; the difference increases when evaluated with more intense corruptions. The reason that the sparse inpainting task appears to have a smaller improvement remains elusive; in the different architectures in the appendix our model has similar performance in the two tasks. We include the AAE as an upper limit of the representation capacity of the architecture. The AAE specifies that with the given architecture the performance can be up to 0.866. We introduce the Robust Conditional GAN (RoCGAN) model, a new conditional GAN capable of leveraging unsupervised data to learn better latent representations, even in the face of large amount of noise. RoCGAN's generator is composed of two pathways. The first pathway (reg pathway), performs the regression from the source to the target domain. The new, added pathway (AE pathway) is an autoencoder in the target domain. By adding weight sharing between the two decoders, we implicitly constrain the reg pathway to output images that span the target manifold. In this following sections (of the appendix) we include additional insights, a theoretical analysis along with additional experiments. The sections are organized as following:• In sec. 
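For reference, a sketch of the two corruption protocols used in the experiments above (denoising: drop 25% of the pixels per channel, 35% at the harder test setting; sparse inpainting: set 50% or 75% of the pixel locations to black). Whether a dropped pixel is set to zero is an assumption, since the text does not state the fill value.

```python
import torch

def corrupt_denoising(img: torch.Tensor, drop_prob: float = 0.25) -> torch.Tensor:
    """Drop pixels independently per channel (img: C x H x W, values in [0, 1])."""
    keep = (torch.rand_like(img) > drop_prob).float()   # independent mask per channel
    return img * keep                                    # dropped pixels set to 0 (assumed)

def corrupt_inpainting(img: torch.Tensor, black_prob: float = 0.5) -> torch.Tensor:
    """Turn a fraction of pixel locations black across all channels."""
    keep = (torch.rand(img.shape[-2:]) > black_prob).float()  # H x W location mask
    return img * keep.unsqueeze(0)                             # broadcast over channels
```

Calling the same functions with drop_prob=0.35 or black_prob=0.75 reproduces the additional-corruption evaluation settings reported in Table 1.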
B we validate our intuition for the RoCGAN constraints through the linear equivalent.• A theoretical analysis is provided in sec. C.• We implement different networks in sec. D to assess whether the performance gain can be attributed to a single architecture.• An ablation study is conducted in sec. E comparing the hyper-parameter sensitivity and the robustness in the face of extreme noise. The FIG3, 7, 8 include all the outputs of the synthetic experiment of the main paper. As a reminder, the output vector is [x + 2y + 4, e x + 1, x + y + 3, x + 2] with x, y ∈ [−1, 1]. The exact nature and convergence properties of deep networks remain elusive , however we can study the linear equivalent of deep methods to build on our intuition. To that end, we explore the linear equivalent of our method. Since the discriminator in RoCGAN can remain the same as in the baseline cGAN, we focus in the generators. To perform the analysis on the linear equivalent, we simply drop the piecewise non-linear units in the generators. The linear autoencoder (AE) has a similar structure; W (AE) l denote the respective parameters for the AE. We denote with X the input signal, with Y the target signal andŶ the AE output,Ỹ the regression output. Then:Ŷ DISPLAYFORM0 is the reconstruction of the autoencoder and DISPLAYFORM1 is the regression of the generator (reg pathway). We define the auxiliary DISPLAYFORM2 and DISPLAYFORM3. Then Eq. 8 and 9 can be written as: DISPLAYFORM4 The AE approximates under mild condition robustly the target manifold of the data BID0. If we now define U D,(G) = U D, then the output of the generatorỸ spans the subspace of U D.Given that U D,(G) = U D, we constrain the output of the generator to lie in the subspaces learned with the AE.To illustrate how a projection to a target subspace can contribute to constraining the image, the following visual example is designed. We learn a PCA model using one hundred thousand images from MS-Celeb; we do not apply any pose normalization or alignment (out of the paper's scope). We maintain 90% of the variance. In FIG7 we sample a random image from Celeb-A and downscale it 5; we use bi-linear interpolation to upscale it to the original dimensions. We project and reconstruct both the original and the upscaled versions; note that the output images are similar. This similarity illustrates how the linear projection forces both images to span the same subspace. In the next few paragraphs, we prove that RoCGAN share the properties of the original GAN. We derive the optimal discriminator and then compute the optimal value of L adv (G, D). Proposition 1. If we fix the generator G (reg pathway), the optimal discriminator is: DISPLAYFORM0 where p g is the model (generator) distribution. Proof. Since the generator is fixed, the goal of the discriminator is to maximize the L adv where: DISPLAYFORM1 To maximize the L adv, we need to optimize the integrand above. We note that with respect to D the integrand has the form f (y) = a · log(y) + b · log(1 − y). The function f for a, b ∈ as in our case, obtains a global maximum in a a+b, so: DISPLAYFORM2 with DISPLAYFORM3 thus L adv obtains the maximum with D *.Proposition 2. Given the optimal discriminator D * the global minimum of L adv is reached if and only if p g = p d, i.e. when the model (generator) distribution matches the data distribution. Proof. From proposition 1, we have found the optimal discriminator as D *, i.e. the arg max D L adv. 
If we replace the optimal value we obtain: DISPLAYFORM4 We add and subtract log from both terms, which after few math operations provides: DISPLAYFORM5 where in the last row KL symbolizes the Kullback-Leibler divergence. The latter one can be rewritten more conveniently with the help of the Jensen-Shannon (JSD) divergence as DISPLAYFORM6 The Jensen-Shannon divergence is non-negative and obtains the zero value only if FORMULA6 and has a global minimum (under the constraint that the discriminator is optimal) when p d = p g. DISPLAYFORM7 In this section, we describe additional experimental and details. In addition to the SSIM metric, we use the 1 loss to measure the loss in the experiments of the main paper. The in table 2 confirm that RoCGAN outperform both compared methods. The larger difference in the cases of more intense noise demonstrates that our model is indeed robust to additional cases not trained on. Additional visualizations are provided in FIG0.`````````M Table 2: Quantitative in the'4layer' network in both faces and natural scenes cases. In this table, the 1 loss is reported. In each task, the leftmost evaluation is the one with corruptions similar to the training, while the one on the right consists of samples with additional corruptions, e.g. in denoising 35% of the pixels are dropped. Unless otherwise mentioned, the experiments in the following paragraphs are conducted in the face case, while the evaluation metrics remain the same as in the main paper, i.e. the noise during training/testing and the SSIM evaluation metric. Table 4: Additional quantitative (SSIM, see main paper) for the following protocols: i)'5layer' network, ii) 50 thousand training images, iii) skip connections. DISPLAYFORM0 To delineate further the performance of our model in different settings, we conduct an experiment with Imagenet , a large dataset for natural images. We utilize the training set of Imagenet which consists of 1, 2 million images and its testset that includes 98 thousand images. The experimental are depicted in table 3. The outcomes corroborate the experimental of the main paper, as RoCGAN outperforms the cGAN in both tasks. We note that the AAE works as the upper limit of the methods and denotes the representation power that the given encoder-decoder can reach. To assess whether RoCGAN's improvement is network-specific, we implement different architectures including more layers. The goal of this work is not to find the best performing architecture, thus we do not employ an exhaustive search in all proposed cGAN models. Our goal is to propose an alternative model to the baseline cGAN and evaluate how this works in different networks. We implement three additional networks which we coin'5layer','6layer' and'4layer-skip'. Those include five, six layers in the encoder/decoder respectively, while the'4layer-skip' includes a lateral connection from the output of the third encoding layer to the input of the second decoding layer. The first two increase the capacity of the network, while the'4layer-skip' implements the modification for the skip case in the'4layer' network 6.We evaluate these three networks as in the'4layer' network (main paper); the are added in table 4. Notice that both'5layer' and'6layer' networks improve their counterpart in the'4layer' case, however the'6layer' networks do not improve their'5layer' counterpart. This can be partly attributed to the increased difficulty of training deeper networks without additional regularization techniques . 
In addition, we emphasize that the denoising and the sparse inpainting cannot be directly compared, since they correspond to different a) types and b) amount of corruption in all evaluations. Nevertheless, the improvement in the sparse inpainting with additional noise is impressive, given that the hyper-parameters are optimized for the denoising case (see sec E).The most critical observation is that in all cases our model consistently outperforms the baseline. The FIG0: Qualitative ; best viewed in color. The first row depicts the ground-truth image, the second row the corrupted one (input to methods), the third the output of the baseline cGAN, the fourth illustrates the outcome of our method. The four first columns are based on the protocol of'4layer' network, while the four rightmost columns on the protocol'4layer-50k'. There are different evaluations visualized for faces: (a), (e) Denoising, (b), (f) denoising with augmented noise at test time, (c), (g) sparse inpainting, (d), (h) sparse inpainting with 75% black pixels. DISPLAYFORM0 difference is increasing under additional noise during inference time with up to 15% performance improvement observed in the sparse inpainting case. A side-benefit of our new model is the ability to utilize unsupervised data to learn the AE pathway. Collecting unlabelled data in the target domain is frequently easier than finding pairs of corresponding samples in the two domains. To that end, we test whether RoCGAN support such semi-supervised learning. We randomly pick 50, 000 labelled images while we use the rest three million as unlabelled. The'label' in our case is the corrupted images. The baseline model is trained with the labelled 50, 000 samples. RoCGAN model is trained with 50, 000 images in the reg pathway, while the AE pathway with all the available (unlabelled) samples. Table. 5 includes the quantitative of the semi-supervised case. As expected the performance in most experiments drops from the full training case, however we observe that the performance in RoCGAN decreases significantly less than cGAN ('baseline-4layer-50k'). In other words, RoCGAN can benefit greatly from additional examples in the target domain. We hypothesize that this enables the AE pathway to learn a more accurate representation, which is reflected to the final RoCGAN outcome. In our experimental setting every input image should be mapped (close) to its target image. To assess the domain-specific performance for faces, we utilize the cosine distance distribution plot (CDDP).One of the core features in images of faces is the identity of the person. We utilize the well-studied recognition embeddings to evaluate the similarity of the target image with the outputs of compared methods. The ground-truth identities in our case are not available in the embeddings' space; we consider instead the target image's embedding as the ground-truth for each DISPLAYFORM0 Baseline-4layer-50k 0.788 0.747 0.798 0.617 Ours-4layer-50k 0.829 0.813 0.813 0.681 Table 5: Quantitative for the semi-supervised training of RoCGAN (sec. D.3). The difference of the two models is increased (in comparison to the fully supervised case). RoCGAN utilize the additional unsupervised data to improve the mapping between the domains even with less corresponding pairs. DISPLAYFORM1 The plot is constructed as follows: For each pair of output and corresponding target image, we compute the cosine distance of their embeddings; the cumulative distribution of those distances is plotted. 
Mathematically the distance of the n th pair is formulated as: DISPLAYFORM2 where o (n) denotes the output of each method, y (n) the respective target image and Φ is the function computing the embedding. A perfect reconstruction per comparison, e.g. F(y (n), y (n) ), would yield a plot of a Dirac delta around one; a narrow distribution centered at one denotes proximity to the target images' embeddings. The plot with the CDDP is visualized in FIG0 for the'4layer' case as detailed in the main paper. The illustrate that AAE has embeddings that are closer to the target embeddings as expected; from the compared methods the RoCGAN outperform the cGAN in the proximity to the target embeddings. All the images utilized in this work are resized to 64 × 64 × 3. In the case of natural scenes, instead of rescaling the images during the training stage, we crop random patches in every iteration from the image. We utilize the ADAM optimizer with a learning rate of 2 · 10 −5 for all our experiments. The batch size is 128 for images of faces and 64 for the natural scenes. In table 6 the details about the layer structure for the'4layer' generator are provided; the other networks include similar architecture as depicted in tables 7, 8. The discriminator retains the same structure in all the experiments in this work (see table 9). In the following paragraphs we conduct an ablation study to assess RoCGAN in different cases, i.e. effect of hyper-parameters, loss terms, additional noise. Unless mentioned otherwise, the architecture used is the'4layer' network. The experiments are in face denoising with the similarity metric (SSIM) and the setup similar to the main paper comparisons. Table 9: Details of the discriminator. The discriminator structure remains the same throughout all the experiments in this work. Our model introduces three new loss terms, i.e. L lat, L AE and L decov (in the case with skip) with respect to the baseline cGAN. Understandably, those introduce three new hyper-parameters, which need to be validated. The validation and selection of the hyper-parameters was done in a withheld set of images. In the following paragraphs, we design an experiment where we scrutinize one hyperparameter every time, while we keep the rest in their selected value. During our experimentation, we observed that the optimal values of these three hyper-parameters might differ per case/network, however in this manuscript the hyper-parameters remain the same throughout our experimentation.(a) (b) FIG0: The layer schematics of the generators in case of (a) the'4layer-skip' case, (b) the'5layer' case. The search space for each term is decided from its theoretical properties and our intuition. For instance, the λ ae would have a value similar to the λ c 7. In a similar manner, the latent loss encourages the two streams' latent representations to be similar, however the final evaluation is performed in the pixel space, hence we assume that a value smaller than λ c is appropriate. In table 10, we assess different values for the λ l. The demonstrate that values larger than 10 the are similar, which dictates that our model resilient to the precise selection of the latent loss hyper-parameter. Even though the best are obtained with λ ae = 250, we select λ ae = 100 for our experiments. The difference for the two choices is marginal, thus we choose the value 100 since resonates with our intuition (λ ae = λ c).The third term of λ decov is scrutinized in the table 12. 
In our experimentation, the λ decov has a different effect per experiment; based on the of our validation we choose λ decov = 1 for our experiments. In , the additional hyper-parameters introduced by our model can accept a range of values without affecting significantly the . Table 12: Validation of λ decov values (hyper-parameter choices) in the'4layer-skip' network. The network is more sensitive to the value of the λ decov than the λ l and λ ae. To study further the significance of the four loss terms, we experiment with setting λ * = 0 alternatingly. Apart from the'4layer' network, we implement the'4layer-skip' to assess the λ decov = 0 case. The'4layer-skip' includes the same layers as the'4layer', however it includes a lateral connection from the encoder to the decoder. The experimental in table 13 confirm our prior intuition that the latent loss (L lat) is the most crucial for our model in the no-skip case, but not as significant in the skip case. In the skip case, the reconstruction losses in both pathways are significant. Table 13: Quantitative (SSIM) for setting λ * = 0 alternatingly (sec. E.2). In each column, we set the respective hyper-parameter to zero while keeping the rest fixed. DISPLAYFORM0 To evaluate whether RoCGAN/cGAN are resilient to noise, we experiment with additional noise. We include a baseline cGAN to the comparison to study whether their performance changes similarly. Both networks are trained with the 25% noise (denosing task).We evaluate the performance in two cases: i) additional noise of the same type, ii) additional noise of different type. For this experiment, we abbreviate noise as x/y where x depicts the amount of noise in denoising task (i.e. x% of the pixels in each channel are dropped with a uniform probability) and y the sparse inpainting task (i.e. y% black pixels). In both cases, we evaluate the performance by incrementally increasing the amount of noise. Specifically, both networks are tested in 25/0, 35/0, 50/0 for noise of the same type and 25/10, 25/20 and 25/25 for different type of noise. We note the networks have not been trained on any of the testing noises other than the 25/0 case. To illustrate the difference of performance between the two models, we accumulate the SSIM values of each case and divide them in 20 bins 8. In FIG0, the histograms of each case are plotted. We note that RoCGAN is much more resilient to increased or even unseen noise. Qualitative of the FIG0: Qualitative figure illustrating the different noise levels (sec. E.3). The first row depicts different target samples, while every three-row block, depicts the corrupted image, the baseline output and our output. The blocks top down correspond to the 25%, 35%, 50% noise (25/0, 35/0 and 50/0). The images in the first blocks are closer to the respective target images; as we increase the noise the baseline deteriorate faster than RoCGAN outputs. The readers can zoom-in to further notice the difference in the quality of the outputs. difference are offered in FIG0. We consider that this improvement in the robustness in the face of additional noise is in its own a considerable improvement to the original cGAN. Apart from testing in the face of additional noise, we explore the adversarial attacks. Recent works BID6; ) explore the robustness of (deep) classifiers. Adversarial attacks modify the input image (of the network) so that the network misclassifies the image. 
To the authors' knowledge, there has not been much investigation of adversarial attacks in the context of image-to-image translation or any other regression task. However, if we consider adversarial examples as a perturbation of the original input, we can explore whether such perturbations have any effect on the methods. FIG0: The first row depicts different target samples, while every three-row block depicts the corrupted image, the baseline output and our output. The blocks top down correspond to the 25/10, 25/20, 25/25 cases (different type of testing noise). The last block contains the most challenging noise in this work, i.e. both increased noise and of a different type than the training noise. Nevertheless, our model generates a more realistic image in comparison to the baseline. Neither cGAN nor RoCGAN are designed to be robust to adversarial examples, however in this section we explore how adversarial examples can affect them. We consider the FGSM method of Goodfellow et al. as one of the first and simplest methods for generating adversarial examples. In our case, we modify each source signal s as s_adv = s + η, where η is the perturbation. That is defined as η = ε · sign(∇_s L(s, y)), with ε a hyper-parameter, y the target signal and L the loss. In our case, we select the ℓ1 loss as the loss between the target and the generated images. We set ε = 0.01 following the original paper. The evaluation is added in
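A minimal sketch of the FGSM perturbation in this regression setting, assuming a differentiable generator G and the ℓ1 loss between G(s) and the target y stated above; the function interface is illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_source(G, s, y, eps=0.01):
    """One FGSM step on the source signal s against an ell-1 regression loss."""
    s = s.clone().detach().requires_grad_(True)
    loss = F.l1_loss(G(s), y)            # L(s, y) = ||y - G(s)||_1
    loss.backward()
    eta = eps * s.grad.sign()            # eta = eps * sign(grad_s L)
    return (s + eta).detach()            # adversarial source s_adv = s + eta
```

The perturbed s_adv is then fed through both the baseline cGAN and the RoCGAN generators to compare how each regression degrades.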
We introduce a new type of conditional GAN, which aims to leverage structure in the target space of the generator. We augment the generator with a new, unsupervised pathway to learn the target structure.
573
scitldr
Though deep neural networks have achieved the state of the art performance in visual classification, recent studies have shown that they are all vulnerable to the attack of adversarial examples. To solve the problem, some regularization adversarial training methods, constraining the output label or logit, have been studied. In this paper, we propose a novel regularized adversarial training framework ATLPA, namely Adversarial Tolerant Logit Pairing with Attention. Instead of constraining a hard distribution (e.g., one-hot vectors or logits) in adversarial training, ATLPA uses a Tolerant Logit, which consists of the confidence distribution on the top-k classes and captures inter-class similarities at the image level. Specifically, in addition to minimizing the empirical loss, ATLPA encourages the attention maps for pairs of examples to be similar. When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training. We evaluate ATLPA against the state of the art algorithms, and the experiments show that our method outperforms these baselines with higher accuracy. Compared with previous work, our work is evaluated under a highly challenging PGD attack: the maximum perturbation $\epsilon$ is 64 and 128 with 10 to 200 attack iterations. In recent years, deep neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass the human performance. The success of deep neural networks has led to an explosion in demand. Recent studies have shown that they are all vulnerable to the attack of adversarial examples. Small and often imperceptible perturbations to the input images are sufficient to fool the most powerful deep neural networks. In order to solve this problem, many defence methods have been proposed, among which adversarial training is considered to be the most effective one. Adversarial training (Tramèr et al., 2017, among others) defends against adversarial perturbations by training networks on adversarial images that are generated on-the-fly during training. Although the aforementioned methods demonstrated the power of adversarial training in defence, we argue that we need to perform research on at least the following two aspects in order to further improve current defence methods. Strictness vs. Tolerance. Most existing defence methods only fit the outputs of adversarial examples to the one-hot vectors of their clean-example counterparts. Prior work also fits the confidence distribution over all logits of the clean-example counterparts, which is called logit pairing. Despite its effectiveness, this is not necessarily the optimal target to fit, because, besides maximizing the confidence score of the primary class (i.e., the ground-truth), allowing some secondary classes (i.e., those visually similar to the ground-truth) to be preserved may help to alleviate the risk of over-fitting. We fit a Tolerant Logit, which consists of the confidence distribution on the top-k classes and captures inter-class similarities at the image level. We believe that attention should be limited to the top-k classes of the confidence score, rather than strictly fitting the confidence distribution of all classes. A More Tolerant Teacher Educates Better Students. Process vs. Result. In Fig. 1, we visualize the spatial attention map of a flower and its corresponding adversarial image on ResNet-101 pretrained on ImageNet.
The figure suggests that adversarial perturbations, while small in the pixel space, lead to very substantial noise in the attention map of the network. Whereas the features for the clean image appear to focus primarily on semantically informative content in the image, the attention map for the adversarial image are activated across semantically irrelevant regions as well. The state of the art adversarial training methods only encourage hard distribution of deep neural networks output (e.g., one-hot vectors (; Tramèr et al., 2017) or logit ) for pairs of clean examples and adversarial counterparts to be similar. In our opinion, it is not enough to align the difference between the clean examples and adversarial counterparts only at the output layer of the network, and we need to align the attention maps of middle layers of the whole network, e.g.,o uter layer outputs of conv2.x, conv3.x, conv4.x, conv5.x in ResNet-101. We can't just focus on the , but also on the process. . (a) is original image and (b) is corresponding adversarial image. For ResNet-101, which we use exclusively in this paper, we grouped filters into stages as described in . These stages are conv2.x, conv3.x, conv4.x, conv5.x. The contributions of this paper are the following: • We propose a novel regularized adversarial training framework ATLPA: a method that uses Tolerant Logit and encourages attention map for pairs of examples to be similar. When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training. Instead of constraining a hard distribution in adversarial training, Tolerant Logit consists of confidence distribution on top-k classes and captures inter-class similarities at the image level. • We explain the reason why our ATLPA can improve the robustness of the model from three dimensions: average activations on discriminate parts, the diversity among learned features of different classes and trends of loss landscapes. • We show that our ATLPA achieves the state of the art defense on a wide range of datasets against strong PGD gray-box and black-box attacks. Compared with previous work, our work is evaluated under highly challenging PGD attack: the maximum perturbation ∈ {0.25, 0.5} i.e. L ∞ ∈ {0.25, 0.5} with 10 to 200 attack iterations. To our knowledge, such a strong attack has not been previously explored on a wide range of datasets. The rest of the paper is organized as follows: in Section 2 related works are summarized, in Section 3 definitions and threat models are introduced, in Section 4 our ATLPA is introduced, in Section 5 experimental are presented and discussed, and finally in Section 6 the paper is concluded. 2 RELATED WORK evaluate the robustness of nine papers (; ; ; ; ; ; ; ;) accepted to ICLR 2018 as non-certified white-box-secure defenses to adversarial examples. They find that seven of the nine defenses use obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. Obfuscated gradients provide a limited increase in robustness and can be broken by improved attack techniques they develop. The only defense they observe that significantly increases robustness to adversarial examples within the threat model proposed is adversarial training . Adversarial training (; ; ; Tramèr et al., 2017;) defends against adversarial perturbations by training networks on adversarial images that are generated on-the-fly during training. 
For adversarial training, the most relevant work to our study is Adversarial Logit Pairing (ALP), a technique that encourages the logits for pairs of examples to be similar. Subsequent work has also put forward different opinions on the robustness of ALP. Our ATLPA encourages the attention maps for pairs of examples to be similar. When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training. Other defences add random noise at training and inference time, or add denoising blocks to the model to increase adversarial robustness; neither of these approaches focuses on the attention map. Following prior work, we propose the Tolerant Logit, which consists of the confidence distribution on the top-k classes and captures inter-class similarities at the image level. In terms of methodology, our work is also related to deep transfer learning and knowledge distillation; the most relevant works to our study constrain the L2-norm of the difference between network behaviors (i.e., the feature maps of outer layer outputs in the source/target networks). Our ATLPA constrains the attention maps for pairs of clean examples and their adversarial counterparts to be similar. In this paper, we always assume the attacker is capable of forming untargeted attacks that consist of perturbations of limited L∞-norm. This is a simplified task chosen because it is more amenable to benchmark evaluations. We consider two different threat models characterizing the amount of information the adversary can have: • Gray-box Attack We focus on defense against gray-box attacks in this paper. In a gray-box attack, the attacker knows both the original network and the defense algorithm. Only the parameters of the defense model are hidden from the attacker. This is also a standard setting assumed in many security systems and applications. • Black-box Attack The attacker has no information about the model's architecture or parameters, and no ability to send queries to the model to gather more information. 4.1 ARCHITECTURE Fig. 2 represents the architecture of ATLPA. Figure 2: Schematic representation of ATLPA: a baseline model is adversarially trained so as not only to make the output labels similar, but also to have similar Tolerant Logits and spatial attention maps to those of the original images and adversarial images. We use adversarial training with Projected Gradient Descent (PGD) as the underlying basis for our methods: min_θ E_{(x,y)∼p_data} [ max_{‖δ‖_∞ ≤ ε} L(θ, x + δ, y) ], where p_data is the underlying training data distribution, L(θ, x + δ, y) is a loss function at data point x which has true class y for a model with parameters θ, and the maximization with respect to δ is approximated using PGD. In this paper, the loss is defined as L = L_CE + α·L_TL + β·L_AT, where L_CE is cross entropy, and α and β are hyper-parameters which balance the Tolerant Logit Loss L_TL and the Attention Map Loss L_AT. When β = 0, we call it ATLPA (w/o ATT), i.e., ATLPA without attention. Instead of computing an extra loss over all classes as in ALP, we pick a few classes which have been assigned the highest confidence scores, and assume that these classes are more likely to be semantically similar to the input image. We use the top-k classes of the confidence distribution, which capture inter-class similarities at the image level.
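A sketch of the PGD inner maximization used to approximate the max over δ in the objective above; the L∞ ball with a uniform random start is implied by the threat model stated earlier, while the default radius, step size and iteration count are placeholders taken from the evaluation settings described later in the text.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.25, step_size=1.0 / 256, n_steps=10):
    """Approximate argmax over ||delta||_inf <= eps of the cross-entropy loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps)      # random start inside the ball
    for _ in range(n_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
    return x + delta                                      # adversarial counterpart of x
```

During training, both x and the returned x + delta are passed through the network so that the Tolerant Logit and Attention Map terms defined next can compare the two forward passes.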
The logit of model is Z(x),f a k is short for the k-th largest element of Z(x). Then we can define the following loss: where w k is non-negative weight, used to adjust the influence of the k-th largest element of Z(x). In the experiments we use K = 5. We use Attention Map Loss to encourage the attention map from clean examples and their adversarial counterparts to be similar to each other. Let also I denote the indices of all activation layer pairs for which we want to pay attention. Then we can define the following total loss: are respectively the j-th pair of clean examples and their adversarial counterparts attention maps in vectorized form, and p refers to norm type (in the experiments we use p = 2). F sums absolute values of attention maps raised to the power of p. To evaluate the effectiveness of our defense strategy, we performed a series of image-classification experiments on 17 Flower Category Database and BMW-10 Database. ), we assume an adversary that uses the state of the art PGD adversarial attack method. We consider untargeted attacks when evaluating under the gray and black-box settings; untargeted attacks are also used in our adversarial training. We evaluate top-1 accuracy on validation images that are adversarially perturbed by the attacker. In this paper, adversarial perturbation is considered under L ∞ norm (i.e., maximum perturbation for each pixel), with an allowed maximum value of. The value of is relative to the pixel intensity scale of 256, we use = 64/256 = 0.25 and = 128/256 = 0.5. PGD attacker with 10 to 200 attack iterations and step size α = 1.0/256 = 0.0039. Our baselines are ResNet-101/152. There are four groups of convolutional structures in the baseline model, which are described as conv2 x, conv3 x,conv4 x and conv5 x in We performed a series of image-classification experiments on a wide range of datasets. Compared with data sets with very small image size e.g., MNIST is 28 * 28,CIFAR-10 is 32 * 32, the image size of our data sets is closer to the actual situation. All the images are resized to 256 * 256 and normalized to zero mean for each channel, following with data augmentation operations of random mirror and random crop to 224 * 224. • 17 Flower Category Database contains images of flowers belonging to 17 different categories. The images were acquired by searching the web and taking pictures. There are 80 images for each category. We use only classification labels during training. While part location annotations are used in a quantitative evaluation of show cases, to explain the effect of our algorithm. • BMW-10 dataset contains 512 images of 10 BMW sedans. The data is split into 360 training images and 152 testing images, where each class has been split roughly in a 70-30 split. To perform image classification, we use ResNet-101/152 that were trained on our data sets. We consider two different attack settings: a gray-box attack setting in which the model used to generate the adversarial images is the same as the image-classification model, viz. the ResNet-101; and a black-box attack setting in which the adversarial images are generated using the ResNet-152 model; The backend prediction model of gray-box and black-box is ResNet-101 with different implementations of the state of the art defense methods,such as IGR , PAT , RAT , Randomization , ALP, and FD. 
All the defence methods are trained under the same adversarial training parameters: batch size is 16, iteration number is 6000, learning rate is 0.01, the ratio of original images and adversarial images is 1:1, under a 2-iteration PGD attack with step size 0.125. Ensemble learning among different algorithms and models (Tramèr et al., 2017) is a good idea, but here we only consider the use of one single algorithm and one single model. The hyper-parameter settings of the above algorithms use the default values provided in their papers. We will open source our code implementation if this paper is accepted. Here, we first present results with ATLPA on the 17 Flower Category Database. Compared with previous work, which was evaluated under a 10-iteration PGD attack with ε = 0.0625, our work is evaluated under highly challenging PGD attacks: the maximum perturbation ε ∈ {0.25, 0.5}, i.e. L∞ ∈ {0.25, 0.5}, with 10 to 200 attack iterations. The bigger the value of ε, the bigger the disturbance, and the more significant the adversarial image effect is. To our knowledge, such a strong attack has not been previously explored on a wide range of datasets. As shown in Fig. 3, our ATLPA outperforms the state of the art in adversarial robustness against highly challenging gray-box and black-box PGD attacks. Table 1 shows the main results of our work: under strong 200-iteration PGD gray-box and black-box attacks, our ATLPA outperforms the state of the art in adversarial robustness on all these databases. For example, under strong 200-iteration PGD gray-box and black-box attacks on the BMW-10 Database, where prior art has 35% and 36% accuracy, our method achieves 61% and 62%. The maximum perturbation is ε ∈ {0.25, 0.5}. Our ATLPA (purple line) outperforms the state of the art in adversarial robustness against highly challenging gray-box and black-box PGD attacks. Even our ATLPA (w/o ATT), shown as the red line, does well. ATLPA (w/o ATT): ATLPA without Attention. We visualized activation attention maps for defense against PGD attacks. The baseline model is ResNet-101, which is pre-trained on ImageNet and fine-tuned on the 17 Flower Category Database. We found from the appendix (Fig. 5) that ATLPA has a higher level of activation on the whole flower, compared with other defence methods. To further understand the effect, we compared average activations on discriminative parts of the 17 Flower Category Database for different defense methods. The 17 Flower Category Database defines discriminative parts of flowers. So for each image, we obtained several key regions which are very important for discriminating its category. Using all testing examples of the 17 Flower Category Database, we calculated normalized activations on these key regions for the different defense methods. As shown in Table 2, ATLPA got the highest average activations on those key regions, demonstrating that ATLPA focuses on more discriminative features for flower recognition. In addition, the score of ATLPA is larger than that of ATLPA (w/o ATT), so it can be seen that the main factor is our Attention. Previous work has shown that for a single network, promoting the diversity among learned features of different classes can improve adversarial robustness. As shown in the appendix (Fig. 7), the ATLPA and ATLPA (w/o ATT) training procedure conceals normal examples on low-dimensional manifolds in the final-layer hidden space.
Then the detector allowable regions can also be set low-dimensional as long as the regions contain all normal examples. Therefore the white-box adversaries who intend to fool our detector have to generate adversarial examples with more precise calculations and larger noises. To further understand the effect, we compute the silhouette score of the final hidden features of the different defenses after t-SNE. The range of the silhouette score is [−1, 1]. The closer the samples of the same category are, and the farther the samples of different categories are, the higher the score is. We compute the silhouette score to quantify the quality of diversity among learned features of different classes. As shown in Table 3, ATLPA got the highest silhouette score, demonstrating that ATLPA promotes the diversity among learned features of different classes. In addition, the scores of ATLPA and ATLPA (w/o ATT) are very close, so it can be seen that the main factor is our Tolerant Logit. We generate loss plots by varying the input to the models, starting from an original input image chosen from the testing set of the 17 Flower Category Database. The z axis represents the loss. If x is the original input, then we plot the loss varying along the space determined by two vectors: r1 = sign(∇_x f(x)) and r2 ∼ Rademacher(0.5). We thus plot the following function: z = loss(x · r1 + y · r2). As shown in Fig. 4, the inputs vary over the same range, and the loss landscape of our ATLPA varies over the smallest range, indicating that our ATLPA has better robustness. In this paper, we propose a novel regularized adversarial training framework ATLPA, a method that uses the Tolerant Logit, which consists of the confidence distribution on the top-k classes and captures inter-class similarities at the image level, and encourages the attention maps for pairs of examples to be similar. We show that our ATLPA achieves the state of the art defense on a wide range of datasets against strong PGD gray-box and black-box attacks. We explain the reason why our ATLPA can improve the robustness of the model from three dimensions: average activations on discriminative parts, the diversity among learned features of different classes, and trends of loss landscapes. The results of visualization and quantitative calculation show that our method is helpful to improve the robustness of the model.
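To make the ATLPA objective from Section 4 concrete, a minimal sketch of the per-batch loss is given below. The attention map follows the description in the text (sum of absolute values of a stage's activations raised to the power p, then vectorized and normalized); the exact form of the Tolerant Logit term is not recoverable from the extracted text, so the version here, which matches adversarial confidences on the clean example's top-K classes weighted by w_k, is an assumption, as are alpha, beta, the stage names, and the model interface that exposes intermediate features.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Spatial attention of a feature map (N, C, H, W): sum_c |A_c|^p, flattened and l2-normalized."""
    att = feat.abs().pow(p).sum(dim=1).flatten(1)          # (N, H*W)
    return F.normalize(att, dim=1)

def tolerant_logit_loss(logits_clean, logits_adv, k=5):
    """One plausible instantiation (assumption): match adversarial confidences on the
    clean example's top-k classes, weighted by the clean confidences w_k."""
    probs_clean = logits_clean.softmax(dim=1)
    w, idx = probs_clean.topk(k, dim=1)                    # weights w_k and top-k class indices
    log_p_adv = logits_adv.log_softmax(dim=1).gather(1, idx)
    return -(w * log_p_adv).sum(dim=1).mean()

def atlpa_loss(model, x_clean, x_adv, y, alpha=1.0, beta=1.0,
               stages=("conv2_x", "conv3_x", "conv4_x", "conv5_x")):
    """L = CE on adversarial examples + alpha * L_TL + beta * L_AT.
    `model` is assumed to return (logits, {stage_name: feature_map})."""
    logits_c, feats_c = model(x_clean)
    logits_a, feats_a = model(x_adv)
    l_ce = F.cross_entropy(logits_a, y)
    l_tl = tolerant_logit_loss(logits_c.detach(), logits_a)
    l_at = sum(F.mse_loss(attention_map(feats_a[s]), attention_map(feats_c[s])) for s in stages)
    return l_ce + alpha * l_tl + beta * l_at
```

Setting beta to zero recovers the ATLPA (w/o ATT) variant discussed in the ablations above.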
In this paper, we propose a novel regularized adversarial training framework ATLPA, namely Adversarial Tolerant Logit Pairing with Attention.
574
scitldr
A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances. In this work, we address one such setting, which requires solving a task with a novel set of actions. Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks. Hence, we propose a framework to enable generalization over both these aspects: understanding an action's functionality, and using actions to solve tasks through reinforcement learning. Specifically, an agent interprets an action's behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action. We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy. We illustrate the generalizability of the representation learning method and the policy, enabling zero-shot generalization to previously unseen actions on challenging sequential decision-making environments. Our results and videos can be found at sites.google.com/view/action-generalization/
We address the problem of generalization of reinforcement learning to unseen action spaces.
575
scitldr
Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals. The standard way of learning in such models is by estimating the conditional intensity function. However, parameterizing the intensity function usually incurs several trade-offs. We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. We draw on the literature on normalizing flows to design models that are flexible and efficient. We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form. The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data. Visits to hospitals, purchases in e-commerce systems, financial transactions, posts in social media - various forms of human activity can be represented as discrete events happening at irregular intervals. The framework of temporal point processes is a natural choice for modeling such data. By combining temporal point process models with deep learning, we can design algorithms able to learn complex behavior from real-world data. Designing such models, however, usually involves trade-offs along the following dimensions: flexibility (can the model approximate any distribution?), efficiency (can the likelihood function be evaluated in closed form?), and ease of use (is sampling and computing summary statistics easy?). Existing methods that are defined in terms of the conditional intensity function typically fall short in at least one of these categories. Instead of modeling the intensity function, we suggest treating the problem of learning in temporal point processes as an instance of conditional density estimation. By using tools from neural density estimation, we can develop methods that have all of the above properties. To summarize, our contributions are the following: • We connect the fields of temporal point processes and neural density estimation. We show how normalizing flows can be used to define flexible and theoretically sound models for learning in temporal point processes. • We propose a simple mixture model that performs on par with the state-of-the-art methods. Thanks to its simplicity, the model permits closed-form sampling and moment computation. • We show through a wide range of experiments how the proposed models can be used for prediction, conditional generation, sequence embedding and training with missing data. An event sequence T = {t_1, ..., t_N} can equivalently be represented as a sequence of strictly positive inter-event times τ_i = t_i − t_{i−1} ∈ R_+. Representations in terms of t_i and τ_i are isomorphic - we will use them interchangeably throughout the paper. The traditional way of specifying the dependency of the next arrival time t on the history H_t = {t_j ∈ T : t_j < t} is using the conditional intensity function λ*(t) := λ(t | H_t). Here, the * symbol reminds us of dependence on H_t. Given the conditional intensity function, we can obtain the conditional probability density function (PDF) of the time τ_i until the next event by integration as p*(τ_i) := p(τ_i | H_{t_i}) = λ*(t_{i−1} + τ_i) exp(−∫_0^{τ_i} λ*(t_{i−1} + s) ds). Learning temporal point processes. Conditional intensity functions provide a convenient way to specify point processes with a simple predefined behavior, such as self-exciting and self-correcting processes.
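As a small numerical illustration of this relation between the intensity and the density (not part of the original paper's code), the following sketch integrates a given intensity on a grid and recovers the exponential density for a constant intensity:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def density_from_intensity(intensity, taus, n_grid=1000):
    """Compute p*(tau) = lambda*(tau) * exp(-int_0^tau lambda*(s) ds) numerically."""
    p = []
    for tau in taus:
        s = np.linspace(0.0, tau, n_grid)
        cum = trapezoid(intensity(s), s)           # integrated intensity up to tau
        p.append(intensity(np.array([tau]))[0] * np.exp(-cum))
    return np.array(p)

taus = np.linspace(0.01, 5.0, 50)
# Constant intensity lambda*(t) = 1 recovers the unit-rate exponential density.
p = density_from_intensity(lambda s: np.ones_like(s), taus)
assert np.allclose(p, np.exp(-taus), atol=1e-3)
```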
Intensity parametrization is also commonly used when learning a model from the data: given a parametric intensity function λ*_θ(t) and a sequence of observations T, the parameters θ can be estimated by maximizing the log-likelihood: θ* = arg max_θ Σ_i log p*_θ(τ_i) = arg max_θ (Σ_i log λ*_θ(t_i) − ∫_0^{t_N} λ*_θ(s) ds). The main challenge of such intensity-based approaches lies in choosing a good parametric form for λ*_θ(t). We develop several approaches for modeling the distribution of inter-event times. First, we assume for simplicity that each inter-event time τ_i is conditionally independent of the history, given the model parameters (that is, p*(τ_i) = p(τ_i)). In Section 3.1, we show how state-of-the-art neural density estimation methods based on normalizing flows can be used to model p(τ_i). Then in Section 3.2, we propose a simple mixture model that can match the performance of the more sophisticated flow-based models, while also addressing some of their shortcomings. Finally, we discuss how to make p(τ_i) depend on the history H_{t_i} in Section 3.3. The core idea of normalizing flows is to define a flexible probability distribution by transforming a simple one. Assume that z has a PDF q(z). Let x = g(z) for some differentiable invertible transformation g: Z → X (where Z, X ⊆ R; all definitions can be extended to R^D for D > 1, but we consider the one-dimensional case since our goal is to model the distribution of inter-event times τ ∈ R_+). We can obtain the PDF p(x) of x using the change of variables formula as p(x) = q(g^{-1}(x)) |∂/∂x g^{-1}(x)|. By stacking multiple transformations g_1, ..., g_M, we obtain an expressive probability distribution p(x). To draw a sample x ∼ p(x), we need to draw z ∼ q(z) and compute the forward transformation x = (g_M ∘ · · · ∘ g_1)(z). To get the density of an arbitrary point x, it is necessary to evaluate the inverse transformation z = (g_1^{-1} ∘ · · · ∘ g_M^{-1})(x) and compute q(z) together with the Jacobian terms. Modern normalizing flow architectures parametrize the transformations using extremely flexible functions f_θ, such as polynomials or neural networks. The flexibility of these functions comes at a cost - while the inverse f_θ^{-1} exists, it typically doesn't have a closed form. That is, if we use such a function to define one direction of the transformation in a flow model, the other direction can only be approximated numerically using iterative root-finding methods. In this work, we don't consider invertible normalizing flows based on dimension splitting, such as RealNVP, since they are not applicable to 1D data. In the context of TPPs, our goal is to model the distribution p(τ) of inter-event times. In order to be able to learn the parameters of p(τ) using maximum likelihood, we need to be able to evaluate the density at any point τ. For this we define the inverse transformation g^{-1} := (g_1^{-1} ∘ · · · ∘ g_M^{-1}). First, we apply g_M^{-1}(τ) = log τ to convert a positive τ ∈ R_+ into z_M ∈ R. Then, we stack multiple layers of parametric functions f_θ: R → R that can approximate any transformation. We consider two choices for f_θ: the deep sigmoidal flow (DSF) from Huang et al. and the sum-of-squares (SOS) polynomial flow from Jaini et al., where a, w, s, µ are the transformation parameters, K is the number of components, R is the polynomial degree, and σ(x) = 1/(1 + e^{−x}). We denote the two variants of the model based on the f^{DSF} and f^{SOS} building blocks as DSFlow and SOSFlow respectively. Finally, after stacking multiple g_m^{-1} = f_{θ_m}, we apply a sigmoid transformation g_1^{-1} = σ to convert z_2 into z_1 ∈ (0, 1). For both models, we can evaluate the inverse transformations (g_1^{-1} ∘ · · · ∘ g_M^{-1}) and their Jacobians in closed form, which means the model can be efficiently trained via maximum likelihood.
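The change-of-variables computation described above can be sketched as follows; the helper below is only an illustration and uses autograd for the Jacobian term rather than the closed-form expressions used by the actual models:

```python
import torch

def flow_log_prob(tau, inverse_transforms, base_log_prob):
    """Evaluate log p(tau) with the change-of-variables formula:
    z = (g_1^{-1} o ... o g_M^{-1})(tau), log p(tau) = log q(z) + log|dz/dtau|.
    The Jacobian of the elementwise transformation is obtained via autograd."""
    t = tau.clone().requires_grad_(True)
    z = t
    for g_inv in reversed(inverse_transforms):    # apply g_M^{-1} first
        z = g_inv(z)
    jac = torch.autograd.grad(z.sum(), t, create_graph=True)[0]
    return base_log_prob(z) + torch.log(torch.abs(jac))

# A single log-transform layer with a standard normal base distribution
# recovers the log-normal density, which serves as a sanity check.
base = torch.distributions.Normal(0.0, 1.0)
tau = torch.tensor([0.5, 1.0, 2.0])
log_p = flow_log_prob(tau, [torch.log], base.log_prob)
reference = torch.distributions.LogNormal(0.0, 1.0).log_prob(tau)
assert torch.allclose(log_p, reference, atol=1e-5)
```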
The density p(τ) defined by either the DSFlow or the SOSFlow model is extremely flexible and can approximate any distribution (Section 3.4). However, for some use cases, this is not sufficient. For example, we may be interested in the expected time until the next event, E_p[τ]. In this case, flow-based models are not optimal, since for them E_p[τ] does not in general have a closed form. Moreover, the forward transformation (g_M ∘ · · · ∘ g_1) cannot be computed in closed form since the functions f^{DSF} and f^{SOS} cannot be inverted analytically. Therefore, sampling from p(τ) is also problematic and requires iterative root finding. This raises the question: can we design a model for p(τ) that is as expressive as the flow-based models, but in which sampling and computing moments is easy and can be done in closed form? Model definition. While mixture models are commonly used for clustering, they can also be used for density estimation. Mixtures work especially well in low dimensions, which is the case in TPPs, where we model the distribution of one-dimensional inter-event times τ. Since the inter-event times τ are positive, we choose to use a mixture of log-normal distributions to model p(τ). The PDF of a log-normal mixture is defined as p(τ | w, µ, s) = Σ_{k=1}^{K} w_k (1 / (τ s_k √(2π))) exp(−(log τ − µ_k)² / (2 s_k²)), where w are the mixture weights, µ are the mixture means, and s are the standard deviations. (Figure 1: Model architecture. Parameters of p*(τ_i | θ_i) are generated based on the conditional information c_i.) Because of its simplicity, the log-normal mixture model has a number of attractive properties. Moments. Since each component k has a finite mean, the mean of the entire distribution can be computed as E_p[τ] = Σ_k w_k exp(µ_k + s_k²/2), a weighted average of the component means. Higher moments can be computed based on the moments of each component (Frühwirth-Schnatter). Sampling. While flow-based models from Section 3.1 require iterative root-finding algorithms to generate samples, sampling from a mixture model can be done in closed form: z ∼ Categorical(w), ε ∼ N(0, 1), τ = exp(s^T z · ε + µ^T z), where z is a one-hot vector of size K. In some applications, such as reinforcement learning, we might be interested in computing gradients of the samples w.r.t. the model parameters. The samples τ drawn using the procedure above are differentiable with respect to the means µ and scales s. By using the Gumbel-softmax trick when sampling z, we can obtain gradients w.r.t. all the model parameters (Appendix D.6). Such reparametrization gradients have lower variance and are easier to implement than the score function estimators typically used in other works. Other flexible models (such as multi-layer flow models from Section 3.1) do not permit sampling through reparametrization, and thus are not well-suited for the above-mentioned scenario. In Section 5.4, we show how reparametrization sampling can also be used to train with missing data by performing imputation on the fly. History. A crucial feature of temporal point processes is that the time τ_i = (t_i − t_{i−1}) until the next event may be influenced by all the events that happened before. A standard way of capturing this dependency is to process the event history H_{t_i} with a recurrent neural network (RNN) and embed it into a fixed-dimensional vector h_i ∈ R^H.
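A minimal sketch of the log-normal mixture density, its closed-form mean, and exact sampling described above is given below; the parameter values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.3, 0.7])     # mixture weights (sum to 1)
mu = np.array([-1.0, 0.5])   # component means (in log-space)
s = np.array([0.4, 0.8])     # component standard deviations (in log-space)

def lognorm_mixture_pdf(tau):
    tau = tau[:, None]
    comp = (1.0 / (tau * s * np.sqrt(2 * np.pi))
            * np.exp(-(np.log(tau) - mu) ** 2 / (2 * s ** 2)))
    return (w * comp).sum(axis=1)

# Closed-form mean: weighted average of the log-normal component means.
mean = (w * np.exp(mu + s ** 2 / 2)).sum()

# Exact sampling: pick a component, then sample from it.
def sample(n):
    z = rng.choice(len(w), size=n, p=w)   # component indices
    eps = rng.standard_normal(n)
    return np.exp(mu[z] + s[z] * eps)

samples = sample(100000)
assert abs(samples.mean() - mean) < 0.05   # sample mean matches the closed form
```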
Conditioning on additional features. The distribution of the time until the next event might depend on factors other than the history. For instance, the distribution of arrival times of customers in a restaurant depends on the day of the week. As another example, if we are modeling user behavior in an online system, we can obtain a different distribution p*(τ) for each user by conditioning on their metadata. We denote such side information as a vector y_i. Such information is different from marks, since (a) the metadata may be shared for the entire sequence and (b) y_i only influences the distribution p*(τ_i | y_i), not the objective function. In some scenarios, we might be interested in learning from multiple event sequences. In such a case, we can assign each sequence T_j a learnable sequence embedding vector e_j. By optimizing e_j, the model can learn to distinguish between sequences that come from different distributions. The learned embeddings can then be used for visualization, clustering or other downstream tasks. Obtaining the parameters. We model the conditional dependence of the distribution p*(τ_i) on all of the above factors in the following way. The history embedding h_i, metadata y_i and sequence embedding e_j are concatenated into a context vector c_i = [h_i || y_i || e_j]. Then, we obtain the parameters of the distribution p*(τ_i) as an affine function of c_i. For example, for the mixture model we have w_i = softmax(V_w c_i + b_w), s_i = exp(V_s c_i + b_s), µ_i = V_µ c_i + b_µ, where the softmax and exp transformations are applied to enforce the constraints on the distribution parameters, and {V_w, V_s, V_µ, b_w, b_s, b_µ} are learnable parameters. Such a model resembles the mixture density network architecture. The whole process is illustrated in Figure 1. We obtain the parameters of the flow-based models in a similar way (see Appendix D). Universal approximation. The SOSFlow and DSFlow models can approximate any probability density on R arbitrarily well (Theorems 3 and 4 of the respective papers). It turns out that a mixture model has the same universal approximation (UA) property. Theorem 1 (Theorem 33.2 in the cited reference). Let p(x) be a continuous density on R. If q(x) is any density on R and is also continuous, then, given ε > 0 and a compact set S ⊂ R, there exist a number of components K ∈ N, mixture coefficients w ∈ ∆^{K−1}, locations µ ∈ R^K, and scales s ∈ R_+^K such that the mixture density q̂(x) = Σ_{k=1}^{K} (w_k / s_k) q((x − µ_k) / s_k) satisfies sup_{x∈S} |p(x) − q̂(x)| < ε. This shows that, in principle, the mixture distribution is as expressive as the flow-based models. Since we are modeling the conditional density, we additionally need to assume for all of the above models that the RNN can encode all the relevant information into the history embedding h_i. This can be accomplished by invoking the universal approximation theorems for RNNs (e.g., Schäfer & Zimmermann). Note that this result, like other UA theorems of this kind, does not provide any practical guarantees on the obtained approximation quality, and doesn't say how to learn the model parameters. Still, UA intuitively seems like a desirable property of a distribution. This intuition is supported by experimental results. In Section 5.1, we show that models with the UA property consistently outperform the less flexible ones. Interestingly, Theorem 1 does not make any assumptions about the form of the base density q(x). This means we could as well use a mixture of distributions other than log-normal. However, other popular distributions on R_+ have drawbacks: the log-logistic distribution does not always have well-defined moments, and the gamma distribution doesn't permit straightforward sampling with reparametrization.
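A possible PyTorch sketch of the parameter-generation step from the "Obtaining the parameters" paragraph above is shown below; layer sizes and class names are illustrative and not taken from the reference implementation:

```python
import torch
import torch.nn as nn

class MixtureParameterHead(nn.Module):
    """Map a context vector c_i to valid log-normal mixture parameters."""
    def __init__(self, context_dim, n_components):
        super().__init__()
        self.linear_w = nn.Linear(context_dim, n_components)
        self.linear_mu = nn.Linear(context_dim, n_components)
        self.linear_s = nn.Linear(context_dim, n_components)

    def forward(self, c):
        w = torch.softmax(self.linear_w(c), dim=-1)   # weights on the simplex
        mu = self.linear_mu(c)                        # unconstrained means
        s = torch.exp(self.linear_s(c))               # positive scales
        return w, mu, s

head = MixtureParameterHead(context_dim=64, n_components=8)
c = torch.randn(32, 64)   # e.g. [h_i || y_i || e_j] for a batch of events
w, mu, s = head(c)
```

The same affine-plus-constraint pattern can be reused for the flow-based decoders, with the constraints adapted to their parameters.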
Intensity function. For both flow-based and mixture models, the conditional cumulative distribution function (CDF) F*(τ) and the PDF p*(τ) are readily available. This means we can easily compute the respective intensity functions (see Appendix A). However, we should still ask whether we lose anything by modeling p*(τ) instead of λ*(t). The main arguments in favor of modeling the intensity function in traditional models (e.g. self-exciting processes) are that it's intuitive, easy to specify and reusable. "Intensity function is intuitive, while the conditional density is not." - While it's true that in simple models (e.g. in self-exciting or self-correcting processes) the dependence of λ*(t) on the history is intuitive and interpretable, modern RNN-based intensity functions cannot be easily understood by humans. In this sense, our proposed models are as intuitive and interpretable as other existing intensity-based neural network models. "λ*(t) is easy to specify, since it only has to be positive. On the other hand, p*(τ) must integrate to one." - As we saw, by using either normalizing flows or a mixture distribution, we automatically enforce that the PDF integrates to one, without sacrificing the flexibility of our model. "Reusability: If we merge two independent point processes with intensities λ*_1(t) and λ*_2(t), the merged process has intensity λ*(t) = λ*_1(t) + λ*_2(t)." - An equivalent result exists for the CDFs F*_1(τ) and F*_2(τ) of the two independent processes. The CDF of the merged process is obtained as F*(τ) = F*_1(τ) + F*_2(τ) − F*_1(τ) F*_2(τ) (derivation in Appendix A). As we just showed, modeling p*(τ) instead of λ*(t) does not impose any limitation on our approach. Moreover, a mixture distribution is flexible, easy to sample from and has well-defined moments, which favorably compares it to other intensity-based deep learning models. Neural temporal point processes. Fitting simple TPP models (e.g. self-exciting or self-correcting processes) to real-world data may lead to poor results because of model misspecification. Multiple recent works address this issue by proposing more flexible neural-network-based point process models. These neural models are usually defined in terms of the conditional intensity function. For example, the neural Hawkes process proposes a novel RNN architecture that can model sophisticated intensity functions. This flexibility comes at the cost of the inability to evaluate the likelihood in closed form, thus requiring Monte Carlo integration. Other works suggest using an RNN to encode the event history into a vector h_i. The history embedding h_i is then used to define the conditional intensity, for example, using the constant intensity model or the more flexible exponential intensity model. By considering the conditional distribution p*(τ) of the two models, we can better understand their properties. Constant intensity corresponds to an exponential distribution, and exponential intensity corresponds to a Gompertz distribution (see Appendix B). Clearly, these unimodal distributions cannot match the flexibility of a mixture model (as can be seen in Figure 8). A flexible fully neural network (FullyNN) intensity model has also been introduced, where the cumulative intensity function Λ*(τ) is modeled with a neural net. The function Λ* converts τ into an exponentially distributed random variable with unit rate, similarly to how normalizing flows model p*(τ) by converting τ into a random variable with a simple distribution. However, due to a suboptimal choice of the network architecture, the PDF of the FullyNN model does not integrate to 1, and the model assigns non-zero probability to negative inter-event times (see Appendix C).
In contrast, SOSFlow and DSFlow always define a valid PDF on R_+. Moreover, similar to other flow-based models, sampling from the FullyNN model requires iterative root finding. Several works used mixtures of kernels to parametrize the conditional intensity function. Such models can only capture self-exciting influence from past events. Moreover, these models do not permit computing expectations or drawing samples in closed form. Recently, Biloš et al. and Türkmen et al. proposed neural models for learning marked TPPs. These models focus on event type prediction and share the limitations of other neural intensity-based approaches. Other recent works consider alternatives to the maximum likelihood objective for training TPPs. Examples include noise-contrastive estimation, Wasserstein distance, and reinforcement learning. This line of research is orthogonal to our contribution, and the models proposed in our work can be combined with the above-mentioned training procedures. Neural density estimation. There exist two popular paradigms for learning flexible probability distributions using neural networks: in mixture density networks, a neural net directly produces the distribution parameters; in normalizing flows, we obtain a complex distribution by transforming a simple one. Both mixture models and normalizing flows have been applied for modeling sequential data. However, surprisingly, none of the existing works make the connection and consider these approaches in the context of TPPs. We evaluate the proposed models on the established task of event time prediction (with and without marks) in Sections 5.1 and 5.2. In the remaining experiments, we show how the log-normal mixture model can be used for incorporating extra conditional information, training with missing data and learning sequence embeddings. We use 6 real-world datasets containing event data from various domains: Wikipedia (article edits), MOOC (user interaction with an online course system), Reddit (posts in social media), Stack Overflow (badges received by users), LastFM (music playback), and Yelp (check-ins to restaurants). We also generate 5 synthetic datasets (Poisson, Renewal, Self-correcting, Hawkes1, Hawkes2), as described in Appendix E.1. Detailed descriptions and summary statistics of all the datasets are provided in Appendix E.
Finally, we report the NLL loss of each model on the test set. All results are averaged over 10 train/validation/test splits. Details about the implementation, training process and hyperparameter ranges are provided in Appendix D. For each real-world dataset, we report the difference between the NLL loss of each method and the LogNormMix model (Figure 3). We report the differences, since the scores of all models can be shifted arbitrarily by scaling the data. Absolute scores (not differences) in a tabular format, as well as results for the synthetic datasets, are provided in Appendix F.1. Results. Simple unimodal distributions (Gompertz/RMTPP, LogNormal) are always dominated by the more flexible models with the universal approximation property (LogNormMix, DSFlow, SOSFlow, FullyNN). Among the simple models, LogNormal provides a much better fit to the data than RMTPP/Gompertz. The distribution of inter-event times in real-world data often has heavy tails, and the Gompertz distribution fails to capture this behavior. We observe that the two proposed models, LogNormMix and DSFlow, consistently achieve the best loss values. Setup. We apply the models for learning in marked temporal point processes. Marks are known to improve the performance of simpler models; we want to establish whether our proposed models work well in this setting. We use the same setup as in the previous section, except for two differences. The RNN takes a tuple (τ_i, m_i) as input at each time step, where m_i is the mark. Moreover, the loss function now includes a term for predicting the next mark, −Σ_i log p*(m_i). Results. Figure 3 (right) shows the time NLL loss (i.e., −Σ_i log p*(τ_i)) for the Reddit and MOOC datasets. LogNormMix shows dominant performance in the marked case, just like in the previous experiment. Like before, we provide the results in tabular format, as well as the marks NLL loss, in Appendix F. Setup. We investigate whether the additional conditional information (Section 3.3) can improve the performance of the model. In the Yelp dataset, the task is to predict the time τ until the next check-in for a given restaurant. We postulate that the distribution p*(τ) is different depending on whether it's a weekday and whether it's an evening hour, and encode this information as a vector y_i. We consider 4 variants of the LogNormMix model that either use or don't use y_i and the history embedding h_i. Results. Figure 5 shows the test set loss for the 4 variants of the model. We see that additional conditional information boosts the performance of the LogNormMix model, regardless of whether the history embedding is used. In practical scenarios, one often has to deal with missing data. For example, we may know that records were not kept for a period of time, or that the data is unusable for some reason. Since TPPs are generative models, they provide a principled way to handle missing data through imputation. Setup. We are given several sequences generated by a Hawkes process, where some parts are known to be missing. We consider 3 strategies for learning from such a partially observed sequence: (a) ignore the gaps and maximize the log-likelihood of the observed inter-event times; (b) fill the gaps with the average τ estimated from the observed data and maximize the log-likelihood of the observed data; and (c) fill the gaps with samples generated by the model and maximize the expected log-likelihood of the observed points. The setup is demonstrated in Figure 4.
Note that in case (c) the expected value depends on the parameters of the distribution, hence we need to perform sampling with reparametrization to optimize such a loss. A more detailed description of the setup is given in Appendix F.4. Results. The 3 model variants are trained on the partially observed sequence. Figure 4 shows the NLL of the fully observed sequence (not seen by any model at training time) produced by each strategy. We see that strategies (a) and (b) overfit the partially observed sequence. In contrast, strategy (c) generalizes and learns the true underlying distribution. The ability of the LogNormMix model to draw samples with reparametrization was crucial to enable such a training procedure. Different sequences in the dataset might be generated by different processes, and exhibit different distributions of inter-event times. We can "help" the model distinguish between them by assigning a trainable embedding vector e_j to each sequence j in the dataset. It seems intuitive that embedding vectors learned this way should capture some notion of similarity between sequences. Learned sequence embeddings. We learn a sequence embedding for each of the sequences in the synthetic datasets (along with the other model parameters). We visualize the learned embeddings using t-SNE in Figure 7, colored by the true class. As we see, the model learns to differentiate between sequences from different distributions in a completely unsupervised way. Generation. We fit the LogNormMix model to two sequences (from self-correcting and renewal processes), and learn two embedding vectors e_SC and e_RN, respectively. After training, we generate 3 sequences from the model, using e_SC, ½(e_SC + e_RN) and e_RN as sequence embeddings. Additionally, we plot the learned conditional intensity function of our model for each generated sequence (Figure 6). The model learns to map the sequence embeddings to very different distributions. We use tools from neural density estimation to design new models for learning in TPPs. We show that a simple mixture model is competitive with state-of-the-art normalizing flow methods, and convincingly outperforms other existing approaches. By looking at learning in TPPs from a different perspective, we were able to address the shortcomings of existing intensity-based approaches, such as insufficient flexibility, lack of closed-form likelihoods and inability to generate samples analytically. We hope this alternative viewpoint will inspire new developments in the field of TPPs. CDF and conditional intensity function of proposed models. The cumulative distribution function (CDF) of a normalizing flow model can be obtained in the following way. If z has a CDF Q(z) and τ = g(z), then the CDF F(τ) of τ is obtained as F(τ) = Q(g^{-1}(τ)). Since for both SOSFlow and DSFlow we can evaluate g^{-1} in closed form, F(τ) is easy to compute. For the log-normal mixture model, the CDF is by definition equal to F*(τ) = Σ_{k=1}^{K} w_k Φ((log τ − µ_k) / s_k), where Φ(·) is the CDF of a standard normal distribution. Given the conditional PDF and CDF, we can compute the conditional intensity λ*(t) and the cumulative intensity Λ*(τ) for each model as λ*(t) = p*(t − t_{i−1}) / (1 − F*(t − t_{i−1})) and Λ*(τ) = −log(1 − F*(τ)), where t_{i−1} is the arrival time of the most recent event before t. Merging two independent processes. We consider what happens if we merge two independent TPPs with intensity functions λ*_1(t) and λ*_2(t) (and, respectively, cumulative intensity functions Λ*_1(τ) and Λ*_2(τ)). The intensity function of the new process is λ*(t) = λ*_1(t) + λ*_2(t).
Therefore, the cumulative intensity function of the new process is Λ*(τ) = Λ*_1(τ) + Λ*_2(τ). Using the previous results, we can obtain the CDF of the merged process as F*(τ) = 1 − exp(−Λ*(τ)) = 1 − (1 − F*_1(τ))(1 − F*_2(τ)) = F*_1(τ) + F*_2(τ) − F*_1(τ) F*_2(τ). The PDF of the merged process is obtained by simply differentiating the CDF w.r.t. τ. This means that by using either normalizing flows or mixture distributions, and thus directly modeling the PDF / CDF, we are not losing any benefits of the intensity parametrization. Constant intensity model as exponential distribution. The conditional intensity function of the constant intensity model is defined as λ*(t) = exp(v^T h_i + b), where h_i ∈ R^H is the history embedding produced by an RNN, and v ∈ R^H and b ∈ R are learnable parameters. By setting c = exp(v^T h_i + b), it's easy to see that the PDF of the constant intensity model, p*(τ) = c exp(−cτ), corresponds to an exponential distribution. The PDF of a Gompertz distribution is defined as p(τ | α, β) = α exp(βτ) exp(−(α/β)(exp(βτ) − 1)). The conditional intensity function of the exponential intensity model is defined as λ*(t) = exp(v^T h_i + w(t − t_{i−1}) + b), where h_i ∈ R^H is the history embedding produced by an RNN, and v ∈ R^H, b ∈ R, w ∈ R_+ are learnable parameters. By defining d = v^T h_i + b, we obtain the PDF of the exponential intensity model (Equation 12 of the cited work) as p*(τ) = exp(d + wτ) exp(−(1/w)(exp(d + wτ) − exp(d))). By setting α = exp(d) and β = w we see that the exponential intensity model is equivalent to a Gompertz distribution. Discussion. Figure 8 shows densities that can be represented by exponential and Gompertz distributions. Even though the history embedding h_i produced by an RNN may capture rich information, the resulting distribution p*(τ_i) for both models has very limited flexibility, is unimodal and light-tailed. In contrast, a flow-based or a mixture model is significantly more flexible and can approximate any density. Summary. The main idea of the FullyNN approach is to model the integrated conditional intensity function Λ*(τ) using a feedforward neural network with non-negative weight matrices (the remaining model parameters are unconstrained real numbers). FullyNN as a normalizing flow. Let z ∼ Exponential(1), that is, q(z) = exp(−z). We can view the network f: R_+ → R_+ (computing Λ*) as a transformation that maps τ to z = f(τ). We can now use the change of variables formula to obtain the conditional CDF and PDF of τ. Alternatively, we can obtain the conditional intensity as λ*(τ) = ∂Λ*(τ)/∂τ and use the fact that p*(τ) = λ*(τ) exp(−Λ*(τ)). Both approaches lead to the same result. However, the first approach also provides intuition on how to draw samples τ̂ from the resulting distribution p*(τ), an approach known as the inverse method: 1. Sample ẑ ∼ Exponential(1). 2. Obtain τ̂ by solving f(τ̂) − ẑ = 0 for τ̂ (using e.g. the bisection method). Similarly to other flow-based models, sampling from the FullyNN model cannot be done exactly and requires a numerical approximation. 1. The PDF defined by the FullyNN model doesn't integrate to 1. By definition of the CDF, the condition that the PDF integrates to 1 is equivalent to lim_{τ→∞} F*(τ) = 1, which in turn is equivalent to lim_{τ→∞} Λ*(τ) = ∞. However, because of the saturation of the tanh activations (i.e., sup_{x∈R} |tanh(x)| = 1), the cumulative intensity Λ*(τ) is bounded from above. Therefore, the PDF doesn't integrate to 1. 2. The FullyNN model assigns a non-zero amount of probability mass to the (−∞, 0) interval, which violates the assumption that inter-event times are strictly positive. Since the inter-event times τ are assumed to be strictly positive almost surely, it must hold that Prob(τ ≤ 0) = F*(0) = 0, or equivalently Λ*(0) = 0. However, the network output at τ = 0 is in general non-zero, i.e., Λ*(0) > 0, which means that the FullyNN model permits negative inter-event times.
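A small sketch of the inverse-method sampling described above is given below; the particular cumulative intensity used here is only a stand-in for the sanity check, not the FullyNN network:

```python
import numpy as np

def sample_inverse_method(cum_intensity, rng, t_max=1e3, tol=1e-8):
    """Draw tau with survival function exp(-Lambda(tau)) by sampling
    z ~ Exponential(1) and solving Lambda(tau) = z with bisection."""
    z = rng.exponential(1.0)
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cum_intensity(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
# Stand-in cumulative intensity of a unit-rate process: Lambda(tau) = tau,
# so the samples should follow a unit-rate exponential distribution.
samples = [sample_inverse_method(lambda t: t, rng) for _ in range(20000)]
assert abs(np.mean(samples) - 1.0) < 0.05
```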
We implement SOSFlow, DSFlow and LogNormMix, together with the baselines: RMTPP (Gompertz distribution), the exponential distribution and a FullyNN model. All of them share the same pipeline, from the data preprocessing to the parameter tuning and model selection, differing only in the way we calculate p*(τ). This way we ensure a fair evaluation. Our implementation uses PyTorch. From arrival times t_i we calculate the inter-event times τ_i = t_i − t_{i−1}. Since they can contain very large values, the RNN takes log-transformed and centered inter-event times and produces h_i ∈ R^H. In case we have marks, we additionally input m_i, the index of the mark class, from which we get a mark embedding vector. In some experiments we use extra conditional information, such as metadata y_i and a sequence embedding e_j, where j is the index of the sequence. As illustrated in Section 3.3, we generate the parameters θ of the distribution p*(τ_i) from [h_i || y_i || e_j] using an affine layer. We apply a transformation of the parameters to enforce the constraints, if necessary. All decoders are implemented using a common framework relying on normalizing flows. By defining the base distribution q(z) and the inverse transformation (g_1^{-1} ∘ · · · ∘ g_M^{-1}) we can evaluate the PDF p*(τ) at any τ, which allows us to train with maximum likelihood (Section 3.1). The log-normal mixture distribution is defined in Equation 2. We generate the parameters of the distribution w ∈ R^K, µ ∈ R^K, s ∈ R^K (subject to Σ_k w_k = 1, w_k ≥ 0 and s_k > 0), using an affine transformation (Equation 3). The log-normal mixture is equivalent to the following normalizing flow model: z_1 ∼ GaussianMixture(w, µ, s), z_2 = a z_1 + b, τ = exp(z_2). By using the affine transformation z_2 = a z_1 + b before the exp transformation, we obtain a better initialization, and thus faster convergence. This is similar to the batch normalization flow layer, except that the statistics used to center and scale log τ_i (i.e., the parameters a and b) are estimated using the entire dataset, not using batches. The forward direction samples a value from a Gaussian mixture, applies an affine transformation and applies exp. In the backward direction we apply a log-transformation to the observed data, center it with an affine layer and compute the density under the Gaussian mixture. We implement the FullyNN model as described in Appendix C, using the official implementation as a reference. The model uses a feed-forward neural network with non-negative weights (enforced by clipping the values at 0 after every gradient step). The output of the network is the cumulative intensity function Λ*(τ), from which we can easily get the intensity function λ*(τ) as a derivative w.r.t. τ using automatic differentiation in PyTorch. We get the PDF as p*(τ) = λ*(τ) exp(−Λ*(τ)). We implement the RMTPP / Gompertz distribution and the exponential distribution models as described in Appendix B. All of the above methods define the distribution p*(τ). Since the inter-event times may come at very different scales, we apply a linear scaling τ̄ = aτ, where a = (1/N) Σ_{i=1}^{N} τ_i is estimated from the data. This ensures a good initialization for all models and speeds up training. A single layer of the DSFlow model is given by the DSF transformation f^{DSF} from Section 3.1. We obtain the parameters of each layer using Equation 3. We define p(τ) through the inverse transformation (g_1^{-1} ∘ · · · ∘ g_M^{-1}). We use the batch normalization flow layer between every pair of consecutive layers, which significantly speeds up convergence. A single layer of the SOSFlow model is given by the SOS polynomial transformation f^{SOS} from Section 3.1. There are no constraints on the polynomial coefficients a ∈ R^{(R+1)×K}. We obtain a similarly to Equation 3, as a = V_a c + b_a, where c is the context vector. We define p(τ) through the inverse transformation as before. As for DSFlow, we use the batch normalization flow layer between every pair of consecutive layers.
When implementing SOSFlow, we used Pyro for reference. Using a log-normal mixture model allows us to sample with reparametrization, which proves to be useful, e.g., when imputing missing data (Section 5.4). In a score function estimator, given a random variable x ∼ p_θ(x), where θ are the parameters, we can compute ∇_θ E_{x∼p_θ}[f(x)] = E_{x∼p_θ}[f(x) ∇_θ log p_θ(x)]. This is an unbiased estimator of the gradients, but it often suffers from high variance. If the function f is differentiable, we can obtain an alternative estimator using the reparametrization trick: ε ∼ q(ε), x = g_θ(ε). Thanks to this reparametrization, we can compute ∇_θ E_{x∼p_θ}[f(x)] = E_{ε∼q}[∇_θ f(g_θ(ε))]. Such a reparametrization estimator typically has lower variance than the score function estimator. In both cases, we estimate the expectation using Monte Carlo. To sample with reparametrization from the mixture model we use the Straight-Through Gumbel Estimator. We first obtain a relaxed sample z* = softmax((log w + o)/T), where each o_i is sampled i.i.d. from a Gumbel distribution with zero mean and unit scale, and T is the temperature parameter. Finally, we get a one-hot sample z = onehot(argmax_k z*_k). While the discrete z is used in the forward pass, during the backward pass the gradients flow through the differentiable z*. The gradients obtained by the Straight-Through Gumbel Estimator are slightly biased, which in practice doesn't have a significant effect on the model's performance. There exist alternatives that provide unbiased gradients, but they are more expensive to compute.
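A compact PyTorch sketch of the Straight-Through Gumbel sampling procedure just described is given below; the mixture parameters are illustrative:

```python
import torch

def straight_through_gumbel_sample(w, mu, s, temperature=1.0):
    """Reparametrized sample from a log-normal mixture with weights w,
    log-space means mu and log-space scales s (all of shape [K])."""
    u = torch.rand_like(w).clamp_min(1e-10)
    gumbel = -torch.log(-torch.log(u))                       # Gumbel(0, 1) noise
    z_soft = torch.softmax((torch.log(w) + gumbel) / temperature, dim=-1)
    z_hard = torch.zeros_like(z_soft)
    z_hard[z_soft.argmax(dim=-1)] = 1.0
    z = z_hard + z_soft - z_soft.detach()    # one-hot forward, soft backward
    eps = torch.randn(())
    return torch.exp((mu * z).sum() + (s * z).sum() * eps)

w = torch.tensor([0.3, 0.7], requires_grad=True)
mu = torch.tensor([-1.0, 0.5], requires_grad=True)
s = torch.tensor([0.4, 0.8], requires_grad=True)
tau = straight_through_gumbel_sample(w, mu, s)
tau.backward()   # gradients flow to w, mu and s through the relaxed sample
```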
E DATASET STATISTICS. E.1 SYNTHETIC DATA. Synthetic data is generated using well-known point processes. We sample 64 sequences for each process, each sequence containing 1024 events. Poisson. The conditional intensity function for a homogeneous (or stationary) Poisson point process is given as λ*(t) = 1. Constant intensity corresponds to an exponential distribution. Renewal. A stationary process defined by a log-normal probability density function p(τ), where we set the parameters to be µ = 1.0 and σ = 6.0. Sequences appear clustered. Self-correcting. Unlike the previous two, this point process depends on the history and is defined by a conditional intensity function λ*(t) = exp(t − Σ_{t_i<t} 1). After every new event the intensity suddenly drops, inhibiting future points. The resulting point patterns appear regular. Hawkes. We use a self-exciting point process with a conditional intensity function of the form λ*(t) = µ + Σ_{t_j<t} φ(t − t_j), with an exponentially decaying excitation kernel φ (used with two different parameter settings, Hawkes1 and Hawkes2). In addition we use real-world datasets that are described below. Table 2 shows their summary. Table 2: Dataset statistics (sequences / events): LastFM 929 / 1268385; Reddit 10000 / 672350; Stack Overflow 6633 / 480414; MOOC 7047 / 396633; Wikipedia 1000 / 157471; Yelp 300 / 215146. All datasets have a large number of unique sequences, and the number of events per sequence varies a lot. Using marked temporal point processes to predict the type of an event is feasible for some datasets (e.g. when the number of classes is low), and is meaningless for others. LastFM. The dataset contains sequences of songs that selected users listen to over time. Artists are used as the event type. Reddit. On this social network website users submit posts to subreddits. In the dataset, the most active subreddits are selected, and posts from the most active users on those subreddits are recorded. Each sequence corresponds to a list of submissions a user makes. The data contains 984 unique subreddits that we use as classes in mark prediction. Stack Overflow. Users of a question-answering website get rewards (called badges) over time for participation. A sequence contains a list of rewards for each user. Only the most active users are selected, and only those badges that users can get more than once. MOOC. The dataset contains the interactions of students with an online course system. An interaction is an event and can be of various types (97 unique types), e.g. watching a video, solving a quiz, etc. Wikipedia. A sequence corresponds to edits of a Wikipedia page. The dataset contains the most edited pages and users that have an activity (number of edits) above a certain threshold. Yelp. We use the data from the review forum and consider the reviews for the 300 most visited restaurants in Toronto. Each restaurant then has a corresponding sequence of reviews over time. After splitting the data into the 3 sets, we break down long training sequences into sequences of length at most 128. Optimization is performed using Adam with learning rate 10^{-3}. We perform training using mini-batches of 64 sequences. We train for up to 2000 epochs (1 epoch = 1 full pass through all the training sequences). For all models, we compute the validation loss at every epoch. If there is no improvement for 100 epochs, we stop optimization and revert to the model parameters with the lowest validation loss. We select the hyperparameter configuration for each model that achieves the lowest average loss on the validation set. For each model, we consider different values of the L2 regularization strength C ∈ {0, 10^{-5}, 10^{-3}}. Additionally, for SOSFlow we tune the number of transformation layers M ∈ {1, 2, 3} and for DSFlow M ∈ {1, 2, 3, 5, 10}. We have chosen the values of K such that the mixture model has approximately the same number of parameters as a 1-layer DSFlow or a 1-layer FullyNN model. More specifically, we set K = 64 for LogNormMix, DSFlow and FullyNN. We found all these models to be rather robust to the choice of K, as can be seen in Table 3 for LogNormMix. For SOSFlow we used K = 4 and R = 3, resulting in a polynomial of degree 7 (per layer). Higher values of R led to unstable training, even when using batch normalization. Additional discussion. In this experiment, we only condition the distribution p*(τ_i) on the history embedding h_i. We don't learn sequence embeddings e_j since they can only be learned for the training sequences, and not for the validation/test sets. There are two important aspects related to the NLL loss values that we report. First, the absolute loss values can be arbitrarily shifted by rescaling the data. Assume that we have a distribution p(τ) that models the distribution of τ. Now assume that we are interested in the distribution q(x) of x = aτ (for a > 0). Using the change of variables formula, we obtain log q(x) = log p(τ) − log a. This means that by simply scaling the data we can arbitrarily offset the log-likelihood score that we obtain. Therefore, the absolute values of the (negative) log-likelihood L for different models are of little interest - all that matters are the differences between them. Second, the loss values are dependent on the train/val/test split. Assume that model 1 achieves loss values L_1 = {1.0, 3.0} on two train/val/test splits, and model 2 achieves L_2 = {2.0, 4.0} on the same splits. If we first aggregate the scores and report the averages L̄_1 = 2.0 ± 1.0 and L̄_2 = 3.0 ± 1.0, it may seem that the difference between the two models is not significant. However, if we first compute the differences and then aggregate, (L_2 − L_1) = 1.0 ± 0.0, we see a different picture. Therefore, we use the latter strategy in Figure 3. For completeness, we also report the numbers obtained using the first strategy in Table 4.
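As a quick numerical check of the rescaling identity discussed above, the following sketch uses a simple exponential model, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 2.0
tau = rng.exponential(scale=scale, size=1000)
a = 10.0
x = a * tau

def exp_logpdf(v, s):
    # log density of an exponential distribution with the given scale
    return -np.log(s) - v / s

logp_tau = exp_logpdf(tau, scale).mean()     # average log-likelihood on raw data
logp_x = exp_logpdf(x, scale * a).mean()     # same model after rescaling by a
assert np.isclose(logp_tau - np.log(a), logp_x)
```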
As a baseline, we also considered the constant intensity / exponential distribution model. However, we excluded the results for it from Figure 3, since it consistently achieved the worst loss values and had high variance. We still include the results for the constant intensity model in Table 4. We also performed all the experiments on the synthetic datasets (Appendix E.1). The results are shown in Table 5, together with the NLL scores under the true model. We see that LogNormMix and DSFlow, besides achieving the best results, recover the true distribution. Finally, in Figure 9 we plot the conditional distribution p(τ | H) for models trained on the Yelp dataset. The events represent check-ins into a specific restaurant. Since check-ins mostly happen during opening hours, the inter-event time is likely to be on the same day (0h), the next day (24h), the day after (48h), etc. LogNormMix can fully recover this behavior from data, while others either cannot learn multimodal distributions (e.g. RMTPP) or struggle to capture it (e.g. FullyNN). (Table 4: Time prediction test NLL on real-world data.) Detailed setup. We use the same setup as in Section F.1, except for two differences. For learning in a marked temporal point process, we mimic the architecture used in prior work. The RNN takes a tuple (τ_i, m_i) as input at each time step, where m_i is the mark. Moreover, the loss function now includes a term for predicting the next mark, −Σ_i log p*(m_i). The next mark m_i at time t_i is predicted using a categorical distribution p*(m_i). The distribution is parametrized by the vector π_i, where π_{i,c} is the probability of the event m_i = c. We obtain π_i by passing the history embedding h_i through a feedforward neural network with a softmax output, π_i = softmax(V_π h_i + b_π), where V_π and b_π are the parameters of the network. Additional discussion. In Figure 3 (right) we reported the differences in time NLL between different models, L_time(θ) = −(1/N) Σ_{i=1}^{N} log p*_θ(τ_i). In Table 6 we additionally provide the total NLL, L_total(θ) = −(1/N) Σ_{i=1}^{N} [log p*_θ(τ_i) + log p*_θ(m_i)], averaged over multiple splits. (Table 6: Time and total NLL and mark accuracy when learning a marked TPP.) Using marks as input to the RNN improves the time prediction quality for all the models. However, since we assume that the marks are conditionally independent of the time given the history (as was done in earlier works), all models have similar mark prediction accuracy. Setup. We investigate whether the additional conditional information (Section 3.3) can improve the performance of the model. In the Yelp dataset, the task is to predict the time τ until the next check-in, given the history of check-ins up until the current time t_{i−1}. We want to verify our intuition that the distribution p*(τ_i) depends on the current time t_{i−1}. For example, p*(τ_i) might be different depending on whether it's a weekday and/or it's an evening hour. Unfortunately, a model that processes the history with an RNN cannot easily obtain this information. Therefore, we provide this information directly as a context vector y_i when modeling p*(τ_i). The first entry of the context vector y_i ∈ {0, 1}² indicates whether the previous event t_{i−1} took place on a weekday or a weekend, and the second entry indicates whether t_{i−1} was in the 5PM-11PM time window. To each of the four possibilities we assign a learnable 64-dimensional embedding vector. The distribution p*(τ_i) of the time until the next event depends on the embedding vector of the time stamp t_{i−1} of the most recent event. Detailed setup.
The dataset for the experiment is generated as a two-step process: 1) we generate a sequence of 100 events from the model used for the Hawkes1 dataset (Appendix E.1), resulting in a sequence of arrival times {t_1, ..., t_N}; 2) we choose a random t_i and remove all the events that fall inside the interval [t_i, t_{i+k}], where k is selected such that the interval length is approximately t_N / 3. We consider three strategies for learning with missing data (shown in Figure 4 (left)): a) No imputation. The missing block spans the time interval [t_i, t_{i+k}]. We simply ignore the missing data, i.e., the training objective L_time will include an inter-event time τ = t_{i+k} − t_i. b) Mean imputation. We estimate the average inter-event time τ̄ from the observed data, and impute events at times {t_i + n τ̄ : n ∈ N, t_i + n τ̄ < t_{i+k}}. These imputed events are fed into the history-encoding RNN, but are not part of the training objective. c) Sampling. The RNN encodes the history up to and including t_i and produces h_i, which we use to define the distribution p*(τ | h_i). We draw a sample τ^{(imp)}_j from this distribution and feed it into the RNN. We keep repeating this procedure until the samples get past the point t_{i+k}. The imputed inter-event times τ^{(imp)}_j are fed into the history-encoding RNN in the same way. We sample multiple such sequences in order to approximate the expected log-likelihood of the observed inter-event times, E_{τ^{(imp)} ∼ p*}[Σ_i log p*(τ_i)]. Since this objective includes an expectation that depends on p*, we make use of reparametrization sampling to obtain the gradients w.r.t. the distribution parameters. Detailed setup. When learning sequence embeddings, we train the model as described in Appendix F.1, except for one difference. First, we pre-train the sequence embeddings e_j by disabling the history embedding h_i and optimizing −(1/N) Σ_i log p_θ(τ_i | e_j). Afterwards, we enable the history and minimize −(1/N) Σ_i log p_θ(τ_i | e_j, h_i). In Figure 6, the top row shows samples generated using e_SC, the embedding of a self-correcting sequence; the bottom row was generated using e_RN, the embedding of a renewal sequence; and the middle row was generated using ½(e_SC + e_RN), an average of the two embeddings.
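A rough sketch of the sampling-based imputation in strategy (c) is given below; the `model.sample` interface is an assumption made for illustration and does not correspond to a specific implementation:

```python
import torch

def fill_gap_with_samples(model, h, gap_length):
    """Strategy (c): impute the missing stretch by sampling inter-event times
    from the model until the gap of length `gap_length` is covered.
    `model.sample(h)` is assumed to return a reparametrized sample and the
    updated history embedding."""
    imputed, total = [], torch.tensor(0.0)
    while total < gap_length:
        tau, h = model.sample(h)   # differentiable w.r.t. the model parameters
        imputed.append(tau)
        total = total + tau
    return imputed, h
```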
Learn in temporal point processes by modeling the conditional density, not the conditional intensity.
576
scitldr
We propose a novel yet simple neural network architecture for topic modelling. The method is based on training an autoencoder structure where the bottleneck represents the space of the topic distributions and the decoder outputs represent the space of the word distributions over the topics. We exploit an auxiliary decoder to prevent mode collapse in our model. A key feature of an effective topic modelling method is having sparse topic and word distributions, where there is a trade-off between the sparsity level of topics and words. This feature is implemented in our model by L-2 regularization, and the model hyperparameters take care of the trade-off. We show in our experiments that our model achieves competitive results compared to the state-of-the-art deep models for topic modelling, despite its simple architecture and training procedure. The “New York Times” and “20 Newsgroups” datasets are used in the experiments. Topic models are among the key models in Natural Language Processing (NLP) that aim to represent a large body of text using only a few concepts or topics, on a completely unsupervised basis. Topic modeling has found its application in many different areas including bioinformatics. In LDA, each word position n in a document is generated by first sampling a topic z and then sampling a word w_n ∼ Multinomial(β_z). Two important objectives that LDA implicitly tries to achieve, and which make this model suitable for topic modeling, are sparse topic distributions for documents and sparse word distributions for topics. There is a trade-off between these two objectives. If a document is represented using only a few topics, then the number of words with high probability in those topics should be large, and if topics are represented using only a few words, then we need a large number of topics to cover the words in the document. The sparsity of the distributions is a property of the Dirichlet distribution that is controlled by its concentration parameters. Also, based on LDA, the distribution of the words in a document is a mixture of multinomials. In our model we follow the main principles of the LDA algorithm, i.e., sparse distributions for the topics and words, and the final distribution of the words in a document being a mixture of multinomials. On the other hand, we try to avoid the difficulties of training the LDA model. Since our downstream task is finding topics in the documents, and not generating new documents, we do not need to learn the true posterior probability, or find ways to approximate it. Therefore we leave the latent representation unconstrained with regard to its distribution. We first encode the documents to the topic space Z using f_topic(x; φ), which is implemented by a neural network with parameter set φ. To make sure Z is a probability space we use a softmax layer. The decoder output, Σ_k z_k β_k, will be a reconstruction of the input vector x. We intentionally do not use a matrix multiplication notation so that we can explain the constraints on the β_k's in a simpler and more explicit way. To make both topic and word distributions sparse, we impose an L-2 norm constraint on them. Maximizing the L-2 norm of a positive, sum-to-one vector concentrates the probability mass on a few elements. The objective of the algorithm will then be as follows: min_{φ,β} D(x, Σ_k z_k β_k) − γ ||z||_2 − η Σ_k ||β_k||_2, where the distance D is the cross entropy, and γ and η are hyperparameters of the model. The trade-off between the sparsity of the topic and word distributions can be controlled by tuning γ and η. We observed that training the model using this objective causes mode collapse, in the sense that only very few topics will have meaningful words in them and the rest of the topics place high probability on some random words.
Also, all the probability mass of the topic distribution for all of the documents is concentrated on those specific topics. In other words, all the documents are encoded to the same set of topics and the model cannot capture the variations in the documents. We believe this is due to the fact that f_β(z) is not a powerful function for backpropagating the error signal from the output to the previous layers of the network. To resolve this issue and produce a richer Z space, we attach an auxiliary decoder to the latent representation, which we call f_AUX(z; ϕ); it is a neural network with parameter set ϕ. The output of this decoder, denoted by x̃, also reconstructs the input document. Our observations show that by adding this decoder we can separate the documents' representations in the latent space. At both the topic and the word level, instead of sampling, we treat z and the β_k's (for all k ∈ {1, 2, ..., K}) as normalized typical sets of their distributions. This is to avoid sampling from the multinomial distribution, for which there is no easy way (such as the reparametrization trick for the Gaussian family) to backpropagate the error for training the neural networks. Therefore the overall objective of our model is: min_{φ,β,ϕ} D(x, Σ_k z_k β_k) + λ D(x, x̃) − γ ||z||_2 − η Σ_k ||β_k||_2, where λ is another hyperparameter of the model that controls the role of the auxiliary decoder in training. Figure 2 shows the structure of the networks. In this section we compare the performance of the proposed algorithm with LDA with collapsed Gibbs sampling, in terms of topic coherence (higher is better) and perplexity score (lower is better). We first consider the "New York Times" dataset. This dataset doesn't need a preprocessing phase, as the common words and stop words have already been removed from it. We try performing topic modeling using 25 and 50 topics for this dataset (CG in the tables means Collapsed Gibbs, and the best results are indicated in bold). The 20 Newsgroups dataset has D = 11,000 training documents. We follow the same preprocessing as in prior work: tokenization, removing some of the non-UTF-8 characters, and English stop-word removal. These are all done using the scikit-learn package. After this preprocessing the vocabulary size is N = 2,000. For this dataset we try training the models with 50 and 200 topics.
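A minimal PyTorch sketch of the described architecture and objective is given below, assuming the input x is a normalized bag-of-words vector; layer sizes, class names and regularization weights are illustrative and not taken from the original implementation:

```python
import torch
import torch.nn as nn

class TopicAutoencoder(nn.Module):
    """Softmax bottleneck over topics, per-topic word distributions beta_k,
    and an auxiliary decoder attached to the latent representation."""
    def __init__(self, vocab_size, n_topics, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_topics))
        self.topic_word_logits = nn.Parameter(torch.randn(n_topics, vocab_size))
        self.aux_decoder = nn.Sequential(nn.Linear(n_topics, hidden), nn.ReLU(),
                                         nn.Linear(hidden, vocab_size))

    def forward(self, x):
        z = torch.softmax(self.encoder(x), dim=-1)            # topic distribution
        beta = torch.softmax(self.topic_word_logits, dim=-1)  # word distributions
        x_hat = z @ beta                                       # mixture of topics
        x_aux = torch.softmax(self.aux_decoder(z), dim=-1)     # auxiliary output
        return z, beta, x_hat, x_aux

def loss_fn(x, z, beta, x_hat, x_aux, gamma=1.0, eta=1.0, lam=1.0):
    rec = -(x * torch.log(x_hat + 1e-10)).sum(-1).mean()       # cross entropy
    rec_aux = -(x * torch.log(x_aux + 1e-10)).sum(-1).mean()   # auxiliary term
    sparsity = gamma * z.norm(dim=-1).mean() + eta * beta.norm(dim=-1).mean()
    return rec + lam * rec_aux - sparsity  # maximizing L-2 norms promotes sparsity
```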
A deep model for topic modelling
577
scitldr
Voice Conversion (VC) is the task of converting perceived speaker identity from a source speaker to a particular target speaker. Earlier approaches in the literature primarily find a mapping between the given source-target speaker pairs. Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning, remains a less explored area in VC. Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices. In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training (i.e., the case of zero-shot learning). In particular, we propose the Adaptive Generative Adversarial Network (AdaGAN), a new architectural training procedure that helps in learning a normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC. We compare our results with the state-of-the-art StarGAN-VC architecture. In particular, AdaGAN achieves 31.73% and 10.37% relative improvements compared to StarGAN in MOS tests for speech quality and speaker similarity, respectively. The key strength of the proposed architecture is that it yields these results with lower computational complexity. AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating point Operations Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters. Language is the core of civilization, and speech is the most powerful and natural form of communication. Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mechanism and challenging concepts of prosodic transfer. In the literature, this is achieved using the Voice Conversion (VC) technique. Recently, VC has gained more attention due to its fascinating real-world applications in privacy and identity protection, military operations, generating new voices for animated and fictional movies, voice repair in the medical domain, voice assistants, etc. The Voice Conversion (VC) technique converts a source speaker's voice in such a way as if it were spoken by the target speaker. This is primarily achieved by modifying spectral and prosodic features while retaining the linguistic information in the given speech signal. In addition, voice cloning is one of the tasks closely related to VC. However, in this research work we focus only on advancing voice conversion. With the emergence of deep learning techniques, VC has become more efficient. Deep learning-based techniques have made remarkable progress in parallel VC. However, it is difficult to get parallel data, and such data needs alignment (which is an arduous process) to get better results. Building a VC system from non-parallel data is highly challenging, but at the same time valuable for practical application scenarios. Recently, many deep learning-based style transfer algorithms have been applied to the non-parallel VC task. Hence, this problem can be formulated as a style transfer problem, where one speaker's style is converted into another while preserving the linguistic content as it is. In particular, Conditional Variational AutoEncoders (CVAEs), Generative Adversarial Networks (GANs), and their variants have gained significant attention in non-parallel VC. However, it is known that the training task for GANs is hard, and the convergence property of GANs is fragile.
There is no substantial evidence that the gen-erated speech is perceptually good. Moreover, CVAEs alone do not guarantee distribution matching and suffers from the issue of over smoothing of the converted features. Although, there are few GAN-based systems that produced state-of-the-art for non-parallel VC. Among these algorithms, even fewer can be applied for many-to-many VC tasks. At last, there is the only system available for zero-shot VC proposed by. Zero-shot conversion is a technique to convert source speaker's voice into an unseen target speaker's speaker via looking at a few utterances of that speaker. As known, solutions to a challenging problem comes with trade-offs. Despite the , architectures have become more complex, which is not desirable in real-world scenarios because the quality of algorithms or architectures is also measured by the training time and computational complexity of learning trainable parameters ). Motivated by this, we propose computationally less expensive Adaptive GAN (AdaGAN), a new style transfer framework, and a new architectural training procedure that we apply to the GAN-based framework. In AdaGAN, the generator encapsulates Adaptive Instance Normalization (AdaIN) for style transfer, and the discriminator is responsible for adversarial training. Recently, StarGAN-VC (proposed by) is a state-of-the-art method among all the GAN-based frameworks for non-parallel many-to-many VC. AdaGAN is also GAN-based framework. Therefore, we compare AdaGAN with StarGAN-VC for non-parallel many-to-many VC in terms of naturalness, speaker similarity, and computational complexity. We observe that AdaGAN yields state-of-the-art for this with almost 88.6% less computational complexity. Recently proposed AutoVC (by) is the only framework for zero-shot VC. Inspired by this, we propose AdaGAN for zero-shot VC as an independent study, which is the first GAN-based framework to perform zeroshot VC. We reported initial for zero-shot VC using AdaGAN.The main contributions of this work are as follows: • We introduce the concept of latent representation based many-to-many VC using GAN for the first time in literature. • We show that in the latent space content of the speech can be represented as the distribution and the properties of this distribution will represent the speaking style of the speaker. • Although AdaGAN has much lesser computation complexity, AdaGAN shows much better in terms of naturalness and speaker similarity compared to the baseline. Developing a non-parallel VC framework is challenging task because of the problems associated with the training conditions using non-parallel data in deep learning architectures. However, attempts have been made to develop many non-parallel VC frameworks in the past decade. For example, Maximum Likelihood (ML)-based approach proposed by , speaker adaptation technique by , GMM-based VC method using Maximum a posteriori (MAP) adaptation technique by , iterative alignment method by , Automatic Speech Recognition (ASR)-based method by , speaker verification-based method using i-vectors by , and many other frameworks (; ; ; ; ; Saito et al. (2018a);; Shah et al. (2018b; c);; ). Recently, a method using Conditional Variational Autoencoders (CVAEs) was proposed for non-parallel VC by (; Saito et al. (2018a) ). Recently, VAE based method for VC was proposed, which also uses AdaIN to transfer the speaking style . One powerful framework that can potentially overcome the weakness of VAEs involves GANs. 
While GAN-based methods were originally applied for image translation problems, these methods have also been employed with noteworthy success for various speech technology-related applications, we can see via architectures proposed by (; Saito et al. (2018b); Shah et al. (2018a) ), and many others. In GANs-based methods, Cycle-consistent Adversarial Network (CycleGAN)-VC is one of the state-of-the-art methods in the non-parallel VC task proposed by . Among these non-parallel algorithms, a few can produce good for non-parallel many-tomany VC. Recently, StarGAN-VC is a state-of-the-art method for the nonparallel many-to-many VC among all the GAN-based frameworks. Past attempts have been made to achieve conversion using style transfer algorithms (; ;). The most recent framework is the AutoVC (proposed by) using style transfer scheme, the first and the only framework in VC literature which achieved state-of-the-art in zero-shot VC. The traditional VC problem is being reformulated as a style transfer problem. Here, we assume Z is a set of n speakers denoted by Z = {Z 1, Z 2, ..., Z n}, where Z i is the i th speaker, and U is the set of m speech utterances denoted by U = {U 1, U 2, ..., U m}, where U i is the i th speech utterance. Now, probability density function (pdf) is generated for given Z i, and U i denoted by p X (.|Z i, U i) via the stochastic process of random sampling from the distributions Z i and U i. Here, X i ∼ p X (.|Z i, U i) can be referred as features of given U i with speaking style of Z i. The key idea is to transfer the speaking style of one speaker into another in order to achieve VC. For this, let us consider a set of random variables (Z 1, U 1) corresponding to a source speaker, and (Z 2, U 2) corresponding to a target speaker. Here, U 1 and U 2 are spoken by Z 1 and Z 2, respectively. Our goal is to achieve pX (.|Z 2, U 1). Now, we want to learn a mapping function to achieve our goal for VC. Our mapping function is able to generate the distribution denoted byX Z1→Z2 with speaking style of Z 2 while retaining the linguistic content of U 1. Formally, we want to generate the pdf (i.e., pX (.|Z 1, U 1, Z 2, U 2)) to be close or equal to the pX (.|Z 2, U 1). Accurately, our mapping function will achieve this property, as shown in eq. 1. pX Intuitively, we want to transfer the speaking style of Z 2 to the Z 1 while preserving the linguistic content of U 1. Therefore, converted voice is perceptually sound as if utterance U 1 were spoken by Z 2. With this, AdaGAN is also designed to achieve zero-shot VC. During zero-shot conversion, U 1 and U 2 can be seen or unseen utterances, and Z 1 and Z 2 can be seen or unseen speakers. Our key idea for style transfer in VC revolves around the AdaIN. First, AdaIN was introduced for arbitrary style transfer in image-to-image translation tasks by. In this paper, AdaIN helps us to capture the speaking style and linguistic content into a single feature representation. AdaIN takes features of a source speaker's speech (i.e., X) and sample features of the target speaker's speech (i.e., Y). Here, x is a feature from the set X related to the linguistic content of source speech, and Y is features related to the speaking style of the target speaker. AdaIN will map the mean and standard deviation of X (i.e., µ X and σ x) in such a way that it will match with mean, and standard deviation of Y (i.e., µ Y and σ Y). 
The mathematical equation of AdaIN is defined as: AdaIN(x, y) = σ(y) · ((x − µ(x)) / σ(x)) + µ(y). From this equation, we can infer that AdaIN first normalizes x, and then scales and shifts it back based on the mean and standard deviation of y. Intuitively, let's assume that we have one latent space which represents the linguistic content in the distribution and also contains the speaking style in terms of the mean and standard deviation of the same distribution. To transfer the speaking style, we adopt the distribution properties (i.e., the mean and standard deviation) of the target speaker. As a result, the output produced by AdaIN has high average activation for the features which are responsible for style (y) while preserving the linguistic content. AdaIN does not have any learnable parameters. Hence, it will not affect the computational complexity of the framework. In this Section, we discuss our proposed AdaGAN architecture in detail. We show that AdaIN helps the generator make speaking style transfer easy and efficient, and can achieve zero-shot VC. We present an intuitive and theoretical analysis for the proposed framework. The AdaGAN framework consists of an encoder En, a decoder De, and a discriminator Dis. Here, En encodes the input features of speech to the latent space, De generates the features of speech from the given latent space, and Dis ensures adversarial training. The style transfer scheme and training procedure are shown in Fig. 1. Features of the source speaker's speech (i.e., x) and any sample features of the target speaker's speech (i.e., y) are taken as input to En to get the required latent space representations S_x = En(x) and S_y = En(y). Now, AdaIN is used to transfer the distribution properties (i.e., the mean and standard deviation) of S_y to S_x, and generate the single feature representation denoted by t = AdaIN(S_x, S_y). In the next step, we use De to generate the features of speech (i.e., x_{Z1→Z2}) from t. This entire process is illustrated in Fig. 1(a). The generated features x_{Z1→Z2} contain the speaking style of the target speaker while retaining the linguistic content of the source speaker's speech. We have encapsulated this style transfer algorithm into the generator of AdaGAN in order to improve the quality of x_{Z1→Z2} via adversarial training. We have applied a new training methodology in the GAN-based framework. We have designed a training procedure based on non-parallel data in order to learn the mapping function for many-to-many as well as zero-shot VC. We know that the idea of transitivity as a way to regularize structured data has a long history. People have extended this concept into the training methodologies of deep learning architectures. In this paper, we have encapsulated the idea of transitivity by introducing the reconstruction loss along with adversarial training. The entire training procedure is illustrated in Fig. 1(b). First, we randomly select two speakers Z_1 and Z_2. Formally, we have two sets of random variables, (Z_1, U_1, X) and (Z_2, U_2, Y), corresponding to the source and target speaker, respectively. After this, we randomly select x_1, x_2 ∈ p_X(.|Z_1, U_1) and y_1, y_2 ∈ p_Y(.|Z_2, U_2). During training, VC is done from the source speaker (Z_1) to the target speaker (Z_2) via the style transfer scheme illustrated in Fig. 1(a). Using x_1 and y_1, we transfer the speaking style of speaker Z_2 to Z_1; this procedure can be written as x_{Z1→Z2} = De(AdaIN(En(x_1), En(y_1))).
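As a concrete illustration, the AdaIN operation used above can be sketched in a few lines of PyTorch. This is only a sketch: the tensor shapes and the axis over which the per-utterance statistics are computed are assumptions, not details taken from the paper.

```python
import torch

def adain(s_x: torch.Tensor, s_y: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization: re-normalize s_x so that its mean and
    standard deviation match those of s_y (no learnable parameters).

    s_x: latent features of the source speech, shape (batch, dim)
    s_y: latent features of a target-speaker sample, shape (batch, dim)
    """
    mu_x, sigma_x = s_x.mean(dim=-1, keepdim=True), s_x.std(dim=-1, keepdim=True)
    mu_y, sigma_y = s_y.mean(dim=-1, keepdim=True), s_y.std(dim=-1, keepdim=True)
    # Normalize with the source statistics, then scale/shift with the target's.
    return sigma_y * (s_x - mu_x) / (sigma_x + eps) + mu_y
```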
Now, using another sample of source speech (i.e., x 2), we have reconstructed the source speech features (i.e., x Z1→Z2→Z1) from the features of converted speech (x Z1→Z2) in order to achieve better conversion efficiency. This procedure is described in eq.. Now, the same cycle process is again applied to transfer the speaking style of Z 2 to Z 1, we get following equations: During testing, we gave features of the source speaker's speech along with the sample features of target speaker to the encoder. AdaGAN requires 3 s to 5 s of sample speech features of the target speaker in order to transfer speaking style of target speaker to source speaker. This sample speech will be used to estimate the mean and standard deviation of the target speaker's distribution in its respective latent space. After this, the speaking style will be transferred in latent space of source speaker using AdaIN. Next, the decoder will generate speech back from the converted latent representation of the source speaker. Briefly, the decoder will generate the speech with speaking style of the target speaker. Now, the generator of AdaGAN is consist of Encoder and Decoder. Hence, we can say that the generator of AdaGAN will generate the speech with speaking style of the target speaker for a given source speaker's speech along with the sample of target speaker during testing. The training procedure of AdaGAN is formally presented in Algorithm 1. sample 4 minibatches of cepstral features {x 1, x 2} ∈ p X (.|Z 1, U 1), and {y 1, y 2} ∈ p Y (.|Z 2, U 2). First column shows the process of transferring speaking style of speaker Z 2 to Z 1. Second column shows the process of transferring speaking style of speaker Z 1 to Z 2. Comment ends */ 9: 10: S Y1 ← En(y 1); 11: 12: x ← De(t 1); y ← De(t 1); 13: 14: Update the generator by descending its stochastic gradient: Update the discriminator by descending its stochastic gradient: 21: end for 22: return To achieve many-to-many and zero-shot VC, AdaGAN uses four different loss functions: Adversarial loss, reconstruction loss, content preserve loss, and style transfer loss. Adversarial loss: This loss measures how distinguishable the converted data is from the normal speech data. The smaller the loss is, the converted data distribution is more closer to normal speech distribution. Hence, we want to minimize objective function given in eq. against an adversary Dis that tries to maximize it. Here, this loss is used to make the generated or converted speech indistinguishable from the original speech, and can be mathematically formulated as: Reconstruction Loss: By using only adversarial loss, we may loose linguistic information in the converted voice. This loss helps the encoder and decoder to retain the linguistic information in converted voice. We have used L 1 norm as a reconstruction loss, and can be described as: Content Preserve Loss: To preserve the linguistic content of the input speech during AdaIN. This loss also ensure that our encoder and decoder are noise free. We have used following L 1 norm for this loss, i.e., Style transfer Loss: This loss function is at the heart of the AdaGAN. This loss plays a vital role in achieving many-to-many and zero-shot VC using AdaGAN. This loss helps AdaGAN to create a latent space with the speaking style features in terms of mean and standard deviation of the distribution while preserving the linguistic content in the same distribution. 
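The transfer-and-cycle passes described above, together with the modules they rely on, can be sketched as follows; the layer sizes follow the architecture description in Section 4.4, while the number of hidden layers and the exact pairing of samples on the reverse pass are assumptions of this sketch. The style-transfer and remaining loss terms are specified in the text that follows.

```python
import torch
import torch.nn as nn

def adain(sx, sy, eps=1e-5):
    mx, sdx = sx.mean(-1, keepdim=True), sx.std(-1, keepdim=True)
    my, sdy = sy.mean(-1, keepdim=True), sy.std(-1, keepdim=True)
    return sdy * (sx - mx) / (sdx + eps) + my

def mlp(sizes, out_act=None):
    """Fully-connected stack with ReLU after every layer except the output."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

# 40-dim MCC frame -> 512-dim latent and back; the discriminator outputs a
# real/fake probability. Three hidden layers per module is an assumption.
En  = mlp([40, 512, 512, 512, 512])
De  = mlp([512, 512, 512, 512, 40])
Dis = mlp([40, 512, 512, 512, 1], out_act=nn.Sigmoid())

def transfer_and_cycle(x1, x2, y1):
    """Z1 -> Z2 style transfer followed by the reverse (cycle) pass.

    x1, x2: feature mini-batches from the source speaker Z1
    y1:     a feature mini-batch from the target speaker Z2
    Returns the converted features and the cycle reconstruction of x1.
    """
    t1 = adain(En(x1), En(y1))     # impose Z2's statistics on x1's content
    x_12 = De(t1)                  # converted features x_{Z1->Z2}
    t2 = adain(En(x_12), En(x2))   # map back using another Z1 sample, x2
    x_121 = De(t2)                 # reconstruction x_{Z1->Z2->Z1}
    return x_12, x_121
```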
We have used L 1 norm as style transfer loss, i.e., Final Objective Function: The overall objective function of AdaGAN can be defined as: where λ 1, λ 2, λ 3, λ 4, and λ 5 are the hyperparameters. These parameters controls the relative importance of each loss w.r.t. each other. We have used λ 1 = 10, λ 2 = 2, λ 3 = 2, λ 4 = 3, and λ 5 = 3 during the experiments. We theoretically proved that how these simple loss functions are the key idea behind the performance of AdaGAN in the next Section. We optimized these loss functions according to the Algorithm 1. AdaGAN framework contains a Generator and a Discriminator. In this Section, we provide detailed information about each component of the AdaGAN framework. As shown in Fig. 1, Generator of AdaGAN consists of mainly 2 modules: Encoder and Decoder. AdaGAN uses the same encoder to extract the features from the source and target speakers' speech. Input of encoder is a vector of 40 Mel cepstral features, which it converts to a latent space of size 1x512. The decoder takes normalized feature vector of size 1x512 as input and converts it to 1x40 target speech features. In encoder and decoder, all layers are fully-connected layers. In encoder, the input and output layer has 40 and 512 cell size, respectively. In decoder, input and output layer have 512 and 40 cell size, respectively. All the hidden layers in encoder and decoder consist 512 cell size. All the layers are followed by Rectified Linear Unit (ReLU) activation function except output layer. In AdaGAN, main goal of the discriminator is similar to traditional GAN training. Accurately, it will discriminate whether the input is generated (x Z1→Z2) or from the original distribution. Same as Encoder and Decoder, structure of discriminator follows the stacked fully-connected layers. It consists of an input layer, 3 hidden layers and, an output layer with 40, 512, and 1 cell size, respectively. In discriminator, each layer followed by the ReLU activation function and output layer followed by a sigmoid activation function. In this Section, we show the theoretical correctness and intuitive explanation of AdaGAN. The key idea of the AdaGAN is to learn the latent space, where we can represent our features as per our requirements. Consider the training procedure of AdaGAN described in Section 4.2. Let us take two latent space features S x1 and S x2 corresponding to two different sample features, x 1 and x 2, respectively, of the same speaker Z 1. We are also going to take S y1 from latent space of another speaker Z 2, where y 1 is a sample feature of that speaker, and Z 1 = Z 2. After training of AdaGAN for a large number of iteration of τ, where theoretically τ → ∞, let us assume the following: 1. In the latent space, mean and standard deviation of the same speaker are constant irrespective of the linguistic content. Formally, we have µ Sx 1 = µ Sx 2, and σ Sx 1 = σ Sx 2. 2. If we have different speakers, then mean and standard deviation of respective latent representations are different. Accurately, µ Sx 1 = µ Sy 1, and σ Sx 1 = σ Sy 1. Theorem 1: Given these assumptions, ∃ a latent space where normalized latent representation of input features will be the same irrespective of speaking style. Here, we take input features of same utterance U 1. Hence, where KL(·|·) is the KL-divergence, and p N (.|Z i, U i) is pdf of normalized latent representation of input feature U i, with speaking style of speaker Z i. This is the fundamental theorem that lies behind the concept of AdaGAN. 
Intuitively, from this theorem, we can observe that the normalized latent representation of the same utterance spoken by different speakers is the same. This fact leads to the that linguistic content of speech is captured by the distribution of normalized latent space, and speaking style of a speaker is being captured by mean and standard deviation of the same distribution. Theorem 2: By optimization of min En,De L C X→Y + L sty X→Y, the assumptions made in Theorem 1 can be satisfied. The proof of both the theorems are given in Appendix A. Both the theorems conclude that AdaIN made style transfer easy and efficient via only using the mean and standard deviation of the distribution. In Appendix B, we provided the t-SNE visualization of the features in latent space to give the empirical proof. In this Section, we show a comparison between AdaGAN and StarGAN-VC in terms of computational complexity. Table 2, G StarGAN, D StarGAN, and Cls are generator, discriminator, and classifier of StarGAN, respectively. All these three modules contain convolution layers. In StarGAN, there is weight sharing between the 5 convolution layers of discriminator and classifier. Here, we remove the FLOPS and trainable parameters of shared layers from the Cls. Hence, we consider it once in the calculation of total FLOPS and trainable parameters. We can observe that AdaGAN is 88.6% less complex than StarGAN in terms of FLOPS, and 85.46% less complex in terms of trainable parameters. Moreover, StarGAN uses a one-hot encoding to get the information about the target speaker. However, AdaGAN requires any sample of 3 s -5 s from the target speaker. In this Section, we will show experimental setup, and subjective evaluation (or ) of AdaGAN. Samples of converted audio files are provided here 3. The experiments are performed on the VCTK corpus , which contains 44 hours of data for 109 speakers. The statistics of the database are given. The database is designed to provide non-parallel data for VC. From this database, AdaGAN system was developed on data of 20 speakers (10 males and 10 females). Out of this, we have used 80% data for training and 20% data for testing for each speaker. Particularly, we have used 6.27 and 1.45 hours of data for the training and testing, respectively. The 40-dimensional (dim) Mel Cepstral Coefficients (MCCs) (including the 0 th coefficient) and 1-dimensional F 0 are extracted from the speech of source, and the target speakers with 25 ms window and 5 ms frame-shift. For analysis-synthesis, we have used AHOCODER . Mean-variance transformation method has been applied for fundamental frequency F 0 conversion. To evaluate AdaGAN empirically, we performed two subjective tests for evaluating naturalness and speaker similarity. In particular, Mean Opinion Score (MOS) test have been conducted, where subjects have been asked to rate the randomly played converted speech on 5-point scale for naturalness, where 1 means converted voice is very robotic, and 5 means converted voice is very natural. In the second test, subjects have been asked to rate how similar the converted voice given the reference target speech in terms of speaker similarity. Subjects rated converted voices for speaker similarity on the 5-point scale, where 1 means dissimilar with high confidence and 5 means similar with high confidence w.r.t. the given target speaker. Total 15 subjects (6 females and 9 males with no known hearing impairments with age varies between 18 to 31 years) took part in the subjective evaluations. 
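The MOS values reported with 95% confidence intervals below can be computed from the pooled listener ratings roughly as follows (a Student-t interval is assumed here, since the interval construction is not stated in the text):

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Mean Opinion Score and confidence-interval half-width for one system,
    from ratings pooled over all listeners and test utterances."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    half_width = stats.sem(ratings) * stats.t.ppf((1 + confidence) / 2.0,
                                                  len(ratings) - 1)
    return mean, half_width
```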
Randomly 2 male and 2 female speakers have been selected from the testing dataset for subjective evaluations. We evaluated four different conversion systems, i.e., male-male (M2M), female-female (F2F), male-female (M2F), and female-male (F2M) developed using proposed AdaGAN and Star-GAN. From each system, two converted audio files have been selected. Hence, 8 audio files from AdaGAN and another 8 audio files from the StarGAN have been taken for subjective evaluations. We kept the same source-target speaker-pairs for fair comparison. Fig. 2 shows the comparison of MOS scores between AdaGAN and the baseline StarGAN-VC. Total of 15 subjects (6 females and 9 males) between 18-30 years of age and with no known hearing impairments took part in the subjective test. For statistically significant analysis, are shown in different conversion possibilities with 95% confidence interval. In addition, for our subjective tests, we obtain p-value 0.013, which is much lesser then 0.05. Therefore, it clearly shows the statistical significance of the . From Fig. 2, it is clear that there is 31.73 % relative improvement (on an average) in MOS score for the AdaGAN compared to the baseline StarGAN. In terms of speaker similarity, AdaGAN yields on an average 10.37% relative improvement in speaker similarity compare to baseline (as shown in Fig. 3). Although AdaGAN outperforms StarGAN, both the methods are not able to achieve good score in the similarity test. The main reason is due to the F 0 conversion and errors in statistical vocoder (i.e., AHOCODER and WORLD-vocoder). However, neural network-based Wavenet-vocoder shows very promising on speech synthesis. Although they are very accurate, they are data-driven approaches. In summary, AdaGAN achieves better performance in MOS tests compared to the StarGAN-VC for naturalness and speaker similarity. In traditional many-to-many VC, all the target speakers are seen while training the architecture. Hence, traditional algorithms are not able to do VC for an unseen speaker (i.e., for the cases of zeroshot VC). Along with many-to-many VC, we extended our study of AdaGAN for zero-shot VC. Zero-shot conversion is the task of transferring the speaking style of seen/unseen source speaker to seen/unseen target speaker. In simple terms, conversion can be done between any speaker whether their data were present in the corpus or not at the time of training. StarGAN-VC uses a one-hot vector for target speaker reference during conversion. In the case of an unseen target speaker, it will not be able to perform the zero-shot conversion. However, AdaGAN maps the input to the required latent space (as proved in Appendix A). Therefore, AdaGAN will be able to learn more promised latent space for even unseen speakers. Here, we show our experimental for the zero-shot VC task. We performed subjective tests in a similar manner as performed in many-to-many VC. We have used AdaGAN trained on 20 speakers (10 males and 10 females). Later on, we selected randomly 1 seen, and 1 unseen male speakers and 1 seen, and 1 unseen female speakers. And we applied the permutations on these different speakers to get all the different conversion samples, such as seen-seen (S2S), seen-unseen (S2U), unseen-seen (U2S), and unseen-unseen (U2U). Fig. 4, and Fig. 5 shows the MOS scores for naturalness, and speaker similarity, respectively. Recently, AutoVC has been proposed, which is the only framework for zero-shot conversion. 
To the best of the authors' knowledge, this is the first GAN-based framework to achieve zero-shot VC. To do the zero-shot conversion, AutoVC requires a few samples (20 s) of possible target speakers. However, AdaGAN requires only 3 s to 5 s of sample speech of the seen or unseen target speaker to extract the latent representation for the target speaker in order to generate voices that sound perceptually similar to the target speaker. Moreover, the trained AdaGAN architecture can work on any source or target speaker. In this paper, we proposed the novel AdaGAN primarily for the non-parallel many-to-many VC task. Moreover, we analyzed our proposed architecture w.r.t. the current GAN-based state-of-the-art StarGAN-VC method for the same task. We know that the main aim of VC is to convert the source speaker's voice into the target speaker's voice while preserving the linguistic content. To achieve this, we have used the style transfer algorithm along with adversarial training. AdaGAN transfers the style of the target speaker into the voice of a source speaker without using any feature-based mapping between the linguistic content of the source speaker's speech. For this task, AdaGAN uses only one generator and one discriminator, which leads to lower complexity. AdaGAN is almost 88.6% computationally less complex than StarGAN-VC. We have performed subjective analysis on the VCTK corpus to show the efficiency of the proposed method. We can clearly see that AdaGAN gives superior results in the subjective evaluations compared to StarGAN-VC. Motivated by the work of AutoVC, we also extended the concept of AdaGAN to zero-shot conversion as an independent study and reported initial results. AdaGAN is the first GAN-based framework for zero-shot VC. In the future, we plan to explore high-quality vocoders, namely WaveNet, for further improvement in voice quality. The perceptual difference observed between the estimated and the ground truth indicates the need for exploring better objective functions that can perceptually optimize the network parameters of GAN-based architectures, which also forms our immediate future work. At τ → ∞, the assumptions made in Section 5.1 are true. Hence, we can conclude that there exists a latent space where the normalized latent representation of the input features will be the same irrespective of speaking style. Theorem 2: By optimizing min_{En,De} L_C^{X→Y} + L_sty^{X→Y}, the assumptions made in Theorem 1 can be satisfied. Proof: Our objective function is min_{En,De} L_C^{X→Y} + L_sty^{X→Y}. We iterate step by step to calculate the term t_2 used in the loss function L_sty^{X→Y}. Consider the latent representations S_{x1} and S_{y1} corresponding to the source and target speech, respectively. Step 1: ((S_{x1}(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ) (representation of t_1). Steps 2 & 3: En(De(((S_{x1}(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ))). After applying the decoder and encoder sequentially to a latent representation, we get back the same representation; this is ensured by the loss function L_C^{X→Y}. Formally, we want to make L_C^{X→Y} → 0. Therefore, we can write Step 4 as: Step 4: ((S_{x1}(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ) (i.e., the reconstructed t_1). Step 5: (1/σ_2(τ)) · [((S_{x1}(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ) − µ_2(τ)] (normalization with its own µ and σ during AdaIN, i.e., those of the latent representation in Step 4; the σ_2(τ) and µ_2(τ) terms cancel). Step 6: (S_{x1}(τ) − µ_1(τ)) / σ_1(τ) (final output of Step 5). Step 7: ((S_{x1}(τ) − µ_1(τ)) / σ_1(τ)) · σ'_1(τ) + µ'_1(τ) (output after de-normalization in AdaIN, i.e., the representation of t_2), where µ'_1 and σ'_1 are the mean and standard deviation of the other input source speech, x_2. Now, using the mathematical representation of t_2, we can write the loss function L_sty^{X→Y} as the L1 distance between t_2 and S_{x1}(τ). We want to minimize the loss function L_sty^{X→Y}; formally, L_sty^{X→Y} → 0. Therefore, we will get µ'_1 = µ_1 and σ'_1 = σ_1 to achieve our goal. Hence, the mean and standard deviation of the same speaker are constant, and different for different speakers, irrespective of the linguistic content. We come to the conclusion that our loss functions satisfy the necessary constraints (assumptions) required in the proof of Theorem 1. Figure 6: t-SNE visualization of the latent representations of two speakers' speech and their normalized forms, where each point denotes a feature extracted from a 25 ms speech segment. As we know, Neural Networks (NNs) are hard to train and optimize. Even if everything has been proven theoretically, statistical and empirical analysis is required. For this analysis, we have adopted t-SNE visualization. Here, we randomly selected a few utterances from two different speakers from the VCTK corpus. Latent representations are extracted for the speech of those speakers, and the features are reduced to 2-D using t-SNE. The scatter plot shown in Fig. 6 shows that the data points are clustered based on speaking style. After being normalized with their respective means and standard deviations, these distributions overlap. This shows that the distribution of the normalized latent representation captures linguistic information-based features irrespective of speaking style, as proved in Theorem 1. Therefore, we can say that AdaGAN and its losses are effective for practical purposes.
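The Fig. 6 analysis can be reproduced along these lines; the per-speaker normalization axis and the t-SNE settings are assumptions of this sketch, and the latent features are taken to be the per-frame encoder outputs.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_latents(latents_a, latents_b, normalize=False):
    """t-SNE view of per-frame latent features from two speakers, before or
    after per-speaker mean/std normalization (as in the Fig. 6 analysis)."""
    if normalize:
        latents_a = (latents_a - latents_a.mean(0)) / latents_a.std(0)
        latents_b = (latents_b - latents_b.mean(0)) / latents_b.std(0)
    feats = np.concatenate([latents_a, latents_b], axis=0)
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    n = len(latents_a)
    plt.scatter(emb[:n, 0], emb[:n, 1], s=4, label="speaker A")
    plt.scatter(emb[n:, 0], emb[n:, 1], s=4, label="speaker B")
    plt.legend()
    plt.show()
```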
A novel adaptive instance normalization-based GAN framework for non-parallel many-to-many and zero-shot VC.
578
scitldr
Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Sparse Transformer. Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance. Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation. In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance. Understanding natural language requires the ability to pay attention to the most relevant information. For example, people tend to focus on the most relevant segments to search for the answers to their questions in mind during reading. However, retrieving problems may occur if irrelevant segments impose negative impacts on reading comprehension. Such distraction hinders the understanding process, which calls for an effective attention. This principle is also applicable to the computation systems for natural language. Attention has been a vital component of the models for natural language understanding and natural language generation. proposed Transformer, a model based on the attention mechanism for Neural Machine Translation(NMT). Transformer has shown outstanding performance in natural language generation tasks. More recently, the success of BERT in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer. However, the attention in vanilla Transformer has a obvious drawback, as the Transformer assigns credits to all components of the context. This causes a lack of focus. As illustrated in Figure 1, the attention in vanilla Transformer assigns high credits to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant k words. For the word "tim", the most related words should be "heart" and the immediate words. Yet the attention in vanilla Transformer does not focus on them but gives credits to some irrelevant words such as "him". Recent works have studied applying sparse attention in Transformer model. However, they either add local attention constraints which break long term dependency or hurt the time efficiency . Inspired by which introduce sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer which is equipped with our sparse attention mechanism. We implement an explicit selection method based on top-k selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the k most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer. Figure 1: Illustration of self-attention in the models. The orange bar denotes the attention score of our proposed model while the blue bar denotes the attention scores of the vanilla Transformer. The orange line denotes the attention between the target word "tim" and the selected top-k positions in the sequence. 
In the attention of vanilla Transformer, "tim" assigns too many non-zero attention scores to the irrelevant words. But for the proposal, the top-k largest attention scores removes the distraction from irrelevant words and the attention becomes concentrated. We first validate our methods on three tasks. For further investigation, we compare our methods with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses. We are surprised to find that the proposed sparse attention method can also help with training as a regularization method. Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing a high-quality alignment. The contributions of this paper are presented below: • We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer's attention through explicit selection. • We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling. Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performances in the above three tasks. Specifically, our model reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation. • Compared to previous sparse attention methods for transformers, our methods are much faster in training and testing, and achieves better . The review to the attention mechanism and the attention-based framework of Transformer can be found in Appendix A.1. Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, Explicit Sparse Transformer, which enables the focus on only a few elements through explicit selection. Compared with the conventional attention, no credit will be assigned to the value that is not highly correlated to the query. We provide a comparison between the attention of vanilla Transformer and that of Explicit Sparse Transformer in Figure 2. Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to the sparse attention through top-k selection. In this way, the most contributive components for attention are reserved and the other irrelevant information are removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of value. In the following, we first introduce the sparsification in self-attention and then extend it to context attention. In the unihead self-attention, the key components, the query are the linear transformation of the source context, namely the input of each layer, where Q = W Q x, K = W K x and V = W V x. Explicit Sparse Transformer first generates the attention scores P as demonstrated below: Then the model evaluates the values of the scores P based on the hypothesis that scores with larger values demonstrate higher relevance. The sparse attention masking operation M(·) is implemented upon P in order to select the top-k contributive elements. Specifically, we select the k largest element of each row in P and record their positions in the position matrix (i, j), where k is a hyperparameter. To be specific, say the k-th largest value of row i is t i, if the value of the j-th component is larger than t i, the position (i, j) is recorded. 
We concatenate the threshold value of each row to form a threshold vector. The masking function M(·, ·) is illustrated as follows: M(P, k)_{ij} = P_{ij} if P_{ij} ≥ t_i (the k-th largest value of row i), and −∞ otherwise. With the top-k selection, the high attention scores are selected in an explicit way. This is different from dropout, which randomly abandons the scores. Such explicit selection can not only guarantee the preservation of important components, but also simplify the model, since k is usually a small number such as 8; a detailed analysis can be found in Section 5.2. The next step after top-k selection is normalization: A = softmax(M(P, k)), where A refers to the normalized scores. As the scores that are smaller than the top k largest scores are assigned negative infinity by the masking function M(·, ·), their normalized scores, namely the probabilities, approximate 0. We show the back-propagation process of the top-k selection in A.3. The output representation of self-attention C can be computed as C = A V. [Table 1 (partial): BLEU scores of baseline models on the translation benchmarks: ConvS2S 25.2; Actor-Critic 28.5; NPMT+LM 28.4; RNMT 28.5; Fixup 29.3 / 34.5; Weighted Transformer 28.9; Universal Transformer 28.9; Layer-wise Coordination 29.1; Transformer (relative position) 29.2; Transformer 29.3; DynamicConv 29.] The output is the expectation of the value following the sparsified distribution A. Following the distribution of the selected components, the attention in the Explicit Sparse Transformer model can obtain more focused attention. Also, such sparse attention can extend to context attention. Resembling but different from the self-attention mechanism, Q is no longer the linear transformation of the source context but of the decoding states s. In the implementation, we replace Q with W_Q s, where W_Q is still a learnable matrix. In brief, the attention in our proposed Explicit Sparse Transformer sparsifies the attention weights. The attention can then become focused on the most contributive elements, and it is compatible with both self-attention and context attention. The simple implementation of this method is in Appendix A.4. We conducted a series of experiments on three natural language processing tasks, including neural machine translation, image captioning and language modeling. Detailed experimental settings are in Appendix A.2. Dataset To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks: English-to-German translation (En-De) with a large dataset, and English-to-Vietnamese (En-Vi) and German-to-English (De-En) translation with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used newstest 2013 for validation and newstest 2014 as our test set, and report results on the test set. For En-Vi, we trained our model on the dataset of IWSLT 2015. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for the source language is around 17,200 and that for the target language is around 7,800. We used tst2012 for validation and tst2013 for testing, and report the test results. For De-En, we used the dataset of IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences; we used the same test set of around 7K sentences. The data were preprocessed with byte-pair encoding. The vocabulary size is 14,000. Result The results for the NMT tasks are reported in Table 1. Dataset We evaluated our approach on the image captioning task.
Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset (a). It contains 123,287 images, each of which is paired 5 with descriptive sentences. We report the and evaluate the image captioning model on the MSCOCO 2014 test set for image captioning. We used the publicly-available splits provided by. The validation set and test set both contain 5,000 images. Result Table 2 shows the of the baseline models and Explicit Sparse Transformer on the COCO Karpathy test split. Transformer outperforms the mentioned baseline models. Explicit Sparse Transformer outperforms the implemented Transformer by +0.4 in terms of BLEU-4, +0.3 in terms of METEOR, +0.7 in terms of CIDEr., which consistently proves its effectiveness in Image Captioning. 2 is large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia texts. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size 205 tokens, including one for unknown characters. We used the same preprocessing method following. The training set contains 90M bytes of data, and the validation set and the test set contains 5M respectively. Result Table 3 shows the of the baseline models and Explicit Sparse Transformer-XL on the test set of enwiki8. Compared with the other strong baselines, Transformer-XL can reach a better performance, and Explicit Sparse Transformer outperforms Transformer-XL with an advantage. Table 4: In the Transformer model, the proposed method, top-k selection before softmax is faster than previous sparse attention methods and is comparable in terms of BLEU scores. In this section, we performed several analyses for further discussion of Explicit Sparse Transformer. First, we compare the proposed method of topk selection before softmax with previous sparse attention method including various variants of sparsemax (; ;). Second, we discuss about the selection of the value of k. Third, we demonstrate that the top-k sparse attention method helps training. In the end, we conducted a series of qualitative analyses to visualize proposed sparse attention in Transformer. We compare the performance and speed of our method with the previous sparse attention methods 3 on the basis of strong implemented transformer baseline. The training and inference speed are reported on the platform of Pytorch and IWSLT 2014 De-En translation dataset, the batch size for inference is set to 128 in terms of sentence and half precision training(FP-16) is applied. As we can see from Table 4, the proposed sparse attention method achieve the comparable as previous sparse attention methods, but the training and testing speed is 2x faster than sparsemax and 10x faster than Entmax-alpha during the inference. This is due to the fact that our method does not introduce too much computation for calculating sparse attention scores. The other group of sparse attention methods of adding local attention constraints into attention , do not show performance on neural machine translation, so we do not compare them in Table 4. Base T T&P En-Vi (BLEU) 27.4 27.7 27.8 Table 5: Results of the ablation study of the sparsification at different phases on the En-Vi test set. "Base" denotes vanilla Transformer. "T" denotes only adding the sparsification in the training phase, and "T&P" denotes adding it at both phases as the implementation of Explicit Sparse Transformer does. 
The natural question of how to choose the optimal k comes with the proposed method. We compare the effect of the value of k at exponential scales. We perform experiments on En-Vi and De-En from 3 different initializations for each value of K, and report the mean BLEU scores on the valid set. The figure 3 shows that regardless of the value of 16 on the En-Vi dataset, the model performance generally rises first and then falls as k increases. Under the setting of the k ∈ {4, 8, 16, 32}, setting the value of k to 8 achieves consistent improvements over the We are surprised to find that only adding the sparsification in the training phase can also bring an improvement in the performance. We experiment this idea on IWSLT En-Vi and report the on the valid set in Table 5,. The improvement of 0.3 BLEU scores shows that vanilla Transformer may be overparameterized and the sparsification encourages the simplification of the model. To perform a thorough evaluation of our Explicit Sparse Transformer, we conducted a case study and visualize the attention distributions of our model and the baseline for further comparison. Specifically, we conducted the analysis on the test set of En-Vi, and randomly selected a sample pair of attention visualization of both models. The visualization of the context attention of the decoder's bottom layer in Figure 4 (a). The attention distribution of the left figure is fairly disperse. On the contrary, the right figure shows that the sparse attention can choose to focus only on several positions so that the model can be forced to stay focused. For example, when generating the phrase "for thinking about my heart"(Word-to-word translation from Vietnamese), the generated word cannot be aligned to the corresponding words. As to Explicit Sparse Transformer, when generating the phrase "with all my heart", the attention can focus on the corresponding positions with strong confidence. The visualization of the decoder's top layer is shown in Figure 4 (b). From the figure, the context attention at the top layer of the vanilla Transformer decoder suffers from focusing on the last source token. This is a common behavior of the attention in vanilla Transformer. Such attention with wrong alignment cannot sufficiently extract enough relevant source-side information for the generation. In contrast, Explicit Sparse Transformer, with simple modification on the vanilla version, does not suffer from this problem, but instead focuses on the relevant sections of the source context. The figure on the right demonstrating the attention distribution of Explicit Sparse Transformer shows that our proposed attention in the model is able to perform accurate alignment. Attention mechanism has demonstrated outstanding performances in a number of neural-networkbased methods, and it has been a focus in the NLP studies . A number of studies are proposed to enhance the effects of attention mechanism (; ;). propose local attention and propose local attention for self-attention. propose hard attention that pays discrete attention in image captioning. propose a combination soft attention with hard attention to construct hierarchical memory network. propose a temperature mechanism to change the softness of attention distribution. propose an attention which can select a small proportion for focusing. It is trained by reinforcement learning algorithms . In terms of memory networks, propose to sparse access memory recently propose to use local attention and block attention to sparsify the transformer. 
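Attention maps of the kind shown in Figure 4 can be rendered with a small plotting utility like the following; the attention matrix and token lists are whatever the inspected layer and head provide, and all names here are illustrative rather than taken from the paper's code.

```python
import matplotlib.pyplot as plt

def plot_attention(weights, src_tokens, tgt_tokens, title=""):
    """Heatmap of a context-attention matrix (one head, one layer): rows are
    target tokens, columns are source tokens, and each row sums to 1."""
    fig, ax = plt.subplots()
    im = ax.imshow(weights, aspect="auto", cmap="viridis")
    ax.set_xticks(range(len(src_tokens)))
    ax.set_xticklabels(src_tokens, rotation=90)
    ax.set_yticks(range(len(tgt_tokens)))
    ax.set_yticklabels(tgt_tokens)
    ax.set_title(title)
    fig.colorbar(im)
    plt.tight_layout()
    plt.show()
```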
Our approach differs from them in that our method does not need to block sentences and still capture long distance dependencies. Besides, we demonstrate the importance of Explicit Sparse Transformer in sequence to sequence learning. Although the variants of sparsemax (; ;) improve in machine translation tasks, we empirically demonstrate in 5.1 that our method introduces less computation in the standard transformer and is much faster than those sparse attention methods on GPUs. first introduced the attention mechanism to learn the alignment between the target-side context and the source-side context, and formulated several versions for local and global attention. In general, the attention mechanism maps a query and a key-value pair to an output. The attention score function and softmax normalization can turn the query Q and the key K into a distribution α. Following the distribution α, the attention mechanism computes the expectation of the value V and finally generates the output C. Take the original attention mechanism in NMT as an example. Both key K ∈ R n×d and value V ∈ R n×d are the sequence of output states from the encoder. Query Q ∈ R m×d is the sequence of output states from the decoder, where m is the length of Q, n is the length of K and V, and d is the dimension of the states. Thus, the attention mechanism is formulated as: where f refers to the attention score computation. Transformer , which is fully based on the attention mechanism, demonstrates the state-of-the-art performances in a series of natural language generation tasks. Specifically, we focus on self-attention and multi-head attention. The ideology of self-attention is, as the name implies, the attention over the context itself. In the implementation, the query Q, key K and value V are the linear transformation of the input x, so that Q = W Q x, K = W K x and V = W V x where W Q, W K and W V are learnable parameters. Therefore, the computation can be formulated as below: where d refers to the dimension of the states. The aforementioned mechanism can be regarded as the unihead attention. As to the multi-head attention, the attention computation is separated into g heads (namely 8 for basic model and 16 for large model in the common practice). Thus multiple parts of the inputs can be computed individually. For the i-th head, the output can be computed as in the following formula: where C (i) refers to the output of the head, Q (i), K (i) and V (i) are the query, key and value of the head, and d k refers to the size of each head (d k = d/g). Finally, the output of each head are concatenated for the output: In common practice, C is sent through a linear transformation with weight matrix W c for the final output of multi-head attention. However, soft attention can assign weights to a lot more words that are less relevent to the query. Therefore, in order to improve concentration in attention for effective information extraction, we study the problem of sparse attention in Transformer and propose our model Explicit Sparse Transformer. We use the default setting in for the implementation of our proposed Explicit Sparse Transformer. The hyper parameters including beam size and training steps are tuned on the valid set. Neural Machine Translation Training For En-Vi translation, we use default scripts and hyperparameter setting of tensor2tensor 4 v1.11.0 to preprocess, train and evaluate our model. We use the default scripts of fairseq 5 v0.6.1 to preprocess the De-En and En-De dataset. 
We train the model on the En-Vi dataset for 35K steps with a batch size of 4K. For the IWSLT 2014 De-En dataset, the batch size is also set to 4K; we update the model every 4 steps and train the model for 90 epochs. For the WMT 2014 En-De dataset, we train the model for 72 epochs on 4 GPUs with an update frequency of 32 and a batch size of 3584. We train all models on a single RTX2080TI for the two small IWSLT datasets and on a single machine of 4 RTX TITAN for WMT14 En-De. In order to reduce the impact of random initialization, we perform experiments with three different initializations for all models and report the highest score for the small datasets. Evaluation We use case-sensitive tokenized BLEU score for the evaluation of WMT14 En-De, and we use case-insensitive BLEU for that of IWSLT 2015 En-Vi and IWSLT 2014 De-En, following prior work. As in prior work, compound splitting is used for WMT 14 En-De. For WMT 14 En-De and IWSLT 2014 De-En, we save checkpoints every epoch and average the last 10 checkpoints every 5 epochs. We select the averaged checkpoint with the best valid BLEU and report its BLEU score on the test set. For IWSLT 2015 En-Vi, we save checkpoints every 600 seconds and average the last 20 checkpoints. Image Captioning We still use the default setting of Transformer for training our proposed Explicit Sparse Transformer. We report the standard automatic evaluation metrics with the help of the COCO captioning evaluation toolkit, which includes the commonly-used evaluation metrics BLEU-4 (Papineni et al.), METEOR, and CIDEr. Language Models We follow the Transformer-XL setup and use its implementation for our Explicit Sparse Transformer. Following previous work, we use BPC = E[−log_2 P(x_{t+1} | h_t)], standing for the average number of Bits-Per-Character, for evaluation. Lower BPC refers to better performance. As to the model implementation, we implement Explicit Sparse Transformer-XL, which is based on the base version of Transformer-XL. Transformer-XL is a model based on Transformer but has a better capability of representing long sequences. The masking function M(·, ·) is illustrated as follows: M(P, k)_{ij} = P_{ij} if P_{ij} ≥ t_i (the k-th largest value of row i), and M(P, k)_{ij} = −∞ if P_{ij} < t_i. Denote M = M(P, k). We regard the t_i as constants. When back-propagating, ∂M_{ij}/∂P_{ij} = 1 if P_{ij} ≥ t_i, and ∂M_{ij}/∂P_{ij} = 0 if P_{ij} < t_i. The next step after top-k selection is normalization: A = softmax(M(P, k)), where A refers to the normalized scores. When back-propagating, ∂A_{ij}/∂P_{kl} = ∂A_{ij}/∂M_{kl} if P_{kl} ≥ t_k, and 0 otherwise. The softmax function is evidently differentiable; therefore, we have calculated the gradient involved in the top-k selection. A.4 IMPLEMENTATION Figure 5 shows the code for the idea in the case of single-head self-attention; the proposed method is easy to implement and plug into the standard Transformer model.
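A minimal single-head sketch of this plug-in implementation is given below in PyTorch; it is our own sketch in the spirit of Figure 5 rather than a reproduction of it, and the 1/sqrt(d) scaling of the scores follows the standard Transformer.

```python
import torch
import torch.nn.functional as F

def sparse_attention(Q, K, V, k):
    """Explicit sparse (top-k) attention: keep only the k largest scores per
    query row, mask the rest to -inf, softmax-normalize, then average values.

    Q: (..., m, d), K: (..., n, d), V: (..., n, d_v); k: number of scores kept.
    """
    d = Q.size(-1)
    P = Q @ K.transpose(-2, -1) / d ** 0.5          # attention scores
    # Threshold t_i = k-th largest score in each row of P.
    t = P.topk(k, dim=-1).values[..., -1:]          # (..., m, 1)
    P_masked = P.masked_fill(P < t, float("-inf"))  # M(P, k)
    A = F.softmax(P_masked, dim=-1)                 # sparsified distribution
    return A @ V                                    # C = A V
```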
This work proposes the Sparse Transformer to improve the concentration of attention on the global context through an explicit selection of the most relevant segments for sequence-to-sequence learning.
579
scitldr
Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence. We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data. When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%. We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset. Figure 1 (curves: ResNet trained on CPC, ResNet trained on pixels): With decreasing amounts of labeled data, supervised networks trained on pixels fail to generalize (red). When trained on unsupervised representations learned with CPC, these networks retain a much higher accuracy in this low-data regime (blue). Equivalently, the accuracy of supervised networks can be matched with significantly fewer labels. Deep neural networks excel at perceptual tasks when labeled data are abundant, yet their performance degrades substantially when provided with limited supervision (Fig. 1, red). In contrast, humans and animals can quickly learn about new classes of objects from few examples. What accounts for this monumental difference in data-efficiency between biological and machine vision? While highly-structured representations (e.g. as proposed by) may improve data-efficiency, it remains unclear how to program explicit structures that capture the enormous complexity of real visual scenes like those in ImageNet. An alternative hypothesis has proposed that intelligent systems need not be structured a priori, but can instead learn about the structure of the world in an unsupervised manner. Choosing an appropriate training objective is an open problem, but a promising guiding principle has emerged recently: good representations should make the spatio-temporal variability in natural signals more predictable. Indeed, human perceptual representations have been shown to linearize (or 'straighten') the temporal transformations found in natural videos, a property lacking from current supervised image recognition models (Hénaff et al., 2019), and theories of both spatial and temporal predictability have succeeded in describing properties of early visual areas. In this work, we hypothesize that spatially predictable representations may allow artificial systems to benefit from human-like data-efficiency. Contrastive Predictive Coding (CPC, van den) is an unsupervised objective which learns such predictable representations. CPC is a general technique that only requires in its definition that observations be ordered along e.g. temporal or spatial dimensions, and as such has been applied to a variety of different modalities including speech, natural language and images.
This generality, combined with the strong performance of its representations in downstream linear classification tasks, makes CPC a promising candidate for investigating the efficacy of predictable representations for data-efficient image recognition. Our work makes the following contributions: • We revisit CPC in terms of its architecture and training methodology, and arrive at a new implementation of CPC with dramatically-improved ability to linearly separate image classes (+17% Top-1 ImageNet classification accuracy). • We then train deep networks on top of the ing CPC representations using very few labeled images (e.g. 1% of the ImageNet dataset), and demonstrate test-time classification accuracy far above networks trained on raw pixels (73% Top-5 accuracy, a 28% absolute improvement), outperforming all other unsupervised representation learning methods (+15% Top-5 accuracy over the previous state-of-the-art). Surprisingly, this representation also surpasses supervised methods when given the entire ImageNet dataset (+1% Top-5 accuracy). • We isolate the contributions of different components of the final model to such downstream tasks. Interestingly, we find that linear classification accuracy is not always predictive of low-data classification accuracy, emphasizing the importance of this metric as a stand-alone benchmark for unsupervised learning. • Finally, we assess the generality of CPC representations by transferring them to a new task and dataset: object detection on PASCAL-VOC 2007. Consistent with the from the previous section, we find CPC to give state-of-the-art performance in this setting. We first review the CPC architecture and learning objective in section 2.1, before detailing how we use its ing representations for image recognition tasks in section 2.2. Contrastive Predictive Coding as formulated in (van den) learns representations by training neural networks to predict the representations of future observations from those of past ones. When applied to images, the original formulation of CPC operates by predicting the representations of patches below a certain position from those above it (Fig. 2, left). These predictions are evaluated using a contrastive loss, in which the network must correctly classify the'future' representation amongst a set of unrelated'negative' representations. This avoids trivial solutions such as representing all patches with a constant vector, as would be the case with a mean squared error loss. In the CPC architecture, each input image is first divided into a set of overlapping patches x i,j, each of which is encoded with a neural network f θ into a single vector z i,j = f θ (x i,j). To make predictions, a masked convolutional network g φ is then applied to the grid of feature vectors. The masks are such that the receptive field of each ing context vector c i,j only includes feature vectors that lie above it in the image (i.e. {z u,v} u≤i,v ). The prediction task then consists o predicting'future' feature vectors z i+k,j from current context vectors c i,j, where k > 0. The predictions are made linearly: given a context vector c i,j, a prediction length k > 0, and a prediction matrix W k, the predicted feature vector isẑ i+k,j = W k c i,j. The quality of this prediction is then evaluated using a contrastive loss. Specifically, the goal is to correctly recognize the target z i+k,j among a set of randomly sampled feature vectors {z l} from the dataset. 
We compute the probability assigned to the target using a softmax, and evaluate this probability using the usual cross-entropy loss. Summing this loss over locations and prediction Figure 2: Overview of the framework for semi-supervised learning with Contrastive Predictive Coding. Left: unsupervised pre-training with the spatial prediction task (See Section 2.1). First, an image is divided into a grid of overlapping patches. Each patch is encoded independently from the rest with a feature extractor (blue) which terminates with a mean-pooling operation, yielding a single feature vector for that patch. Doing so for all patches yields a field of such feature vectors (wireframe vectors). Feature vectors above a certain level (in this case, the center of the image) are then aggregated with a context network (red), yielding a row of context vectors which are used to linearly predict features vectors below. Right: using the CPC representation for a classification task. Having trained the encoder network, the context network (red) is discarded and replaced by a classifier network (green) which can be trained in a supervised manner. For some experiments, we also fine-tune the encoder network (blue) for the classification task. When applying the encoder to cropped patches (as opposed to the full image) we refer to it as a patched ResNet in the figure. offsets, we arrive at the CPC objective as defined in (van den): The negative samples {z l} are taken from other locations in the image and other images in the minibatch. This loss is called InfoNCE (van den) as it is inspired by Noise-Contrastive Estimation (Gutmann & Hyvärinen, 2010;) and has been shown to maximize the mutual information between c i,j and z i+k,j (van den). Having trained an encoder network f θ, a context network g φ, and a set of linear predictors {W k} using the CPC objective, we use the latents z = f θ (x) as a representation of new observations x for downstream tasks, and discard the rest. We then train a model h ψ to classify these representations given a dataset of labeled images. More formally, given a dataset of N unlabeled images D u = {x n}, and a (potentially much smaller) dataset of M labeled images D l = {x m, y m}: In all cases, the dataset of unlabeled images D u we pre-train on is the full ImageNet ILSVRC 2012 training set . We consider three labeled datasets D l for evaluation, each with an associated classifier h ψ and supervised losse L Sup (see Fig. 2, right). This protocol is sufficiently generic to allow us to later compare the CPC representation to other methods which have their own means of learning a feature extractor f θ. Linear classification is the standard benchmark for evaluating the quality of unsupervised image representations. In this regime, the classification network h ψ is restricted to mean pooling followed by a single linear layer, and the parameters of f θ are kept fixed. The labeled dataset D l is the entire ImageNet dataset, and the supervised loss L Sup is standard cross-entropy. We use the same dataaugmentation as in the unsupervised learning phase for training, and none at test time and evaluate with a single crop. Efficient classification directly tests whether the CPC representation enables visual learning from few labels. For this task, the classifier h ψ is an arbitrary deep neural network (we use an 11-block ResNet architecture with 4096-dimensional feature maps and 1024-dimensional bottleneck layers). 
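Before detailing these evaluation protocols further, the following minimal PyTorch-style sketch illustrates the spatial InfoNCE objective defined above for a single downward prediction offset. The tensor shapes, the use of all other target latents in the mini-batch as negatives, and the function names are simplifying assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cpc_infonce_loss(z, c, W_k, k):
    # z: [B, H, W, C] patch latents from f_theta; c: [B, H, W, C] context vectors from g_phi;
    # W_k: [C, C] linear predictor for offset k (predicting k rows below).
    B, H, Wd, C = z.shape
    preds = (c[:, :H - k] @ W_k).reshape(-1, C)   # predicted latents \hat{z}_{i+k,j}
    targets = z[:, k:].reshape(-1, C)             # true latents z_{i+k,j}
    # Score every prediction against every target latent; the matching latent is the
    # positive and all other latents in the batch act as negatives.
    logits = preds @ targets.t()
    labels = torch.arange(logits.size(0), device=z.device)
    return F.cross_entropy(logits, labels)
```

In practice the loss is summed over several offsets k and prediction directions, with negatives drawn from other spatial locations and other images as described above.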
The labeled dataset D l is a subset of the ImageNet dataset: we investigated using 1%, 2%, 5%, 10%, 20%, 50% and 100% of the ImageNet dataset. The supervised loss L Sup is again cross-entropy. In addition to random color-dropping we use the Inception data-augmentation scheme for training, no augmentation at test-time and evaluate with a single crop. Transfer learning tests the generality of the representation by applying it to a new task and dataset. For this we chose image detection on the PASCAL-2007 dataset, a standard benchmark in computer vision . As such D l is the entire PASCAL-2007 dataset (comprised of 5011 labeled images); h ψ and L Sup are the Faster-RCNN architecture and loss . In addition to color-dropping, we use scale-augmentation for training. For linear classification, we keep the feature extractor f θ fixed to assess the representation in absolute terms. For efficient classification and transfer learning, we additionally explore fine-tuning the feature extractor for the supervised objective. In this regime, we initialize the feature extractor and classifier with the solutions θ *, ψ * found in the previous learning phase, and train them both for the supervised objective. To ensure that the feature extractor does not deviate too much from the solution dictated by the CPC objective, we use a smaller learning rate and early-stopping. Data-efficient learning has typically been approached by two complementary methods, both of which seek to make use of more plentiful unlabeled data: representation learning and semisupervised learning. The former formulates an objective to learn a feature extractor f θ in an unsupervised manner, whereas the latter directly constrains the classifier h ψ using the unlabeled data. Representation learning saw early success using generative modeling , but likelihood-based models have yet to generalize to more complex stimulus classes. Generative adversarial models have also been harnessed for representation learning , and large-scale implementations have recently achieved corresponding gains in linear classification accuracy . In contrast to generative models which require the reconstruction of observations, self-supervised techniques directly formulate tasks involving the learned representation. For example, simply asking a network to recognize the spatial layout of an image led to representations that transferred to popular vision tasks such as classification and detection . Other works showed that prediction of color and image orientation , and invariance to data augmentation can provide useful self-supervised tasks. Beyond single images, works have leveraged video cues such as object tracking , frame ordering , and object boundary cues . Non-visual information can be equally powerful; information about camera motion , scene geometry , or sound (; can all serve as natural sources of supervision. While many of these tasks require predicting fixed quantities computed from the data, another class of contrastive methods formulate their objectives in the learned representations themselves. CPC is a contrastive representation learning method that maximizes the mutual information between spatially removed latent representations with InfoNCE (van den), a loss function based on Noise-Contrastive Estimation (Gutmann & Hyvärinen, 2010;). Two other methods have recently been proposed using the same loss function, but with different associated prediction tasks. 
Contrastive Multiview Coding maximizes the mutual information between representations of different views of the same observation. Augmented Multiscale Deep InfoMax is most similar to CPC in that it makes predictions across space, but differs in that it also predicts representations across layers in the model. In addition, AMDIM limits the receptive field of its representation, but does this by constraining the number of spatial convolutions in the network architecture rather than using image patches. A common alternative approach for improving data efficiency is label-propagation , where a classifier is trained on a subset of labeled data, then used to label parts of the unlabeled dataset, after which the process is repeated. This label-propagation can either be discrete (as in pseudo-labeling,) or continuous (as in entropy minimization,). The predictions of this classifier are often constrained to be smooth with respect to certain deformations, such as data-augmentation or adversarial perturbation . Representation learning and semi-supervised learning have been shown to be complementary and can be combined to great effect, which is why we focus solely on representation learning in this paper. When asking whether CPC enables data-efficient learning, we wish to use the best possible representative of this model class. Unfortunately, purely unsupervised metrics tell us little about downstream performance, and implementation details have been shown to matter enormously (; . Since many design choices (e.g. network architecture and datapreprocessing) have been previously evaluated using linear classification, we use this benchmark in section 4.1 to align the CPC model with best practices in representation learning and compare to published . In section 4.2 we select the best performing model from the previous section and assess whether it enables efficient classification. We also investigate to what extent the first, more common metric (linear classification accuracy) is predictive of efficient classification. Finally, in section 4.3 we investigate the generality of our through transfer learning to PASCAL-2007. The overarching principle behind our new model design is to increase the scale and efficiency of the encoder architecture while also maximizing the supervisory signal we obtain from each image. At the same time, it is important not to allow the network to solve the problem trivially, i.e., without learning semantics. To this end, we seek to remove low-level cues common across patches by augmenting individual patches independently, using standard stochastic data-processing techniques from supervised and self-supervised learning. We identify four axes for model capacity and task setup that could impact the model's performance. The first axis increases model capacity by increasing depth and width, while the second improves training efficiency capacity by introducing layer normalization. The third axis increases task complexity by making predictions in all four directions, and the fourth does so by performing more extensive patch-based augmentation. Model capacity. Recent work has shown that networks and more effective training improves self-supervised learning (; Motion Segmentation (MS) 27.6 48.3 Exemplar (Ex) 31.5 53.1 Relative Position (RP) 36.2 59.2 Colorization (Col) 39.6 62.5 Combination of MS + Ex + RP + Col -69.3 CPC v1 (van den 48.7 73.6 Rotation 55.4 -CMC 60.1 82.8 Local Aggregation 60.2 -BigBiGAN 61.3 81.9 AMDIM 68.1 -CPC v2 (ours) 65.9 86.6 other design choices. 
Interestingly, a larger architecture delivers larger improvements with more efficient training, more self-supervised losses, and more patch-based augmentations (Fig. 3, +5% Top-1 accuracy with original training scheme, +10% accuracy with new one). Layer normalization. Large architectures are more difficult to train efficiently. Early works on context prediction with patches used batch normalization to speed training. However, with CPC we find that batch normalization actually harms downstream performance of large models. We hypothesize that batch normalization allows large models to find a trivial solution to CPC: it introduces a dependency between patches (through the batch statistics) that can be exploited to bypass the constraints on the receptive field. We find that we can reclaim much of batch normalization's training efficiency using layer normalization , which leads to a small gain for the smaller architecture (+1% accuracy over equivalent architectures that use neither normalization) and a larger gain for the larger architecture (+2.5% accuracy). Prediction lengths and directions. Larger architectures also run a greater risk of overfitting. We address this by asking more from the network: specifically, whereas van den predicted each patch using only context from spatially beneath it, we repeatedly predict the patch using context from above, to the right, and to the left, ing in up to four times as many prediction tasks. Combining top-to-bottom with bottom-to-top helps both model architectures (+2% accuracy for both), but using all 4 spatial directions only benefits the larger model (an additional +1.5% for the larger model, -1% for the smaller), consistent with the idea that model capacity and amount of supervision must go hand-in-hand. We also hypothesized that prediction "length"-i.e. offset between the predicted patch and the aggregated context-might affect performance, as distant patches might lie on distinct objects, encouraging the network to memorize images. Indeed, limiting the range of the prediction length k to {2, 3} performed better than {2, . . ., 5} as was used originally (+1% for the larger model). Patch-based augmentation. If the network can solve CPC using low-level patterns (e.g. straight lines continuing between patches, chromatic aberration), it need not learn semantically meaningful content. Augmenting the low-level variability across patches can remove such low level cues. The original CPC model spatially jitters individual patches independently. We further this logic by adopting the'color dropping' method of , which randomly drops two of the three color channels in each patch, and find it to delivers systematic gains (+1% for the small model, +3% for the larger one). We also randomly flip patches horizontally, but find it only benefits the smaller model (+1%). Combined. Cumulatively, these fairly straightforward implementation changes lead to a substantial improvement to the original CPC model (65.9% Top-1 accuracy, a 17% improvement), making it competitive with recent approaches and outperforming prior methods (see table 1). Interestingly, if we train the same patch-based architecture from scratch in a fully supervised manner, we obtain 66.4% Top-1 accuracy (with batch normalization; 62.5% without), suggesting that CPC is now nearly saturating the architecture's representational power despite not using labels. 
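As a concrete example of the patch-based augmentation discussed above, the following is a minimal sketch of the 'color dropping' transform applied independently to each patch; the sampling details are assumptions and may differ from the original implementation. Random per-patch horizontal flips can be added in the same way.

```python
import torch

def color_drop(patch):
    # patch: [3, H, W]. Randomly drop two of the three color channels,
    # keeping a single channel, so low-level color cues cannot be shared across patches.
    keep = torch.randint(0, 3, (1,)).item()
    out = torch.zeros_like(patch)
    out[keep] = patch[keep]
    return out
```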
These illustrate how architecture and data have an outsized impact on the linear classification performance of self-supervised representations, and are interesting to compare with with previous . For example, in AMDIM, different settings of data augmentation alone can in a nearly 10% absolute increase in performance on ImageNet linear classification. toWe now turn to our original question of whether CPC can enable data-efficient image recognition. We start by evaluating the performance of purely-supervised networks as the size of the labeled dataset D l varies from 1% to 100% of ImageNet, training separate classifiers on each subset. We found that a ResNet-152 to works best across all data-regimes (see Appendix). Despite our efforts to tune the supervised model for low-data classification (including network depth, regularization, and optimization parameters), the accuracy of the best model only reaches 44.1% Top-5 accuracy when trained on 1% of the dataset (compared to 93.9% when trained on the entire dataset, see Fig. 1, red). Orange, green, and blue dots correspond to CPC models making predictions in 1, 2, and 4 spatial directions respectively. Within a color group, different models correspond to other implementation details (e.g. layer norm and patch augmentation, and combinations thereof). Contrastive Predictive Coding. We now address our central question of whether CPC enables data-efficient learning. We follow the same paradigm as for the supervised baseline (training and evaluating a separate classifier for each size subset), stacking a neural network classifier on top of the CPC latents z = f θ (x) rather than the raw image pixels x (see section 2.2, efficient classification, and Appendix). This representation, which we selected for its improved linear classification performance (CPC v2 in Fig. 3), leads to a significant increase in data-efficiency compared to purely supervised networks (Fig. 1, He et al. (2016b) ). We find similar in all other data-regimes we considered (see Fig. 1). How important are the model specifications described in Section 4.1 for low-data classification? We hypothesized that predictable representations might enable data-efficient classification, and therefore expect that increasing the amount of'predictability' in the representation should also increase its ability to learn from small amounts of data. Fig. 4 shows evidence for this by ablating model parameters and comparing linear classification performance against low-data classification. Consistent with our hypothesis, increasing the number of spatial directions in the CPC prediction task (which increased linear classification performance) systematically increases low-data classification performance (Fig. 4, left, different color groups). As a control, we asked if all modifications that improve linear classification also improve low-data classification. We did not find evidence in favor of this: improvements in linear classification as a of changing other model Methods using label-propagation: Pseudolabeling 51.6 82.4 VAT + Entropy Minimization parameters (patch-based data-augmentation, layer normalization, and combinations thereof) seem uncorrelated to performance in other tasks (Fig. 4, left, within green group: R 2 = 0.17, p = 0.36). Different architectural specifications also produced different changes in both tasks (Fig. 4, right). 
Whereas increasing the depth of the encoding network greatly improves both metrics, increasing the network width (and therefore the number of features used for linear classification) only improves linear classification accuracy. Other unsupervised representations. How well does the CPC representation compare to other representations that have been learned in an unsupervised manner? If predictable representations are uniquely suited for efficient classification, we would expect other methods within this family to perform similarly, and other model classes less so. Table 2 compares our best model with other works on efficient recognition. We consider three objectives from different model classes: self-supervised learning with rotation prediction, large-scale adversarial feature learning , and another contrastive prediction objective . evaluate the low-data classification performance of representations learned with rotation prediction using a similar paradigm and architecture (ResNet-152), hence we report their directly. Given 1% of ImageNet, their method achieves 57.5% Top-5 accuracy, consistently with the reduced accuracy of a linear classifier (55.4% vs 65.9% for CPC) 1. Because BigBiGAN and AMDIM achieve stronger linear classification accuracy than rotation prediction (61.3% and 68.1% Top-1 accuracy, respectively), we might expect better performance on efficient classification as well. Since their authors do not report on efficient classification we evaluated these representations using the same paradigm we used for evaluating CPC, stacking a ResNet classifier on top of the 7×7×8192 latents of the BigBiGAN and the 7×7×2560 grid of feature vectors of AMDIM. We found fine-tuned representations to yield only marginal gains over fixed ones (72.9% compared to 72.3% Top-5 accuracy given 1% of labels), hence for simplicity we evaluate BigBiGAN and AMDIM on this task while keeping them fixed. We re-tune the hyper-parameters of the classifier (including optimization, regularization, etc.) for each of these representations separately. Although these methods achieve similar performance in terms of linear classification, we find them to achieve very different in efficient classification. Given 1% of ImageNet, classifiers trained on top of BigBiGAN achieve 55.2% Top-5 accuracy, similarly to rotation prediction (57.5%), despite its increased linear classification accuracy (+6% relative to rotation prediction). In contrast, AMDIM (which also belongs to the family of contrastive prediction methods) achieves 67.4% on this same task. Again, its increased linear classification accuracy did not entail an increase in data-efficiency. Nevertheless, in line with our initial hypothesis, we find that contrastive prediction methods such as CPC surpass other approaches in our efficient classification experiments, and that linear classification performance is not perfectly correlated with these . Other semi-supervised techniques A separate class of methods for low-data classification attempts to propagate the knowledge extracted from the subset of labeled examples to unlabeled examples while being invariant to augmentation or other perturbations. These methods generally depend on the quality of the classifier's predictions, and as such tend to fare well when given intermediate amounts of data. 
Although not sufficient in themselves (Unsupervised Data Augmentation , Virtual Adversarial Training and entropy minimization , and pseudo-labeling achieve 85.8%, 83.4%, and 82.4% Top-5 accuracy with 10% of labels, compared to our 89.4%) when combined with representation learning (e.g. rotation prediction they can provide considerable gains (91.2% Top-5 accuracy). It is therefore surprising that CPC representations alone can enable accuracy that is comparable to that of these methods, and investigating to what extent they can be combined would be an interesting topic of future work. We next investigate transfer performance on object detection on the PASCAL-2007 dataset, which reflects the practical scenario where a representation must be trained on a dataset with different statistics than the dataset of interest. This dataset also tests the efficiency of the representation as it only contains 5011 labeled images to train from. In this setting, we replaced the neural network classifier h ψ used previously with a Faster-RCNN image detection architecture, and use the pre-trained feature extractor on ImageNet. As before, we first trained the Faster-RCNN model while keeping the feature extractor fixed, then fine-tuned the entire model end-to-end. Table 3 displays our compared to other methods. Most competing methods, which optimize a single unsupervised objective on ImageNet before fine-tuning on PASCAL detection, attain around 65% mean average precision. Leveraging larger unlabeled datasets increases their performance up to 67.8% . Combining multiple forms of self-supervision enables them to reach 70.5% . The proposed method, which learns only from ImageNet data using a single unsupervised objective, reaches 70.6% when equipped with a ResNet-101 feature extractor f θ (as for most competing methods but not all (; . Equipped with the more powerful ResNet-161 feature extractor f θ, our method reaches 72.7%. Importantly, this is only 2% short of the performance attained by purely supervised transfer learning, which we obtain by using all ImageNet labels before transferring to PASCAL. We asked whether CPC could enable data-efficient image recognition, and found that it indeed greatly improves the accuracy of classifiers and object detectors when given small amounts of labeled data. Surprisingly, CPC even improves given ImageNet-scale labels. Our show that there is still room for improvement using relatively straightforward changes such as augmentation, optimization, and network architecture. Furthermore, we found that the standard method for evaluating unsupervised representations-linear classification-is only partially predictive of efficient recognition performance, suggesting that further research should focus on efficient recognition as a standalone benchmark. Overall, these open the door toward research on problems where data is naturally limited, e.g. medical imaging or robotics. image detection accuracy to other transfer methods. The supervised baseline learns from the entire labeled ImageNet dataset and fine-tunes for PASCAL detection. The second class of methods learns from the same unlabeled images before transferring. All of these methods pre-train on the ImageNet dataset, except for DeeperCluster which learns from the larger, but uncurated, YFCC100M dataset . All are reported in terms of mean average precision (mAP). † denotes methods implemented in this work. 
Transfer from labeled data: Supervised -ResNet-152 74.7 Transfer from unlabeled data: Exemplar (Ex) 60.9 Motion Segmentation (MS) 61.1 Colorization (Col) 65.5 Relative Position (RP) 66.8 Combination of Ex + MS + Col + RP 70.5 Instance Discrimination 65.4 Deep Cluster 65.9 Deeper Cluster 67.8 Local Aggregation 69.1 † Faster-RCNN trained on CPC v2 (ResNet-101, fine-tuned) 70.6 † Faster-RCNN trained on CPC v2 72.7 Furthermore, images are far from the only domain where unsupervised representation learning is important: for example, unsupervised learning is already a critical step in language , and shows promise in domains like audio (van den ; ;, video , and robotic manipulation ). Currently much self-supervised work builds upon tasks tailored for a specific domain (often images), which may not be easily adapted to other domains. Contrastive prediction methods, including the techniques suggested in this paper, are task agnostic and could therefore serve as a unifying framework for integrating these tasks and modalities. This generality is particularly useful given that many realworld environments are inherently multimodal, e.g. robotic environments which can have vision, audio, touch, proprioception, action, and more over long temporal sequences. Given the importance of increasing the amounts of self-supervision (via additional directions of prediction), integrating these modalities and tasks could lead to unsupervised representations which rival the efficiency and effectiveness of biological ones. A.1 ADDITIONAL For completeness, we provide pseudo-code for the main calculations involved in the InfoNCE objective, loosely modeled after Tensorflow operations. We suppose we have just calculated a set of latents z i,j = f θ (x i,j) for i, j ∈ {1, . . ., 7}, each one being e.g. a 4096-dimensional vector. Assuming we do so for a batch of B images {x}, the set of latents is a tensor of size B × 7 × 7 × 4096. shaped input image, we end up with a grid of 6x6 features (each of which is obtained from our ResNet-161 architecture). This gives us a tensor for the image. We then use a Batch-Normalization layer to normalize the features (without scale parameter) followed by a 1x1 convolution mapping each feature in the grid to the 1000 logits for ImageNet classification. We then spatially-mean-pool these logits to end up with the final log probabilities for the linear classification. • We use the Inception preprocessing to extract 240x240 crops from the raw image. The image is divided into subcrops as per CPC data-preprocessing used for CPC pre-training. • Optimization details: We use Adam Optimizer with a learning rate of 5e-4. We train the model on a batch size of 512 images with 32 images per core spread over 16 workers. In order to find the best model within this class, we vary the following hyperparameters: • Model architecture: We investigate using ResNet-50, ResNet-101, and ResNet-152 model architectures, all of them using the'v2' variant (b), and find larger architecture to perform better, even when given smaller amounts of data. We insert a DropOut layer before the final linear classification layer . • Data pre-processing: We use the Inception pre-processing pipeline . • Optimization details: We vary the learning rate in {0.05, 0.1, 0.2}, the weight decay logarithmically from 10 −5 to 10 −2, the DropOut linearly from 0 to 1, and the batch size per worker in {16, 32}. 
We chose the best performing model for each training subset D l of labeled ImageNet (using a separate validation set), and report its accuracy on the test set (i.e. the publicly available ILSVRC-2012 validation set).
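As an illustration of the linear-evaluation head described in the appendix above (batch normalization without a scale parameter, a 1x1 convolution to class logits, and spatial mean pooling of the logits), here is a minimal PyTorch-style sketch; the module names and the assumption of a [B, 4096, 6, 6] feature grid are for exposition only.

```python
import torch.nn as nn

class LinearEvalHead(nn.Module):
    # Linear classification on a frozen grid of CPC features, e.g. shape [B, 4096, 6, 6].
    def __init__(self, feat_dim=4096, num_classes=1000):
        super().__init__()
        self.bn = nn.BatchNorm2d(feat_dim, affine=False)           # normalize features (no scale/shift)
        self.fc = nn.Conv2d(feat_dim, num_classes, kernel_size=1)  # per-location linear classifier

    def forward(self, features):
        logits = self.fc(self.bn(features))   # [B, num_classes, 6, 6]
        return logits.mean(dim=(2, 3))        # spatially mean-pool the logits
```

The encoder f_theta stays fixed in this protocol; only this head is trained with cross-entropy.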
Unsupervised representations learned with Contrastive Predictive Coding enable data-efficient image classification.
580
scitldr
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Current state-of-the-art neural networks need extensive computational resources to be trained and can have capacities of close to one billion connections between neurons (; ;). One solution that nature found to improve neural network scaling is to use sparsity: the more neurons a brain has, the fewer connections neurons make with each other . Similarly, for deep neural networks, it has been shown that sparse weight configurations exist which train faster and achieve the same errors as dense networks. However, currently, these sparse configurations are found by starting from a dense network, which is pruned and re-trained repeatedly -an expensive procedure. In this work, we demonstrate the possibility of training sparse networks that rival the performance of their dense counterparts with a single training run -no re-training is required. We start with random initializations and maintain sparse weights throughout training while also speeding up the overall training time. We achieve this by developing sparse momentum, an algorithm which uses the exponentially smoothed gradient of network weights (momentum) as a measure of persistent errors to identify which layers are most efficient at reducing the error and which missing connections between neurons would reduce the error the most. Sparse momentum follows a cycle of pruning weights with small magnitude, redistributing weights across layers according to the mean momentum magnitude of existing weights, and growing new weights to fill in missing connections which have the highest momentum magnitude. We compare the performance of sparse momentum to compression algorithms and recent methods that maintain sparse weights throughout training. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet-1k. For CIFAR-10, we determine the percentage of weights needed to reach dense performance levels and find that AlexNet, VGG16, and Wide Residual Networks need between 35-50%, 5-10%, and 20-30% weights to reach dense performance levels. We also estimate the overall speedups of training our sparse convolutional networks to dense performance levels on CIFAR-10 for optimal sparse convolution algorithms and naive dense convolution algorithms compared to dense baselines. For sparse convolution, we estimate speedups between 2.74x and 5.61x and for dense convolution speedups between 1.07x and 1.36x. 
In your analysis, ablations demonstrate that the momentum redistribution and growth components are increasingly important as networks get deeper and larger in size -both are critical for good ImageNet performance. From Dense to Sparse Neural Networks: Work that focuses on creating sparse from dense neural networks has an extensive history. Earlier work focused on pruning via second-order derivatives (; ;) and heuristics which ensure efficient training of networks after pruning (; ;). Recent work is often motivated by the memory and computational benefits of sparse models that enable the deployment of deep neural networks on mobile and low-energy devices. A very influential paradigm has been the iterative train-dense, prune, re-train cycle introduced by. Extensions to this work include: Compressing recurrent neural networks and other models (; ;), continuous pruning and re-training , joint loss/pruning-cost optimization (Carreira-Perpinán and), layer-by-layer pruning , fast-switching growth-pruning cycles , and soft weight-sharing. These approaches often involve re-training phases which increase the training time. However, since the main goal of this line of work is a compressed model for mobile devices, it is desirable but not an important main goal to reduce the run-time of these procedures. This is contrary to our motivation. Despite the difference in motivation, we include many of these dense-to-sparse compression methods in our comparisons. Other compression algorithms include L 0 regularization , and Bayesian methods . For further details, see the survey of. Interpretation and Analysis of Sparse Neural Networks: show that "winning lottery tickets" exist for deep neural networks -sparse initializations which reach similar predictive performance as dense networks and train just as fast. However, finding these winning lottery tickets is computationally expensive and involves multiple prune and re-train cycles starting from a dense network. Followup work concentrated on finding these configurations faster ). In contrast, we reach dense performance levels with a sparse network from random initialization with a single training run while accelerating training. Sparse Neural Networks Throughout Training: Methods that maintain sparse weights throughout training through a prune-redistribute-regrowth cycle are most closely related to our work. introduce DEEP-R, which takes a Bayesian perspective and performs sampling for prune and regrowth decisions -sampling sparse network configurations from a posterior. While theoretically rigorous, this approach is computationally expensive and challenging to apply to large networks and datasets. Sparse evolutionary training (SET) simplifies prune-regrowth cycles by using heuristics: prune the smallest and most negative weights, grow new weights in random locations. Unlike our work, where many convolutional channels are empty and can be excluded from computation, growing weights randomly fills most convolutional channels and makes it challenging to harness computational speedups during training without specialized sparse algorithms. SET also does not include the cross-layer redistribution of weights which we find to be critical for good performance, as shown in our ablation study. The most closely related work to ours is Dynamic Sparse Reparameterization (DSR) by , which includes the full prune-redistribute-regrowth cycle. However, DSR requires some specific layers to be dense. Our method works in a fully sparse setting and is thus more generally applicable. 
More distantly related is Single-shot Network Pruning (SNIP) , which aims to find the best sparse network from a single pruning decision. The goal of SNIP is simplicity, while our goal is maximizing predictive and run-time performance. In our experiments, we compare against all four methods: DEEP-R, SET, DSR, and SNIP. We define sparse learning to be the training of deep neural networks which maintain sparsity throughout training while matching the predictive performance of dense neural networks. To achieve this, intuitively, we want to find the weights that reduce the error most effectively. This is challenging since most deep neural network can hold trillions of different combinations of sparse weights. Additionally, during training, as feature hierarchies are learned, efficient weights might change gradually from shallow to deep layers. How can we find good sparse configurations? In this work, we follow a divide-and-conquer strategy that is guided by computationally efficient heuristics. We divide sparse learning into the following sub-problems which can be tackled independently: pruning weights, redistribution of weights across layers, and regrowing weights, as defined in more detail below. Figure 1: Sparse Momentum is applied at the end of each epoch: take the magnitude of the exponentially smoothed gradient (momentum) of each layer and normalize to 1; for each layer, remove p = 20% of the weights with the smallest magnitude; across layers, redistribute the removed weights by adding weights to each layer proportionate to the momentum of each layer; within a layer, add weights starting from those with the largest momentum magnitude. Decay p. We use the mean magnitude of momentum M i of existing weights W i in each layer i to estimate how efficient the average weight in each layer is at reducing the overall error. Intuitively, we want to take weights from less efficient layers and redistribute them to weight-efficient layers. The sparse momentum algorithm is depicted in Figure 1. In this section, we first describe the intuition behind sparse momentum and then present a more detailed description of the algorithm. The gradient of the error with respect to a weight ∂E ∂W yields the directions which reduce the error at the highest rate. However, if we use stochastic gradient descent, most weights of ∂E ∂W oscillate between small/large and negative/positive gradients with each mini-batch -a good change for one mini-batch might be a bad change for another. We can reduce oscillations if we take the average gradient over time, thereby finding weights which reduce the error consistently. However, we want to value recent gradients, which are closer to the local minimum, more highly than the distant past. This can be achieved by exponentially smoothing ∂E ∂W -the momentum M i: where α is a smoothing factor, M i is the momentum for the weight W i in layer i; M i is initialized at t = 0 with 0. Momentum is efficient at accelerating the optimization of deep neural networks by identifying weights which reduce the error consistently. Similarly, the aggregated momentum of weights in each layer should reflect how good each layer is at reducing the error consistently. Additionally, the momentum of zero-valued weights -equivalent to missing weights in sparse networks -can be used to estimate how quickly the error would change if these weights would be included in a sparse network. The details of the full training procedure of our algorithm are shown in Algorithm 1. 
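As a concrete illustration of this cycle, here is a simplified PyTorch-style sketch of one prune-redistribute-regrow update driven by momentum magnitudes; the momenta are assumed to be the exponentially smoothed gradients (m <- alpha*m + (1-alpha) dE/dW), the edge cases and prune-rate decay described below are omitted, and all names and data structures are illustrative assumptions rather than the released implementation.

```python
import torch

def sparse_momentum_step(weights, momenta, masks, prune_rate=0.2):
    # weights, momenta, masks: dicts mapping layer name -> tensors of identical shape.
    # masks are boolean; True marks an active (nonzero) weight.
    total_removed, contrib = 0, {}
    for name, w in weights.items():
        m, mask = momenta[name], masks[name]
        contrib[name] = m[mask].abs().mean()              # (a) mean momentum magnitude of the layer
        n_prune = int(prune_rate * mask.sum().item())     # (b) prune smallest-magnitude active weights
        if n_prune > 0:
            active_idx = mask.view(-1).nonzero(as_tuple=True)[0]
            order = w.view(-1)[active_idx].abs().argsort()
            mask.view(-1)[active_idx[order[:n_prune]]] = False
        total_removed += n_prune
    norm = sum(contrib.values())
    for name, w in weights.items():
        m, mask = momenta[name], masks[name]
        n_grow = int(total_removed * (contrib[name] / norm))   # redistribute by momentum contribution
        n_grow = min(n_grow, int((~mask).sum().item()))
        if n_grow > 0:                                          # (c) regrow missing weights with largest momentum
            scores = m.abs().view(-1).masked_fill(mask.view(-1), float('-inf'))
            mask.view(-1)[scores.topk(n_grow).indices] = True
        w.data *= mask.to(w.dtype)                              # keep pruned weights at exactly zero
    return masks
```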
See Algorithm 2 in the Appendix for a more detailed, source-code-like description of sparse momentum. Algorithm 1: Sparse momentum algorithm. Data: Layer i to k with: Before training, we initialize the network with a certain sparsity s: we initialize the network as usual and then remove a fraction of s weights for each layer. We train the network normally and mask the weights after each gradient update to enforce sparsity. We apply sparse momentum after each epoch. We can break the sparse momentum into three major parts: (a) redistribution of weights, (b) pruning weights, (c) regrowing weights. In step (a), we we take the mean of the element-wise momentum magnitude m i that belongs to all nonzero weights for each layer i and normalize the value by the total momentum magnitude of all layers k i=0 m i. The ing proportion is the momentum magnitude contribution for each layer. The number of weights to be regrow in each layer is the total number of removed weights multiplied by each layers momentum contribution: Regrow i = Total Removed · m i. In step (b), we prune a proportion of p (prune rate) of the weights with the lowest magnitude for each layer. In step (c), we regrow weights by enabling the gradient flow of zero-valued (missing) weights which have the largest momentum magnitude. Additionally, there are two edge-cases which we did not include in Algorithm 1 for clarity. If we allocate more weights to be regrown than is possible for a specific layer, for example regrowing 100 weights for a layer of maximum 10 weights, we redistribute the excess number of weights equally among all other layers. For some layers, our algorithm will converge in that the average weight in layer i has much larger momentum magnitude than weights in other layers, but at the same time, this layer is dense and cannot grow further. We do not want to prune weights from such important layers. Thus, for these layers, we reduce the prune rate p i proportional to the sparsity: p i = min(p, sparsity i). After each epoch, we decay the prune rate in Algorithm 1 in the same way learning rates are decayed. We use a cosine decay schedule that anneals the prune rate to zero on the last epoch. See Appendix A.1 for an analysis on how decay schedule and starting prune rate affects training. For comparison, we follow three different experimental settings, one from and two settings follow: For MNIST , we use a batch size of 100, decay the learning rate by a factor of 0.1 every 25000 mini-batches. For CIFAR-10 , we use standard data augmentations (horizontal flip, and random crop with reflective padding), a batch size of 128, and decay the learning rate every 30000 mini-batches. We train for 100 and 250 epochs on MNIST and CIFAR-10, use a learning rate of 0.1, stochastic gradient descent with Nesterov momentum of α = 0.9, and we use a weight decay of 0.0005. We use a fixed 10% of the training data as the validation set and train on the remaining 90%. We evaluate the test set performance of our models on the last epoch. For all experiments on MNIST and CIFAR-10, we report the standard errors. Our sample size is generally between 10 and 12 experiments per method/architecture/sparsity level with different random seeds for each experiment. We use the modified network architectures of AlexNet, VGG16, and LeNet-5 as introduced by. We consider two different variations of the experimental setup of for ImageNet and CIFAR-10. 
The first follows their procedure closely, in that we run the networks in a partially dense setting where the first convolutional layer and downsampling convolutional layers are dense. Additionally, for CIFAR-10 the last fully connected layer is dense. In the second setting, we compare in a fully sparse setting -no layer is dense at the beginning of training. For the fully sparse setting we increase overall number of weights according to the extra parameters in the dense layers and distribute them equally among the network. The parameters in the dense layers make up 5.63% weights of the ResNet-50 network. We refer to these two settings as the partially dense and fully sparse settings. On ImageNet , we use ResNet-50 with a stride of 2 for the 3x3 convolution in the bottleneck layers. We use a batch size of 256, input size of 224, momentum of α = 0.9, and weight decay of 10 −4. We train for 100 epochs and report validation set performance after the last epoch. We report for the fully sparse and the partially dense setting. For all experiments, we keep biases and batch normalization weights dense. We tuned the prune rate p and momentum rate α searching the parameter space {0.2, 0.3, 0.4, 0.5, 0.6, 0.7} and {0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99} on MNIST and CIFAR-10 and found that p = 0.2 and α = 0.9 work well for most architectures. We use this prune and momentum rate throughout all experiments. ImageNet experiments were run on 4x RTX 2080 Ti and all other experiments on individual GPUs. Our software builds on PyTorch and is a wrapper for PyTorch neural networks with a modular architecture for growth, redistribution, and pruning algorithms. Currently, no GPUaccelerated libraries that utilize sparse tensors exist, and as such we use masked weights to simulate sparse neural networks. Using our software, any PyTorch neural network can be adapted to be a sparse momentum network with less than 10 lines of code. We will open-source our software along with trained models and individual experimental . Figure 2 and Table 1 show a comparison with model compression methods. On MNIST, sparse momentum is the only method that provides consistent strong performance across both LeNet 300-100 and LeNet-5 Caffe models. Soft-weight sharing and Layer-wise Brain Damage are competitive with sparse momentum for one model, but underperforms for the other model. For 1-2% of weights, variational dropout is more effective -but this method also uses dropout for further regularization while we only use weight decay. We can see that sparse momentum achieves equal performance to the LeNet-5 Caffe dense baseline with 8% weights. On CIFAR-10 in Table 1, we can see that sparse momentum outperforms Single-shot Network Pruning (SNIP) for all models and can achieve the same performance level as a dense model for VGG16-D with just 5% of weights. Weights (%) Table 2 show comparisons of sparse learning methods on MNIST and CIFAR that follows the experimental procedure of where some selected layers are dense. For LeNet 300-100 on MNIST, we can see that sparse momentum outperforms all other methods. For CIFAR-10, sparse momentum is better than dynamic sparse in 4 out of 5 cases. However, in general, the confidence intervals for most methods overlap -this particular setup for CIFAR-10 with specifically selected dense layers seems to be too easy to determine difference in performance between methods and we do not recommend this setup for future work. 
Table 2 shows that sparse momentum outperforms all other methods on ImageNet (ILSVRC2012) for the Top-1 accuracy measure. Dynamic sparse is better for the Top-5 accuracy with 20% weights. In the fully sparse setting, sparse momentum remains competitive and seems to find a weight distribution which works equally well for the 10% weights case. For 20% weights, the performance decreases slightly. Figure 3: Test set accuracy with 95% confidence intervals on MNIST and CIFAR at varying sparsity levels for LeNet 300-100 and WRN 28-2. We analyzed how many weights are needed to achieve dense performance for our networks on CIFAR-10 and how much faster would we able to train such a sparse network compared to a dense one. We do this analysis by increasing the number of weights by 5% until the sparse network trained with sparse momentum reaches a performance level that overlaps with a 95% confidence interval of the dense performance. We then measure the speedup of the model. For each network-density combination we perform ten training runs with different random seeds to calculate the mean test error and its standard error. To estimated the speedups that could be obtained using sparse momentum for these dense networks we follow two approaches: Theoretical speedups for sparse convolution algorithms which are proportional to reductions in FLOPS and practical speedups using dense convolutional algorithms which are proportional to empty convolutional channels. For our sparse convolution estimates, we calculate the FLOPS saved for each convolution operation throughout training as well as the runtime for each convolution. To receive the maximum speedups for sparse convolution, we then scale the runtime for each convolution operation by the FLOPS saved. While a fast sparse convolution algorithm for coarse block structures exist for GPUs , optimal sparse convolution algorithms for fine-grained patterns do not and need to be developed to enable these speedups. The second method measures practical speedups that can be obtained with naive, dense convolution algorithms which are available today. Dense convolution is unsuitable for the training of sparse networks but we include this measurement to highlight the algorithmic gap that exists to efficiently train sparse networks. For dense convolution algorithms, we estimate speedups as follows: If a convolutional channel consists entirely of zero-valued weights we can remove these channels from the computation without changing the outputs and obtain speedups. To receive the speedups for dense convolution we scale each convolution operation by the proportion of empty channels. Using these measures, we estimated the speedups for our models on CIFAR-10. The ing speedups and dense performance levels can be seen in Table 3. We see that VGG16 networks can achieve dense performance with relatively few weights while AlexNet requires the most weights. Wide Residual Networks need an intermediate level of weights. Despite the large number of weights for AlexNet, sparse momentum still yields large speedups around 3.0x for sparse convolution. Sparse convolution speedups are particularly pronounced for Wide Residual Networks (WRN) with speedups as high as 5.61x. Dense convolution speedups are much lower and are mostly dependent on width, with wider networks receiving larger speedups. These highlight the importance to develop optimized algorithms for sparse convolution. 
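To make the two estimates concrete, the following per-layer sketch scales cost by the fraction of remaining FLOPs (nonzero weights) for the sparse-convolution estimate, and by the fraction of non-empty output channels for the dense-convolution estimate. The per-layer granularity and names are assumptions; the estimates in the paper additionally weight each convolution by its measured runtime.

```python
import torch

def layer_speedup_estimates(weight):
    # weight: [out_channels, in_channels, kH, kW] convolution weights; zero entries are pruned.
    nonzero = weight != 0
    flop_fraction = nonzero.float().mean().item()       # fraction of dense FLOPs still required
    active_channels = nonzero.flatten(1).any(dim=1)     # output channels with at least one weight left
    channel_fraction = active_channels.float().mean().item()
    sparse_speedup = 1.0 / max(flop_fraction, 1e-8)     # optimal sparse convolution algorithm
    dense_speedup = 1.0 / max(channel_fraction, 1e-8)   # naive dense convolution, empty channels skipped
    return sparse_speedup, dense_speedup
```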
Beyond speedups, we also measured the overhead of our sparse momentum procedure to be equivalent of a slowdown to 0.973x±0.029x compared to a dense baseline. growth of weights. To understand the performance contribution of these components, we perform ablations on CIFAR-10 for VGG16-D with 5% weights, MNIST for LeNet 300-100 and LeNet-5 Caffe with 5% weights, and ImageNet for ResNet-50 with 10% weights in the fully sparse setting. The can be seen in Table 4. Redistribution: Redistributing weights according to the momentum magnitude becomes increasingly important the larger a network is as can be seen from the steady increases in error from the small LeNet 300-100 to the large ResNet-50 when no momentum redistribution is used. Increased test error is particularly pronounced for ImageNet where the Top-1 error increases by 3.42% to 9.71% if no redistribution is used. Momentum growth: Momentum growth improves performance over random growth by a large margin for ResNet-50 on ImageNet, but for smaller networks the combination of redistribution and random growth seems to be sufficient to find good weights. Random growth without redistribution, however, cannot find good weights. These suggest that with increasing network size a random search strategy becomes inefficient and smarter growth algorithms are required for good performance. We presented our sparse learning algorithm, sparse momentum, which uses the mean magnitude of momentum to grow and redistribute weights. We showed that sparse momentum outperforms other sparse algorithms on MNIST, CIFAR-10, and ImageNet. Additionally, sparse momentum can rival dense neural network performance while accelerating training. Our analysis of speedups highlights the need for research into specialized sparse convolution and sparse matrix multiplication algorithms to enable the benefits of sparse networks. A.1 SENSITIVITY ANALYSIS Sparse momentum depends on two hyperparameters: Prune rate and momentum. In this section, we study the sensitivity of the accuracy of our models as we vary the prune rate and momentum. Since momentum parameter has an additional effect on the optimization procedure, we run control experiments for fully dense networks thus disentangling the difference in accuracy accounted by our sparse momentum procedure. We run experiments for VGG-D and AlexNet-s with 5% and 10% weights on CIFAR-10. Results can be seen in Figure 4. We see that sparse momentum is highly robust to the choice of prune rate with barely deviating when the prune rate is in the interval between 0.2 to 0.4. However, we can see a gradual linear trend that indicates that smaller prune rates work slightly better than larger ones. Cosine and linear prune rate annealing schedules do equally well. For momentum, confidence intervals for values between 0.7 and 0.9 overlap indicating that our procedure is robust to the choice of the momentum parameter. Sparse momentum is more sensitive to low momentum values (≤0.6) while it is less sensitive for large momentum values (0.95) compared to a dense control. Additionally, we test the null hypothesis that sparse momentum is equally sensitive to deviations from a momentum parameter value of 0.9 as a dense control. The normality assumption was violated and data transformations did not help. Thus we use the non-parametric Wilcoxon Signed-rank Test. We find no evidence that sparse momentum is more sensitive to the momentum parameter than a dense control, W = 22.0, p = 0.58. 
Overall, we conclude that sparse momentum is highly robust to deviations of the pruning schedule and the momentum and prune rate parameters. In this section, we look at the features of dense and sparse networks and how specialized these features are for certain classes. We test difference between sparse and dense network features statistically. For feature visualization, it is common to backpropagate activity to the inputs to be able to visualize what these activities represent (; ;). However, in our case, we are more interested in the overall distribution of features for each layer within our network, and as such we want to look at the magnitude of the activity in a channel since -unlike feature visualization -we are not just interested in feature detectors but also discriminators. For example, a face detector would induce positive activity for a'person' class but might produce negative activity for a'mushroom' class. Both kinds of activity are useful. With this reasoning, we develop the following convolutional channel-activation analysis: pass the entire training set through the network and aggregate the magnitude of the activation in each convolutional channel separately for each class; normalize across classes to receive for each channel the proportion of activation which is due to each class; look at the maximum proportion of each channel as a measure of class specialization: a maximum proportion of 1/N c where N c is the number of classes indicates that the channel is equally active for all classes in the training set. The higher the proportion deviates from this value, the more is a channel specialized for a particular class. We obtain for AlexNet-s, VGG16-D, and WRN 28-2 on CIFAR-10 and use as many weights as needed to reach dense performance levels. We then test the null hypothesis, that there are no differences in class specialization between features from sparse networks and dense networks. Equal variance assumptions was violated for VGG-D and normality was violated for WRN-28-2, while all assumptions hold for AlexNet-s. For consistency reasons we perform non-parametric Kruskal-Wallis one-way analysis of variance tests for all networks. For AlexNet-s, we find some evidence that features of sparse networks have lower class specialization compared to dense networks χ 2 = 4.43, p = 0.035, for VGG-D and WRN-28-2 we find strong evidence that features of sparse networks have lower class specialization than dense networks χ 2 = 28.1, p < 0.001, χ 2 = 36.2, p < 0.001. Thus we reject the null hypothesis. These increase our confidence that sparse networks learn features which have lower class specialization than dense networks. Plots of the distributions of sparse vs. dense features for AlexNet-s, VGG16-D, and WRN 28-2 on CIFAR-10 in Figure 5. These plots were selected to highlight the difference in distribution in the first layers and last layers of each network. We see the convolutional channels in sparse networks have lower class-specialization indicating they learn features which are useful for a broader range of classes compared to dense networks. This trend intensifies with depth. Overall, we conclude that sparse networks might be able to rival dense networks by learning more general features that have lower class specialization. C FURTHER We also tried a better version of the ResNet-50 in the fully sparse setting for which we use a cosine learning rate schedule, label smoothing of 0.9, and we warmup the learning rate. The can be seen in Table 5. 
A class-specialization of 0.5 indicates that 50% of the overall activity comes from a single class.
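For reference, a small numpy sketch of the channel-activation analysis described above is given below; the input layout (per-example activation magnitudes already aggregated over the spatial dimensions) is an assumption made for brevity.

import numpy as np

def class_specialization(channel_activity, labels, num_classes):
    # channel_activity: (num_examples, num_channels) activation magnitudes for one
    # convolutional layer (e.g. mean |activation| over the spatial dimensions);
    # labels: (num_examples,) integer class labels.
    per_class = np.zeros((num_classes, channel_activity.shape[1]))
    for c in range(num_classes):
        per_class[c] = channel_activity[labels == c].sum(axis=0)
    proportions = per_class / per_class.sum(axis=0, keepdims=True)
    # A channel equally active for all classes scores 1/num_classes; the further the
    # maximum proportion deviates from that value, the more class-specialized the channel.
    return proportions.max(axis=0)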
Redistributing and growing weights according to the momentum magnitude enables the training of sparse networks from random initializations that can reach dense performance levels with 5% to 50% weights while accelerating training by up to 5.6x.
581
scitldr
To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions. We introduce interesting aspects for understanding the local minima and overall structure of the loss surface. The parameter domain of the loss surface can be decomposed into regions in which activation values (zero or one for rectified linear units) are consistent. We found that, in each region, the loss surface have properties similar to that of linear neural networks where every local minimum is a global minimum. This means that every differentiable local minimum is the global minimum of the corresponding region. We prove that for a neural network with one hidden layer using rectified linear units under realistic assumptions. There are poor regions that lead to poor local minima, and we explain why such regions exist even in the overparameterized DNNs. Deep Neural Networks (DNNs) have achieved state-of-the-art performances in computer vision, natural language processing, and other areas of machine learning. One of the most promising features of DNNs is its significant expressive power. The expressiveness of DNNs even surpass shallow networks as a network with few layers need exponential number of nodes to have similar expressive power . The DNNs are getting even deeper after the vanishing gradient problem has been solved by using rectified linear units (ReLUs) BID12. Nowadays, RELU has become the most popular activation function for hidden layers. Leveraging this kind of activation functions, depth of DNNs has increased to more than 100 layers BID7.Another problem of training DNNs is that parameters can encounter pathological curvatures of the loss surfaces prolonging training time. Some of the pathological curvatures such as narrow valleys would cause unnecessary vibrations. To avoid these obstacles, various optimization methods were introduced (; BID9 . These methods utilize the first and second order moments of the gradients to preserve the historical trends. The gradient descent methods also have a problem of getting stuck in a poor local minimum. The poor local minima do exist in DNNs, but recent works showed that errors at the local minima are as low as that of global minima with high probability BID4 BID2 BID8 BID14 ).In case of linear DNNs in which activation function does not exist, every local minimum is a global minimum and other critical points are saddle points BID8. Although these beneficial properties do not hold in general DNNs, we conjecture that it holds in each region of parameters where the activation values for each data point are the same as shown in FIG0. We prove this for a simple network. The activation values of a node can be different between data points as shown in FIG0, so it is hard to apply proof techniques used for linear DNNs. The whole parameter space is a disjoint union of these regions, so we call it loss surface decomposition. Using the concepts of loss surface decomposition, we explain why poor local minima do exist even in large networks. There are poor local minima where gradient flow disappears when using the ReLU . We introduce another kind of poor local minima where the loss is same as that of linear regression. To be more general, we prove that for each local minimum in a network, there exists a local minimum of the same loss in the larger network that is constructed by adding a node to that network. DISPLAYFORM0 T. In each region, activation values are the same. 
There are six nonempty regions. The parameters on the boundaries hit the non-differentiable point of the rectified linear unit. Loss surface of deep linear networks have the following interesting properties: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point BID8. This means that there is no poor local minima problem when using gradient descent methods, but such properties do not hold for nonlinear networks. We conjecture that these properties hold if activation values are fixed, and we prove it for a simple network. The loss surface of DNNs can be decomposed into regions in terms of activation values as illustrated in FIG0. Let D be a dataset {(x 1, y 1), (x 2, y 2),..., (x N, y N)} with x i ∈ R n and y i ∈ R. We define a network with one hidden layer as follows: DISPLAYFORM0 The model parameters are W ∈ R h×n, v ∈ R h, b ∈ R h, and c ∈ R where h is the number of hidden nodes. Let θ = [vec(W), v, b, c] T collectively denote vectorized form of all the model parameters. The activation function σ(x) = max(x, 0) is a rectified linear unit, and we abuse notation by generalizing it as an element-wise function for multidimensional inputs. Alternatively, the network can be expressed in terms of the activation values: DISPLAYFORM1 where DISPLAYFORM2 T is a vector of the binary activation values a ij ∈ {0, 1} of i-th data point x i, and A = (a 1, a 2, ..., a N) is a collection of all activation values for a given dataset D. We fix the activation values of the function g A (x i, θ) regardless of real activation values to find out the interesting properties. The real model f (x i, θ) agrees with g A (x i, θ) only if A is same as the real activation values in the model. Before we introduce a definition of the activation region, we denote w A simple example of a non-differentiable local minimum for a dataset x 1 = −2, y 1 = −3, x 2 = +2, y 2 = −1. In this example, a network is defined by f (x) = w 2 σ(w 1 x) and w 1 is fixed to one. The non-differentiable local minima exist in a line w 1 = 0 which is a boundary of the two regions. Note that if DISPLAYFORM3 We consider a general loss function called squared error loss: DISPLAYFORM4 The following lemma state that the local curvatures of DISPLAYFORM5 Lemma 2.2 For any differentiable point θ ∈ R A, the θ is a local minimum (saddle point) in L f (θ) if and only if it is a local minimum (saddle point) in L g A (θ). The function g A (x i, θ) of fixed activation values A has properties similar to that of linear neural networks. If all activation values are one, then the function g A (x i, θ) is identical to a linear neural network. In other cases, some of the parameters are inactive. The proof becomes tricky since inactive parameters are different for each data point. In case of the simple network g A (x i, θ), we can convert it into a convex function in terms of other variables. DISPLAYFORM0 where p j = v j w j and q j = v j b j. The v j is a j-th scalar value of the vector v and a ij is an activation value on a j-th hidden node of a i-th data point. DISPLAYFORM1 is a convex function in terms of p j, q j, and c. Note that for any p j and q j, there exist θ that forms them, so the following lemma holds. 
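To make the notation concrete, a small numpy sketch of the activation values a_ij and of the fixed-activation model g_A for the one-hidden-layer network defined above is given below; array shapes and variable names are illustrative choices.

import numpy as np

def activation_values(X, W, b):
    # a_ij = 1 iff hidden unit j is active on data point x_i; X: (N, n), W: (h, n), b: (h,).
    # Two parameter settings lie in the same activation region R_A iff they produce
    # the same matrix A on the whole dataset.
    return (X @ W.T + b > 0).astype(float)

def g_A(X, W, b, v, c, A):
    # Fixed-activation model: the ReLU is replaced by the constant pattern A,
    # so g_A(x_i) = sum_j a_ij * v_j * (w_j . x_i + b_j) + c.
    return (A * (X @ W.T + b)) @ v + c

def f(X, W, b, v, c):
    # The real network f(x_i) = v . max(W x_i + b, 0) + c; it agrees with g_A exactly
    # when A equals the true activation values of (W, b) on X.
    return np.maximum(X @ W.T + b, 0.0) @ v + c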
DISPLAYFORM2 Now we introduce the following theorem describing the important properties of the function L g A (θ).Theorem 2.5 The function L g A (θ) has following properties: 1) it is non-convex and non-concave except for the case that activation values are all zeros, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point. A function f (x, y) = (xy − 1) 2 is not convex, since it has a saddle point at x = y = 0. Similarly, the L g A (θ) is a quadratic function of v j b j, so it is non-convex and nonconcave. If activation values are all zeros, then DISPLAYFORM0, so the global minima are critical points. In other critical points, at least one of the gradients along p j or q j is not zero. If a critical point satisfies ∇ pj L = 0 (or ∇ qj L = 0), then it is a saddle point with respect to w T j and v j (or b j and v j). The detailed proof is in the appendix. To distinguish between the global minimum of L g A (θ) and L f (θ), we introduce subglobal minimum: DISPLAYFORM1 Some of the subglobal minima may not exist in the real loss surface L f (θ). For this kind of regions, there only exist saddle points and the parameter would move to another region by gradient descent methods without getting stuck into local minima. Since the parameter space is a disjoint union of the activation regions, the real loss surface L f (θ) is a piecewise combination of L g A (θ). Using Lemma 2.2 and Theorem 2.5, we conclude as follows: DISPLAYFORM2 The function L f (θ) has following properties: 1) it is non-convex and non-concave, 2) every differentiable local minimum is a subglobal minimum, 3) every critical point that is not a subglobal minimum is a saddle point. We explicitly distinguish differentiable and non-differentiable local minima. The non-differentiable local minima can exist as shown in FIG2. In this section, we answer why poor local minima do exist even in large networks. There are parameter points where all the activation values are zeros eliminating gradient flow . This is a well-known region that forms poor and flat local minima. We introduce another kind of poor region called linear region and show that it always forms poor local minima when a dataset is nonlinear. In a more general setting, we prove that a network has every local minimum of the narrower networks of the same number of layers. There always exists a linear region where all activation values are one, and its subglobal minima stay in that region. This subglobal minimum in an error which is same as that of linear regression, so if given dataset is nonlinear the error would be poor. We can easily spot a linear region by manipulating biases to satisfy w T j x i + b j > 0. One way of achieving this is by selecting b j as: DISPLAYFORM0 To say that the model can get stuck in the linear region, it is necessary to find the subglobal minima in that region. If f (x i, θ) is linear, then it is of form u T, y 2 = 2, x 3 = T, y 3 = 3. The network has one hidden layer and no biases. We increased the number of hidden nodes from one to four. DISPLAYFORM1 The ratio of the poor regions decreases as the size of the network grows. We show it numerically by identifying all subglobal minima of a simple network. For the MNIST , we estimated subglobal minima of randomly selected activation values and compared with the rich regions. Training a neural network is known to be NP-Complete BID1, due to the nonconvexity and infinite parameter space of DNNs. 
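As a concrete illustration of the linear region discussed above, the following snippet picks biases large enough that every hidden unit is active on every training point, so that the model reduces to a linear one on that region; the particular margin value is an arbitrary choice.

import numpy as np

def linear_region_biases(X, W, margin=1.0):
    # Choose b_j so that w_j . x_i + b_j > 0 for all i and j, i.e. all activation
    # values equal one and f coincides with a linear model on the training data.
    b = -np.min(X @ W.T, axis=0) + margin
    assert np.all(X @ W.T + b > 0)
    return b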
The number of possible combination of activation values has the complexity of O(2 N h) for f (x i, θ), so we restricted the experiments to a small size of hidden layers and datasets to find all subglobal minima in a reasonable time. Consider the Equation 4 again. The subglobal minimum is a solution of the convex optimization for L g A (θ). To compute optimal parameters, we need to solve linear equations DISPLAYFORM0, and ∇ c L g A = 0. For simplicity, we assume that biases are removed, then the gradient ∇ pj L g A is as follows: DISPLAYFORM1 DISPLAYFORM2 As a , the linear equation to solve is as follows: The leftmost matrix in the Equation 7 is a square matrix. If it is not full rank, we compute a particular solution. FIG4 shows four histograms of the poor subglobal minima for the different number of hidden nodes. As shown in the histograms, gradient descent based methods are more likely to avoid poor subglobal minima in larger networks. It also shows that the subglobal minima arise from the smaller networks. Intuitively speaking, adding a node provides a downhill path to the previous poor subglobal minima without hurting the rich subglobal minima in most cases. DISPLAYFORM3 For more realistic networks and datasets, we conducted experiments on MNIST. We used networks of two hidden layers consisting of 2k and k nodes respectively. The networks use biases, softmax outputs, cross entropy loss, mini-batch size of 100, and Adam Optimizer BID9. Assuming that the Corollary 2.7 holds for multilayer networks, the subglobal minima can be estimated by gradient descent methods. It is impossible to compute all of them, so we randomly selected various combinations of activation values with P (a = 1) = P (a = 0) = 0.5. Then we removed rectified linear units and multiplied the fixed activation values as follows: DISPLAYFORM0 where h A is the output of the second hidden layer. The rich subglobal minima were estimated by optimizing the real networks since it would end up in one of the subglobal minima that exist in the real loss surface. The experiments were repeated for 100 times, and then we computed mean and standard deviation. The are shown in TAB0 and it implies that most of the regions in the large networks are rich, whereas the small networks have few rich regions. In other words, it is more likely to end up in a rich subglobal minimum in larger networks.5 RELATED WORKS BID0 proved that linear networks with one hidden layer have the properties of the Theorem 2.5 under minimal assumptions. Recently, BID8 proved that it also holds for deep linear networks. Assuming that the activation values are drawn from independent Bernoulli distribution, a DNN can be mapped to a spin-glass Ising model in which the number of local minima far from the global minima diminishes exponentially with the size of the network BID2. Under same assumptions in BID2, the effect of nonlinear activation values disappears by taking expectation, so nonlinear networks satisfy the same properties of linear networks BID8.Nonlinear DNNs usually do not encounter any significant obstacles on a single smooth slope path BID6 BID4 explained that the training error at local minima seems to be similar to the error at the global minimum which can be understood via random matrix theory. The volume of differentiable sub-optimal local minima is exponentially vanishing in comparison with the same volume of global minima under infinite data points . 
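A compact numpy version of this computation is sketched below: with p_j = v_j w_j and q_j = v_j b_j, the fixed-activation model is linear in (p, q, c), so the subglobal minimum of a chosen activation pattern follows from a single least-squares solve. The design-matrix construction, the inclusion of biases, and the use of a least-squares particular solution for rank-deficient cases are implementation assumptions.

import numpy as np

def subglobal_minimum(X, y, A):
    # X: (N, n) inputs, y: (N,) targets, A: (N, h) binary activation values a_ij.
    # g_A(x_i) = sum_j a_ij * (p_j . x_i + q_j) + c is linear in (p, q, c),
    # so its squared-error minimizer is a least-squares solution.
    N, n = X.shape
    h = A.shape[1]
    X1 = np.hstack([X, np.ones((N, 1))])                 # appends the q_j column
    blocks = [A[:, j:j + 1] * X1 for j in range(h)]
    Phi = np.hstack(blocks + [np.ones((N, 1))])          # last column is the bias c
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # particular solution if rank-deficient
    loss = np.sum((Phi @ theta - y) ** 2)
    return theta, loss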
Although a number of specific example of local minima can be found in DNNs , it seems plausible to state that most of the local minima are near optimal. As the network width increases, we are more likely to meet a random starting point from which there is a continuous, strictly monotonically decreasing path to a global minimum BID14. Similarly, the starting point of the DNNs approximate a rich family of hypotheses precisely BID3. Another explanation is that the level sets of the loss become connected as the network is increasingly overparameterized BID5. These works are analogous to our showing that the parameters would end up in one of the subglobal minima which are similar to the global minima. We conjecture that the loss surface is a disjoint union of activation regions where every local minimum is a subglobal minimum. Using the concept of loss surface decomposition, we studied the existence of poor local minima and experimentally investigated losses of subglobal minima. However, the structure of non-differentiable local minima is not yet well understood yet. These non-differentiable points exist within the boundaries of the activation regions which can be obstacles when using gradient descent methods. Further work is needed to extend knowledge about the local minima, activation regions, their boundaries. Let θ ∈ R A be a differentiable point, so it is not in the boundaries of the activation regions. This implies that w T j x i + b j = 0 for all parameters. Without loss of generality, we assume w T j x i + b j < 0. Then there exist > 0 such that w T j x i + b j + < 0. This implies that small changes in the parameters for any direction does not change the activation region. Since L f (θ) and L g A (θ) are equivalent in the region R A, the local curvatures of these two function around the θ are also the same. Thus, the θ is a local minimum (saddle point) in L f (θ) if and only if it is a local minimum (saddle point) in L g A (θ). DISPLAYFORM0 is a linear transformation of p j, q j, and c, the DISPLAYFORM1 2 is convex in terms of p j, q j, and c. Summation of convex functions is convex, so the lemma holds. A.3 PROOF OF THEOREM 2.5 Assume that activation values are not all zeros, and then consider the following Hessian matrix evaluated from v j and b j for some non-zero activation values a ij > 0: DISPLAYFORM2 Let v j = 0 and b j = 0, then two eigenvalues of the Hessian matrix are as follows: DISPLAYFORM3 There exist c > 0 such that g A (x i, θ) > y i for all i. If we choose such c, then DISPLAYFORM4 ∂vj ∂bj > 0 which implies that two eigenvalues are positive and negative. Since the Hessian matrix is not positive semidefinite nor negative semidefinite, the function L g A (θ) is non-convex and non-concave. We organize some of the gradients as follows: DISPLAYFORM5 We select a critical point θ * where ∇ wj L g A (θ *) = 0, ∇ vj L g A (θ *) = 0, ∇ bj L g A (θ *) = 0, and ∇ c L g A (θ *) = 0 for all j. Case 1) Assume that ∇ pj L g A (θ *) = 0 and ∇ qj L g A (θ *) = 0 for all j. These points are global minima, since ∇ c L g A (θ *) = 0 and L g A (θ) is convex in terms of p j, q j, and c. Case 2) Assume that there exist j such that ∇ pj L g A (θ DISPLAYFORM6 There exist an element w * in w j such that ∇ vj ∇ w * L g A (θ *) = 0. Consider a Hessian matrix evaluated from w * and v j. Analogous to the proof of, this matrix is not positive semidefinite nor negative semidefinite. Thus θ * is a saddle point. Case 3) Assume that there exist j such that ∇ qj L g A (θ *) = 0. 
Since ∇ bj L g A (θ *) = v j ∇ qj L g A (θ *) = 0, the v j is zero. Analogous to the Case 2, a Hessian matrix evaluated from b j and v j is not positive semidefinite nor negative semidefinite. Thus θ * is a saddle point. As a , every critical point is a global minimum or a saddle point. Since L g A (θ) is a differentiable function, every local minimum is a critical point. Thus every local minimum is a global minimum.
The loss surface of neural networks is a disjoint union of regions where every local minimum is a global minimum of the corresponding region.
582
scitldr
Contextualized word representations, such as ELMo and BERT, were shown to perform well on a various of semantic and structural (syntactic) task. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information. To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors. We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics. Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in few-shot parsing setting. Human language 1 is a complex system, involving an intricate interplay between meaning (semantics) and structural rules between words and phrases (syntax). Self-supervised neural sequence models for text trained with a language modeling objective, such as ELMo , BERT , and RoBERTA (b), were shown to produce representations that excel in recovering both structure-related information (; van Schijndel & Linzen; ;) as well as in semantic information . In this work, we study the problem of disentangling structure from semantics in neural language representations: we aim to extract representations that capture the structural function of words and sentences, but which are not sensitive to their content. For example, consider the sentences: We aim to learn a function from contextualized word representations to a space that exposes these similarities. Crucially, we aim to do this in an unsupervised manner: we do not want to inform the process of the kind of structural information we want to obtain. We do this by learning a transformation that attempts to remove the lexical-semantic information in a sentence, while trying to preserve structural properties. Disentangling syntax from lexical semantics in word representations is a desired property for several reasons. From a purely scientific perspective, once disentanglement is achieved, one can better control for confounding factors and analyze the knowledge the model acquires, e.g. attributing the predictions of the model to one factor of variation while controlling for the other. In addition to explaining model predictions, such disentanglement can be useful for the comparison of the representations the model acquires to linguistic knowledge. From a more practical perspective, disentanglement can be a first step toward controlled generation/paraphrasing that considers only aspects of the structure, akin to the style-transfer works in computer vision, i.e., rewriting a sentence while preserving its structural properties while ignoring its meaning, or vice-versa. It can also inform search-based application in which one can search for "similar" texts while controlling various aspects of the desired similarity. To achieve this goal, we begin with the intuition that the structural component in the representation (capturing the form) should remain the same regardless of the lexical semantics of the sentence (the meaning). Rather than beginning with a parsed corpus, we automatically generate a large number of structurally-similar sentences, without presupposing their formal structure (§3.1). 
This allows us to pose the disentanglement problem as a metric-learning problem: we aim to learn a transformation of the contextualized representation, which is invariant to changes in the lexical semantics within each group of structurally-similar sentences (§3.3). We demonstrate the structural properties captured by the ing representations in several experiments (§4), among them automatic identification of structurally-similar words and few-shot parsing. The problem of disentangling different sources of variation has long been studied in computer vision, and was recently applied to neural models (; ;). Such disentanglement can assist in learning representations that are invariant to specific factors, such as pose-invariant face-recognition or style-invariant digit recognition . From a generative point of view, disentanglement can be used to modify one aspect of the input (e.g., "style"), while keeping the other factors (e.g., "content") intact, as done in neural image style-transfer . In the field of NLP, disentanglement is much less researched. In controlled natural language generation and style transfer, several works attempted to disentangle factors of variation such as sentiment or age of the writer, with the intention to control for those factors and generate new sentences with specific properties, or transfer existing sentences to similar sentences that differ only in the those properties. Several works have trained conditional generative models by explicitly conditioning a decoder network with a vector of attributes. On training, the attributes derive from the training sentence, while in testing the conditioning vector can be set to generate a text with the desired attributes. Other works; aim to achieve style transfer (as opposed to generation) by explicitly training representations that are invariant to the controlled attributes (e.g. by co-training of a generator and attribute discriminator). A decoder generates the transferred text from the disentangled representation and from an explicit attribute representation. use a conditioned back-translation approach to achieve a similar goal. While these works try to disentangle sentence-level attributes, in this study we focus on disentangling between two components in the representations of individual words. Several works examine the way semantic and syntactic information is distributed across the layers of neural models of text . They use diagnostic classifiers to predict syntactic properties and demonstrate that different parts of the model encode information in different levels of abstraction (e.g. POS information, dependency label and semantic role). Liu et al. (2019a) used diagnostic classifiers trained to predict various syntactic and semantic properties from state-of-the-art LMs representations, and demonstrated that many syntactic and semantic distinctions are encoded in the probed representations. probed the attention patterns of BERT, and showed that individual attention-heads focus in syntactically-meaningful relations in the input. Beyond the descriptive level, recent works have focused on supervised extraction of syntax-related representations from neural representations. used a linear transformation, inspired from the notion of "similarity order" in classic word representation learning, to tailor uncontextualized word representations to syntactic vs. semantic tasks. 
demonstrated that it is possible to train a linear transformation, under which squared euclidean distance between transformed contextualized word vectors correspond to the distances between the respective words in the syntax tree that represents the hierarchical structure of the sentence. Concurrent to this work, have used a variational estimation method of the information-bottleneck principle to extract word embeddings that are useful to the end task of parsing. While impressive, those works presuppose a specific formal syntactic structure (e.g. annotated parse trees following a specific linguistic annotation schema) and use this syntactic signal to learn structural information in a supervised manner. In other words, these works assume a given structure, and use supervision to make the structural information more salient, mapping (or forcing) the neural representations to known linguistic properties. In contrast, we aim to expose the structural information encoded in the network in an unsupervised manner, without pre-supposing an existing syntactic annotation scheme. Our goal is to learn a function f: R n → R m, which operates on contextualized word representations x and extracts vectors f (x) which make the structural information encoded in x more salient, while discarding as much lexical information as possible. In the sentences "Maple syrup is delicious" and "Neural networks are interesting", we want to learn a f such that f (v Moreover, we would like the relation between the words "maple" and "delicious" in the second sentence, to be similar to the relation between "neural" and "interesting" in the first sentence: pair(v . Operativly, we represent pairs of words (x, y) by the difference between their transformation f (x) − f (y), and aim to learn f that preserves:. The choice to represent pairs this way was inspired by several works that demonstrated that nontrivial semantic and syntactic relations between uncontextualized word representations can be approximated by simple vector arithmetic (a; b;). To learn f, we start with groups of sentences that the sentences within each group are known to share their structure but differ in their lexical semantics. We call the sentences in each group structurally equivalent. Figure 1 shows an example of two structurally equivalent sets. Acquiring such sets is challenging, especially if we do not assume a known syntactic formalism and cannot mine for sentences based on their observed tree structures. To this end, we automatically generate the sets starting with known sentences and sampling variants from a language model (§3.1). Our sentenceset generation procedure ensures that words from the same set that share an index also share their structural function. We call such words corresponding. Figure 1: Two groups of structurally-equivalent sentences. In each group, the first sentence is original sentence from Wikipedia, and the sentences below it were generated by the process of repeated BERT substitution. Some sets of corresponding words-that is, words that share the same structural function-are highlighted in the same color. We now proceed to learn a function f to map contextualized vectors of corresponding words (and the relations between them, as described above) to neighbouring points in the space. 
We train f such that the representation assigned to positive pairs -pairs that share indices and come from the same equivalent set -is distinguished from the representations of negative pairs -challenging pairs that come from different sentences, and thus do not share the structure of the original pair, but can, potentially, share their lexical meaning. We do so using Triplet loss, which pushes the representations of pairs coming from the same group closer together (§3.3). Figure 2 sketches the network. Figure 2: An illustration of triplet-loss calculation. Pairs of words are represented by the difference between their transformation f, which is identical for all words. The pairs of words in the anchor and positive sentences are lexically different, but structurally similar. The negative example presented here is especially challenging, as it is lexically similar, but structurally different. In order to generate sentences that approximately share their structure, we sequentially replace content words in the sentence with other content words, while aiming to maintain the grammatically of the sentence, and keep its structure intact. Replacing words with words of the same POS, as done in , does not answer to those requirements, as this method does not respect various restrictions that apply to words that share different function within the sentence. For example, verb argument structure dictates limitations on the arguments the predicate receives, and verbs differ in properties such as whether or not they accept a complement. Replacing verbs with other verbs does not guarantee fulfilling these limitations. Since we do not want to rely on syntactic annotation (apart from the level of POS tags) when performing this replacement, we opted to use a pre-trained language model -BERT -under the assumption that strong neural language models do implicitly encode many of the syntactic restrictions that apply to words in different grammatical functions (e.g., we assume that BERT would not predict a transitive verb in the place of an intransitive verb, or a verb that accepts a complement in the place of a verb that does not accept a complement). While this assumption seems to hold with regard to basic distinctions such as transitive vs. intransitive verbs, its validity is less clear in the more nuanced cases, in which small differences in the surface level can translate to substantial differences in the deep structuresuch as replacing a control verb with a raising verb. This is a limitation of the current approach, although we find that the average sentence we generate is grammatical and similar in structure to the original sentence. Moreover, as our goal is to expose the structural similarity encoded in neural language models, we find it reasonable to only capture the distinctions that are captured by a state-of-the-art neural language model. Implementation Concretely, we rely on a BERT masked LM model. We start each group with a Wikipedia sentence, for which we generate k = 6 equivalent sentences by iterating over the sentence from left to right sequentially, masking the ith word, and replacing it with one of BERT's top-30 predictions. Crucially, to increase semantic variability, we perform the replacement in place (online), that is, after randomly choosing a guess w, we insert w to the sentence at index i, and continue guessing the i + 1 word based on the modified sentence. 
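The generation procedure can be approximated in a few lines of code; the sketch below uses the HuggingFace fill-mask pipeline and glosses over details such as POS filtering of candidates and wordpiece handling, so it should be read as an illustration of the in-place substitution loop rather than the exact implementation used here.

import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

def make_equivalent_sentence(words, keep_indices, top_k=30):
    # Left-to-right in-place substitution with a BERT masked LM: mask word i, pick a
    # random replacement among the top-k predictions, insert it, and move on to word
    # i+1 so that later guesses condition on earlier substitutions.
    # keep_indices marks words (e.g. function words) that are never replaced.
    words = list(words)
    for i in range(len(words)):
        if i in keep_indices:
            continue
        masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
        candidates = fill_mask(masked, top_k=top_k)
        words[i] = random.choice(candidates)["token_str"].strip()
        # A POS-tag check on the chosen candidate (omitted here) helps preserve structure.
    return " ".join(words)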
5 We exclude a closed set of a few dozens of words (mostly function words) and keep them unchanged in all k variations of a sentence. To the extent that BERT learns to recover corrupted sentences by suggesting replacements that respect the probability distribution of actual natural language, the suggestions would be both semantically and structurally correct. We further maintain structural correctness by maintaining the POS, and encourage semantic diversity by the auto-regressive replacement process. We find that the average sentence is grammatical and maintain the structure of the original sentence, although some generated sentences violate these requirements. In the appendix §?? we show some additional generated groups, and highlight some recurring errors. The sets in Figure 1 were generated using this method. We use the method to generate N = 150, 000 equivalent sets E i of structurally equivalent sentences, and collect the contextualized vector representations of words in these sets, ing in 1,500,000 training pairs and 200,000 evaluation pairs for the training process of f. We experiment with both ELMo and BERT-based contextualized representations. In average, we sample 11 pairs from each group of equivalent sentences. For ELMo, we represent each word in context as a concatenation of the last two ELMo layers (excluding the word embedding layer, which is not contextualized and therefore irrelevant for structure), ing in representations of dimension 2048. For BERT, we concatenate the mean of the words' representation across all 23 layers of BERT-Large, with the representation of layer 16, which was found by most indicative of syntax. We learn the mapping function f using triplet loss (Figure 2). Concretely, given a group of equivalent sentences E i, we randomly choose two sentences to be the anchor sentence S A, and the positive sentence S P, and sample two different word indices {i 1, i 2}. We represent pairs as their differences after transformation, ing in the anchor pair V A and positive pair V P: where f is the parameterized syntactic transformation we aim to learn. We also consider a negative pair: coming from sentence S N which is not in the equivalent set. As f has shared parameters for both words in the pair, it can thus be considered a part of a Siamese network, making our learning procedure an instance of a triplet Siamese network. We choose f to be a simple model: a single linear layer that maps from dimensionality 2048 to 75. The dimensional of the transformation was chosen according to development set performance. We use triplet loss to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N. , we calculate the softmax version of the triplet loss: where dist(x, y) = 1 − x y x y is the cosine-distance between the vectors x and y. Note that → 0, as expected. The triplet objective is optimized end-to-end using the Adam optimizer . We train for 5 epochs with a mini-batch of size 500 6, and take the last model as the final syntactic extractor. During training, the gradient backpropagates through the pair vectors to the parameters f of the Siamese model, to get representations of individual words that are similar for corresponding words in equivalent sentences. We note that we do not back-propagate the gradient to the contextualized vectors: we keep them intact, and only adjust the learned transformation. 
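For clarity, a numpy sketch of the loss on a single triplet of pair vectors is given below; the exact functional form of the softmax triplet loss used in the experiments may differ slightly (e.g. in squaring or scaling), so the snippet should be taken as one standard formulation that has the stated property of approaching zero when the positive pair is much closer to the anchor than the negative one.

import numpy as np

def cosine_distance(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def softmax_triplet_loss(v_anchor, v_positive, v_negative):
    # Pair vectors are differences of transformed word vectors, e.g.
    # v_anchor = f(x_i1) - f(x_i2) for two words of the anchor sentence.
    d_pos = cosine_distance(v_anchor, v_positive)
    d_neg = cosine_distance(v_anchor, v_negative)
    p_pos = np.exp(d_pos) / (np.exp(d_pos) + np.exp(d_neg))
    # p_pos -> 0 (and hence the loss -> 0) when d_pos << d_neg.
    return p_pos ** 2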
Hard negative sampling We obtain the negative vectors V N using hard negative sampling. For each mini-batch B, we collect 500 {V A i, V P i} pairs, each pair taken from an equivalent set E i. The negative instances V N i are obtained by searching the batch for a vector that is closest to the anchor and comes from a different set: where dist is again the cosine distance. In addition, we enforce a symmetry between the anchor and positive vectors, by adding a pair (positive, anchor) for each pair (anchor, positive) in B. That is, V N i is the "most misleading" word-pair vector: it comes from a sentence that has a different structure than the structure of V We have trained the syntactic transformation f in a way that should encourage it to retain the structural information encoded in contextualized vectors, but discard other information. We assess the representations our model acquired in an unsupervised manner, by evaluating the extant to which the local neighbors of each transformed contextualized vector f (x) share known structural properties, such as grammatical function within the sentence. For the baseline, we expect the neighbors of each vector to share a mix of semantic and syntactic properties. For the transformed vectors, we expect the neighbors to share mainly syntactic properties. Finally, we demonstrate that in a fewshot setting, our representations outperform the original ELMO representation, indicating they are indeed distilled from syntax, and discard other information that is encoded in ELMO vectors but is irrelevant for the extraction of the structure of a sentence. Corpus For training the transformation f, we rely on 150,000 sentences from Wikipedia, tokenized and POS-tagged by spaCy 8. The POS tags are used in the equivalent set generation to filter replacement words. Apart from POS tagging, we do not rely on any syntactic annotation during training. The evaluation sentences for the experiments mentioned below are sampled from a collection of 1,000,000 original and unmodified Wikipedia sentences (different from those used in the model training). Figure 3 shows a 2-dimensional t-SNE projection of 15,000 random content words. The left panel projects the original ELMo states, while the right panel is the syntactically transformed ones. The points are colored according to the dependency label (relation to parent) of the corresponding word, assigned by the spacy parser. As can be seen, in the original ELMo representation most states -apart from those characterized by a specific part-of-speech, such as amod (adjectives, in orange) or nummod (numbers, in light green) -do not fit well into a single cluster. In contrast, the syntactically transformed vectors are more neatly clustered, with some clusters, such as direct objects (brown) and prepositional-objects (blue), that are relatively separated after, but not before, the transformation. Interestingly, some functions that used to be a single group in ELMo (like the adjectives in orange, or the noun-compounds in 6 A large enough mini-batch is necessary to find challenging negative examples. 7 We implicitly assume that any pair coming from a different group of equivalent sentences is a valid negative example -that is, does not share the structural relation that exists between the anchor pair's words. 
This is a relatively mild assumption, as due to sparsity, in high probability two different sentences do not share the very same structure 8 https://spacy.io/ ELMo Transformed Figure 3 : t-SNE projection of ELMO states, colored by syntactic function, before (left) and after (right) the syntactic transformation. green) are now split into several clusters, corresponding to their use in different sentence positions, separating for examples adjectives that are used in subject positions from those in object position or within prepositional phrases. Additionally, as noun compounds ("maple" in "maple syrup") and adjectival modifiers ("tasty" in "tasty syrup") are relatively structurally similar (they appear between determiners and nouns within noun phrases, and can move with the noun phrase to different positions), they are split and grouped together in the representation (the green and orange clouds). To quantify the difference, we run K-means clustering on the projected vectors, and calculate the average cluster purity score as the relative proportion of the most common dependency label in each cluster. The higher this value is, the more the division to clusters reflect division to grammatical functions (dependency labels). We run the clustering with different K values: 10, 20, 40, 80. We find an increase in class purity following our transformation: from scores of 22.6%, 26.8%, 32.6% and 36.4% (respectively) for the original vectors, to scores of 24.3%, 33.4%, 42.1% and 48.0% (respectively) for the transformed vectors. Examples Below are a few query words (Q) and their closest neighbours before (N) and after (NT) the transformation. Note the high structural similarity of the entire sentence, as well as the function of the word within it (Q1: last word of subject NP in a middle clause, Q2: possessed noun in sentence initial subject NP, Q3: head of relative clause of a direct object): Q:in this way of thinking, an impacting projectile goes into an ice-rich layer -but no further. N:they generally have a pre-engraved rifling band to engage the rifled launch tube, spin-stabilizing the projectile, hence the term " rifle ". NT:to achieve a large explosive yield, a linear implosion weapon needs more material, about 13 kgs. Q: the mint's director at the time, nicolas peinado, was also an architect and made the initial plans. N: the director is angry at crazy loop and glares at him, even trying to get a woman to kick crazy loop out of the show (which goes unsuccessfully). NT: jetley's mother, kaushaliya rani, was the daughter of high court advocate shivram jhingan. Q: their first project is software that lets players connect the company's controller to their device N: you could try use norton safe web, which lets you enter a website and show whether there seems to be anything bad in it. NT: the city offers a route-finding website that allows users to map personalized bike routes We expect our transformed vectors to capture more structural and less lexical similarities than the source vectors. We expect each vectors' neighbors in space to share the structural function of the word over which the vector was collected, but not necessarily share its lexical meaning. We focus on the following structural properties: Table 1: Results in the closest-word queries, before and after the application of the syntactic transformation. 
"Basline" refers to unmodified ELMo vectors, "Transformed" refers to ELMo vectors after the learned syntactic transformation f, and "Transformed-untrained" refers to ElMo vectors, after a transformation that was trained on a randomely-initialized ELMo. "Difficult" refers to evaluation on the subset of POS tags which are most structurally diverse. • Dependency-tree edge of a given word (dep-edge), that represents its function (subject, object etc.) • The dependency edge of the word parent's (head's dep-edge) in the tree -to represent higher level structure, such as a subject that resides within a relative clause, as in the word'man" in the phrase "the child that the man saw". • Depth in the dependency tree (distance from the root of the sentence tree). • Constituency-parse paths: Consider, for example, the sentence "They saw the moon with the telescope". The word "Telescope" is a part of a noun-phrase "The telescope", which resides inside a prepositional phrase "with the telescope", which is part of the Verbal phrase ""Saw with the telescope". The complete constituency path for this word is therefore "NP-PP-VP". We calculate the complete tree path to the root (Tree-path-complete), as well as paths limited to lengths 2 and 3. For this evaluation, we parse 400,000 random sentences taken from the 1-million-sentences Wikipedia sample, run ELMo and BERT to collect the contextualized representations of the sentences' words, and randomly choose 400,000 query word vectors (excluding function words). We then retrieve, for each query vector x, the value vector y that is closest to x in cosine-distance, and record the percentage of closest-vector pairs (x, y) that share each of the structural properties listed above. For the tree depth property, we calculate the Pearson correlation between the depths of the queries and the retrieved values. We use Spacy parser for dependency-parsing, and the Berkeley Neural Parser for constituency parsing. We exclude function words from the evaluation. Easier and Harder cases The baseline models tend to retrieve words that are lexically similar. Since certain words tend to appear at above-chance probability in certain structural functions, this can make the baseline be "right for the wrong reason", as the success in the closest-word test reflects lexical similarity, rather than grammatical generalization of the model. To control for this confounding, we sort the different POS tags according to the entropy of their dependency-labels distribution, and repeat the evaluation only for words belonging to those POS tags having the highest entropy (those POS tags are the most structurally variant, and tend to appear in different structural functions). We find that the performance of the baselines (ELMo, BERT models) on those words drops significantly, while the performance of our model are only mildly influenced, further indicating the superiority of our model in capturing structural rather than lexical information. The for ELMo are presented in Table 4. For BERT, we witnessed similar, but somewhat lower, accuracy: for example, 68.1% dependency-edge accuracy, 56.5% head's dependencyedge accuracy, and 22.1% complete constituency-path accuracy. The for BERT are available in the appendix §C, and for the reminder of the paper, we focus in ELMo. We observe significant improvement over the baseline for all tests. 
The correlation between the depth in tree of the query and the value words, for examples, rises from 44.8% to 56.1%, indicating that our model encourages the structural property of the depth of the word to be more saliently encoded in its representation compared with the baseline. The most notable relative improvement is recorded with regard to full constituency-path to the root: from 16.6% before the structural transformation, to 25.3% after itan improvement of 52%. In addition to the increase in syntax-related properties, we observe a sharp drop -from 73.6% to 28.4% --in the proportion of query-value pairs that are lexically identical (lexical match, Table 4). This indicates our transformation f removes much of the lexical information, which is irrelevant for structure. To assess to what extent the improvements stems from the information encoded in ELMo, rather than being an artifact of the triplet-loss training, we also evaluate on a transformation f that was trained on a randomly-initialized ELMo, a surprisingly strong baseline . We find this model performs substantially worse than the baseline (Table 4, "Transformed-untrained (all)"). The absolute nearest-neighbour accuracy values may appear to be relatively low: for example, only 67.6% of the (query, value) pairs share the same dependency edge. As the model acquires its representation without being exposed to human-mandated syntactic convention, some of the apparent discrepancies in nearest neighbours may be due to the fact the model acquires different kind of generalization, or learned a representation that emphasizes different kinds of similarities. Still, we expect the ing (75 dimensional) representations to contain distilled structure information that is mappable to human notions of syntax. To test this, we compare dependency-parsers trained on our representation and on the source representation. If our representation indeed captures structural information, we expect it to excel on a low data regime. To this end, we test our hypothesis with few-shot dependency parsing setup, where we train a model to predict syntactic trees representation with only a few hundred labeled examples. We use an off-the-shelf dependency parser and first swap the pre-trained Glove embeddings with ELMo contextualized embeddings . In order to have a fair comparison with our method, we use the concatenation of the two last layers of Elmo; we refer to this experiment as elmo. As our representation is much smaller than ELMo's (75 as opposed to 2048), a potential issue for a low data regime is the high parameter number to optimize in the later case, therefore a lower dimension can achieve better . We design two additional baseline experiments to remedy this potential issue: Using PCA in order to reduce the representation dimensionality. We randomly chose 1M words from Wikipedia, calculated their representation with ELMo embeddings and performed PCA. This transformation is applied during training on top of ELMo representation while keeping the 75 first components. This experiment is referred to as elmo-pca. This representation should perform well if the most salient information in the ELMo representations are structural. We exepct it to not be the case. Automatically learning a matrix that reduces the embedding dimension. This matrix is learned during training and can potentially extract the relevant structural information from the representations. We refer to this experiment as elmo-reduced. 
Lastly, we examine the performance of our representation, where we apply our structural extraction method on top of ELMo representation. We refer to this experiment as syntax. We run the few-shot setup with multiple training size values: 50, 100, 200, 500. The -for both labeled (LAS) and unlabeled (UAS) attachment scores-are presented in Figure 4, and the numerical are available in the appendix §B. We notice that in the lower training size regime, we obtain the best performances compared to all baselines. The more training data is used, the gap between our representation and the baselines reduced, but the syntax representation still outperforms elmo. Reducing the dimensions with PCA (elmo-pca) works considerably worse than ELMo, indicating that the most salient information is indeed not structural, and the PCA loses important information. Reducing the dimensions with a learned matrix (elmo-reduced) works substantially better than ELMo, and achieve the same UAS as our representation from 200 training sentences onward. However, our transformation was learned in an unsupervised fashion, without access to the syntactic trees. Finally, when considering the labeled attachment score, where the model is tasked at predicting not only the child-parent relation but also its label, our syntax representation outperforms elmo-reduced. In this work, we propose an unsupervised method for the distillation of structural information from neural contextualized word representations. We used a process of sequential BERT-based substitu- Figure 4: Results of the few shot parsing setup tion to create a large number of sentences which are structurally similar, but semantically different. By controlling for one aspect -structure -while changing the other -lexical choice, we learn a metric (via triplet loss) under which pairs of words that come from structurally-similar sentences are close in space. We demonstrated that the representations acquired by this method share structural properties with their neighbors in space, and show that with a minimal supervision, those representations outperform ELMo in the task of few-shots parsing. The method presented here is a first step towards a better disentanglement between various kinds of information that is represented in neural sequence models. The method used to create the structurally equivalent sentences can be useful by its own for other goals, such as augmenting parse-tree banks (which are often scarce and require large resources to annotate). In a future work, we aim to extend this method to allow for a more soft alignment between structurally-equivalent sentences. Table 4: Results in the closest-word queries, before and after the application of the syntactic transformation. "Basline" refers to unmodified vectors derived from BERT, and "Transformed" refers to the vectors after the learned syntactic transformation f. "Difficult" refers to evaluation on the subset of POS tags which are most structurally diverse.
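The nearest-neighbour evaluation used throughout this section is straightforward to reproduce; the following numpy sketch computes, for query and value vectors with known structural labels (e.g. dependency edges), the fraction of queries whose cosine-nearest value shares the label. Array shapes and the label encoding are assumptions made for brevity.

import numpy as np

def nearest_neighbor_label_agreement(queries, values, q_labels, v_labels):
    # queries: (Nq, d), values: (Nv, d); q_labels, v_labels: matching label arrays.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    v = values / np.linalg.norm(values, axis=1, keepdims=True)
    nn = (q @ v.T).argmax(axis=1)   # highest cosine similarity = smallest cosine distance
    return float(np.mean(np.asarray(v_labels)[nn] == np.asarray(q_labels)))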
We distill language model representations for syntax by unsupervised metric learning.
583
scitldr
We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three. Deep learning (DL) has recently been used as an efficient approach to learning navigation in complex three-dimensional environments. DL-based approaches to navigation can be broadly divided into three classes: purely reactive BID49, based on unstructured general-purpose memory such as LSTM BID33 BID31, and employing a navigation-specific memory structure based on a metric map BID36.However, extensive evidence from psychology suggests that when traversing environments, animals do not rely strongly on metric representations BID16 BID47 BID13. Rather, animals employ a range of specialized navigation strategies of increasing complexity. According to BID13, one such strategy is landmark navigation -"the ability to orient with respect to a known object". Another is route-based navigation that "involves remembering specific sequences of positions". Finally, map-based navigation assumes a "survey knowledge of the environmental layout", but the map need not be metric and in fact it is typically not: "[. . .] humans do not integrate experience on specific routes into a metric cognitive map for navigation [. . .] Rather, they primarily depend on a landmark-based navigation strategy, which can be supported by qualitative topological knowledge of the environment."In this paper, we propose semi-parametric topological memory (SPTM) -a deep-learning-based memory architecture for navigation, inspired by landmark-based navigation in animals. SPTM consists of two components: a non-parametric memory graph G where each node corresponds to a location in the environment, and a parametric deep network R capable of retrieving nodes from the graph based on observations. The graph contains no metric relations between the nodes, only connectivity information. While exploring the environment, the agent builds the graph by appending observations to it and adding shortcut connections based on detected visual similarities. The network R is trained to retrieve nodes from the graph based on an observation of the environment. This allows the agent to localize itself in the graph. Finally, we build a complete SPTM-based navigation agent by complementing the memory with a locomotion network L, which allows the agent to move between nodes in the graph. The R and L networks are trained in self-supervised fashion, without any manual labeling or reward signal. We evaluate the proposed system and relevant baselines on the task of goal-directed maze navigation in simulated three-dimensional environments. 
The agent is instantiated in a previously unseen maze and given a recording of a walk through the maze (images only, no information about actions taken or ego-motion). Then the agent is initialized at a new location in the maze and has to reach a goal location in the maze, given an image of that goal. To be successful at this task, the agent must represent the maze based on the footage it has seen, and effectively utilize this representation for navigation. The proposed system outperforms baseline approaches by a large margin. Given 5 minutes of maze walkthrough footage, the system is able to build an internal representation of the environment and use it to confidently navigate to various goals within the maze. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three. Qualitative and an implementation of the method are available at https://sites.google.com/view/SPTM. Navigation in animals has been extensively studied in psychology. BID45 introduced the concept of a cognitive map -an internal representation of the environment that supports navigation. The existence of cognitive maps and their exact form in animals, including humans, has been debated since. suggested that internal representations take the form of metric maps. More recently, it has been shown that bees BID7 BID9, ants BID22, and rats BID41 rely largely on landmark-based mechanisms for navigation. BID3 and BID26 question the existence of cognitive maps in animals. BID16 BID47, and BID13 argue that humans rely largely on landmark-based navigation. In contrast, navigation systems developed in robotics are typically based on metric maps, constructed using the available sensory information -sonar, LIDAR, RGB-D, or RGB input BID12 BID44 BID11. Particularly relevant to our work are vision-based simultaneous localization and mapping (SLAM) methods BID6. These systems provide high-quality maps under favorable conditions, but they are sensitive to calibration issues, do not deal well with poor imaging conditions, do not naturally accommodate dynamic environments, and can be difficult to scale. Modern deep learning (DL) methods allow for end-to-end learning of sensorimotor control, directly predicting control signal from high-dimensional sensory observations such as images BID32. DL approaches to navigation vary both in the learning method -reinforcement learning or imitation learning -and in the memory representation. Purely reactive methods BID49 lack explicit memory and do not navigate well in complex environments BID39. Systems equipped with general-purpose LSTM memory BID33 BID37 BID21 BID31 or episodic memory BID5 BID38 can potentially store information about the environment. However, these systems have not been demonstrated to perform efficient goal-directed navigation in previously unseen environments, and empirical indicate that LSTM-based systems are not up to the task BID39. BID34 use an addressable memory for first-person-view navigation in three-dimensional environments. The authors demonstrate that the proposed memory structure supports generalization to previously unseen environments. Our work is different in that BID34 experiment with relatively small discrete gridworld-like environments, while our approach naturally applies to large continuous state spaces. Most related to our work are DL navigation systems that use specialized map-like representations. 
BID4 augment a DL system with a metric map produced by a standard SLAM algorithm. BID36 use a 2D spatial memory that represents a global map of the environment. Another approach builds a 2D multi-scale metric map using the end-to-end trainable planning approach of BID42. Our method differs from these approaches in that we are not aiming to build a global metric map of the environment. Rather, we use a topological map. This allows our method to support navigation in a continuous space without externally provided camera poses or ego-motion information. While contemporary approaches in robotics are dominated by metric maps, research on topological maps has a long history in robotics. Models based on topological maps have been applied to navigation in simple 2D mazes BID25 BID27 BID40 and on physical systems BID20 BID1 BID14 BID43 BID15. BID46 provide a review of biologically-inspired navigation systems, including landmark-based ones. Milford and colleagues designed SLAM systems inspired by computational models of the hippocampus BID28 BID29 BID2. We reinterpret this line of work in the context of deep learning. We consider an agent interacting with an environment in discrete time steps. At each time step t, the agent gets an observation o t of the environment and then takes an action a t from a set of actions A. In our experiments, the environment is a maze in a three-dimensional simulated world, and the observation is provided to the agent as a tuple of several recent images from the agent's point of view. The interaction of the agent with a new environment is set up in two stages: exploration and goal-directed navigation. During the first stage, the agent is presented with a recording of a traversal of the environment over a number of time steps T e, and builds an internal representation of the environment based on this recording. In the second stage, the agent uses this internal representation to reach goal locations in the environment. This goal-directed navigation is performed in an episodic setup, with each episode lasting for a fixed maximum number of time steps or until the goal is reached. In each episode, the goal location is provided to the agent by an observation of this location o g. The agent has to use the goal observation and the internal representation built during the exploration phase to effectively reach the goal. We propose a new form of memory suitable for storing internal representations of environments. We refer to it as semi-parametric topological memory (SPTM). It consists of a (non-parametric) memory graph G where each node represents a location in the environment, and a (parametric) deep network R capable of retrieving nodes from the graph based on observations. A high-level overview of an SPTM-based navigation system is shown in FIG0. Here SPTM acts as a planning module: given the current observation o and the goal observation o g, it generates a waypoint observation o w, which lies on a path to the goal and can be easily reached from the agent's current location. The current observation and the waypoint observation are provided to a locomotion network L, which is responsible for short-range navigation. The locomotion network then guides the agent towards the waypoint, and the loop repeats. The networks R and L are trained in self-supervised fashion, without any externally provided labels or reinforcement signals.
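For concreteness, the high-level loop just described can be sketched in a few lines of Python. All names below (get_obs, env_step, localize, plan_waypoint, locomotion_policy) are placeholder callables with assumed interfaces rather than parts of our implementation; the sketch only illustrates how SPTM, acting as a planner, interacts with the locomotion network.

```python
def navigate_episode(get_obs, env_step, localize, plan_waypoint, locomotion_policy,
                     o_goal, max_steps=5000):
    """One goal-directed navigation episode with SPTM as the planner (sketch).

    Assumed interfaces: `localize` maps an observation to a vertex of the memory
    graph, `plan_waypoint` returns the waypoint observation o_w lying on a path
    to the goal, and `locomotion_policy` maps (current, waypoint) observations
    to an action. `env_step` returns True once the goal has been reached.
    """
    v_goal = localize(o_goal)                 # the goal is localized once per episode
    for _ in range(max_steps):
        o = get_obs()
        v_agent = localize(o)                 # self-localization in the memory graph
        o_waypoint = plan_waypoint(v_agent, v_goal, o)
        action = locomotion_policy(o, o_waypoint)
        if env_step(action):                  # loop repeats until the goal is reached
            return True
    return False
```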
We now describe each component of the system in detail. Retrieval network. The network R estimates the similarity of two observations (o 1, o 2). The network is trained on a set of environments in a self-supervised manner, based on trajectories of a randomly acting agent. Conceptually, the network is trained to assign high similarity to pairs of observations that are temporally close, and low similarity to pairs that are temporally distant. We cast this as a classification task: given a pair of observations, the network has to predict whether they are temporally close or not. To generate the training data, we first let a random agent explore the environment, resulting in a sequence of observations {o 1, . . . o N} and actions {a 1, . . . a N}. We then automatically generate training samples from these trajectories. Each training sample is a triple o i, o j, y ij that consists of two observations and a binary label. Two observations are considered close (y ij = 1) if they are separated by at most l = 20 time steps: |i − j| ≤ l. Negative examples are pairs where the two observations are separated by at least M · l steps, where M = 5 is a constant factor that determines the margin between positive and negative examples. We use a siamese architecture for the network R, akin to BID48. Each of the two input observations is first processed by a deep convolutional encoder based on ResNet-18 BID19, which outputs a 512-dimensional embedding vector. These two vectors are concatenated and further processed by a small 5-layer fully-connected network, ending with a 2-way softmax. The network is trained in supervised fashion with the cross-entropy loss. Further details are provided in the supplement. Memory graph. Vertices of the graph correspond to observations from the exploration sequence; two vertices are connected by an edge if they are temporally adjacent, or if the similarity predicted by R exceeds a threshold: DISPLAYFORM0 where 0 < s shortcut < 1 is a similarity threshold for creating a shortcut connection. The first type of edge corresponds to natural spatial adjacency between locations, while the second type can be seen as a form of loop closure. Two enhancements improve the quality of the graph. First, we only connect vertices by a "visual shortcut" edge if |i−j| > ∆T, so as to avoid adding trivial edges. Second, to improve the robustness of visual shortcuts, we find these by matching sequences of observations, not single observations: DISPLAYFORM1 Finding the waypoint. At navigation time, we use SPTM to provide waypoints to the locomotion network. As illustrated in FIG1, the process includes three steps: localization, planning, and waypoint selection. In the localization step, the agent localizes itself and the goal in the graph based on its current observation o and the goal observation o g, as illustrated in FIG1 (a). We have experimented with two approaches to localization. In the basic variant, the agent's location is retrieved as the median of k = 5 nearest neighbors of the observation in the memory. The siamese architecture of the retrieval network allows for efficient nearest neighbor queries by pre-computing the embeddings of observations in the memory. An issue with this simple technique is that localization is performed per frame, and therefore the results can be noisy and susceptible to perceptual aliasing -inability to discriminate two locations with similar appearance. We therefore implement a modified approach allowing for temporally consistent self-localization, inspired by localization approaches from robotics BID30.
We initially perform the nearest neighbor search only in a local neighborhood of the previous agent's localization, and resort to global search in the whole memory only if this initial search fails (that is, the similarity of the retrieved nearest neighbor to the current observation is below a certain threshold s local). This simple modification improves the performance of the method while also reducing the search time. In the planning step, we find the shortest path on the graph between the two retrieved nodes v a and v g, as shown in FIG1 (b). We used Dijkstra's algorithm in our experiments. Finally, the third step is to select a waypoint on the computed shortest path, as depicted in FIG1. We denote the shortest path by DISPLAYFORM2 A naive solution would be to set the waypoint to v sp D, with a fixed D. However, depending on the actions taken in the exploration sequence, this can lead to selecting a waypoint that is either too close (no progress) or too far (not reachable). We therefore follow a more robust adaptive strategy. We choose the furthest vertex along the shortest path that is still confidently reachable: DISPLAYFORM3 where 0 < s reach < 1 is a fixed similarity threshold for considering a vertex reachable. In practice, we limit the waypoint search to a fixed window i ∈ [H min, H max]. The output of the planning process is the observation o w = o v w that corresponds to the retrieved waypoint. The network L is trained to navigate towards target observations in the vicinity of the agent. The network maps a pair (o 1, o 2), which consists of a current observation and a goal observation, into action probabilities: DISPLAYFORM0 The action can then be produced either deterministically by choosing the most probable action, or stochastically by sampling from the distribution. In what follows we use the stochastic policy. Akin to the retrieval network R, the network L is trained in self-supervised manner, based on trajectories of a randomly acting agent. Random exploration produces a sequence of observations {o 1, . . . o N} and actions {a 1, . . . a N}. We generate training samples from these trajectories by taking a pair of observations separated by at most l = 20 time steps and the action corresponding to the first observation: ((o i, o j), a i ). The network is trained in supervised fashion on this data, with a softmax output layer and the cross-entropy loss. The architecture of the network is the same as the retrieval network. Why is it possible to learn a useful controller based on trajectories of a randomly acting agent? The proposed training procedure leads to learning the conditional action distribution P (a|o t, o t+k). Even though the trajectories are generated by a random actor, this distribution is generally not uniform. For instance, if k = 1, the network would learn actions to be taken to perform one-step transitions between neighboring states. For k > 1, training data is more noisy, but there is still useful training signal, which turns out to be sufficient for short-range navigation. Figure 3: SPTM-based agent navigating towards a goal in a three-dimensional maze (a). The agent aims to reach the goal, denoted by a star. Given the current agent's observation (b) and the goal observation (d), SPTM produces a waypoint observation (c). The locomotion network is then used to navigate towards the waypoint. 
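The planning and waypoint-selection steps described above admit a compact sketch. The snippet below assumes the memory graph is stored as a networkx graph whose vertices carry their observations in a node attribute, and that similarity(o1, o2) wraps the retrieval network R; both are assumptions made for illustration rather than the exact implementation.

```python
import networkx as nx

def select_waypoint(graph, obs_agent, v_agent, v_goal, similarity,
                    s_reach=0.95, h_min=1, h_max=7):
    """Pick the furthest confidently reachable vertex on the shortest path (sketch).

    `similarity(o1, o2)` stands in for the retrieval network R and
    `graph.nodes[v]['obs']` for the stored observation of vertex v
    (an assumed storage layout).
    """
    path = nx.shortest_path(graph, v_agent, v_goal)   # shortest path on the unit-weight graph
    waypoint = path[min(h_min, len(path) - 1)]        # fallback if nothing is confidently reachable
    for idx in range(h_min, min(h_max, len(path) - 1) + 1):
        v = path[idx]
        if similarity(obs_agent, graph.nodes[v]['obs']) > s_reach:
            waypoint = v                              # furthest vertex that is still reachable
    return graph.nodes[waypoint]['obs']               # o_w, handed to the locomotion network
```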
Inputs to the retrieval network R and the locomotion network L are observations of the environment o, represented by stacks of two consecutive RGB images obtained from the environment, at resolution 160×120 pixels. Both networks are based on ResNet-18 BID19. Note that ResNet-18 is much larger than networks typically used in navigation agents based on reinforcement learning. The use of this high-capacity architecture is made possible by the self-supervised training of our model. Training of a network of this size from scratch with pure reinforcement learning would be problematic and, to our knowledge, has never been demonstrated. The training setup is similar for both networks. We generate training data online by executing a random agent in the training environment, and maintain a replay buffer B of recent samples. At each training iteration, we sample a mini-batch of 64 observation pairs at random from the buffer, according to the conditions described in Sections 3.1 and 3.2. We then perform an update using the Adam optimizer BID24, with learning rate λ = 0.0001. We train the networks R and L for a total of 1 and 2 million mini-batch iterations, respectively. Further details are provided in the supplement. We made sure that all operations in the SPTM are implemented efficiently. Goal localization is only performed once in the beginning of a navigation episode. Shortest paths to the goal from all vertices of the graph can therefore also be computed once in the beginning of navigation. The only remaining computationally expensive operations are nearest-neighbor queries for agent self-localization in the graph. However, thanks to the siamese architecture of the retrieval network, we can precompute the embedding vectors of observations in the memory and need only evaluate the small fully-connected network during navigation. We perform experiments using a simulated three-dimensional environment based on the classic game Doom BID23 ). An illustration of an SPTM agent navigating towards a goal in a maze is shown in Figure 3. We evaluate the proposed method on the task of goal-directed navigation in previously unseen environments and compare it to relevant baselines from the literature. We are interested in agents that are able to generalize to new environments. Therefore, we used different mazes for training, validation, and testing. We used the same set of textures for all labyrinths, but the maze layouts are very different, and the texture placement is randomized. During training, we used a single labyrinth layout, but created 400 versions with randomized goal placements and textures. In addition, we created 3 mazes for validation and 7 mazes for testing. Layouts of the training and test labyrinths are shown in Figure 4; the validation mazes are shown in the supplement. Each maze is equipped with 4 goal locations, marked by 4 different special objects. The appearance of these special objects is common to all mazes. We used the validation mazes for tuning the parameters of all approaches, and used fixed parameters when evaluating in the test mazes. Figure 4: Layouts of training and test mazes. The overall experimental setup follows Section 3. When given a new maze, the agent is provided with an exploration sequence of the environment, with a duration of approximately 5 minutes of in-simulation time (equivalent to 10,500 simulation steps). In our experiments, we used sequences generated by a human subject aimlessly exploring the mazes. 
The same exploration sequences were provided to all algorithms -the proposed method and the baselines. Example exploration sequences are shown on the project page, https://sites.google.com/view/SPTM. Given an exploration sequence, the agent attempts a series of goal-directed navigation trials. In each of these, the agent is positioned at a new location in the maze and is presented with an image of the goal location. In our experiments, we used 4 different starting locations, 4 goals per maze, and repeated each trial 6 times (results in these trials vary due to the use of randomized policies for all methods), resulting in 96 trials for each maze. A trial is considered successfully completed if the agent reaches the goal within 5,000 simulation steps, or 2.4 minutes of in-simulation time. We set the hyperparameters of the SPTM agent based on an evaluation on the validation set, as reported in Table S1 in the supplement. We find that the method performs well for a range of hyperparameter values. Interestingly, the approach is robust to temporal subsampling of the walkthrough sequence. Therefore, in the following experiments we subsample the walkthrough sequence by a factor of 4 when building the SPTM graph. Another important parameter is the threshold s shortcut for creating shortcuts in the graph. We set this threshold as a percentile of the set of all pairwise distances between observations in the memory, or, in other words, as the desired number of shortcuts to be created. We set this number to 2000 in what follows. When making visual shortcuts in the graph, we set the minimum shortcut distance ∆T = 5 and the smoothing window size ∆T w = 10. The threshold values for waypoint selection are set to s local = 0.7 and s reach = 0.95. The minimum and maximum waypoint distances are set to H min = 1 and H max = 7, respectively. We compare the proposed method to a set of baselines that are representative of the state of the art in deep-learning-based navigation. Note that we study an agent operating in a realistic setting: a continuous state space with no access to ground-truth information such as depth maps or ego-motion. This setup excludes several existing works from our comparison: the full model of BID31 that uses ground-truth depth maps and ego-motion, a method that operates on a discrete grid given ground-truth ego-motion, and the approach of BID36 that requires the knowledge of ground-truth global coordinates of the agent. Table 1: Comparison of the SPTM agent to baseline approaches. We report the percentage of navigation trials successfully completed in 5,000 steps (higher is better). The first baseline is a goal-agnostic agent without memory. The agent is not informed about the goal, but may reach it by chance. We train this network in the training maze using asynchronous advantage actor-critic (A3C) BID33. The agent is trained on the surrogate task of collecting invisible beacons around the labyrinth. (The beacons are made invisible to avoid providing additional visual guidance to the agents.) In the beginning of each episode, the labyrinth is populated with 1000 of these invisible beacons, at random locations. The agent receives a reward of 1 for collecting a beacon and 0 otherwise. Each episode lasts for 5,000 simulation steps. We train the agent with the A3C algorithm and use an architecture similar to BID33. Further details are provided in the supplement. The second baseline is a feedforward network trained on goal-directed navigation, similar to BID49.
The network gets its current observation, as well as an image of the goal, as input. It gets the same reward as the goal-agnostic agent for collecting invisible beacons, but in addition it gets a large reward of 800 for reaching the goal. This network can go towards the goal if the goal is within its field of view, but it lacks memory, so it is fundamentally unable to make use of the exploration phase. The network architecture is the same as in the first baseline, but the input is the concatenation of the 4 most recent frames and the goal image. The third and fourth baseline approaches are again goal-agnostic and goal-directed agents, but equipped with LSTM memory. The goal-directed LSTM agent is similar to BID31. At test time, we feed the exploration sequence to the LSTM agent and then let it perform goal-directed navigation without resetting the LSTM state. When training these networks, we follow a similar protocol. First, the agent navigates the environment for 10,000 steps in exploration mode; that is, with rewards for collecting invisible beacons, but without a goal image given and with no reward for reaching a goal. Next, the agent is given a goal image and spends another 5,000 steps in goal-directed navigation mode; that is, with a goal image given and with a high reward for reaching the goal (while also continuing to receive rewards for collecting the invisible beacons). We do not reset the state of the memory cells between the two stages. This way, the agent can learn to store the layout of the environment in its memory and use it for efficient navigation. Table 1 shows, for each test maze, the percentage of navigation trials successfully completed within 5,000 steps, equivalent to 2.4 minutes of real-time simulation. Figure 5 presents the results on the test mazes in more detail, by plotting the percentage of completed episodes as a function of the trial duration. Qualitative results are available at https://sites.google.com/view/SPTM. The proposed SPTM agent is superior to the baselines in all mazes. As Table 1 demonstrates, its average success rate across the test mazes is three times higher than the best-performing baseline. Figure 5 demonstrates that the proposed approach is not only successful overall, but that the agent typically reaches the goal much faster than the baselines. The difference in performance between feedforward and LSTM baseline variants is generally small and inconsistent across mazes. This suggests that standard LSTM memory is not sufficient to efficiently make use of the provided walkthrough footage. One reason can be that recurrent networks, including LSTMs, struggle with storing long sequences BID17. The duration of the walkthrough footage, 10,000 time steps, is beyond the capabilities of standard recurrent networks. SPTM is at an advantage, since it stores all the provided information by design. Why is the performance of the baseline approaches in our experiments significantly weaker than reported previously BID31? The key reason is that we study generalization of agents to previously unseen environments, while BID31 train and evaluate agents in the same environment. The generalization scenario is much more challenging, but also more realistic. Our results indicate that existing methods struggle with generalization. Interestingly, the best-performing baseline is goal-agnostic, not goal-directed. We see two main explanations for this. First, generalization performance has high variance and may be dominated by spurious correlations in the appearance of training and test mazes.
Second, even in the training environments the goal-directed baselines do not necessarily outperform the goal-agnostic ones, since the large reward for reaching the goal makes reinforcement learning unstable. This effect has been observed by BID31, and to avoid it the authors had to resort to reward clipping; in our setting, reward clipping would effectively lead to ignoring the goals. Figure 6 (left) shows a trajectory of a walkthrough provided to the algorithms in the Val-3 maze. The shortcut connections made automatically in the SPTM graph are marked in red. We selected a conservative threshold for making shortcut connections to ensure that there are no false positives. Still, the automatically discovered shortcut connections greatly increase the connectivity of the graph: for instance, in the Val-3 maze the average length of the shortest path to the goal, computed over all nodes in the graph, drops from 990 to 155 steps after introducing the shortcut connections. Figure 6 (right) demonstrates three representative trajectories of the SPTM agent performing goaldirected navigation. In Tracks 1 and 2, the agent deliberately goes for the goal, making use of the environment representation stored in SPTM. Track 3 is less successful and the agent's trajectory contains unnecessary loops; we attribute this to the difficulty of vision-based self-localization in large environments. Table 2 reports an ablation study of the SPTM agent on the validation set. Removing vision-based shortcuts from the graph leads to dramatic decline in performance. The agent with independent per-frame localization performs quite well on two of the three mazes, but underperforms on the Table 2: Ablation study on the SPTM agent. We report the percentage of navigation trials successfully completed in 5,000 steps in validation mazes (higher is better).more challenging Val-3 maze. A likely explanation is that perceptual aliasing gets increasingly problematic in larger mazes. Additional experiments are reported in the supplement: performance in the validation environments, robustness to hyperparameter settings, an additional ablation study evaluating the performance of the R and L networks compared to simple alternatives, experiments in environments with homogeneous textures, and experiments with automated (non-human) exploration. We have proposed semi-parametric topological memory (SPTM), a memory architecture that consists of a non-parametric component -a topological graph, and a parametric component -a deep network capable of retrieving nodes from the graph given observations from the environment. We have shown that SPTM can act as a planning module in a navigation system. This navigation agent can efficiently reach goals in a previously unseen environment after being presented with only 5 minutes of footage. We see several avenues for future work. First, improving the performance of the networks R and L will directly improve the overall quality of the system. Second, while the current system explicitly avoids using ego-motion information, findings from experimental psychology suggest that noisy ego-motion estimation and path integration are useful for navigation. Incorporating these into our model can further improve robustness. Third, in our current system the size of the memory grows linearly with the duration of the exploration period. This may become problematic when navigating in very large environments, or in lifelong learning scenarios. 
A possible solution is adaptive subsampling, by only retaining the most informative or discriminative observations in memory. Finally, it would be interesting to integrate SPTM into a system that is trainable end-to-end. SUPPLEMENTARY MATERIAL S1 METHOD DETAILS S1.1 NETWORK ARCHITECTURESThe retrieval network R and the locomotion network L are both based on ResNet-18 BID19. Both take 160×120 pixel images as inputs. The networks are initialized as proposed by BID19. We used an open ResNet implementation: https://github.com/raghakot/ keras-resnet/blob/master/resnet.py. The network R admits two observations as input. Each of these is processed by a convolutional ResNet-18 encoder. Each of the encoders produces a 512-dimensional embedding vector. These are concatenated and fed through a fully-connected network with 4 hidden layers with 512 units each and ReLU nonlinearities. The network L also admits two observations, but in contrast with the network R it processes them jointly, after concatenating them together. A convolutional ResNet-18 encoder is followed by a single fully-connected layer with 7 outputs and a softmax. The 7 outputs correspond to all available actions: do nothing, move forward, move backward, move left, move right, turn left, and turn right. We implemented the training in Keras BID8 and Tensorflow BID0. The training setup is similar for both networks. We generate training data online by executing a random agent in the environment, and maintain a replay buffer B of size |B| = 10,000. We run the random agent for 10,000 steps and then perform 50 mini-batch iterations of training. For the random agent, as well as for all other agents, we use action repeat of 4 -that is, every selected action is repeated 4 times. At each training iteration, we sample a mini-batch of 64 training observation pairs at random from the buffer, according to the conditions described in Sections 3.2 and 3.1. We then perform an update using the Adam optimizer BID24, with learning rate λ = 0.0001, momentum parameters β 1 = 0.9 and β 2 = 0.999, and the stabilizing parameter ε = 10 −8. The baselines are based on an open A3C implementation: https://github.com/pathak22/ noreward-rl. We have used the architectures of BID33 and BID31. The feedforward model consists of two convolutional layers and two fully-connected layers, from which the value and the policy are predicted. In the LSTM model the second fully connected layer is replaced by LSTM. The input to the networks is a stack of 4 most recent observed frames, resized to 84×84 pixels. We experimented with using RGB and grayscale frames, and found the baselines trained with grayscale images to perform better. We therefore always report the for baselines with grayscale inputs. We train the baselines for 80 million action steps, which corresponds to 320 million simulation steps because of action repeat. We selected the snapshot to be used at test time based on the training reward. Layouts of the validation mazes are shown in FIG0. Plots of success rate as a function of trial duration on each validation maze are shown in FIG1. Performance of an SPTM agent with varying hyperparameters is shown in Table S1. To evaluate the robustness of the approach, we tried varying the texture distribution in the environment and the properties of the exploration sequence. Val-1 Val-2 Val-3 FIG0: Layouts of the mazes used for validation. 
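As an illustration of the architectures described in S1.1, a PyTorch sketch of the retrieval network R is given below. Our implementation is in Keras; the layer sizes follow the text (ResNet-18 encoders producing 512-dimensional embeddings, four 512-unit fully-connected layers, and a 2-way output), while the remaining details, such as the 6-channel input convolution for the stack of two RGB frames, are assumptions of this sketch. The locomotion network L follows the same pattern but concatenates the two observations at the input and ends with 7 action logits.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class RetrievalNet(nn.Module):
    """Sketch of the siamese retrieval network R; layer sizes follow S1.1."""

    def __init__(self):
        super().__init__()
        encoder = resnet18(num_classes=512)          # ResNet-18 encoder -> 512-d embedding
        # Each observation is a stack of two RGB frames, i.e. 6 input channels (assumption).
        encoder.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.encoder = encoder
        layers, in_dim = [], 2 * 512                 # two concatenated embeddings
        for _ in range(4):                           # four 512-unit hidden layers
            layers += [nn.Linear(in_dim, 512), nn.ReLU(inplace=True)]
            in_dim = 512
        layers.append(nn.Linear(512, 2))             # "temporally close" vs "far" logits
        self.head = nn.Sequential(*layers)

    def forward(self, obs_a, obs_b):                 # obs_*: (B, 6, 120, 160) tensors
        e_a = self.encoder(obs_a)                    # (B, 512) embedding of each observation
        e_b = self.encoder(obs_b)
        return self.head(torch.cat([e_a, e_b], dim=1))
```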
In the experiments in the main paper we used mazes with relatively diverse (although repetitive) textures, see for example Figure 3 in the main paper. We re-textured several mazes to be qualitatively similar to BID31: with mainly homogeneous textures and only relatively sparse inclusions of more discriminative textures. When testing the method with these textures, we retrained the networks R and L in a training maze with similar texture distribution, but kept all other parameters of the method fixed. For experiments in the main paper we used walkthrough sequences recorded from humans exploring the maze. An intelligent agent should be able to explore and map an environment fully autonomously. Effective exploration is a challenging task in itself, and a comprehensive study of this problem is outside the scope of the present paper. However, as a first step, we experiment with providing our method with walkthrough sequences generated fully autonomously -by our baseline agents trained with reinforcement learning. This is only possible in simple mazes, where these agents were able to reach all goals. We used the best-performing baseline for each maze and repeated exploration multiple times, until all goals were located. The results are reported in TAB5. The use of automatically generated trajectories leads to only a minor decrease in the final performance, although qualitatively the trajectories of the SPTM agent become much noisier (not shown). The different texture distribution affects the results more, since visual self-localization becomes challenging with sparser textures. Yet the method still performs quite well and outperforms the baselines by a large margin. To better understand the importance of the locomotion and retrieval networks, we performed two experiments. First, we substituted the retrieval network R with simple per-pixel matching. Second, we substituted actions predicted by the locomotion network L by actions from the exploration sequence (teach-and-repeat). Note that this second approach uses information unavailable to our method -actions performed during the walkthrough sequence. It thus cannot be considered a proper baseline. We further discuss the exact settings and the results below. Figure S3: Graphs constructed using per-pixel matching, with and without normalization. Shortcut connections are shown in red. Most shortcut connections are wrong -they connect distant locations. This experiment was inspired by the approach of BID30. To compute the localization score, we downsample images to the resolution 40×30, convert to grayscale and then compute cosine distances between them. We experiment with two variants of this method: with local contrast normalization (similar to BID30) and without. To perform the normalization, we split the downsampled grayscale image into patches of size 10×10. In each patch, we subtract the mean and divide by the standard deviation. As Table S3 indicates, the per-pixel comparison baseline performs poorly. As shown in Figure S3, the visual shortcuts made with this technique are catastrophically wrong. Local normalization only makes the results worse because it discards information about absolute color intensity, which can be a useful cue in our environments. To be able to use actions from the exploration sequence, a few modifications to our method are necessary. First, we introduce no shortcut connections in the graph, as we would not know the actions for them. The graph thus turns into a path, making the shortest paths longer.
Second, to allow the agent to move along this path in both directions, we select an opposite for every action: for example, the opposite of moving forward is moving backward. Finally, we found that taking a fraction of completely random actions helps the agent not to get stuck when it diverges far from the exploration track and the recorded actions are not useful anymore. We found 10% of random actions to lead to good . Overall, the method works as follows. First, the goal and the agent are localized using the same procedure as our method. Then the agent has to move either forward or backward along the exploration graph-line. If forward, then the action corresponding to the agent's localized observation is taken, if backward -the opposite of the recorded action. As Table S3 suggests, this method works significantly worse than our method, even though it makes use of extra information -the recorded actions. We see two reasons for this. First, there are no shortcut connections, which makes the path to the goal longer. Second, as soon as the agent diverges from the exploration trajectory, the actions do not match the states any more, and there is no mechanism for the agent to get back on track. For instance, imagine a long corridor: if the agent is oriented at a small angle to the direction of the corridor, it will inevitably crash into a wall. Why does the approach not fail completely due to the latter problem? This is most likely because the environment is forgiving: it allows the agent to slide along walls when facing them at an angle less than 90 degrees. This way, even if the agent diverges from the exploration path, it does not break down completely and still makes progress towards the goal. Videos of successful navigation trials for this agent can be found at https://sites.google.com/view/SPTM. Table S1: Effect of hyperparameters, evaluated on the validation set. We report the percentage of navigation trials successfully completed in 5,000 steps (higher is better). Table S3: Additional ablation study of the SPTM navigation agent. We report the percentage of navigation trials successfully completed in 5,000 steps in validation mazes (higher is better).
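Returning to the per-pixel matching ablation described above, the localization score it uses can be sketched as follows. The inputs are assumed to be images already downsampled to 40×30 grayscale arrays, the patch-wise normalization mirrors the 10×10 local contrast normalization, and the remaining numerical details are assumptions of this sketch rather than the exact implementation.

```python
import numpy as np

def pixel_matching_score(img_a, img_b, normalize=False, patch=10):
    """Cosine similarity between downsampled grayscale images (ablation baseline sketch)."""
    def prep(img):
        img = img.astype(np.float64)                       # work on a float copy
        if normalize:                                      # local contrast normalization
            h, w = img.shape
            for y in range(0, h, patch):
                for x in range(0, w, patch):
                    block = img[y:y + patch, x:x + patch]
                    block -= block.mean()                  # zero-mean each 10x10 patch
                    block /= block.std() + 1e-8            # unit variance each patch
        return img.ravel()

    a, b = prep(img_a), prep(img_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
```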
We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.
584
scitldr
The available resolution in our visual world is extremely high, if not infinite. Existing CNNs can be applied in a fully convolutional way to images of arbitrary resolution, but as the size of the input increases, they can not capture contextual information. In addition, computational requirements scale linearly to the number of input pixels, and resources are allocated uniformly across the input, no matter how informative different image regions are. We attempt to address these problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it uses a hard attention mechanism to selectively process only the most informative image parts. We conduct experiments on MNIST and ImageNet datasets, and we show that our models can significantly outperform fully convolutional counterparts, when the resolution of the input is that big that the receptive field of the baselines can not adequately cover the objects of interest. Gains in performance come for less FLOPs, because of the selective processing that we follow. Furthermore, our attention mechanism makes our predictions more interpretable, and creates a trade-off between accuracy and complexity that can be tuned both during training and testing time. Our visual world is very rich, and there is information of interest in an almost infinite number of different scales. As a , we would like our models to be able to process images of arbitrary resolution, in order to capture visual information with arbitrary level of detail. This is possible with existing CNN architectures, since we can use fully convolutional processing , coupled with global pooling. However, global pooling ignores the spatial configuration of feature maps, and the output essentially becomes a bag of features 1. To demonstrate why this an important problem, in Figure 1 (a) and (b) we provide an example of a simple CNN that is processing an image in two different resolutions. In (a) we see that the receptive field of neurons from the second layer suffices to cover half of the kid's body, while in (b) the receptive field of the same neurons cover area that corresponds to the size of a foot. This shows that as the input size increases, the final representation becomes a bag of increasingly more local features, leading to the absence of coarselevel information, and potentially harming performance. We call this phenomenon the receptive field problem of fully convolutional processing. An additional problem is that computational resources are allocated uniformly to all image regions, no matter how important they are for the task at hand. For example, in Figure 1 (b), the same amount of computation is dedicated to process both the left half of the image that contains the kid, and the right half that is merely . We also have to consider that computational complexity scales linearly with the number of input pixels, and as a , the bigger the size of the input, the more resources are wasted on processing uninformative regions. We attempt to resolve the aforementioned problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way. The receptive field problem of fully convolutional processing. A simple CNN consisted of 2 convolutional layers (colored green), followed by a global pooling layer (colored red), processes an image in two different resolutions. The shaded regions indicate the receptive fields of neurons from different layers. 
As the resolution of the input increases, the final latent representation becomes a bag of increasingly more local features, lacking coarse information. (c) A sketch of our proposed architecture. The arrows on the left side of the image demonstrate how we focus on image sub-regions in our top-down traversal, while the arrows on the right show how we combine the extracted features in a bottom-up fashion. In Figure 1 (c) we provide a simplified sketch of our approach. We start at level 1, where we process the input image in low resolution, to get a coarse description of its content. The extracted features (red cube) are used to select out of a predefined grid, the image regions that are worth processing in higher resolution. This process constitutes a hard attention mechanism, and the arrows on the left side of the image show how we extend processing to 2 additional levels. All extracted features are combined together as denoted by the arrows on the right, to create the final image representation that is used for classification (blue cube). We evaluate our model on synthetic variations of MNIST and on ImageNet , while we compare it against fully convolutional baselines. We show that when the resolution of the input is that big, that the receptive field of the baseline 2 covers a relatively small portion of the object of interest, our network performs significantly better. We attribute this behavior to the ability of our model to capture both contextual and local information by extracting features from different pyramid levels, while the baselines suffer from the receptive field problem. Gains in accuracy are achieved for less floating point operations (FLOPs) compared to the baselines, due to the attention mechanism that we use. If we increase the number of attended image locations, computational requirements increase, but the probability of making a correct prediction is expected to increase as well. This is a trade-off between accuracy and computational complexity, that can be tuned during training through regularization, and during testing by stopping processing on early levels. Finally, by inspecting attended regions, we are able to get insights about the image parts that our networks value the most, and to interpret the causes of missclassifications. Attention. Attention has been used very successfully in various problems (; ; ; ;). Most similar to our work, are models that use recurrent neural networks to adaptively attend to a sequence of image regions, called glimpses (; ; ; ;). There are notable technical differences between such models and our approach. However, the difference that we would like to emphasize, is that we model the image content as a hierarchical structure, and we implicitly create a parsing tree , where each node corresponds to an attended location, and edges connect image regions with sub-regions (an example is provided in Appendix A.1). If we decide to store and reuse information, building such a tree structure offers a number of potential benefits, e.g. efficient indexing. We consider this an important direction to explore, but it is beyond the scope of the current paper. Multi-scale representations. We identify four broad categories of multi-scale processing methods. Image pyramid methods extract multi-scale features by processing multi-scale inputs (; ;). Our approach belongs to this category. 
Encoding schemes take advantage of the inherently hierarchical nature of deep neural nets, and reuse features from different layers, since they contain information of different scale (; ;). Encoding-Decoding schemes follow up the feed-forward processing (encoding) with a decoder, that gradually recovers the spatial resolution of early feature maps, by combining coarse with fine features . Spatial modules are incorporated into the feed forward processing, to alter the feature extraction between layers (; ;). Computational efficiency. We separate existing methods on adjusting the computational cost of deep neural networks, into four categories. ). This is the strategy that we follow in our architecture. We present our architecture by walking through the example in Figure 2, where we process an image with original resolution of 128 × 128 px (1 in the top left corner). In the fist level, we downscale the image to 32 × 32 px and pass it through the feature extraction module, in order to produce a feature vector V 1 that contains a coarse description of the original image. The feature extraction module is a CNN that accepts inputs in a fixed resolution, that we call base resolution. We can provide V 1 as direct input to the classification module in order to get a rapid prediction. If we end processing here, our model is equivalent to a standard CNN operating on 32 × 32 px inputs. However, since the original resolution of the image is 128 × 128 px, we can take advantage of the additional information by moving to the second processing level. In the second level, we feed the last feature map, F 1, of the feature extraction module, to the location module. The location module considers a number of candidate locations within the image described by F 1, and predicts how important it is to process in higher detail each one of them. In this particular example, the candidate locations form a 2 × 2 regular grid, and the location module yields 4 probabilities, that we use as parameters of 4 Bernoulli distributions in order to sample a hard attention mask. Based on this mask, we crop the corresponding regions, and we pass them through the feature extraction module, creating V 21 and V 23. If we want to stop at this processing level, we can directly pass V 21 and V 23 through the aggregation module (skipping the merging module 4). The aggregation module combines the features from individual image regions into a single feature vector V, that describes the original image solely based on fine information. This means that V agg 1 is complementary to V 1, and both vectors are combined by the merging module, which integrates fine information (V agg 1) with its context (V 1), creating a single comprehensive representation V 1. Then, V 1 can be used for the final prediction. We can extend our processing to a third level, where F 21 and F 23 are fed to the location module to create two binary masks. No locations are selected from the image patch described by V 23, and the aggregation module only creates V agg 21. Then, we start moving upwards for the final prediction. In Appendix A.2 we provide additional details about the modules of our architecture. In Appendix A.3 we express the feature extraction process with a single recursive equation. Our model is not end-to-end differentiable, because of the Bernoulli sampling involved in the location selection process. 
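To make the walk-through concrete, the sketch below outlines the top-down traversal as a recursive function, mirroring the recursive formulation of Appendix A.3. The callables feat, loc, agg and merge stand for the feature extraction, location, aggregation and merging modules; for simplicity the grid of candidate locations is non-overlapping, the input is a single (1, C, H, W) tensor, and batch handling is omitted, so this is an assumed simplification rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def extract(image, feat, loc, agg, merge, level, max_level, base=32, grid=2):
    """Recursive top-down feature extraction with hard attention (sketch)."""
    x = F.interpolate(image, size=(base, base), mode='bilinear', align_corners=False)
    v_coarse, f_map = feat(x)                       # V_n and the last feature map F_n
    if level == max_level:
        return v_coarse                             # leaf of the traversal
    probs = loc(f_map)                              # probabilities for grid*grid candidate cells
    mask = torch.bernoulli(probs)                   # hard attention via Bernoulli sampling
    if mask.sum() == 0:
        return v_coarse                             # nothing attended at this level
    _, _, h, w = image.shape
    children = []
    for k in range(grid * grid):
        if mask[k] == 0:
            children.append(torch.zeros_like(v_coarse))   # skipped cells become zero vectors
            continue
        r, c = divmod(k, grid)
        sub = image[:, :, r * h // grid:(r + 1) * h // grid,
                          c * w // grid:(c + 1) * w // grid]  # crop the attended region
        children.append(extract(sub, feat, loc, agg, merge,
                                level + 1, max_level, base, grid))
    v_fine = agg(children)                          # fine information from attended regions
    return merge(v_coarse, v_fine)                  # combine coarse context with detail
```

The torch.bernoulli call above is precisely the hard, non-differentiable sampling step referred to in the preceding paragraph.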
To overcome this problem, we use a variant of REINFORCE : where N · M is the number of images we use for each update, x i is the ith image, y i is its label, and w are the parameters of our model. p(l i |x i, w) is the probability that the sequence of locations l i is attended for image x i, and p(y i |l i, x i, w) is the probability of predicting the correct label after attending l i. The size of our original batch B is N, but in the derivation of we approximate with a Monte Carlo estimator of M samples, the expectation for each image x i in B. Based on this, for simplicity we just consider that our batch has size N · M. b is a baseline that we use to reduce the variance of our estimators, and λ f is a weighting hyperparameter. The first term of L F allows us to update the parameters in order to maximize the probability of each correct label. The second term allows us to update the location selection process, according to the utility of attended locations for the prediction of the correct labels. In Appendix A.4.1 we provide the exact derivation of learning rule. We experimentally identified two problems related to. First, our model tends to attend all the available locations, maximizing the computational cost. To regulate this, we add the following term: where p l i t approximates the expected number of attended locations for image x i base on l i, and t calculates the average expected number of attended locations per image in our augmented batch of N M images. The purpose of R t is to make the average number of attended locations per image equal to c t, which is a hyperparameter selected according to our computational cost requirements. λ t is simply a weighting hyperparameter. The second problem we identified while using learning rule, is that the learned attention policy may not be diverse enough. In order to encourages exploration, we add the following term: where p k is the average probability of attending the kth out of the g candidate locations, that the location module considers every time it is applied during the processing of our N M images. R r encourages the location module to attend with the same average probability c r, locations that are placed at different regions inside the sampling grid. λ r is a weighting hyperparameter. In Appendix A.4.2 we provide additional details about terms and. Our final learning rule is the following: Gradual Learning. The quality of the updates we make by using, depends on the quality of the Monte Carlo estimators. When we increase the number of processing levels that our model can go through, the number of location sequences that can be attended increases exponentially. Based on this, if we allow our model to go through multiple processing levels, and we use a small number of samples in order to have a moderate cost per update, we expect our estimators to have high variance. To avoid this, we separate training into stages, where in the first stage we allow only 2 processing levels, and at every subsequent stage an additional processing level is allowed. This way, we expect the location module to gradually learn which image parts are the most informative, narrowing down the location sequences that have a considerable probability to be attended at each training stage, and allowing a small number of samples to provide acceptable estimators. 
We call this training strategy gradual learning, and the learning rule that we use at each stage s is the following: where L s r is equivalent to, with s superscripts indicating that the hyperparameters of each term can be adjusted at each training stage. The maximum number of possible training stages depends on the resolution of the training images. Multi-Level Learning. By following the gradual learning paradigm, a typical behavior that we observe is the following. After the first stage of training, our model is able to classify images with satisfactory accuracy by going through 2 processing levels. After the second stage of training, our model can go through 3 processing levels, and as we expect, accuracy increases since finer information can be incorporated. However, if we force our model to stop processing after 2 levels, the obtained accuracy is significantly lower than the one we were able to achieve after the first stage of training. This is a behavior that we observe whenever we finish a new training stage, and it is an important problem, because we would like to have a flexible model that can make accurate predictions after each processing level. To achieve this, we introduce the following learning rule: where L z r is learning rule with z + 1 processing levels allowed for our model. Each λ s z is a hyperparameter that specifies the relative importance of making accurate predictions after processing level z + 1, while we are at training stage s. We experiment with two datasets, MNIST and ImageNet . MNIST is a small dataset that we can easily modify in order to test different aspects of our models' behavior, e.g. the localization capabilities of our attention mechanism. ImageNet has over a million training images, and allows us to evaluate our model on a large scale. Figure 3: Example images from our MNIST-based datasets, along with the attended locations of M 28 3 models. We blur the area outside the attended locations, because it is processed only in lower resolution during the first processing level. This way we aim to get a better understanding of what our models "see". Data. MNIST is a dataset with images of handwritten digits that range from 0 to 9, leading to 10 different classes. All images are grayscale, and their size is 28 × 28 pixels. The dataset is split into a training set of 55, 000 images, a validation set of 5, 000 images, and a test set of 10, 000 images. We modify the MNIST images by placing each digit at a randomly selected location inside a black canvas of size 56 × 56. We refer to this MNIST variation as plain MNIST, and we further modify it to create noisy MNIST and textured MNIST. In the case of noisy MNIST, we add salt and pepper noise by corrupting 10% of the pixels in every image, with a 0.5 ratio between black and white noise. In the case of textured MNIST, we add a textured background that is randomly selected from a large image depicting grass. Example images are provided in the first column of Figure 3.
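A possible NumPy sketch of this dataset construction is given below. The canvas size, random placement, and 10% salt-and-pepper corruption follow the description above, while the compositing of the digit over the textured background via a pixel-wise maximum and the helper's interface are assumptions made for illustration.

```python
import numpy as np

def make_variant(digit, variant='plain', canvas=56, noise_frac=0.10, texture=None,
                 rng=np.random):
    """Place a 28x28 MNIST digit (float array in [0, 1]) in a larger canvas (sketch).

    `texture` is assumed to be a large grayscale array from which background
    crops are taken for the textured variant.
    """
    h, w = digit.shape                                   # 28 x 28
    img = np.zeros((canvas, canvas), dtype=np.float32)
    if variant == 'textured':
        y0 = rng.randint(texture.shape[0] - canvas)
        x0 = rng.randint(texture.shape[1] - canvas)
        img = texture[y0:y0 + canvas, x0:x0 + canvas].astype(np.float32)
    y, x = rng.randint(canvas - h + 1), rng.randint(canvas - w + 1)
    img[y:y + h, x:x + w] = np.maximum(img[y:y + h, x:x + w], digit)
    if variant == 'noisy':
        n = int(noise_frac * canvas * canvas)            # corrupt 10% of the pixels
        ys, xs = rng.randint(canvas, size=n), rng.randint(canvas, size=n)
        img[ys, xs] = rng.randint(2, size=n).astype(np.float32)  # ~half salt, half pepper
    return img
```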
The 2 fully connected layers account for the aggregation and merging modules, which could be considered part of the feature extraction process in M i. We create 3 pairs of models because we want to study how our architecture performs relatively to fully convolutional models with different receptive fields. To achieve this, BL 1 has 1 convolutional layer and receptive field 3 × 3 px, BL 2 has 2 convolutional layers and receptive field 8 × 8 px, and BL 3 has 3 convolutional layers and receptive field 20 × 20 px. We would like to note that all models {M i} 3 i=1 have base resolution 14 × 14 px, and their location modules consider 9 candidate locations which belong to a 3 × 3 regular grid with 50% overlap. In Appendix A.5.1 we provide the exact architectures of all models. Training. We describe how we train one pair of models (M i, BL i) with one of the 3 MNIST-based datasets that we created. The same procedure is repeated for all datasets, and for every pair of models. The original resolution of our dataset is 56 × 56 px, and we rescale it to 2 additional resolutions that Figure 4: Experimental on plain, textured and noisy MNIST. The differences in accuracy between many models were very small, and as a , in the provided graphs we report the average of 20 different evaluations on the validation set, where each time we randomly change the positions of the digits inside the images. For textured and noisy MNIST, we randomly change the and the noise pattern of each image as well. are equal to 28 × 28 px, and 14 × 14 px (example images are provided in the first 3 columns of Figure 3). We split our training procedure in 3 sessions, where at each session we train our models with images of different resolution. In the first session we use images of size 14 × 14 px, and we refer to the ing models as BL 14 i and M 14 i. We note that since the resolution of the input is equal to the base resolution of M i, our model can go through just 1 processing level and the location module is not employed. In our second training session we use images of resolution 28 × 28 px, and the increased resolution of the input allows M i to use the location module and extend processing to 2 processing levels. Based on this, we are able to train M i multiple times by assigning different values to hyperparameter c t, ing to models that attend a different average number of locations per image, and as a , they have different average computational cost and accuracy. We refer to the models from this training session as BL 28 i and M, where n c is the average number of locations that M i attended on the validation set, while trained with c t = c. We also define M 28 i as the set of all M 28,nc i models we trained in this session. In the third training session we use images of resolution 56 × 56 px, and M i is able to go through 3 processing levels. Following our previous notation, we refer to the models of this session as BL. In Appendix A.5.2 we provide additional details about the training sessions, along with the exact hyperparameters that we used to obtain the that follow. Results. In the first row of Figure 4, we present performance graphs for all models on plain MNIST. We note that the annotations under all models indicate the average number of locations n c that we described in the training section. We start by examining Figure 4 (a), where, that demonstrate the interpretability of our models' decisions because of the employed attention mechanism. 
we depict the performance of models M 3 and BL 3 on images of different resolution. The overlapping blue markers indicate models M 14 3 and BL 14 3, which achieve the same accuracy level. The green markers denote models BL 28 3 and M 28,2.2 3 3, and as we expect, they achieve higher accuracy compared to BL 3, and by observing the performance of all M 3 models, we see that as the resolution of the input and the number of attended locations increases, accuracy increases as well, which is the expected trade-off between computation and accuracy. This trade-off follows a form of logarithmic curve, that saturates in models M operates. We observe that the location module is capable of focusing on the digit, and the generally good performance of the location module is reflected on the accuracy of the model, which is over 99%. However, BL, but are correctly classified by BL 28 3. In both examples, the attended locations partially cover the digits, leading 5 to be interpreted as 1, and 3 to be interpreted as 7. Of course, we are not always able to interpret the cause of a missclassification by inspecting the attended locations, but as the provided examples show, we may be able to get important insights. Besides the performance of our attention mechanism, we don't expect M 28,2.2 3 to achieve higher accuracy compared to BL 28 3. We remind that the images provided to the models are of size 28 × 28 px, the digit within each image covers an area of maximum size 14 × 14 px, and the receptive field of the baseline is 20 × 20 px. As a , the receptive field can perfectly cover the area of the digit, and the extracted features contain both coarse and fine information. Consequently, our model doesn't offer any particular advantage in terms of feature extraction that could be translated in better accuracy. This is something that we expect to change if the resolution of the input increases, or if the receptive field of the baseline gets smaller, as in the cases of BL 2 and BL 1. In Fig. 4 (b) we present the performance of models M 2 and BL 2, and the main difference we observe with (a), is that BL 56 2 demonstrates lower accuracy compared to BL 28 2. We attribute this behavior to the fact that the receptive field of BL 56 2 covers less than 10% of the area occupied by each digit, and as a , BL 56 2 is unable to extract coarse level features which are valuable for classification. Based on this hypothesis, we are able to interpret the behavior of the models in (c) as well. We observe that M In the second and third row of Figure 4, we present our on textured and noisy MNIST respectively. Our previous analysis applies to these as well. In addition, we would like to note that our attention mechanism is robust to textured and salt and pepper noise, as we can see in the respresentative examples provided in the last column of Fig. 3. In Appendix A.5.3 we provide some additional remarks on the reported in Fig. 4. Data. We use the ILSVRC 2012 version of ImageNet, which contains 1, 000 classes of natural images. The training and validation sets contain 1, 281, 167 and 50, 000 images, respectively. The average resolution of the original images is over 256 px per dimension. All images are color images, 3 We don't depict other M 28 3 models that were trained with different ct values, because they don't demonstrate any significant changes in accuracy, and would reduce the clarity of our graphs. 
In the y-axis we provide the top-1 accuracy on the validation set, while in the x-axis we provide the required number of FLOPs (×10 6) per image. but for simplicity, when we refer to resolution, we will drop the last dimension that corresponds to the color channels. Models. We create two pairs of models by following the same design principles we presented in Section 5.1. Model BL 1 has 3 convolutional layers and receptive field 18 × 18 px, while BL 2 has 4 convolutional layers and receptive field 38 × 38 px. have base resolution 32 × 32 px, and their location modules consider 9 candidate locations which belong to a 3 × 3 regular grid with 50% overlap. In Appendix A.6.1 we provide the architectures of all models. Training. We follow the training procedure we described in Section 5.1. The main difference is that we rescale our images to 4 different resolutions {r × r|r ∈ {32, 64, 128, 256}}, ing to 4 training sessions. We follow the notation we introduced in Section 5.1l, and we denote the models that from our first training session as M For the second training session we have r = 64, for the third r = 128, and for the fourth r = 256, while i ∈ {1, 2}. Finally, we use multi-level learning rule to train models that are able to demonstrate high accuracy after each processing level. The ing models are denoted by {M . In Appendix A.6.2 we provide additional training details. Results. In Figure 6 we provide our experimental . As in Figure 4, markers of different color denote models which are trained with images of different resolution, while the annotations next to markers that correspond to models indicate the average number of locations n c . In the first row of Fig. 6, we depict the performance of models M 2 and BL 2 . First, we would like to note that we observe the trade-off between accuracy and computatinal complexity that we have identified in Fig. 4 . When the number of required FLOPs increases by processing inputs of higher resolution, or by attending a bigger number of locations, accuracy increases as well. By inspecting the behavior of individual models, we observe that BL lack coarse information, and we expect this phenomenon to become even more intense for models BL 1, since they have smaller receptive field. Indeed, as we can see in the second row of Fig. 6, the performance gap between M 1 and BL 1 models is bigger. M We would also like to comment on the behavior of models M ml 2 and M ml 1 that we provide in the graphs of Fig. 6 . We observe that both M ml 2 and M ml 1 are able to maintain comparable performance to models M r 2 and M r 1 respectively, in all processing levels. This shows that we are able to adjust the computational requirements of our models during testing time, by controlling the number of processing levels that we allow them to go through. This is beneficial when we face constraints in the available computational resources, but also when we are processing images which vary in difficultly. Easier images can be classified after a few processing levels, while for harder ones we can extend processing to more levels. Finally, in Figure 7 we provide examples of attended image locations. We proposed a novel architecture that is able to process images of arbitrary resolution without sacrificing spatial information, as it typically happens with fully convolutional processing. This is achieved by approaching feature extraction as a top-down image pyramid traversal, that combines information from multiple different scales. 
The employed attention mechanism allows us to adjust the computational requirements of our models, by changing the number of locations they attend. This way we can exploit the existing trade-off between computational complexity and accuracy. Furthermore, by inspecting the image regions that our models attend, we are able to get important insights about the causes of their decisions. Finally, there are multiple future research directions that we would like to explore. These include the improvement of the localization capabilities of our attention mechanism, and the application of our model to the problem of budgeted batch classification. In addition, we would like our feature extraction process to become more adaptive, by allowing already extracted features to affect the processing of image regions that are attended later on. Figure 8 we provide the parsing tree that our model implicitly creates. Feature extraction module. It is a CNN that receives as input images of fixed resolution h × w × c, and outputs feature vectors of fixed size 1 × f . In the example of Fig. 2, h = w = 32. Location module. It receives as input feature maps of fixed size f h × f w × f c, and returns g probabilities, where g corresponds to the number of cells in the grid of candidate locations. In the example of Fig. 2, g = 4 since we are using a 2 × 2 grid. Aggregation module. It receives g vectors of fixed size 1 × f, and outputs a vector of fixed size 1 × f . The g input vectors describe the image regions inside a k × k grid. Image regions that were not selected by the location module, are described by zero vectors. The g input vectors are reorganized into a 1 × k × k × f tensor, according to the spatial arrangement of the image regions they describe. The tensor of the reorganized input vectors is used to produce the final output. Merging module. It receives two vectors of fixed size 1 × f, concatenates them into a single vector of size 1 × 2f, and outputs a vector of fixed size 1 × f . Classification module. It receives as input a vector of fixed size 1 × f, and outputs logits of fixed size 1 × c, where c is the number of classes. The logits are fed into a softmax layer to yield class probabilities. The feature extraction process of our model can be described with the following recursive equation: where ⊕ denotes the merging module, agg(·) denotes the aggregation module, loc(·) denotes the outcome of the Bernoulli sampling that is based on the output of the location module, and l Vn is the set of indexes that denote the candidate locations related to the image region described by V n. When recursion ends, V m = V m ∀m. The REINFORCE rule naturally emerges if we optimize the log likelihood of the labels, while considering the attended locations as latent variables . For a batch of N images, the log likelihood is given by the following relation: where x i is the ith image in the batch, y i is its label, and w are the parameters of our model. p(l i |x i, w) is the probability that the sequence of locations l i is attended for image x i, and p(y i |l i, x i, w) is the probability of predicting the correct label after attending l i. Equation 8 describes the log likelihood of the labels in terms of all location sequences that could be attended. Equation 8a shows that is the number of times the location module is applied while sequence l i is attended, and g is the number of candidate locations considered by the location module. 
In the example of We use Jensen's inequality in equation 8 to derive the following lower bound on the log likelihood: By maximizing the lower bound F, we expect to maximize the log likelihood. The update rule that we use is the partial derivative of F with respect to w, normalized by the number of images in the batch. We get: To derive equation 10 we used the log derivative trick. As we can see, for each image x i we need to calculate an expectation according to p(l i |x i, w). We approximate each expectation with a Monte Carlo estimator of M samples: We get samples from p(l i |x i, w) by repeating the processing of image x i. l i,m is the sequence of locations that is attended during the mth time we process image x i. In order to reduce the variance of the estimators, we use the baseline technique from. In particular, the baseline we use is the exponential moving average of the log likelihood, and is updated after the processing of each batch during training. Our baseline after the nth batch is the following: where x n i is the ith image in the nth batch, y n i is its label, and l i,n is the corresponding attended sequence of locations. Since we use M samples for the Monte Carlo estimator of each image, we simply consider that our batch has size NM to simplify the notation. Our updated learning rule is the following: For simplicity, we drop the indexes that indicate the batch we are processing. is the learning rule we presented in, and this concludes our derivation. We provide the exact equations that we use to calculate quantities p l i t and p k in regularization terms R t and R r respectively. We have: Based on the notation we introduced in Appendix A.4.1, p l i t approximates the expected number of attended locations for image x i, by summing the probabilities from all Bernoulli distributions considered during a single processing of x i under l i. p k computes the average probability of attending the kth out of the g candidate locations, that the location module considers every time it is applied. The average is calculated by considering all the times the location module is applied during the processing of the N M images in our augmented batch. Finally, we would like to note that the values of c t and c r are interdependent. A.5.1 ARCHITECTURES In Tables 1, 2 and 3, we provide the exact architectures of models (M 3, BL 3), (M 2, BL 2) and (M 1, BL 1). We provide details about training one pair of models (M i, BL i) with one of the 3 MNIST-based datasets, and the exact same procedure applies to all pairs and to all datasets. In our first training session we optimize the cross entropy loss to train BL i, and we use learning rule for M i. We remind that because of the input resolution, M i goes through only 1 processing level, and behaves as a regular CNN which is consisted of the feature extraction module followed by the classification module. As a , the only term of learning rule that we actually use, is the first term of L F, and we end up optimizing the cross entropy of the labels. For both models we use the following hyperparameters. We use learning rate 0.01 that drops by a factor of 0.2 after the completion of 80% and 90% of the total number of training steps. We train for 70 epochs, and we use batches of size 128. We use the Adam optimizer with the default values of β 1 = 0.9, The architectures of models M 3 and BL 3 that we used in our experiments with MNIST. "GAP" denotes a global average pooling layer. 
The variable output sizes of the baseline CNN are approximately calculated to preserve the clarity of our tables. Based on these approximate output sizes, the computation of the number of FLOPs in the corresponding layers is approximate as well. β 2 = 0.999 and = 10 −8. We use xavier initialization for the weights, and zero initialization for the biases. For regularization purposes, we use the following form of data augmentation. Every time we load an image, we randomly change the position of the digit inside the black canvas, as well as the noise pattern and the , in case we are using noisy MNIST or textured MNIST. This data augmentation strategy and the aforementioned hyperparameter's values, are used in the other two training sessions as well. In the second training session we again optimize the cross entropy loss for BL i, and we use learning rule for M i. The resolution of the input allows M i to go through 2 processing levels, and all terms of contribute to the updates of our models' parameters. We would like to note that we stop the gradients' flow from the location module to the other modules, and as a , the second term of, as well as the regularization terms and, affect only the parameters of the location module. We do this to have better control on how the location module learns, since we experienced the problems we described in Section 4. In the same context, we set λ f = 10 −6, λ r = 10 −7 and λ t = 10 −7, which lead to very small updates at every training step. For our Monte Carlo estimators we use 2 samples. These hyperparameter values are used in the third training session as well. The models M i which are reported in Figure 4 and from this training session (green circles), are trained with c t = 2. In the third training session M i can go through 3 processing levels, and we can train it by using either learning rule, or gradual learning. Gradual learning evolves in 2 stages, where in the first stage M i can go through 2 processing levels, while in the second stage it can go through 3. The first training stage is equivalent to the previous training session, since there is an equivalence between going through a different number of processing levels and processing inputs of different resolution. Based on that, we can directly move to the second training stage of gradual learning, by initializing the variables of M i with the learned parameters from one of the M 28 i models that we already trained. Gradual learning creates an imbalance in terms of how M i and BL i are trained, since it evolves in multiple stages. However, if we see gradual learning as a method of training models with images of gradually increasing resolution, it can be applied to the baselines as well. Based on this, we can i, and then apply our standard optimization to the cross entropy loss. In practice, we observe that baselines are almost always benefited by gradual learning, while in some cases our models achieve higher accuracy when they are trained from scratch with learning rule. In general, gradual learning and most modifications to the initial learning rule, ed from our experimentation with ImageNet, which is a much bigger and more diverse dataset of natural images. Consequently, our on the MNIST-based datasets remain almost unchanged even if we simplify our training procedure, e.g. by excluding term from our training rule. 
However, we kept our training procedure consistent both on the MNIST-based datasets and on ImageNet, we experimented both with gradual learning and with training from scratch at every session, and in the of Figure 4 we report the best performing models. Finally, the models M i which are reported in Figure 4 and from this training session (red circles), are trained with c t = 6 and c t = 12. In Figure 4 (e), BL 56 2 achieves higher accuracy compared to BL 28 2, which wasn't the case in (b). We hypothesize that the fine information provided by the increased resolution, is valuable to disentangle the digits from the distracting , and outweighs the lack of coarse level features that stems from the receptive field size of BL 56 2. In addition, the differences in accuracy between M 1 and BL 1 models in Fig. 4 (f), are considerably bigger compared to the ones recorded in (c). This shows that our models are more robust to distracting textured compared to the baselines. In Fig. 4 (g . This is surprising, because M 56,6.1 3 is processing images of higher resolution, and it should be able to reach at least the same level of accuracy as M 28,2.2 3 . Our explanation for this observation is based on the nature of the data. As we can see in the example images that we provide in the last row of Figure 4, when the resolution of an image is reduced, the noise is blurred out. As a , when M 56,6.1 3 is processing images of higher resolution, it is processing more intense high frequency noise, and since it is using a limited number of locations, its accuracy drops compared to M . This phenomenon is observed in Fig. 4 respectively. The explanation we provided adds a new dimension to the understanding of our models, because so far we were treating fine information as something that is by default beneficial for classification, while high frequency noise of any form may require special consideration in our design and training choices. Tables 4 and 5, we provide the exact architectures of models (M 2, BL 2) and (M 1, BL 1). We provide details about training one pair of models (M i, BL i). We make a distinction between models M 1 and M 2 only when the process we follow, or the values of the hyperparameters that we use, differ. In our first training session we optimize the cross entropy loss to train BL i, and we use learning rule for M i. The base resolution of M i matches the size of the input images (32 × 32 px), and our model goes through only 1 processing level, without using the location module. As a , the only term of learning rule that we actually use, is the first term of L F, and we end up optimizing the cross entropy of the labels. For both models we use the following hyperparameters. We use learning rate 0.001 that drops by a factor of 0.2 after the completion of 80% and 90% of the total number of training steps. We train for 200 epochs, and we use batches of size 128. We use the Adam optimizer with the default values of β 1 = 0.9, β 2 = 0.999 and = 10 −8. We use xavier initialization for the weights, and zero initialization for the biases. For regularization purposes, we use data augmentation that is very similar to the one used by. In particular, given a training image, we get a random The architectures of models M 2 and BL 2 that we used in our experiments on ImageNet. "GAP" denotes a global average pooling layer. The variable output sizes of the baseline CNN are approximately calculated to preserve the clarity of our tables. 
Based on these approximate output sizes, the computation of the number of FLOPs in the corresponding layers is approximate as well. crop that covers at least 85% of the image area, while it has an aspect ration between 0.5 and 2.0. Since we provide inputs of fixed size to our networks, we resize the image crops appropriately. The resizing is performed by randomly selecting between bilinear, nearest neighbor, bicubic, and area interpolation. Also, we randomly flip the resized image crops horizontally, and we apply photometric distortions according to. The final image values are scaled between −1 and 1. This data augmentation strategy and the aforementioned hyperparameter's values, are used in the other training sessions as well, and in the stages of multi-level learning that we describe later. In the second training session we again optimize the cross entropy loss for BL i, and we use learning rule for M i. The resolution of the input allows M i to go through 2 processing levels, and all terms of contribute to the updates of our models' parameters. As we described in Appendix A.5.2, we stop the gradients' flow from the location module to the other modules. We set λ f = 10 −8, λ r = 10 −9 and λ t = 10 −9, and for our Monte Carlo estimators we use 2 samples. These hyperparameter values are used in the two remaining training sessions as well. The models M 64 i which are reported in Figure 6 are trained with c t ∈ {1.5, 4.5}. In the third training session we use gradual learning for M i, by initializing its variables with the learned parameters of a M 64 i model. The models M 128 2 which are reported in Figure 6 are trained with c t ∈ {3.75, 7, 13.5, 24.75}, and models M 128 1 with c t ∈ {3.75, 8, 11, 13.5, 30}. We use gradual learning for BL i as well, by initializing its parameters with those of BL 64 i before we apply the standard optimization to the cross entropy loss. In the fourth training session we again use gradual learning for M i, by initializing its variables with the learned parameters of a M with c t ∈ {30, 35, 40}. As in the previous session, we optimize the cross entropy loss to train BL i, and we initialize its parameters with those of BL 128 i.
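For concreteness, a minimal sketch of the input pipeline described above is given below. The original framework is not specified in this text, so torchvision is used here purely for illustration, and the photometric jitter strengths are placeholder assumptions; only the crop area (at least 85%), the aspect-ratio range, the random interpolation choice, the horizontal flip, and the rescaling to [-1, 1] come from the description above.

```python
# A hedged sketch of the data augmentation described above; framework choice and
# jitter strengths are assumptions, not the original implementation.
import random
from torchvision import transforms
from torchvision.transforms import InterpolationMode

class RandomInterpResizedCrop(transforms.RandomResizedCrop):
    """RandomResizedCrop that re-samples the interpolation mode for every image."""
    _MODES = [InterpolationMode.BILINEAR, InterpolationMode.NEAREST,
              InterpolationMode.BICUBIC, InterpolationMode.BOX]  # BOX ~ "area" resizing

    def forward(self, img):
        self.interpolation = random.choice(self._MODES)
        return super().forward(img)

def build_train_transform(input_size):
    # Crop covering >= 85% of the image area, aspect ratio in [0.5, 2.0], random flip,
    # photometric jitter (strengths below are placeholders), and rescaling to [-1, 1].
    return transforms.Compose([
        RandomInterpResizedCrop(input_size, scale=(0.85, 1.0), ratio=(0.5, 2.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
        transforms.ToTensor(),                                 # values in [0, 1]
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),   # map to [-1, 1]
    ])

# One transform per training session / input resolution.
train_transforms = {r: build_train_transform(r) for r in (32, 64, 128, 256)}
```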
We propose a novel architecture that traverses an image pyramid in a top-down fashion, visiting only the most informative regions along the way.
585
scitldr
Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters. We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases. We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer. We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism. We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function. Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities. Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values. Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems. We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs). Regularization hyperparameters such as weight decay, data augmentation, and dropout are crucial to the generalization of neural networks, but are difficult to tune. Popular approaches to hyperparameter optimization include grid search, random search BID3, and Bayesian optimization . These approaches work well with low-dimensional hyperparameter spaces and ample computational resources; however, they pose hyperparameter optimization as a black-box optimization problem, ignoring structure which can be exploited for faster convergence, and require many training runs. We can formulate hyperparameter optimization as a bilevel optimization problem. Let w denote parameters (e.g. weights and biases) and λ denote hyperparameters (e.g. dropout probability). Let L T and L V be functions mapping parameters and hyperparameters to training and validation losses, respectively. We aim to solve 1: DISPLAYFORM0 Substituting the best-response function w * (λ) = arg min w L T (λ, w) gives a single-level problem: DISPLAYFORM1 If the best-response w * is known, the validation loss can be minimized directly by gradient descent using Equation 2, offering dramatic speed-ups over black-box methods. However, as the solution to a high-dimensional optimization problem, it is difficult to compute w * even approximately. , we propose to approximate the best-response w * directly with a parametric functionŵ φ. We jointly optimize φ and λ, first updating φ so thatŵ φ ≈ w * in a neighborhood around the current hyperparameters, then updating λ by usingŵ φ as a proxy for w * in Eq. 2: DISPLAYFORM2 Finding a scalable approximationŵ φ when w represents the weights of a neural network is a significant challenge, as even simple implementations entail significant memory overhead. We show how to construct a compact approximation by modelling the best-response of each row in a layer's weight matrix/bias as a rank-one affine transformation of the hyperparameters. 
We show that this can be interpreted as computing the activations of a base network in the usual fashion, plus a correction term dependent on the hyperparameters. We justify this approximation by showing the exact best-response for a shallow linear network with L 2 -regularized Jacobian follows a similar structure. We call our proposed networks Self-Tuning Networks (STNs) since they update their own hyperparameters online during training. STNs enjoy many advantages over other hyperparameter optimization methods. First, they are easy to implement by replacing existing modules in deep learning libraries with "hyper" counterparts which accept an additional vector of hyperparameters as input 2. Second, because the hyperparameters are adapted online, we ensure that computational effort expended to fit φ around previous hyperparameters is not wasted. In addition, this online adaption yields hyperparameter schedules which we find empirically to outperform fixed hyperparameter settings. Finally, the STN training algorithm does not require differentiating the training loss with respect to the hyperparameters, unlike other gradient-based approaches , allowing us to tune discrete hyperparameters, such as the number of holes to cut out of an image BID12, data-augmentation hyperparameters, and discrete-noise dropout parameters. Empirically, we evaluate the performance of STNs on large-scale deep-learning problems with the Penn Treebank and CIFAR-10 datasets , and find that they substantially outperform baseline methods. A bilevel optimization problem consists of two sub-problems called the upper-level and lower-level problems, where the upper-level problem must be solved subject to optimality of the lower-level problem. Minimax problems are an example of bilevel programs where the upper-level objective equals the negative lower-level objective. Bilevel programs were first studied in economics to model leader/follower firm dynamics and have since found uses in various fields (see BID10 for an overview). In machine learning, many problems can be formulated as bilevel programs, including hyperparameter optimization, GAN training , meta-learning, and neural architecture search BID18.Even if all objectives and constraints are linear, bilevel problems are strongly NP-hard . Due to the difficulty of obtaining exact solutions, most work has focused on restricted settings, considering linear, quadratic, and convex functions. In contrast, we focus on obtaining local solutions in the nonconvex, differentiable, and unconstrained setting. Let F, f: R n × R m → R denote the upper-and lower-level objectives (e.g., L V and L T) and λ ∈ R n, w ∈ R m denote the upper-and lower-level parameters. We aim to solve: DISPLAYFORM0 subject to w ∈ arg min DISPLAYFORM1 It is desirable to design a gradient-based algorithm for solving Problem 4, since using gradient information provides drastic speed-ups over black-box optimization methods . The simplest method is simultaneous gradient descent, which updates λ using ∂F /∂λ and w using ∂f /∂w. However, simultaneous gradient descent often gives incorrect solutions as it fails to account for the dependence of w on λ. Consider the relatively common situation where F doesn't depend directly on λ, so that ∂F /∂λ ≡ 0 and hence λ is never updated. A more principled approach to solving Problem 4 is to use the best-response function . Assume the lower-level Problem 4b has a unique optimum w * (λ) for each λ. 
Substituting the best-response function w * converts Problem 4 into a single-level problem: DISPLAYFORM0 If w * is differentiable, we can minimize Eq. 5 using gradient descent on F * with respect to λ. This method requires a unique optimum w * (λ) for Problem 4b for each λ and differentiability of w *. In general, these conditions are difficult to verify. We give sufficient conditions for them to hold in a neighborhood of a point (λ 0, w 0) where w 0 solves Problem 4b given λ 0. Lemma 1. Let w 0 solve Problem 4b for λ 0. Suppose f is C 2 in a neighborhood of (λ 0, w 0) and the Hessian ∂ 2 f /∂w 2 (λ 0, w 0) is positive definite. Then for some neighborhood U of λ 0, there exists a continuously differentiable function w *: U → R m such that w * (λ) is the unique solution to Problem 4b for each λ ∈ U and w * (λ 0) = w 0.Proof. See Appendix B.1.The gradient of F * decomposes into two terms, which we term the direct gradient and the response gradient. The direct gradient captures the direct reliance of the upper-level objective on λ, while the response gradient captures how the lower-level parameter responds to changes in the upper-level parameter: DISPLAYFORM1 Even if ∂F /∂λ ≡ 0 and simultaneous gradient descent is possible, including the response gradient can stabilize optimization by converting the bilevel problem into a single-level one, as noted by for GAN optimization. Conversion to a single-level problem ensures that the gradient vector field is conservative, avoiding pathological issues described by. In general, the solution to Problem 4b is a set, but assuming uniqueness of a solution and differentiability of w * can yield fruitful algorithms in practice. In fact, gradient-based hyperparameter optimization methods can often be interpreted as approximating either the best-response w * or its Jacobian ∂w * /∂λ, as detailed in Section 5. However, these approaches can be computationally expensive and often struggle with discrete hyperparameters and stochastic hyperparameters like dropout probabilities, since they require differentiating the training loss with respect to the hyperparameters. Promising approaches to approximate w * directly were proposed by , and are detailed below.1. Global Approximation. The first algorithm proposed by approximates w * as a differentiable functionŵ φ with parameters φ. If w represents neural net weights, then the mappingŵ φ is a hypernetwork . If the distribution p(λ) is fixed, then gradient descent with respect to φ minimizes: DISPLAYFORM0 If support(p) is broad andŵ φ is sufficiently flexible, thenŵ φ can be used as a proxy for w * in Problem 5, ing in the following objective: min DISPLAYFORM1 2. Local Approximation. In practice,ŵ φ is usually insufficiently flexible to model w * on support(p). The second algorithm of locally approximates w * in a neighborhood around the current upper-level parameter λ. They set p(|σ) to a factorized Gaussian noise distribution with a fixed scale parameter σ ∈ R n +, and found φ by minimizing the objective: DISPLAYFORM2 Intuitively, the upper-level parameter λ is perturbed by a small amount, so the lower-level parameter learns how to respond. An alternating gradient descent scheme is used, where φ is updated to minimize equation 9 and λ is updated to minimize equation 8. This approach worked for problems using L 2 regularization on MNIST . However, it is unclear if the approach works with different regularizers or scales to larger problems. It requiresŵ φ, which is a priori unwieldy for high dimensional w. 
It is also unclear how to set σ, which defines the size of the neighborhood on which φ is trained, or if the approach can be adapted to discrete and stochastic hyperparameters. In this section, we first construct a best-response approximationŵ φ that is memory efficient and scales to large neural networks. We justify this approximation through analysis of simpler situations. Then, we describe a method to automatically adjust the scale of the neighborhood φ is trained on. Finally, we formally describe our algorithm and discuss how it easily handles discrete and stochastic hyperparameters. We call the ing networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs). We propose to approximate the best-response for a given layer's weight matrix W ∈ R Dout×Din and bias b ∈ R Dout as an affine transformation of the hyperparameters λ 3: DISPLAYFORM0 Here, indicates elementwise multiplication and row indicates row-wise rescaling. This architecture computes the usual elementary weight/bias, plus an additional weight/bias which has been scaled by a linear transformation of the hyperparameters. Alternatively, it can be interpreted as directly operating on the pre-activations of the layer, adding a correction to the usual pre-activation to account for the hyperparameters: DISPLAYFORM1 This best-response architecture is tractable to compute and memory-efficient: it requires D out (2D in + n) parameters to representŴ φ and D out (2 + n) parameters to representb φ, where n is the number of hyperparameters. Furthermore, it enables parallelism: since the predictions can be computed by transforming the pre-activations (Equation 11), the hyperparameters for different examples in a batch can be perturbed independently, improving sample efficiency. In practice, the approximation can be implemented by simply replacing existing modules in deep learning libraries with "hyper" counterparts which accept an additional vector of hyperparameters as input 4. Given that the best-response function is a mapping from R n to the high-dimensional weight space R m, why should we expect to be able to represent it compactly? And why in particular would equation 10 be a reasonable approximation? In this section, we exhibit a model whose best-response function can be represented exactly using a minor variant of equation 10: a linear network with Jacobian norm regularization. In particular, the best-response takes the form of a network whose hidden units are modulated conditionally on the hyperparameters. Consider using a 2-layer linear network with weights w = (Q, s) ∈ R D×D × R D to predict targets t ∈ R from inputs x ∈ R D: a(x; w) = Qx, y(x; w) = s a(x; w)Suppose we use a squared-error loss regularized with an L 2 penalty on the Jacobian ∂y /∂x, where the penalty weight λ lies in R and is mapped using exp to lie R +: DISPLAYFORM0 Theorem 2. Let w 0 = (Q 0, s 0), where Q 0 is the change-of-basis matrix to the principal components of the data matrix and s 0 solves the unregularized version of Problem 13 given DISPLAYFORM1 where σ is the sigmoid function. Proof. See Appendix B.2.Observe that y(x; w * (λ)) can be implemented as a regular network with weights w 0 = (Q 0, s 0) with an additional sigmoidal gating of its hidden units a(x; w * (λ)): DISPLAYFORM2 This architecture is shown in FIG0. Inspired by this example, we use a similar gating of the hidden units to approximate the best-response for deep, nonlinear networks. 
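For illustration, a minimal PyTorch sketch of a layer with this structure is given below. The class and attribute names (HyperLinear, w_scale, and so on) are assumptions made for this sketch rather than the authors' released code; the layer simply computes the usual elementary pre-activation plus a correction whose per-unit scale is a linear function of the hyperparameters, matching the parameter counts D_out(2 D_in + n) for the weights and D_out(2 + n) for the bias quoted above.

```python
# A sketch of the affine best-response layer of Eq. 10/11; names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """Linear layer whose effective weights and bias are affine in the hyperparameters."""
    def __init__(self, d_in, d_out, n_hparams):
        super().__init__()
        self.elem = nn.Linear(d_in, d_out)                      # W_elem, b_elem
        self.hyper = nn.Linear(d_in, d_out)                     # W_hyper, b_hyper
        self.w_scale = nn.Linear(n_hparams, d_out, bias=False)  # lambda -> row scales for W_hyper
        self.b_scale = nn.Linear(n_hparams, d_out, bias=False)  # lambda -> scales for b_hyper

    def forward(self, x, lam):
        # x: (batch, d_in); lam: (batch, n_hparams). Because the correction acts on the
        # pre-activations, each example in the batch may carry its own perturbed lambda.
        base = self.elem(x)
        correction = (self.w_scale(lam) * F.linear(x, self.hyper.weight)
                      + self.b_scale(lam) * self.hyper.bias)
        return base + correction

# With the scaling branches at zero, the layer reduces to an ordinary nn.Linear.
```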
The sigmoidal gating architecture of the preceding section can be further simplified if one only needs to approximate the best-response function for a small range of hyperparameter values. In particular, for a narrow enough hyperparameter distribution, a smooth best-response function can be approximated by an affine function (i.e. its first-order Taylor approximation). Hence, we replace the sigmoidal gating with linear gating, in order that the weights be affine in the hyperparameters. The following theorem shows that, for quadratic lower-level objectives, using an affine approximation to the best-response function and minimizing E ∼p(|σ) [f (λ +,ŵ φ (λ +))] yields the correct best-response Jacobian, thus ensuring gradient descent on the approximate objective F (λ,ŵ φ (λ)) converges to a local optimum: DISPLAYFORM0 is Gaussian with mean 0 and variance DISPLAYFORM1 Proof. See Appendix B.3. The effect of the sampled neighborhood. Left: If the sampled neighborhood is too small (e.g., a point mass) the approximation learned will only match the exact best-response at the current hyperparameter, with no guarantee that its gradient matches that of the best-response. Middle: If the sampled neighborhood is not too small or too wide, the gradient of the approximation will match that of the best-response. Right: If the sampled neighborhood is too wide, the approximation will be insufficiently flexible to model the best-response, and again the gradients will not match. The entries of σ control the scale of the hyperparameter distribution on which φ is trained. If the entries are too large, thenŵ φ will not be flexible enough to capture the best-response over the samples. However, the entries must remain large enough to forceŵ φ to capture the shape locally around the current hyperparameter values. We illustrate this in FIG1. As the smoothness of the loss landscape changes during training, it may be beneficial to vary σ. To address these issues, we propose adjusting σ during training based on the sensitivity of the upperlevel objective to the sampled hyperparameters. We include an entropy term weighted by τ ∈ R + which acts to enlarge the entries of σ. The ing objective is: DISPLAYFORM0 This is similar to a variational inference objective, where the first term is analogous to the negative log-likelihood, but τ = 1. As τ ranges from 0 to 1, our objective interpolates between variational optimization and variational inference, as noted by. Similar objectives have been used in the variational inference literature for better training BID5 and representation learning .Minimizing the first term on its own eventually moves all probability mass towards an optimum λ *, ing in σ = 0 if λ * is an isolated local minimum. This compels σ to balance between shrinking to decrease the first term while remaining sufficiently large to avoid a heavy entropy penalty. When benchmarking our algorithm's performance, we evaluate F (λ,ŵ φ (λ)) at the deterministic current hyperparameter λ 0. (This is a common practice when using stochastic operations during training, such as batch normalization or dropout.) We now describe the complete STN training algorithm and discuss how it can tune hyperparameters that other gradient-based algorithms cannot, such as discrete or stochastic hyperparameters. We use an unconstrained parametrization λ ∈ R n of the hyperparameters. 
Let r denote the element-wise function which maps λ to the appropriate constrained space, which will involve a non-differentiable discretization for discrete hyperparameters. Let L T and L V denote training and validation losses which are (possibly stochastic, e.g., if using dropout) functions of the hyperparameters and parameters. Define functions f, F by f (λ, w) = L T (r(λ), w) and F (λ, w) = L V (r(λ), w). STNs are trained by a gradient descent scheme which alternates between updating φ for T train steps to minimize E ∼p(|σ) [f (λ +,ŵ φ (λ +))] (Eq. 9) and updating λ and σ for T valid steps to minimize DISPLAYFORM0 (Eq. 15). We give our complete algorithm as Algorithm 1 and show how it can be implemented in code in Appendix G. The possible non-differentiability of r due to discrete hyperparameters poses no problem. To estimate the derivative of E ∼p(|σ) [f (λ +,ŵ φ (λ +))] with respect to φ, we can use the reparametrization trick and compute ∂f /∂w and ∂ŵ φ/∂φ, neither of whose computation paths involve the discretization r. DISPLAYFORM1 with respect to a discrete hyperparameter λ i, there are two cases we must consider: Initialize: Best-response approximation parameters φ, hyperparameters λ, learning rates DISPLAYFORM0 while not converged do DISPLAYFORM1 Case 1: For most regularization schemes, L V and hence F does not depend on λ i directly and thus the only gradient is throughŵ φ. Thus, the reparametrization gradient can be used. Case 2: If L V relies explicitly on λ i, then we can use the REINFORCE gradient estimator BID15 to estimate the derivative of the expectation with respect to λ i. The number of hidden units in a layer is an example of a hyperparameter that requires this approach since it directly affects the validation loss. We do not show this in Algorithm 1, since we do not tune any hyperparameters which fall into this case. We applied our method to convolutional networks and LSTMs , yielding self-tuning CNNs (ST-CNNs) and self-tuning LSTMs (ST-LSTMs). We first investigated the behavior of STNs in a simple setting where we tuned a single hyperparameter, and found that STNs discovered hyperparameter schedules that outperformed fixed hyperparameter values. Next, we compared the performance of STNs to commonly-used hyperparameter optimization methods on the CIFAR-10 and PTB datasets. Due to the joint optimization of the hypernetwork weights and hyperparameters, STNs do not use a single, fixed hyperparameter during training. Instead, STNs discover schedules for adapting the hyperparameters online, which can outperform any fixed hyperparameter. We examined this behavior in detail on the PTB corpus using an ST-LSTM to tune the output dropout rate applied to the hidden units. The schedule discovered by an ST-LSTM for output dropout, shown in Figure 3, outperforms the best, fixed output dropout rate (0.68) found by a fine-grained grid search, achieving 82.58 vs 85.83 validation perplexity. We claim that this is a consequence of the schedule, and not of regularizing effects from sampling hyperparameters or the limited capacity ofŵ φ.To rule out the possibility that the improved performance is due to stochasticity introduced by sampling hyperparameters during STN training, we trained a standard LSTM while perturbing its dropout rate around the best value found by grid search. We used random Gaussian perturbations, and sinusoid perturbations for a cyclic regularization schedule. 
STNs outperformed both perturbation methods (Table 2 : Final validation and test performance of each method on the PTB word-level language modeling task, and the CIFAR-10 image-classification task. To determine whether the limited capacity ofŵ φ acts as a regularizer, we trained a standard LSTM from scratch using the schedule for output dropout discovered by the ST-LSTM. Using this schedule, the standard LSTM performed nearly as well as the STN, providing evidence that the schedule itself (rather than some other aspect of the STN) was responsible for the improvement over a fixed dropout rate. To further demonstrate the importance of the hyperparameter schedule, we also trained a standard LSTM from scratch using the final dropout value found by the STN (0.78), and found that it did not perform as well as when following the schedule. The final validation and test perplexities of each variant are shown in TAB0.Next, we show in Figure 3 that the STN discovers the same schedule regardless of the initial hyperparameter values. Because hyperparameters adapt over a shorter timescale than the weights, we find that at any given point in training, the hyperparameter adaptation has already equilibrated. As shown empirically in Appendix F, low regularization is best early in training, while higher regularization is better later on. We found that the STN schedule implements a curriculum by using a low dropout rate early in training, aiding optimization, and then gradually increasing the dropout rate, leading to better generalization. We evaluated an ST-LSTM on the PTB corpus , which is widely used as a benchmark for RNN regularization due to its small size (; ; BID14 . We used a 2-layer LSTM with 650 hidden units per layer and 650-dimensional word embeddings. We tuned 7 hyperparameters: variational dropout rates for the input, hidden state, and output; embedding dropout (that sets rows of the embedding matrix to 0); DropConnect on the hidden-to-hidden weight matrix; and coefficients α and β that control the strength of activation regularization and temporal activation regularization, respectively. For LSTM tuning, we obtained the best when using a fixed perturbation scale of 1 for the hyperparameters. Additional details about the experimental setup and the role of these hyperparameters can be found in Appendix D. We compared STNs to grid search, random search, and Bayesian optimization. 6 FIG2 shows the best validation perplexity achieved by each method over time. STNs outperform other meth- ods, achieving lower validation perplexity more quickly. The final validation and test perplexities achieved by each method are shown in Table 2. We show the schedules the STN finds for each hyperparameter in FIG2; we observe that they are nontrivial, with some forms of dropout used to a greater extent at the start of training (including input and hidden dropout), some used throughout training (output dropout), and some that are increased over the course of training (embedding and weight dropout). We evaluated ST-CNNs on the CIFAR-10 dataset, where it is easy to overfit with high-capacity networks. We used the AlexNet architecture , and tuned: continuous hyperparameters controlling per-layer activation dropout, input dropout, and scaling noise applied to the input, discrete data augmentation hyperparameters controlling the length and number of cut-out holes , and continuous data augmentation hyperparameters controlling the amount of noise to apply to the hue, saturation, brightness, and contrast of an image. 
In total, we considered 15 hyperparameters. We compared STNs to grid search, random search, and Bayesian optimization. FIG4 shows the lowest validation loss achieved by each method over time, and Table 2 shows the final validation and test losses for each method. Details of the experimental setup are provided in Appendix E. Again, STNs find better hyperparameter configurations in less time than other methods. The hyperparameter schedules found by the STN are shown in FIG3. Bilevel Optimization. BID10 provide an overview of bilevel problems, and a comprehensive textbook was written by BID1. When the objectives/constraints are restricted to be linear, quadratic, or convex, a common approach replaces the lower-level problem with its KKT conditions added as constraints for the upper-level problem . In the unrestricted setting, our work loosely resembles trust-region methods BID9, which repeatedly approximate the problem locally using a simpler bilevel program. In closely related work, used evolutionary techniques to estimate the best-response function iteratively. Hypernetworks. First considered by Schmidhuber (1993; BID15, hypernetworks are functions mapping to the weights of a neural net. Predicting weights in CNNs has been developed in various forms BID11 BID16 . used hypernetworks to generate weights for modern CNNs and RNNs. BID6 used hypernetworks to globally approximate a bestresponse for architecture search. Because the architecture is not optimized during training, they require a large hypernetwork, unlike ours which locally approximates the best-response. Gradient-Based Hyperparameter Optimization. There are two main approaches. The first approach approximates w * (λ 0) using w T (λ 0, w 0), the value of w after T steps of gradient descent on f with respect to w starting at (λ 0, w 0). The descent steps are differentiated through to approximate ∂w * /∂λ(λ 0) ≈ ∂w T /∂λ(λ 0, w 0). This approach was proposed by BID13 and used by , and. The second approach uses the Implicit Function Theorem to derive ∂w * /∂λ(λ 0) under certain conditions. This was first developed for hyperparameter optimization in neural networks and developed further by. Similar approaches have been used for hyperparameter optimization in log-linear models , kernel selection BID8 ), and image reconstruction (; BID7 . Both approaches struggle with certain hyperparameters, since they differentiate gradient descent or the training loss with respect to the hyperparameters. In addition, differentiating gradient descent becomes prohibitively expensive as the number of descent steps increases, while implicitly deriving ∂w * /∂λ requires using Hessian-vector products with conjugate gradient solvers to avoid directly computing the Hessian. Model-Based Hyperparameter Optimization. A common model-based approach is Bayesian optimization, which models p(r|λ, D), the conditional probability of the performance on some metric r given hyperparameters λ and a dataset D = {(λ i, r i)}. We can model p(r|λ, D) with various methods (; ; ; . D is constructed iteratively, where the next λ to train on is chosen by maximizing an acquisition function C(λ; p(r|λ, D)) which balances exploration and exploitation. Training each model to completion can be avoided if assumptions are made on learning curve behavior . 
These approaches require building inductive biases into p(r|λ, D) which may not hold in practice, do not take advantage of the network structure when used for hyperparameter optimization, and do not scale well with the number of hyperparameters. However, these approaches have consistency guarantees in the limit, unlike ours. Model-Free Hyperparameter Optimization. Model-free approaches include grid search and random search. BID3 advocated using random search over grid search. Successive Halving and Hyperband extend random search by adaptively allocating resources to promising configurations using multi-armed bandit techniques. These methods ignore structure in the problem, unlike ours which uses rich gradient information. However, it is trivial to parallelize model-free methods over computing resources and they tend to perform well in practice. Hyperparameter Scheduling. Population Based Training (PBT) considers schedules for hyperparameters. In PBT, a population of networks is trained in parallel. The performance of each network is evaluated periodically, and the weights of under-performing networks are replaced by the weights of better-performing ones; the hyperparameters of the better network are also copied and randomly perturbed for training the new network clone. In this way, a single model can experience different hyperparameter settings over the course of training, implementing a schedule. STNs replace the population of networks by a single best-response approximation and use gradients to tune hyperparameters during a single training run. We introduced Self-Tuning Networks (STNs), which efficiently approximate the best-response of parameters to hyperparameters by scaling and shifting their hidden units. This allowed us to use gradient-based optimization to tune various regularization hyperparameters, including discrete hyperparameters. We showed that STNs discover hyperparameter schedules that can outperform fixed hyperparameters. We validated the approach on large-scale problems and showed that STNs achieve better generalization performance than competing approaches, in less time. We believe STNs offer a compelling path towards large-scale, automated hyperparameter tuning for neural networks. We thank Matt Johnson for helpful discussions and advice. MM is supported by an NSERC CGS-M award, and PV is supported by an NSERC PGS-D award. RG acknowledges support from the CIFAR Canadian AI Chairs program. Best-response of the parameters to the hyperparameters The (validation loss) direct (hyperparameter) gradient DISPLAYFORM0 The (elementary parameter) response gradient DISPLAYFORM1 The (validation loss) response gradient DISPLAYFORM2 The hyperparameter gradient: a sum of the validation losses direct and response gradients B PROOFS B.1 LEMMA 1Because w 0 solves Problem 4b given λ 0, by the first-order optimality condition we must have: DISPLAYFORM3 The Jacobian of ∂f /∂w decomposes as a block matrix with sub-blocks given by: DISPLAYFORM4 We know that f is C 2 in some neighborhood of (λ 0, w 0), so ∂f /∂w is continuously differentiable in this neighborhood. By assumption, the Hessian ∂ 2 f /∂w 2 is positive definite and hence invertible at (λ 0, w 0). 
By the Implicit Function Theorem, there exists a neighborhood V of λ 0 and a unique continuously differentiable function w *: V → R m such that ∂f /∂w(λ, w * (λ)) = 0 for λ ∈ V and w * (λ 0) = w 0.Furthermore, by continuity we know that there is a neighborhood DISPLAYFORM5 Combining this with ∂f /∂w(λ, w * (λ)) = 0 and using second-order sufficient optimality conditions, we conclude that w * (λ) is the unique solution to Problem 4b for all λ ∈ U. This discussion mostly follows from. We let X ∈ R N ×D denote the data matrix where N is the number of training examples and D is the dimensionality of the data. We let t ∈ R N denote the associated targets. We can write the SVD decomposition of X as: DISPLAYFORM0 where U and V are N × D and D × D orthogonal matrices and D is a diagonal matrix with entries DISPLAYFORM1 We next simplify the function y(x; w) by setting u = s Q, so that y(x; w) = s Qx = u x. We see that the Jacobian ∂y /∂x ≡ u is constant, and Problem 13 simplifies to standard L 2 -regularized least-squares linear regression with the following loss function: DISPLAYFORM2 It is well-known (see , Chapter 3) that the optimal solution u * (λ) minimizing Equation 19 is given by: DISPLAYFORM3 Furthermore, the optimal solution u * to the unregularized version of Problem 19 is given by: DISPLAYFORM4 Recall that we defined Q 0 = V, i.e., the change-of-basis matrix from the standard basis to the principal components of the data matrix, and we defined s 0 to solve the unregularized regression problem given Q 0. Thus, we require that Q 0 s 0 = u * which implies s 0 = D −1 U t. There are not unique solutions to Problem 13, so we take any functions Q(λ), s(λ) which satisfy Q(λ) s(λ) = v * (λ) as "best-response functions". We will show that our chosen functions Q * (λ) = σ(λv + c) row Q 0 and s * (λ) = s 0, where v = −1 and c i = 2 log(d i) for i = 1,..., D, meet this criteria. We start by noticing that for any d ∈ R +, we have: DISPLAYFORM5 It follows that: DISPLAYFORM6... DISPLAYFORM7... DISPLAYFORM8... DISPLAYFORM9 B.3 THEOREM 3By assumption f is quadratic, so there exist A ∈ R n×n, B ∈ R n×m, C ∈ R m×m and d ∈ R n, e ∈ R m such that: DISPLAYFORM10 One can easily compute that: DISPLAYFORM11 Since we assume ∂ 2 f /∂w 2 0, we must have C 0. Setting the derivative equal to 0 and using second-order sufficient conditions, we have: DISPLAYFORM12 Hence, we find: DISPLAYFORM13 We letŵ φ (λ) = U λ + b, and definef to be the function given by: DISPLAYFORM14 Substituting and simplifying: DISPLAYFORM15 Expanding, we find that equation 36 is equal to: DISPLAYFORM16 where we have: DISPLAYFORM17 We can simplify these expressions considerably by using linearity of expectation and that ∼ p(|σ) has mean 0: DISPLAYFORM18 We can use the cyclic property of the Trace operator, E ∼p(|σ) [] = σ 2 I, and commutability of expectation and a linear operator to simplify the expectations of 2 and 3: DISPLAYFORM19 We can then differentiatef by making use of various matrix-derivative equalities to find: DISPLAYFORM20 Setting the derivative ∂f /∂b(λ 0, U, b, σ) equal to 0, we have: DISPLAYFORM21 Setting the derivative for ∂f /∂U(λ 0, U, b, σ) equal to 0, we have: DISPLAYFORM22 DISPLAYFORM23 DISPLAYFORM24 This is exactly the best-response Jacobian ∂w * /∂λ(λ) as given by Equation 34. Substituting U = C −1 B into the equation 50 gives: DISPLAYFORM25 This is w * (λ 0) − ∂w * /∂λ(λ 0), thus the approximate best-response is exactly the first-order Taylor series of w * about λ 0. 
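Since the quadratic form itself is not reproduced above, the short numerical check below assumes the standard parameterization f(λ, w) = ½λᵀAλ + λᵀBw + ½wᵀCw + dᵀλ + eᵀw (an assumption for this sketch). Under it, the exact best-response w*(λ) = −C⁻¹(Bᵀλ + e) is affine in λ with constant Jacobian −C⁻¹Bᵀ, which is what makes an affine ŵ_φ exact in this case; A and d do not enter w*.

```python
# Numerical sanity check of the quadratic case, under the assumed standard form above.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4                                  # hyperparameter and parameter dimensions
B = rng.normal(size=(n, m))
M = rng.normal(size=(m, m))
C = M @ M.T + m * np.eye(m)                  # positive definite Hessian w.r.t. w
e = rng.normal(size=m)

def w_star(lmbda):
    # Exact best-response under the assumed quadratic form.
    return -np.linalg.solve(C, B.T @ lmbda + e)

jac_closed_form = -np.linalg.solve(C, B.T)   # constant Jacobian: -C^{-1} B^T
lmbda0 = rng.normal(size=n)
eps = 1e-6
jac_fd = np.stack(
    [(w_star(lmbda0 + eps * np.eye(n)[i]) - w_star(lmbda0 - eps * np.eye(n)[i])) / (2 * eps)
     for i in range(n)],
    axis=1,
)
print(np.allclose(jac_fd, jac_closed_form, atol=1e-5))  # True: w* is affine in lambda
```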
updated the model parameters, but did not update hyperparameters. We terminated training when the learning rate dropped below 0.0003.We tuned variational dropout (re-using the same dropout mask for each step in a sequence) on the input to the LSTM, the hidden state between the LSTM layers, and the output of the LSTM. We also tuned embedding dropout, which sets entire rows of the word embedding matrix to 0, effectively removing certain words from all sequences. We regularized the hidden-to-hidden weight matrix using DropConnect (zeroing out weights rather than activations) . Because DropConnect operates directly on the weights and not individually on the mini-batch elements, we cannot use independent perturbations per example; instead, we sample a single DropConnect rate per mini-batch. Finally, we used activation regularization (AR) and temporal activation regularization (TAR). AR penalizes large activations, and is defined as: DISPLAYFORM26 where m is a dropout mask and h t is the output of the LSTM at time t. TAR is a slowness regularizer, defined as: DISPLAYFORM27 For AR and TAR, we tuned the scaling coefficients α and β. For the baselines, the hyperparameter ranges were: [0, 0.95] for the dropout rates, and for α and β. For the ST-LSTM, all the dropout rates and the coefficients α and β were initialized to 0.05 (except in Figure 3, where we varied the output dropout rate). Here, we present additional details on the CNN experiments. For all , we held out 20% of the training data for validation. We trained the baseline CNN using SGD with initial learning rate 0.01 and momentum 0.9, on mini-batches of size 128. We decay the learning rate by 10 each time the validation loss fails to decrease for 60 epochs, and end training if the learning rate falls below 10 −5 or validation loss has not decreased for 75 epochs. For the baselines-grid search, random search, and Bayesian optimization-the search spaces for the hyperparameters were as follows: dropout rates were in the range We trained the ST-CNN's elementary parameters using SGD with initial learning rate 0.01 and momentum of 0.9, on mini-batches of size 128 (identical to the baselines). We use the same decay schedule as the baseline model. The hyperparameters are optimized using Adam with learning rate 0.003. We alternate between training the best-response approximation and hyperparameters with the same schedule as the ST-LSTM, i.e. T train = 2 steps on the training step and T valid = 1 steps on the validation set. Similarly to the LSTM experiments, we used five epochs of warm-up for the model parameters, during which the hyperparameters are fixed. We used an entropy weight of τ = 0.001 in the entropy regularized objective (Eq. 15). The cutout length was restricted to lie in {0, . . ., 24} while the number of cutout holes was restricted to lie in {0, . . ., 4}. All dropout rates, as well as the continuous data augmentation noise parameters, are initialized to 0.05. The cutout length is initialized to 4, and the number of cutout holes is initialized to 1. Overall, we found the ST-CNN to be relatively robust to the initialization of hyperparameters, but starting with low regularization aided optimization in the first few epochs. Here, we draw connections between hyperparameter schedules and curriculum learning. Curriculum learning BID2 ) is an instance of a family of continuation methods BID0, which optimize non-convex functions by solving a sequence of functions that are ordered by increasing difficulty. 
In a continuation method, one considers a family of training criteria C λ (w) with a parameter λ, where C 1 (w) is the final objective we wish to minimize, and C 0 (w) represents the training criterion for a simpler version of the problem. One starts by optimizing C 0 (w) and then gradually increases λ from 0 to 1, while keeping w at a local minimum of C λ (w) BID2 ). This has been hypothesized to both aid optimization and improve generalization. In this section, we explore how hyperparameter schedules implement a form of curriculum learning; for example, a schedule that increases dropout over time increases stochasticity, making the learning problem more difficult. We use the of grid searches to understand the effects of different hyperparameter settings throughout training, and show that greedy hyperparameter schedules can outperform fixed hyperparameter values. First, we performed a grid search over 20 values each of input and output dropout, and measured the validation perplexity in each epoch. FIG8 shows the validation perplexity achieved by different combinations of input and output dropout, at various epochs during training. We see that at the start of training, the best validation loss is achieved with small values of both input and output dropout. As we train for more epochs, the best validation performance is achieved with larger dropout rates. Next, we present a simple example to show the potential benefits of greedy hyperparameter schedules. For a single hyperparameter-output dropout-we performed a fine-grained grid search and constructed a dropout schedule by using the hyperparameter values that achieve the best validation perplexity at each epoch in training. As shown in FIG9, the schedule formed by taking the best output dropout value in each epoch yields better generalization than any of the fixed hyperparameter values from the initial grid search. In particular, by using small dropout values at the start of training, the schedule achieves a fast decrease in validation perplexity, and by using larger dropout later in training, it achieves better overall validation perplexity. FIG10 shows the perturbed values for output dropout we used to investigate whether the improved performance yielded by STNs is due to the regularization effect, and not the schedule, in Section 4.1. In this section, we provide PyTorch code listings for the approximate best-response layers used to construct ST-LSTMs and ST-CNNs: the HyperLinear and HyperConv2D classes. We also provide a simplified version of the optimization steps used on the training set and validation set.
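Separately from those listings, the greedy schedule construction described earlier in this section can be sketched as follows; the grid of dropout rates and the validation-perplexity table are synthetic stand-ins for the values recorded during the fine-grained grid search.

    import numpy as np

    rng = np.random.default_rng(0)
    dropout_grid = np.linspace(0.0, 0.95, 20)      # candidate output-dropout rates
    num_epochs = 40
    # val_ppl[i, e]: validation perplexity of the run with dropout_grid[i] at epoch e
    val_ppl = rng.uniform(60.0, 120.0, size=(len(dropout_grid), num_epochs))

    # Greedy schedule: at each epoch, use the dropout rate with the best validation perplexity.
    schedule = dropout_grid[np.argmin(val_ppl, axis=0)]   # shape (num_epochs,)

    # A fresh model is then trained while setting the output dropout rate to
    # schedule[epoch] at the start of each epoch.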
We use a hypernetwork to predict optimal weights given hyperparameters, and jointly train everything together.
Conditional Generative Adversarial Networks (cGANs) are finding increasingly widespread use in many application domains. Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity. In this setting, model benchmarking becomes a challenge, as each metric may indicate a different "best" model. In this paper, we propose the Frechet Joint Distance (FJD), which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric. We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics. Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities (e.g. class labels, object masks, bounding boxes, images, and text captions). We show that FJD can be used as a promising single metric for model benchmarking. The use of generative models is growing across many domains (van den c; ; ; ;). Among the most promising approaches, Variational Auto-Encoders (VAEs) , auto-regressive models (van den a; b), and Generative Adversarial Networks (GANs) have been driving significant progress, with the latter at the forefront of a wide-range of applications (; ; a; ; ; ;). In particular, significant research has emerged from practical applications, which require generation to be based on existing context. For example, tasks such as image inpainting, super-resolution, or text-to-image synthesis have been successfully addressed within the framework of conditional generation, with conditional GANs (cGANs) among the most competitive approaches. Despite these outstanding advances, quantitative evaluation of GANs remains a challenge . In the last few years, a significant number of evaluation metrics for GANs have been introduced in the literature (; ; Bińkowski et al., 2018; ; ; Kynkäänniemi et al., 2019;). Although there is no clear consensus on which quantitative metric is most appropriate to benchmark GAN-based models, Inception Score (IS) and Fréchet Inception Distance (FID) have been extensively used. However, both IS and FID were introduced in the context of unconditional image generation and, hence, focus on capturing certain desirable properties such as visual quality and sample diversity, which do not fully encapsulate all the different phenomena that arise during conditional image generation. In conditional generation, we care about visual quality, conditional consistency -i.e., verifying that the generation respects its conditioning, and intra-conditioning diversity -i.e., sample diversity per conditioning. Although visual quality is captured by both metrics, IS is agnostic to intra-conditioning diversity and FID only captures it indirectly. 1 Moreover, neither of them can capture conditional con-sistency. In order to overcome these shortcomings, researchers have resorted to reporting conditional consistency and diversity metrics in conjunction with ). Consistency metrics often use some form of concept detector to ensure that the requested conditioning appears in the generated image as expected. 
Although intuitive to use, these metrics require pretrained models that cover the same target concepts in the same format as the conditioning (i.e., classifiers for image-level class conditioning, semantic segmentation for mask conditioning, etc.), which may or may not be available off-the-shelf. Moreover, using different metrics to evaluate different desirable properties may hinder the process of model selection, as there may not be a single model that surpasses the rest in all measures. In fact, it has recently been demonstrated that there is a natural trade-off between image quality and sample diversity , which calls into question how we might select the correct balance of these properties. In this paper we introduce a new metric called Fréchet Joint Distance (FJD), which is able to implicitly assess image quality, conditional consistency, and intra-conditioning diversity. FJD computes the Fréchet distance on an embedding of the joint image-conditioning distribution, and introduces only small computational overhead over FID compared to alternative methods. We evaluate the properties of FJD on a variant of the synthetic dSprite dataset and verify that it successfully captures the desired properties. We provide an analysis on the behavior of both FID and FJD under different types of conditioning such as class labels, bounding boxes, and object masks, and evaluate a variety of existing cGAN models for real-world datasets with the newly introduced metric. Our experiments show that FJD captures the three highlighted properties of conditional generation; it can be applied to any kind of conditioning (e.g., class, bounding box, mask, image, text, etc.); and when applied to existing cGAN-based models, FJD demonstrates its potential to be used as a promising unified metric for hyper-parameter selection and cGAN benchmarking. To our knowledge, there are no existing metrics for conditional generation that capture all of these key properties. Conditional GANs have witnessed outstanding progress in recent years. Training stability has been improved through the introduction of techniques such as progressive growing, , spectral normalization and the two time-scale update rule . Architecturally, conditional generation has been improved through the use of auxiliary classifiers and the introduction of projection-based conditioning for the discriminator . Image quality has also benefited from the incorporation of self-attention (a), as well as increases in model capacity and batch size . All of this progress has led to impressive , paving the road towards the challenging task of generating more complex scenes. To this end, a flurry of works have tackled different forms of conditional image generation, including class-based (; ; ; ; ;), image-based (a; b; ; ;), mask-and bounding box-based (; ; ;, as well as text- (; ; 2018a; ;) and dialogue-based conditionings . This intensified research has lead to the development of a variety of metrics to assess the three factors of conditional image generation process quality, namely: visual quality, conditional consistency, and intra-conditioning diversity. Visual quality. A number of GAN evaluation metrics have emerged in the literature to assess visual quality of generated images in the case of unconditional image generation. 
Most of these metrics either focus on the separability between generated images and real images (; ; ;), compute the distance between distributions (; ;), assess sample quality and diversity from conditional or marginal distributions (; ;), measure the similarity between generated and real images (; ; ;) or are log-likelihood based 2. Among these, the most accepted automated visual quality metrics are Inception Score (IS) and Fréchet Inception Distance (FID) . Conditional consistency. To assess the consistency of the generated images with respect to model conditioning, researchers have reverted to available, pre-trained feed-forward models. The structure of these models depends on the modality of the conditioning (e.g. segmentation models are used for mask conditioning or image captioning models are applied to evaluate text conditioning). Moreover, the metric used to evaluate the forward model on the generated distribution depends on the conditioning modality and includes: accuracy in the case of class-conditioned generation, Intersection over Union when using bounding box-and mask-conditionings, BLEU , METEOR or CIDEr in the case of text-based conditionings, and Structural Similarity (SSIM) or peak signal-to-noise ratio (PSNR) for image-conditioning. Intra-conditioning diversity. The most common metric for evaluating sample diversity is Learned Perceptual Image Patch Similarity (LPIPS) (b), which measures the distance between samples in a learned feature space. Alternatively, proposed Intra-FID, which calculates a FID score separately for each conditioning and reports the average score over all conditionings. This method should in principle capture the desirable properties of image quality, conditional consistency, and intra-class diversity. However, it scales poorly with the number of unique conditions, as the computationally intensive FID calculation must be repeated for each case, and because FID behaves poorly when the sample size is small (Bińkowski et al., 2018). Furthermore, in cases where the conditioning cannot be broken down into a set of discrete classes (e.g., pixel-based conditioning), Intra-FID is intractable. As a , it has not been applied beyond class-conditioning. FID aims to compare the statistics of generated samples to samples from a real dataset. Given two multivariate Gaussian distributions N (µ, Σ) and N (μ,Σ), Fréchet Distance (FD) is defined as: When evaluating a generative model, N (µ, Σ) represents the data (reference) distribution, obtained by fitting a Gaussian to samples from a reference dataset, and N (μ,Σ) represents the learned (generated) distribution, a of fitting to samples from a generative model. In FID, both the real images and model samples are embedded in a learned feature space using a pre-trained Inception v3 model . Thus, the Gaussian distributions are defined in the embedded space. More precisely, given a dataset of images {x, a set of model samples, and an Inception embedding function f, we estimate the Gaussian parameters µ, Σ,μ andΣ as: In conditional image generation, a dataset is composed of image-condition pairs {(, where the conditioning can take variable forms, such as image-level classes, segmentation masks, or text. The goal of conditional image generation is to produce realistic looking, diverse imagesx that are consistent with the conditioningŷ. Thus, a set of model samples with corresponding conditioning can be defined as: . 
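Since the closed-form expressions above did not survive extraction, the following sketch spells out the standard Fréchet distance between two Gaussians and the empirical estimation of (µ, Σ); the image embedding function f (Inception v3) is left abstract, and the demo data at the end are synthetic.

    import numpy as np
    from scipy import linalg

    def frechet_distance(mu1, sigma1, mu2, sigma2):
        """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2), standard closed form:
           d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})
        """
        covmean = linalg.sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary parts from numerics
        return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2.0 * covmean))

    def gaussian_params(embeddings):
        """Fit mu and sigma to embedded samples, one sample per row."""
        return embeddings.mean(axis=0), np.cov(embeddings, rowvar=False)

    # FID: embed real images and model samples with Inception v3 (f), fit Gaussians, compare.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        real_feats = rng.normal(size=(1000, 8))               # stands in for f(real images)
        fake_feats = rng.normal(loc=0.5, size=(1000, 8))      # stands in for f(model samples)
        print(frechet_distance(*gaussian_params(real_feats), *gaussian_params(fake_feats)))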
As discussed in Section 3, the Fréchet distance (FD) compares any two Gaussians defined over arbitrary spaces. In FJD, we propose to compute the FD between two Gaussians defined over the joint image-conditioning embedding space. More precisely, given an image embedding function f, a conditioning embedding function h, a conditioning embedding scaling factor α, and a merging function g that combines the image embedding with the conditioning embedding into a joint one, we can estimate the respective Gaussian parameters µ, Σ,μ andΣ as: Note that by computing the FD over the joint image-conditioning distribution, we are able to simultaneously assess image quality, conditional consistency, and intra-conditioning diversity, all of which are important factors in evaluating the quality of conditional image generation models. To ensure reproducibility, when reporting FJD scores it is important to include details such as which conditioning embedding function was used, which dataset is used for the reference distribution, and the α value. We report these values for all of our experiments in Appendix B. Sentence-BERT The purpose of the embedding function h is to reduce the dimensionality and extract a useful feature representation of the conditioning. As such, the choice of h will vary depending on the modality of conditioning. In most cases, an off-the-shelf, pretrained embedding can be used for the purposes of extracting a useful representation. In the absence of preexisting, pretrained conditioning embedding functions, a new one should be learned. For example, for bounding box and mask conditionings the embedding function could be learned with an autoencoder. 3 For suggested assignments of conditioning modalities to embedding functions please refer to Table 1. In order to control the relative contribution of the image component and the conditioning component to the final FJD value, we scale the conditioning embedding by a constant α. In essence, α indicates how much we care about the conditioning component compared to the image component. When α = 0, the conditioning component is ignored and FJD is equivalent to FID. As the value of α increases, the perceived importance of the conditioning component is also increased and reflected accordingly in the ing measure. To equally weight the image component and the conditioning component, we recommend setting α to be equal to the ratio between the average L 2 norm of the image embedding and the conditioning embedding. This weighting ensures that FJD retains consistent behaviour across conditioning embeddings, even with varying dimensionality or magnitude. We note that α should be calculated on data from the reference distribution (real data distribution), and then applied to all conditioning embeddings thereafter. See Appendix F for an example of the effect of the α hyperparameter. 3 In the initial stages of this project, we also explored methods to bypass this additional training step by projecting a visual representation of bounding box or mask conditioning into an Inceptionv3 embedding space. However, the Inceptionv3 embedding may not properly capture object positions as it is trained to classify, discarding precise spatial information. Therefore, we consider autoencoders (AE) to be better suited to our setup since they are trained to recover both object appearance and spatial information from the embedded representation. The purpose of the merging function g is to combine the image embedding and conditioning embedding into a single joint embedding. 
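A minimal sketch of the FJD computation follows, reusing frechet_distance and gaussian_params from the FID sketch above. The merging function g is left as a parameter here (with concatenation as a default), and all array names are illustrative.

    import numpy as np

    def joint_embedding(img_emb, cond_emb, alpha,
                        merge=lambda a, b: np.concatenate([a, b], axis=1)):
        # g merges the image embedding f(x) with the scaled conditioning embedding alpha * h(y)
        return merge(img_emb, alpha * cond_emb)

    def fjd(real_img_emb, real_cond_emb, fake_img_emb, fake_cond_emb):
        # alpha balances the two components: ratio of average L2 norms, computed on the
        # reference (real) distribution only, then applied to all embeddings.
        alpha = (np.linalg.norm(real_img_emb, axis=1).mean()
                 / np.linalg.norm(real_cond_emb, axis=1).mean())
        real_joint = joint_embedding(real_img_emb, real_cond_emb, alpha)
        fake_joint = joint_embedding(fake_img_emb, fake_cond_emb, alpha)
        return frechet_distance(*gaussian_params(real_joint), *gaussian_params(fake_joint))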
We compared several candidate merging functions and found concatenation of the image embedding and conditioning embedding vectors to be most effective, both in terms of simplicity and performance. As such, concatenation is used as the merging function in all following experiments. In this section, we demonstrate that FJD captures the three desiderata of conditional image generation, namely image quality, conditional consistency and intra-conditioning diversity. dSprite-textures. The dSprite dataset is a synthetic dataset where each image depicts a simple 2D shape on a black . Each image can be fully described by a set of factors, including shape, scale, rotation, x position, and y position. We augment the dSprite dataset to create dSprite-textures by adding three texture patterns for each sample. Additionally, we include class labels indicating shape, as well as bounding boxes and mask labels for each sample (see Figure 1). In total, the dataset contains 2,211,840 unique images. This synthetic dataset allows us to exactly control our sample distribution and, thereby, simulate a generator with image-conditioning inconsistencies or reduced sample diversity. To embed the conditioning for calculating FJD in the following experiments, we use one-hot encoding for the class labels, and autoencoder representations for the bounding box and mask labels. 4 We are releasing the code to generate dSprite-textures. In this subsection, we aim to test the sensitivity of FJD to image quality perturbations. To do so, we draw 10k random samples from the dSprite-textures dataset to form a reference dataset. The generated dataset is simulated by duplicating the reference dataset and adding Gaussian noise drawn from N (0, σ) to the images, where σ ∈ [0, 0.25] and pixel values are normalized (and clipped after noise addition) to the range. The addition of noise mimics a generative model that produces low quality images. We repeat this experiment for all three conditioning types in dSprite-textures: class, bounding box, and mask. Results are shown in Figure 2, where we plot both FID and FJD as a function of the added Gaussian noise (σ is indicated on the x-axis as Noise Magnitude). We find that, in all cases, FJD has a very similar trend to FID, indicating that it successfully captures image quality. Additional image quality experiments on the large scale COCO-Stuff dataset can be found in Appendix C. In this subsection, we aim to highlight the sensitivity of FJD to conditional consistency. In particular, we target specific types of inconsistencies, such as incorrect scale, orientation, or position. We draw a set of 10k samples from the dSprite-textures dataset and duplicate it to represent the reference dataset and the generated dataset, each with identical image and conditioning marginal distributions. For 30% of the generated dataset samples we swap conditionings of pairs of samples that are identical in all but one of the attributes (scale, orientation, x position or y position). For example, if one generated sample has attribute x position 4 and a second generated sample has attribute x position 7, swapping their conditionings leads to generated samples that are offset by 3 pixels w.r.t. their ground truth x position. Swapping conditionings in this manner allows us to control for specific attributes' conditional consistency, while keeping the image and conditioning marginal distributions unchanged. As a , all changes in FJD can be attributed solely to conditional inconsistencies. 
Figure 3 depicts the of this experiment for four different types of alterations: scale, orientation, and x and y positions. We observe that the FID between image distributions (solid blue line) remains constant even as the degree of conditional inconsistency increases. For class conditioning (dotted orange line), FJD also remains constant, as changes to scale, orientation, and position are independent of the object class. Bounding box and mask conditionings, as they contain spatial information, produce variations in FJD that are proportional to the offset. Interestingly, for the orientation offsets, FJD with mask conditioning fluctuates rather than increasing monotonically. This behaviour is due to the orientation masks partially re-aligning with the ground truth around 90 • and 180 •. Each of these cases emphasize the effective sensitivity of FJD with respect to conditional consistency. Additional conditional consistency experiments with text conditioning can be found in Appendix D. In this subsection, we aim to test the sensitivity of FJD to intra-conditioning diversity 5, by alternating the per-conditioning image texture variability. More precisely, we vary the texture based on four different image attributes: shape that is captured in all tested conditionings, as well as scale, orientation and position that are captured by bounding box and mask conditionings only. To create attribute-texture assignments, we stratify attributes based on their values. For example, one possible shape-based stratification of a dataset with three shapes might be: [squares, ellipses, hearts]. To quantify the dataset intra-conditioning diversity, we introduce a diversity score. A diversity score of 1 means that the per-attribute texture distribution is uniform across stratas, while a diversity score of 0 means that each strata is assigned to a single texture. Middling diversity scores indicate that the textural distribution is skewed towards one texture type in each strata. We create our reference dataset by randomly drawing 10k samples. The generated distribution is created by duplicating the reference distribution and adjusting the per-attribute texture variability to achieve the desired diversity score. The of these experiments are shown in Figure 4, which plots the increase in FID and FJD, for different types of conditioning, as the diversity of textures within each subset decreases. For all tested scenarios, we observe that FJD is sensitive to intra-conditioning diversity changes. Moreover, not surprisingly, since a change in the joint distribution of attributes and textures also implies a change to the image marginal distribution, we observe that FID increases with reduced diversity. This experiment suggests that FID is able to capture intra-conditioning diversity changes when the image conditional distribution is also affected. However, if the image marginal distribution were to stay constant, FID would be blind to intra-conditioning diversity changes (as is shown in Section 5.3). In this section, we seek to demonstrate the application of FJD to evaluate models with several different conditioning modalities, in contrast to FID and standard conditional consistency and diversity metrics. We focus on testing class-conditioned, image-conditioned, and text-conditioned image generation tasks, which have been the focus of numerous works 6. Multi-label, bounding box, and mask conditioning are also explored in Appendix I. 
We note that FJD and FID yield similar rankings of models in this setting, which is to be expected since most models use similar conditioning mechanisms. Rankings are therefore dominated by image quality, rather than conditional consistency. We refer the reader to Appendix F and H for examples of cases where FJD ranks models differently than FID. Class-conditioned cGANs. . Accuracy is used to evaluate conditional consistency, and is computed as the Inception v3 accuracy of each model's generated samples, using their conditioning as classification ground truth. Class labels from the validation set are used as conditioning to generate 50k samples for each model, and the training set is used as the reference distribution. One-hot encoding is used to embed the class conditioning for the purposes of calculating FJD. We find that FJD follows the same trend as FID for class-conditioned models, preserving their ranking and highlighting the FJD's ability to capture image quality. Additionally, we note that the difference between FJD and FID correlates with each model's classification accuracy, with smaller gaps appearing to indicate better conditional consistency. Diversity scores, however, rank models in the opposite order compared to all other metrics. This behaviour evokes the trade-off between realism and diversity highlighted by. Ideally, we would like a model that produces diverse outputs, but this property is not as attractive if it also in a decrease in image quality. At what point should diversity be prioritized over image quality, and vice versa? FJD is a suitable metric for answering this question if the goal is to find a model that best matches the target conditional data generating distribution. Image-conditioned cGANs. In this setting we encounter some ambiguity with regards to model selection, as for all datasets, each metric ranks the models differently. BicycleGAN appears to have the best image quality, Pix2pix produces images that are most visually similar to the ground truth, and MSGAN and MUNIT achieve the best sample diversity scores. This scenario demonstrates the benefits of using a single unified metric for model selection, for which there is only a single best model. Text-conditioned cGANs. Table 4 shows FJD and FID scores for three state-of-the-art text-conditioned models trained on the Caltech-UCSD Birds 200 dataset (CUB-200) at 256 × 256 resolution: HDGan (c), StackGAN++ (a), and AttnGAN . Conditional consistency is evaluated using visual-semantic similarity, as proposed by Zhang et al. (2018c). Conditioning from the test set captions is used to generate 30k images, and the same test set is also used as the reference distribution. We use pre-computed Char-CNN-RNN sentence embeddings as the conditioning embedding for FJD, since they are commonly used with CUB-200 and are readily available. In this case we find that AttnGAN dominates in terms of conditional consistency compared to HDGan and StackGAN++, while all models are comparable in terms of diversity. AttnGAN is ranked best overall by FJD. In cases where the biggest differentiator between the models is image quality, FID and FJD will provide a consistent ranking as we see here. In cases where the trade-off is more subtle we believe practitioners will opt for a metric that measurably captures intra-conditioning diversity. In this paper we introduce Fréchet Joint Distance (FJD), which is able to assess image quality, conditional consistency, and intra-conditioning diversity within a single metric. 
We compare FJD to FID on the synthetic dSprite-textures dataset, validating its ability to capture the three properties of interest across different types of conditioning, and highlighting its potential to be adopted as a unified cGAN benchmarking metric. We also demonstrate how FJD can be used to address the potentially ambiguous trade-off between image quality and sample diversity when performing model selection. Looking forward, FJD could serve as valuable metric to ground future research, as it has the potential to help elucidate the most promising contributions within the scope of conditional generation. In this section, we illustrate the claim made in Section 1 that FID cannot capture intra-conditioning diversity when the joint distribution of two variables changes but the marginal distribution of one of them is not altered. Consider two multivariate Gaussian distributions, (X 1, Y 1) ∼ N (0, Σ 1) and (X 2, Y 2) ∼ N (0, Σ 2), where If we let X i take the role of the embedding of the conditioning variables (e.g., position) and Y i take the role of the embedding of the generated variables (i.e., images), then computing FID in this example would correspond to computing the FD between f Y1 and f Y2, which is zero. On the other hand, computing FJD would correspond to the FD between f X1,Y1 and f X2,Y2, which equals 0.678. But note that Dist1 and Dist2 have different degrees of intra-conditioning diversity, as illustrated by Figure 5 (right), where two histograms of f Yi|Xi∈(0.9,1.1) are displayed, showing marked differences to each other (similar plots can be constructed for other values of X i). Therefore, this example illustrates a situation in which FID is unable to capture changes in intra-conditioning diversity, while FJD is able to do so. Important details pertaining to the computation of the FID and FJD metrics for different experiments included in this paper are reported in Table 5. For each dataset we report which conditioning modality was used, as well as the conditioning embedding function. Information about which split and image resolution are used for the reference and generated distributions is also included, as well as how many samples were generated per conditioning. Values for α reported here are calculated according to the balancing mechanism recommended in Section 4.2. Datasets splits marked by "-" indicate that the distribution is a randomly sampled subset of the full dataset. We repeat the experiment initially conducted in Section 5.2 on a real world dataset to see how well FJD tracks image quality. Specifically, we use the COCO-Stuff dataset , which provides class labels, bounding box annotations, and segmentation masks. We follow the same experimental procedure as outlined in Section 5.2: Gaussian noise is drawn from N (0, σ) and add to the images, where σ ∈ [0, 0.25] and pixel values are normalized (and clipped after noise addition) to the range. The original dataset of clean images is used as the reference distribution, while noisy images are used to simulate a generated distribution with poor image quality. For the purposes of calculating FJD, we use N-hot encoding to embed the labels of the classes present in each image, and autoencoder representations for the bounding box and mask labels. As shown in Figure 6, FID and FJD both track image quality well, increasing as more noise is added to the generated image distribution. 
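Because the covariance matrices Σ1 and Σ2 in the example above were lost in extraction, the value 0.678 cannot be reproduced here; the sketch below instead uses stand-in covariances with identical Y-marginals to illustrate the same qualitative point numerically.

    import numpy as np
    from scipy import linalg

    sigma1 = np.array([[1.0, 0.9], [0.9, 1.0]])   # X and Y strongly correlated
    sigma2 = np.array([[1.0, 0.0], [0.0, 1.0]])   # X and Y independent; same Y-marginal
    mu = np.zeros(2)

    def fd(mu1, s1, mu2, s2):
        covmean = linalg.sqrtm(s1 @ s2).real
        return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

    # "FID": distance between the Y-marginals only -> 0, since the marginals are identical.
    fid_like = fd(mu[1:], sigma1[1:, 1:], mu[1:], sigma2[1:, 1:])
    # "FJD": distance between the joint (X, Y) distributions -> strictly positive.
    fjd_like = fd(mu, sigma1, mu, sigma2)
    print(fid_like, fjd_like)   # ~0.0 and > 0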
In order to test the effectiveness of FJD at detecting conditional inconsistencies in the text domain, we use the Caltech-UCSD Birds 200 dataset . This dataset is a common benchmark for text conditioned image generation models, containing 200 fine-grained bird categories, 11,788 images, and 10 descriptive captions per images. Also included in the dataset are vectors of detailed binary annotations describing the attributes of the bird in each image. Each annotation indicates the presence or absence of specific features, such as has\_bill\_shape::curved or has\_wing\_color::blue. Our goal in this experiment is to swap captions between images, and in this fashion introduce inconsistencies between images and their paired captions, while preserving the marginal distributions of images and labels. We compare attribute vectors belonging to each image using the Hamming distance to get an indication for how well the captions belonging to one image might describe another. Small Hamming distances indicate a good match between image and caption, while at larger values the captions appear to describe a very different bird than what is pictured (as demonstrated in Figure 7). This bird is fully covered in red except for some parts of wing and it has brown feet. A red bird with a short bill with a black cheek patch. This small bird is bright red with black wings, black eyeing, and short black beak. The small brown bird has a yellow beak and black round eyes. 39 The body of the bird is ivory and the crown is bright red while the wing is black and ivory speckled. 51 Figure 7: A summer tanager, as described by a variety of captions (ground truth caption highlighted in blue). The Hamming distance between attribute vectors associated with each caption and the ground truth caption provides an indication of how well each caption describes the image. To test FJD we create two datsets: one which contains the original image-captions pairs from CUB-200 to act as the reference distribution, and another in which captions have been swapped to act as a generated distribution that has poor conditional consistency. Char-CNN-RNN embeddings are used to encode the captions for the purposes of calculating FJD. In Figure 8 we observe that as the average Hamming distance across captions increases (i.e., the captions become worse at describing their associated images), FJD also increases. FID, which is unable to detect these inconsistencies, remains constant throughout. E LIST OF SOURCES OF PRE-TRAINED MODEL Table 6 includes the hyperlinks to all of the pretrained conditional generation models used in our experiments in Section 6. The α parameter in the FJD equation acts as a weighting factor indicating the importance of the image component versus the conditional component. When α = 0, then FJD is equal to FID, since we only care about the image component. As the value of α increases, the magnitude of the conditional component's contribution to the value of FJD increases as well. In our experiments, we attempt to find a neutral value for α that will balance the contribution from the conditional component and the image component. This balancing is done by finding the value of α that would in equal magnitude between the image and conditioning embeddings (as measured by the average L2 norm of the embedding vectors). Instead of reporting FJD at a single α, an alternative approach is to calculate and plot FJD for a range of α values, as shown in Figure 9. 
Plotting α versus FJD allows us to observe any change in rank of models as the importance weighting on the conditional component is increased. Here we use the truncation trick to evaluate BigGAN at several different truncation values σ. The truncation trick is a technique wherein the noise vector used to condition a GAN is scaled by σ in order to trade sample diversity for image quality and conditional consistency, without needing to retrain the model (as shown in Table 7). Table 7: Comparison of BigGAN model evaluated with different truncation values σ. FJD is calculated at α = 17.8. As σ increases, classification accuracy decreases and diversity increases. Note that FID and FJD are consistent in their choice of the preferred model at σ = 1.0, however, the relative ranking of σ = 0.25 and σ = 2.0 is different between the two metrics. We find that in several cases, the ranking of models changes when comparing them at α = 0 (equivalent to FID), versus comparing them using FJD at higher α values. Models with low truncation values σ initially achieve good performance when α is also low. However, as α increases, these models rapidly drop in rank due to lack of sample diversity, and instead models with higher σ values are favoured. This is most obvious when comparing σ = 0.25 and σ = 1.75 (blue and yellow lines in Figure 9) respectively. To create embeddings for the bounding box and mask conditionings evaluated in this paper we utilize a variant of the Regularized AutoEncoder with Spectral Normalization (RAE-SN) introduced by and enhance it with residual connections (Tables 8 and 9). For better reconstruction quality, we substitute the strided convolution and transposed convolution for average pooling and nearest neighbour upsampling, respectively. Spectral normalization is applied to all linear and convolution layers in the decoder, and an L 2 penalty is applied to the latent representation z during training. Hyperparameters such as the weighting factor on the L 2 penalty and the number of dimensions in the latent space are selected based on which combination produces the best reconstructions on a held-out validation set. In Tables 10 and 11 we depict the architecture for an autoencoder with 64 × 64 input resolution, but this can be scaled up or down by adding or removing residual blocks as required. ch represents a channel multiplier which is used to control the capacity of the model. M represents the number of latent dimensions in the latent representation. C indicates the number of classes in the bounding box or mask representation. ResBlock down 2ch → 4ch ResBlock down 4ch → 8ch ResBlock up 4ch → 2ch ResBlock up 2ch → ch Conv ch → C In order to demonstrate the utility of FJD for the purposes of model selection and hyperparameter tuning, we consider the loss function of the generator from an auxiliary classifier GAN (ACGAN) , as shown in Equation 7 to 9. Here S indicates the data source, and C indicates the class label. The generator loss L G is maximized during training, and consists of two components: an adversarial component L S, which encourages generated samples to look like real samples, and a classification component L C, which encourages samples to look more like their target class. In this experiment we add a weighting parameter λ, which weights the importance of the conditional component of the generator loss. The original formulation of ACGAN is equivalent to always setting λ = 1, but it is unknown whether this is the most suitable setting as it is never formally tested. 
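Since Equations 7 to 9 did not survive extraction, the λ-weighted generator objective can only be sketched here using the standard ACGAN components, written as losses to minimize; the argument names are illustrative.

    import torch
    import torch.nn.functional as F

    def acgan_generator_loss(d_source_logits, d_class_logits, target_class, lam=1.0):
        """Sketch of the lambda-weighted ACGAN generator objective described above.

        d_source_logits: discriminator real/fake logits for generated samples, shape (B,)
        d_class_logits:  discriminator class logits for generated samples, shape (B, num_classes)
        target_class:    conditioning labels used to generate the samples, shape (B,)
        """
        # L_S: make generated samples look real (non-saturating GAN loss).
        l_s = F.binary_cross_entropy_with_logits(d_source_logits,
                                                 torch.ones_like(d_source_logits))
        # L_C: make generated samples classifiable as their conditioning class.
        l_c = F.cross_entropy(d_class_logits, target_class)
        return l_s + lam * l_c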
To this end, we train models on the MNIST dataset and perform a sweep over the λ parameter in the range, training a single model for each λ value tested. Each model is evaluated using FID, FJD, and classification accuracy to indicate conditional consistency. For FID and FJD we use the training set as the reference distribution, and generate 50, 000 samples for the generated distribution. Classification accuracy is measured using a pretrained LeNet classifier , where the conditioning label is used as the groundtruth. Scores from best performing models as indicated by FID, FJD, and classification accuracy are shown in Table 12. Sample sheets are provided in Figure 10, where each column is conditioned on a different digit from 0 to 9. We find that FID is optimized when λ = 0.25 (Figure 10a). This produces a model with good image quality, but almost no conditional consistency. Accuracy is optimized when λ = 5.0 (Figure 10c), yielding a model with good conditional consistency, but limited image quality. Finally, FJD is optimized when λ = 1.0 (Figure 10b), producing a model that demonstrates a balance between image quality and conditional consistency. These demonstrate the importance of considering both image quality and conditional consistency simultaneously when performing hyperparameter tuning. To demonstrate FJD applied to multi-label, bounding box, and mask conditioning on a real world dataset, we train a GAN on the COCO-Stuff dataset . To this end, we train three generative models, one for each conditioning type. Following , we select only images containing between 3 and 8 objects, and also ignore any objects that occupy less than 2% of the total image area. Two image resolutions are considered: 64 × 64 and 128 × 128. We adopt a BigGAN-style model , but modify the design such that a single fixed architecture can be trained with any of the three conditioning types. See Section I.1 for architectural details. We train each model 5 times, with different random seeds, and report mean and standard deviation of both FID and FJD in Table 13. N-hot encoding is used as the embedding function for the multi-label conditioning, while autoencoder representations are used to calculate FJD for bounding box and mask conditioning. In most cases we find that FID values are very close between conditioning types. A similar trend is observed in FJD at the 128 × 128 resolution. For models trained at 64 × 64 resolution however, we notice a more drastic change in FJD between conditioning types. Mask conditioning achieves the lowest FJD score, followed by multi-label conditioning and bounding box conditioning. This could indicate that the mask conditioning models are more conditionally consistent (or diverse) compared to other conditioning types. In order to modify to work with multiple types of conditioning we make two major changes. The first change occurs in the generator, where we replace the conditional batch normalization layers with SPADE . This substitution allows the generator to receive spatial conditioning such as bounding boxes or masks. In the case of class conditioning with a spatially tiled class vector, SPADE behaves similarly to conditional batch normalization. The second change we make is to the discriminator. The original BigGAN implementation utilizes a single projection layer in order to provide class-conditional information to the discriminator. To extend this functionality to bounding box and mask conditioning, we add additional projection layers after each ResBlock in the discriminator. 
The input to each projection layer is a downsampled version of the conditioning that has been resized using nearest neighbour interpolation to match the spatial resolution of each layer. In this way we provide conditioning information at a range of resolutions, allowing the discriminator to use whichever is most useful for the type of conditioning it has received. Aside from these specified changes, and using smaller batch sizes, models are trained with the same hyperparameters and training scheme as specified in . In this section, we present some random 128 × 128 samples of conditional generation for the models covered in Section I. In particular, Figures 11-13 show class, bounding box, and mask conditioning samples, respectively. Each row displays a depiction of conditioning, followed by 4 different samples, and finally the real image corresponding to the conditioning. As shown in Figure 11, conditioning on classes leads to variable samples w.r.t. object positions, scales and textures. As we increase the conditioning strength, we reduce the freedom of the generation and hence, in Figure 12, we observe how the variability starts appearing in more subtle regions. Similarly, in Figure 13, taking different samples per conditioning only changes the textures. Although the degrees of variability decrease as the conditioning strength increases, we obtain sharper, better looking images.
We propose a new metric for evaluating conditional GANs that captures image quality, conditional consistency, and intra-conditioning diversity in a single measure.
Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks on multiple Atari games. Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models. A promising approach to improving model-free deep reinforcement learning (RL) is to combine it with on-line planning. The model-free value function can be viewed as a rough global estimate which is then locally refined on the fly for the current state by the on-line planner. Crucially, this does not require new samples from the environment but only additional computation, which is often available. One strategy for on-line planning is to use look-ahead tree search BID12 BID2. Traditionally, such methods have been limited to domains where perfect environment simulators are available, such as board or card games BID4 BID24. However, in general, models for complex environments with high dimensional observation spaces and complex dynamics must be learned from agent experience. Unfortunately, to date, it has proven difficult to learn models for such domains with sufficient fidelity to realise the benefits of look-ahead planning BID17 BID29.A simple approach to learning environment models is to maximise a similarity metric between model predictions and ground truth in the observation space. This approach has been applied with some success in cases where model fidelity is less important, e.g., for improving exploration BID3 BID17. However, this objective causes significant model capacity to be devoted to predicting irrelevant aspects of the environment dynamics, such as noisy s, at the expense of value-critical features that may occupy only a small part of the observation space (Pathak et al., Since the transition model is only weakly grounded in the actual environment, our approach can alternatively be viewed as a model-free method in which the fully connected layers of DQN are replaced by a recursive network that applies transition functions with shared parameters at each tree node expansion. The ing architecture, which we call TreeQN, encodes an inductive bias based on the prior knowledge that the environment is a stationary Markov process, which facilitates faster learning of better policies. We also present an actor-critic variant, ATreeC, in which the tree is augmented with a softmax layer and used as a policy network. 
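The n-step Q-learning targets described above can be computed with a standard backward recursion; the sketch below assumes float arrays gathered from the synchronous rollout and a bootstrap value from the target network, with episode terminations masking the bootstrap.

    import numpy as np

    def nstep_targets(rewards, dones, bootstrap_value, gamma=0.99):
        """Backward recursion for n-step Q-learning targets.

        rewards, dones:  float arrays of shape (n, n_env) from the rollout
        bootstrap_value: shape (n_env,), max_a Q(s_{t+n}, a; theta_minus) from the target network
        Returns targets of shape (n, n_env); terminated episodes use the remaining
        episode return without bootstrapping.
        """
        n = rewards.shape[0]
        targets = np.zeros_like(rewards, dtype=np.float64)
        ret = bootstrap_value.astype(np.float64)
        for j in reversed(range(n)):
            ret = rewards[j] + gamma * (1.0 - dones[j]) * ret
            targets[j] = ret
        return targets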
We show that TreeQN and ATreeC outperform their DQN-based counterparts in a box-pushing domain and a suite of Atari games, with deeper trees often outperforming shallower trees, and TreeQN outperforming VPN BID18 on most Atari games. We also present ablation studies investigating various auxiliary losses for grounding the transition model more strongly in the environment, which could improve performance as well as lead to interpretable internal plans. While we show that grounding the reward function is valuable, we conclude that how to learn strongly grounded transition models and generate reliably interpretable plans without compromising performance remains an open research question. We consider an agent learning to act in a Markov Decision Process (MDP), with the goal of maximising its expected discounted sum of rewards R t = ∞ t=0 γ t r t, by learning a policy π(s) that maps states s ∈ S to actions a ∈ A. The state-action value function (Q-function) is defined as DISPLAYFORM0 The Bellman optimality equation writes Q * recursively as DISPLAYFORM1 where P is the MDP state transition function and r is a reward function, which for simplicity we assume to be deterministic. Q-learning BID33 ) uses a single-sample approximation of the contraction operator T to iteratively improve an estimate of Q *.In deep Q-learning BID14, Q is represented by a deep neural network with parameters θ, and is improved by regressing Q(s, a) to a target r + γ max a Q(s, a ; θ −), where θ − are the parameters of a target network periodically copied from θ. We use a version of n-step Q-learning BID15 with synchronous environment threads. In particular, starting at a timestep t, we roll forward n env = 16 threads for n = 5 timesteps each. We then bootstrap off the final states only and gather all n env × n = 80 transitions in a single batch for the backward pass, minimising the loss: DISPLAYFORM2 If the episode terminates, we use the remaining episode return as the target, without bootstrapping. This algorithm's actor-critic counterpart is A2C, a synchronous variant of A3C BID15 in which a policy π and state-value function V (s) are trained using the gradient: DISPLAYFORM3 where A j is an advantage estimate given by DISPLAYFORM4, H is the policy entropy, β is a hyperparameter tuning the degree of entropy regularisation, and α is a hyperparameter controlling the relative learning rates of actor and critic. These algorithms were chosen for their simplicity and reasonable wallclock speeds, but TreeQN can also be used in other algorithms, as described in Section 3. Our implementations are based on OpenAI Baselines BID11 ). The canonical neural network architecture in deep RL with visual observations has a series of convolutional layers followed by two fully connected layers, where the final layer produces one output for each action-value. We can think of this network as first calculating an encoding z t of the state s t which is then evaluated by the final layer to estimate Q * (s t, a) (see FIG0).In tree-search on-line planning, a look-ahead tree of possible future states is constructed by recursively applying an environment model. These states are typically evaluated by a heuristic, a learned value function, or Monte-Carlo rollouts. Backups through the tree aggregate these values along with the immediate rewards accumulated along each path to estimate the value of taking an action in the current state. 
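A sketch of the recursive tree construction and λ-mixed backup follows; the module interfaces (reward_fn, value_fn, transition_fn) are assumptions standing in for the learned components defined in the next section, and the exact notation differs from the equations above.

    import torch
    import torch.nn.functional as F

    def tree_backup(z, reward_fn, value_fn, transition_fn, depth, gamma=0.99, lam=0.8):
        """Recursive TreeQN backup.

        z: (batch, k) encoded state at the current tree node.
        reward_fn(z)     -> (batch, num_actions) predicted rewards
        value_fn(z)      -> (batch,) predicted state values V(z)
        transition_fn(z) -> (batch, num_actions, k) next-state embeddings
        Returns Q-values of shape (batch, num_actions).
        """
        r = reward_fn(z)                                  # (B, A)
        z_next = transition_fn(z)                         # (B, A, k)
        B, A, k = z_next.shape
        flat = z_next.reshape(B * A, k)
        v = value_fn(flat).reshape(B, A)                  # V of each child node
        if depth == 1:
            return r + gamma * v                          # leaf: back up the value directly
        q_next = tree_backup(flat, reward_fn, value_fn, transition_fn,
                             depth - 1, gamma, lam).reshape(B, A, -1)
        backed_up = (F.softmax(q_next, dim=-1) * q_next).sum(dim=-1)   # soft "max" over actions
        return r + gamma * ((1 - lam) * v + lam * backed_up)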
This paper focuses on a simple tree-search with a deterministic transition function and no value uncertainty estimates, but our approach can be extended to tree-search variants like UCT BID13 if the components remain differentiable. In this section, we propose TreeQN, a novel end-to-end differentiable tree-structured architecture for deep reinforcement learning. We first give an overview of the architecture, followed by details of each model component and the training procedure. TreeQN uses a recursive tree-structured neural network between the encoded state z t and the predicted state-action values Q(s t, a), instead of directly estimating the state-action value from the current encoded state z t using fully connected layers as in DQN BID14. Specifically, TreeQN uses a recursive model to refine its estimate of Q(s t, a) via learned transition, reward, and value functions, and a tree backup (see FIG1). Because these learned components are shared throughout the tree, TreeQN implements an inductive bias, missing from DQN, that reflects the prior knowledge that the Q-values are properties of a stationary Markov process. We also encode the inductive bias that Q-values may be expressed as a sum of scalar rewards and values. Specifically, TreeQN learns an action-dependent transition function that, given a state representation z l|t, predicts the next state representation z ai l+1|t for action a i ∈ A, and the corresponding rewardr ai l|t. To make the distinction between internal planning steps and steps taken in the environment explicit, we write z l|t to denote the encoded state at time t after l internal transitions, starting with z 0|t for the encoding of s t. TreeQN applies this transition function recursively to construct a tree containing the state representations and rewards received for all possible sequences of actions up to some predefined depth d ("Tree Transitioning" in FIG1 The value of each predicted state V (z) is estimated with a value function module. Using these values and the predicted rewards, TreeQN then performs a tree backup, mixing the k-step returns along each path in the tree using TD(λ) BID25 BID27. This corresponds to "Value Prediction & Backup" in FIG1 and can be formalized as DISPLAYFORM0 DISPLAYFORM1 where b is a function to recursively perform the backup. For 0 < λ < 1, value estimates of the intermediate states are mixed into the final Q-estimate, which encourages the intermediate nodes of the tree to correspond to meaningful states, and reduces the impact of outlier values. When λ = 1, and b is the standard hard max function, then Eq. 3 simplifies to a backup through the tree using the familiar Bellman equation: DISPLAYFORM2 We note that even for a tree depth of only one, TreeQN imposes a significant structure on the value function by decomposing it as a sum of action-conditional reward and next-state value, and using a shared value function to evaluate each next-state representation. Crucially, during training we backpropagate all the way from the final Q-estimate, through the value prediction, tree transitioning, and encoding layers of the tree, i.e., the entire network shown in FIG1. Learning these components jointly ensures that they are useful for planning on-line. In this section, we describe each of TreeQN's components in more detail. Encoder function. As in DQN, a series of convolutional layers produces an embedding of the observed state, z 0|t = encode(s t).Transition function. 
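The transition, reward, and value modules just described can be sketched as a single PyTorch module; the hidden sizes, nonlinearity, and exact residual wiring are assumptions (the corresponding equations above were lost), and the outputs plug into the tree_backup sketch above as transition_fn, reward_fn, and value_fn.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TreeModules(nn.Module):
        """Sketch of TreeQN's shared transition, reward, and value components."""
        def __init__(self, k, num_actions, hidden=64):
            super().__init__()
            self.env = nn.Linear(k, k)                       # shared, action-agnostic step
            self.act = nn.ModuleList([nn.Linear(k, k) for _ in range(num_actions)])
            self.reward = nn.Sequential(nn.Linear(k, hidden), nn.ReLU(),
                                        nn.Linear(hidden, num_actions))
            self.value = nn.Linear(k, 1)

        def transition(self, z):
            z_env = z + F.relu(self.env(z))                  # residual shared transition
            nxt = torch.stack([z_env + F.relu(layer(z_env)) for layer in self.act], dim=1)
            return nxt / nxt.norm(dim=-1, keepdim=True)      # unit-length projection

    # Usage with the backup sketch above (illustrative):
    #   m = TreeModules(k=512, num_actions=4)
    #   q = tree_backup(z0, m.reward, lambda z: m.value(z).squeeze(-1), m.transition, depth=2)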
We first apply a single fully connected layer to the current state embedding, shared by all actions. This generates an intermediate representation (z env l+1|t) that could carry information about action-agnostic changes to the environment. In addition, we use a fully connected layer per action, which is applied to the intermediate representation to calculate a next-state representation that carries information about the effect of taking action a i. We use residual connections for these layers: DISPLAYFORM0 where DISPLAYFORM1 are learnable parameters. Note that the next-state representation is calculated for every action a i independently using the respective transition matrix W ai, but this transition function is shared for the same action throughout the tree. A caveat is that the model can still learn to use different parts of the latent state space in different parts of the tree, which could undermine the intended parameter sharing in the model structure. To help TreeQN learn useful transition functions that maintain quality and diversity in their latent states, we introduce a unit-length projection of the state representations by simply dividing a state's vector representation by its L2 norm before each application of the transition function, z l|t:= z l|t / z l|t. This prevents the magnitude of the representation from growing or shrinking, which encourages the behaviour of the transition function to be more consistent throughout the tree. Reward function. In addition to predicting the next state, we also predict the immediate reward for every action a i ∈ A in state z l|t usinĝ DISPLAYFORM2 where DISPLAYFORM3 and ReLU is the rectified linear unit BID16, and the predicted reward for a particular actionr ai l|t is the i-th element of the vectorr(z l|t). Value function. The value of a state representation z is estimated as DISPLAYFORM4 where DISPLAYFORM5 Backup function. We use the following function that can be recursively applied to calculate the tree backup: DISPLAYFORM6 Using a hard max for calculating the backup would in gradient information only being used to update parameters along the maximal path in the tree. By contrast, the softmax allows us to use downstream gradient information to update parameters along all paths. Furthermore, it potentially reduces the impact of outlier value predictions. With a learned temperature for the softmax, this function could represent the hard max arbitrarily closely. However, we did not find an empirical difference so we left the temperature at 1. The TreeQN architecture is fully differentiable, so we can directly use it in the place of a Q-function in any deep RL algorithm with discrete actions. Differentiating through the entire tree ensures that the learned components are useful for planning on-line, as long as that planning is performed in the same way as during training. However, it seems plausible that auxiliary objectives based on minimising the error in predicting rewards or observations could improve the performance by helping to ground the transition and reward functions to the environment. It could also encourage TreeQN to perform model-based planning in an interpretable manner. In principle, such objectives could give rise to a spectrum of methods from model-free to fully model-based. At one extreme, TreeQN without auxiliary objectives can be seen as a model-free approach that draws inspiration from tree-search planning to encode valuable inductive biases into the neural network architecture. 
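The reward-grounding auxiliary loss can be sketched as follows, using 0-based indexing; gathering the predicted rewards along the executed action sequence from the tree is omitted, and the truncation to observed rewards plays the role of d-hat in the equation above.

    import torch

    def reward_grounding_loss(pred_rewards, true_rewards, eta_r=1.0):
        """Sketch of the reward-grounding auxiliary loss.

        pred_rewards: (n, d) tensor; entry [j, l-1] is the reward predicted at tree depth l
                      along the action sequence actually taken from rollout step j.
        true_rewards: (n,) tensor of rewards observed during the n-step rollout.
        Only predictions whose ground-truth reward falls inside the rollout contribute.
        """
        n, d = pred_rewards.shape
        loss = pred_rewards.new_zeros(())
        for j in range(n):
            d_hat = min(d, n - j)
            for l in range(1, d_hat + 1):
                loss = loss + (pred_rewards[j, l - 1] - true_rewards[j + l - 1]) ** 2
        return eta_r * loss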
At the other extreme, perfect, grounded reward and transition models could in principle be learned. Using them in our architecture would then correspond to standard model-based lookahead planning. The sweet spot could be an intermediate level of grounding that maintains the flexibility of end-to-end model-free learning while benefiting from the additional supervision of explicit model learning. To investigate this spectrum, we experiment with two auxiliary objectives. Reward grounding. We experiment with an L2 loss regressingr a t:t+l−1 l|t, the predicted reward at level l of the tree corresponding to the selected action sequence {a t . . . a t+l−1}, to the true observed rewards. For each of the n timesteps of n-step Q-learning this gives: DISPLAYFORM0 where η r is a hyperparameter weighting the loss, andd = min(d, n − j + 1) restricts the sum to rewards for which we have already observed the true value. State grounding. We experiment with a grounding in the latent space, using an L2 loss to regress the predicted latent state z a t:t+l l|t at level l of the tree to z 0|t+l, the initial encoding of the true state corresponding to the actions actually taken: DISPLAYFORM1 By employing an additional decoder module, we could use a similar loss to regress decoded observations to the true observations. In informal experiments, joint training with such a decoder loss did not yield good performance, as also found by BID18.In Section 7.1, we present on the use these objectives, showing that reward grounding gives better performance, but that our method for state grounding does not. The intuitions guiding the design of TreeQN are as applicable to policy search as to valuebased RL, in that a policy can use a tree planner to improve its estimates of the optimal action probabilities BID8 BID22. As our proposed architecture is trained end-to-end, it can be easily adapted for use as a policy network. In particular, we propose ATreeC, an actor-critic extension of TreeQN. In this architecture, the policy network is identical to TreeQN, with an additional softmax layer that converts the Q estimates into the probabilities of a stochastic policy. The critic shares the encoder parameters, and predicts a scalar state value with a single fully connected layer: V cr (s) = w cr z + b cr. We used different parameters for the critic value function and the actor's tree-value-function module, but found that sharing these parameters had little effect on performance. The entire setup, shown in FIG2, is trained with A2C as described in Section 2, with the addition of the same auxiliary losses used for TreeQN. Note that TreeQN could also be used in the critic, but we leave this possibility to future work. There is a long history of work combining model-based and model-free RL. An early example is Dyna-Q BID26 ) which trains a model-free algorithm with samples drawn from a learned model. Similarly, van train a sparse model with some environment samples that can be used to refine a model-free Q-function. BID9 use local linear models to generate additional samples for their model-free algorithm. However, these approaches do not attempt to use the model on-line to improve value estimates. In deep RL, value iteration networks BID30 use a learned differentiable model to plan on the fly, but require planning over the full state space, which must also possess a spatial structure with local dynamics such that convolution operations can execute the planning algorithm. 
The predictron BID23 instead learns abstract-state transition functions in order to predict values. However, it is restricted to policy evaluation without control. Value prediction networks take a similar approach but are more closely related to our work because the learned model components are used in a tree for planning. However, in their work this tree is only used to construct targets and choose actions, and not to compute the value estimates during training. Such estimates are instead produced from non-branching trajectories following on-policy action sequences. By contrast, TreeQN is a unified architecture that constructs the tree dynamically at every timestep and differentiates through it, eliminating any mismatch between the model at training and test time. Furthermore, we do not use convolutional transition functions, and hence do not impose spatial structure on the latent state representations. These differences simplify training, allow our model to be used more flexibly in other training regimes, and explain in part our substantially improved performance on the Atari benchmark. BID6 propose differentiating through a stochastic programming optimisation using a probabilistic model to learn model parameters with respect to their true objective rather than a maximum likelihood surrogate. However, they do not tackle the full RL setting, and do not use the model to repeatedly or recursively refine predictions. Imagination-augmented agents BID34 learn to improve policies by aggregating rollouts predicted by a model. However, they rely on pretraining an observation-space model, which we argue will scale poorly to more complex environments. Further, their aggregation of rollout trajectories takes the form of a generic RNN rather than a value function and tree backup, so the inductive bias based on the structure of the MDP is not explicitly present. A class of value gradient methods BID5 BID7 BID10 also differentiates through models to train a policy. However, this approach does not use the model during execution to refine the policy, and requires continuous action spaces. BID17 and BID3 propose methods for learning observation-prediction models in the Atari domain, but use these models only to improve exploration. Variants of scheduled sampling BID1 may be used to improve robustness of these models, but scaling to complex domains has proven challenging BID28. We evaluate TreeQN and ATreeC in a simple box-pushing environment, as well as on the subset of nine Atari environments that BID18 use to evaluate VPN. The experiments are designed to determine whether or not TreeQN and ATreeC outperform DQN, A2C, and VPN, and whether they can scale to complex domains. We also investigate how to best ground the the transition function with auxiliary losses. Furthermore, we compare against alternative ways to increase the number of parameters and computations of a standard DQN architecture, and study the impact of tree depth. Full details of the experimental setup, as well as architecture and training hyperparameters, are given in the appendix. Grounding. We perform a hyperparameter search over the coefficients η r and η s of the reward and state grounding auxiliary losses, on the Atari environment Seaquest. These experiments aim to determine the relevant trade-offs between the flexibility of a model-free approach and the potential benefits of a more model-based algorithm. Box Pushing. We randomly place an agent, 12 boxes, 5 goals and 6 obstacles on the center 6 × 6 tiles of an 8 × 8 grid. 
The agent's goal is to push boxes into goals in as few steps as possible while avoiding obstacles. Boxes may not be pushed into each other. The obstacles, however, are 'soft' in that they do not block movement, but generate a negative reward if the agent or a box moves onto an obstacle. This rewards better planning without causing excessive gridlock. This environment is inspired by Sokoban, as used by BID34, in that poor actions can generate irreversibly bad configurations. TreeQN adds additional parameters to a standard DQN architecture. We compare TreeQN to two baseline architectures with increased computation and numbers of parameters to verify the benefit of the additional structure and grounding. DQN-Wide doubles the size of the embedding dimension (1024 instead of 512). DQN-Deep inserts two additional fully connected layers with shared parameters and residual connections between the two fully-connected layers of DQN. This is in effect a non-branching version of the TreeQN architecture that also lacks explicit reward prediction. In this section, we present our experimental results for TreeQN and ATreeC. 7.1 GROUNDING. FIG4 shows the results of a hyperparameter search on η r and η s, the coefficients of the auxiliary losses on the predicted rewards and latent states. An intermediate value of η r helps performance, but there is no benefit to using the latent space loss. Subsequent experiments use η r = 1 and η s = 0. The predicted rewards that the reward-grounding objective encourages the model to learn appear both in its own Q-value prediction and in the target for n-step Q-learning. Consequently, we expect this auxiliary loss to be well aligned with the true objective. By contrast, the state-grounding loss (and other potential auxiliary losses) might help representation learning but would not explicitly learn any part of the desired target. It is possible that this mismatch between the auxiliary and primary objectives leads to degraded performance when using this form of state grounding. One potential route to overcoming this obstacle to joint training would be pre-training a model, as done by BID34. Inside TreeQN, this model could then be fine-tuned to perform well inside the planner. We leave this possibility to future work. FIG3 shows the results of TreeQN with tree depths 1, 2, and 3, compared to a DQN baseline. In this domain, there is a clear advantage for the TreeQN architecture over DQN. TreeQN learns policies that are substantially better at avoiding obstacles and lining boxes up with goals so they can be easily pushed in later. TreeQN also substantially speeds up learning. We believe that the greater structure brought by our architecture regularises the model, encouraging appropriate state representations to be learned quickly. Even a depth-1 tree improves performance significantly, as disentangling the estimation of rewards and next-state values makes them easier to learn. This is further facilitated by the sharing of value-function parameters across branches. When trained with n-step Q-learning, the deeper depth-2 and depth-3 trees learn faster and plateau higher than the shallow depth-1 tree. In this domain, useful transition functions are relatively easy to learn, and the extra computation with those transition modules can help refine value estimates, yielding advantages for additional depth. FIG6 shows the results of ATreeC with tree depths 1, 2, and 3, compared to an A2C baseline. As with TreeQN, ATreeC substantially outperforms the baseline.
Furthermore, thanks to its stochastic policy, it substantially outperforms TreeQN. Whereas TreeQN and DQN sometimes indecisively bounce back and forth between adjacent states, ATreeC captures this uncertainty in its policy probabilities and thus acts more decisively. However, unlike TreeQN, ATreeC shows no pronounced differences for different tree depths. This is in part due to a ceiling effect in this domain. However, ATreeC is also gated by the quality of the critic's value function, which in these experiments was a single linear layer after the state encoding, as described in Section 4. Nonetheless, this demonstrates the ease with which TreeQN can be used as a drop-in replacement for any deep RL algorithm that learns policies or value functions for discrete actions. 7.3 ATARI. TAB1 summarises all our Atari results, while FIG8 shows learning curves in depth. TreeQN shows substantial benefits in many environments compared to our DQN baseline, which itself often outperforms VPN BID18. ATreeC always matches or outperforms A2C. We present the mean performance of five random seeds, while the VPN results reported by BID18, shown as dashed lines in FIG8, are the mean of the best five seeds of an unspecified number of trials. (BID18 report the same statistic, but average instead over the best five of an unspecified number of agents.) TreeQN. In all environments except Frostbite, TreeQN outperforms DQN on average, with the most significant gains in Alien, CrazyClimber, Enduro, Krull, and Seaquest. Many of these environments seem well suited to short-horizon look-ahead planning, with simple dynamics that generalise well and tradeoffs between actions that become apparent only after several timesteps. For example, an incorrect action in Alien can trap the agent down a corridor with an alien. In Seaquest, looking ahead could help determine whether it is better to go deeper to collect more points or to surface for oxygen. However, even in a game with mostly reactive decisions like the racing game Enduro, TreeQN shows significant benefits. TreeQN also outperforms the additional baselines of DQN-Wide and DQN-Deep, indicating that the additional structure and grounding of our architecture brings benefits beyond simply adding model capacity and computation. In particular, it is interesting that DQN-Deep is often outperformed by the vanilla DQN baseline, as optimisation difficulties grow with depth. In contrast, the additional structure and auxiliary loss employed by TreeQN turn its additional depth from a liability into a strength. ATreeC. ATreeC matches or outperforms its baseline (A2C) in all environments. Compared to TreeQN, ATreeC's performance is better across most environments, particularly on Qbert, reflecting an overall advantage for actor-critic also found by BID15 and in our box-pushing experiments. However, performance is much worse on Seaquest, revealing a deficiency in exploration: policy entropy collapses too rapidly and, consequently, the policy gradient method becomes trapped in a local optimum. In Krull and Frostbite, most algorithms have poor performance, or high variance in returns from run to run, as agents are gated by their ability to explore. Both of these games require the completion of sub-levels in order to accumulate large scores, and none of our agents reliably explores beyond the initial stages of the game. Mean performance appears to favor TreeQN and ATreeC in Krull, and perhaps DQN in Frostbite, but the returns are too variable to draw conclusions from this number of random seeds.
Combining TreeQN and ATreeC with smart exploration mechanisms is an interesting direction for future work to improve robustness of training in these types of environments. Compared to the box-pushing domain, there is less of a clear performance difference between trees of different depths. In some environments (Amidar, MsPacman), greater depth does appear to be employed usefully by TreeQN to a small extent, resulting in the best-performing individual agents. However, for the Atari domain the embedding size for the transition function we use is much larger (512 compared to 128), and the dynamics are much more complex. Consequently, we expect that optimisation difficulties, and the challenge of learning abstract-state transition functions, impede the utility of deeper trees in some cases. We look to future work to further refine methods for learning to plan abstractly in complex domains. However, the decomposition of the Q-value into reward and next-state value employed by the first tree expansion is clearly of utility in a broad range of tasks. When inspecting the learned policies and trees, we find that the values sometimes correspond to intuitive reasoning about sensible policies, scoring superior action sequences above poorer ones. However, we find that the actions corresponding to branches of the tree that are scored most highly are frequently not taken in future timesteps. The flexibility of TreeQN and ATreeC allows our agents to find any useful way to exploit the computation in the tree to refine action-value estimates. As we found no effective way to strongly ground the model components without sacrificing performance, the interpretability of learned trees is limited. We presented TreeQN and ATreeC, new architectures for deep reinforcement learning in discrete-action domains that integrate differentiable on-line tree planning into the action-value function or policy. Experiments on a box-pushing domain and a set of Atari games show the benefit of these architectures over their counterparts, as well as over VPN. In future work, we intend to investigate enabling more efficient optimisation of deeper trees, encouraging the transition functions to produce interpretable plans, and integrating smart exploration. A.1 BOX PUSHING. Environment. For each episode, a new level is generated by placing an agent, 12 boxes, 5 goals and 6 obstacles in the center 6 × 6 tiles of an 8 × 8 grid, sampling locations uniformly. The outer tiles are left empty to prevent initial situations where boxes cannot be recovered. The agent may move in the four cardinal directions. If the agent steps off the grid, the episode ends and the agent receives a penalty of −1. If the agent moves into a box, it is pushed in the direction of movement. Moving a box out of the grid generates a penalty of −0.1. Moving a box into another box is not allowed and trying to do so generates a penalty of −0.1 while leaving all positions unchanged. When a box is pushed into a goal, it is removed and the agent receives a reward of +1. Obstacles generate a penalty of −0.2 when the agent or a box is moved onto them. Moving the agent over goals incurs no penalty. Lastly, at each timestep the agent receives a penalty of −0.01. Episodes terminate when 75 timesteps have elapsed, the agent has left the grid, or no boxes remain. The observation is given to the model as a tensor of size 5 × 8 × 8. The first four channels are binary encodings of the position of the agent, goals, boxes, and obstacles respectively.
The final channel is filled with the number of timesteps remaining (normalised by the total number of timesteps allowed).Architecture. The encoder consists of (conv-3x3-1-24, conv-3x3-1-24, conv-4x4-1-48, fc-128), where conv-wxh-s-n denotes a convolution with n filters of size w × h and stride s, and fc-h denotes a fully connected layer with h hidden units. All layers are separated with ReLU nonlinearities. The hidden layer of the reward function MLP has 64 hidden units. Preprocessing of inputs follows the procedure of BID14, including concatenation of the last four frames as input, although we use a frameskip of 10.Architecture. The Atari experiments have the same architecture as for box-pushing, except for the encoder architecture which is as follows: (conv-8x8-4-16, conv-4x4-2-32, fc-512). All experiments use RMSProp BID31 ) with a learning rate of 1e-4, a decay of α = 0.99, and = 1e-5.The learning rate was tuned coarsely by running DQN on the Seaquest environment, and kept the same for all subsequent experiments (box-pushing and Atari).For DQN and TreeQN, for -greedy exploration was decayed linearly from 1 to 0.05 over the first 4 million environment transitions observed (after frameskipping, so over 40 million atomic Atari timesteps).For A2C and ATreeC, we use a value-function loss coefficient α = 0.5 and an entropy regularisation β = 0.01.The reward prediction loss was scaled by η r = 1.We use n steps = 5 and n envs = 16, for a total batch size of 80.The discount factor is γ = 0.99 and the target networks are updated every 40, 000 environment transitions.
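A minimal PyTorch reading of the two encoder specifications above is sketched below; input resolutions, padding, and the absence of a nonlinearity after the final fully connected layer are assumptions (they are not fully specified in the text), so lazy linear layers are used to avoid hard-coding the flattened feature sizes.

```python
import torch.nn as nn

def conv_encoder(channels_in, spec, fc_out):
    """spec is a list of (out_channels, kernel, stride) tuples, mirroring the
    conv-wxh-s-n notation used in the appendix above."""
    layers, c = [], channels_in
    for out_c, k, s in spec:
        layers += [nn.Conv2d(c, out_c, kernel_size=k, stride=s), nn.ReLU()]
        c = out_c
    # LazyLinear infers the flattened size at the first forward pass, since the
    # exact input resolution/padding are assumptions here rather than given facts.
    layers += [nn.Flatten(), nn.LazyLinear(fc_out)]
    return nn.Sequential(*layers)

# Box-pushing encoder: conv-3x3-1-24, conv-3x3-1-24, conv-4x4-1-48, fc-128 on a 5x8x8 input
box_encoder = conv_encoder(5, [(24, 3, 1), (24, 3, 1), (48, 4, 1)], fc_out=128)

# Atari encoder: conv-8x8-4-16, conv-4x4-2-32, fc-512 on 4 stacked (frame-skipped) frames
atari_encoder = conv_encoder(4, [(16, 8, 4), (32, 4, 2)], fc_out=512)
```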
We present TreeQN and ATreeC, new architectures for deep reinforcement learning in discrete-action domains that integrate differentiable on-line tree planning into the action-value function or policy.
588
scitldr
Multi-label classification (MLC) is the task of assigning a set of target labels for a given sample. Modeling the combinatorial label interactions in MLC has been a long-haul challenge. Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC. However, the sequential nature of modeling label dependencies through an RNN limits its ability in parallel computation, predicting dense labels, and providing interpretable . In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC. MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (can be used on known or unknown structured data). Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time. Multi-label classification (MLC) is receiving increasing attention in tasks such as text categorization and image classification. Accurate and scalable MLC methods are in urgent need for applications like assigning topics to web articles, classifying objects in an image, or identifying binding proteins on DNA. The most common and straightforward MLC method is the binary relevance (BR) approach that considers multiple target labels independently BID0. However, in many MLC tasks there is a clear dependency structure among labels, which BR methods ignore. Accordingly, probabilistic classifier chain (PCC) models were proposed to model label dependencies and formulate MLC in an autoregressive sequential prediction manner BID1. One notable work in the PCC category was from which implemented a classifier chain using a recurrent neural network (RNN) based sequence to sequence (Seq2Seq) architecture, Seq2Seq MLC. This model uses an encoder RNN encoding elements of an input sequence, a decoder RNN predicting output labels one after another, and beam search that computes the probability of the next T predictions of labels and then chooses the proposal with the max combined probability. However, the main drawback of classifier chain models is that their inherently sequential nature precludes parallelization during training and inference. This can be detrimental when there are a large number of positive labels as the classifier chain has to sequentially predict each label, and often requires beam search to obtain the optimal set. Aside from time-cost disadvantages, PCC methods have several other drawbacks. First, PCC methods require a defined ordering of labels for the sequential prediction, but MLC output labels are an unordered set, and the chosen order can lead to prediction instability. Secondly, even if the optimal ordering is known, PCC methods struggle to accurately capture long-range dependencies among labels in cases where the number of positive labels is large (i.e., dense labels). For example, the Delicious dataset has a median of 19 positive labels per sample, so it can be difficult to correctly predict the labels at the end of the prediction chain. Lastly, many real-world applications prefer interpretable predictors. 
For instance, in the task of predicting which proteins (labels) will bind to a DNA sequence based binding site, users care about how a prediction is made and how the interactions among labels influence the predictions 1.Message Passing Neural Networks (MPNNs) BID3 introduce a class of methods that model joint dependencies of variables using neural message passing rather than an explicit representation such as a probabilistic classifier chain. Message passing allows for efficient inference by modelling conditional independence where the same local update procedure is applied iteratively to propagate information across variables. MPNNs provide a flexible method for modeling multiple variables jointly which have no explicit ordering (and can be modified to incorporate an order, as explained in section 3). To handle the drawbacks of BR and PCC methods, we propose a modified version of MPNNs for MLC by modeling interactions between labels using neural message passing. We introduce Message Passing Encoder-Decoder (MPED) Networks aiming to provide fast, accurate, and interpretable multi-label predictions. The key idea is to replace RNNs and to rely on neural message passing entirely to draw global dependencies between input components, between labels and input components, and between labels. The proposed MPED networks allow for significantly more parallelization in training and testing. The main contributions of this paper are:• Novel approach for MLC. To the authors' best knowledge, MPED is the first work using neural message passing for MLC.• Accurate MLC. Our model achieves similar, or better performance compared to the previous state of the art across five different MLC metrics. We validate our model on seven MLC datasets which cover a wide spectrum of input data structure: sequences (English text, DNA), tabular (bag-of-words), and graph (drug molecules), as well as output label structure: unknown and graph.• Fast. Empirically our model achieves an average 1.7x speedup over the autoregressive seq2seq MLC at training time and an average 5x speedup over its testing time.• Interpretable. Although deep-learning based systems have widely been viewed as "black boxes" due to their complexity, our attention based MPED models provide a straightforward way to explain label to label, input to label, and feature to feature dependencies. Message Passing Neural Networks (MPNNs) BID3 are a generalization of graph neural networks (GNNs) BID4 BID5, where variables are represented as nodes on a graph G and joint dependencies are modelled using message passing rather than explicit representations, which allows for efficient inference. MPNNs model the joint dependencies using message function M t and node update function U t for T time steps, where t is the current time step. The hidden state v t i ∈ R d of node i ∈ G is updated based on messages m t i from its neighboring nodes {v t j∈N (i) } defined by neighborhood N (i): DISPLAYFORM0 DISPLAYFORM1 After T updates, a readout function R is used on the updated nodes for a prediction (e.g., node classification or graph classification) on the graph G.Many possibilities exist for functions M t and U t. For example, one can pass messages using neural attention in which nodes are able to attend over their neighborhoods differentially BID6. This allows for the network to learn different weights for different nodes in a neighborhood, without depending on knowing the graph structure a priori. 
In this formulation, messages for node v t i are obtained by a weighted sum of all neighboring nodes {v t j∈N (i) } where the weights are obtained by attention BID7. In our implementation, we implement neural message passing with attention. In the rest of the paper, we use "graph attention" and "neural message passing" interchangeably. Neural message passing with attention works as follows. DISPLAYFORM2 where e t ij represents the importance of node j for node i. DISPLAYFORM3 Attention coefficients e t ij are then normalized across all neighboring nodes of node i using a softmax function: DISPLAYFORM4.In our method, we use a so called attention message function M t atn to produce the message from node j to node i using the learned attention weights α t ij and another transformation matrix W v ∈ R d×d. Then we compute the full message m t i by linearly combining messages from all nodes j ∈ N (i) with a residual connection on the current v DISPLAYFORM5 DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 It is important to note that matrices W are shared (i.e., separately applied) across all nodes. This can be viewed as 1-dimensional convolution with kernel and stride sizes of 1. Weight sharing across nodes is a key aspect of MPNNs, where node dependencies are learned in an order-invariant manner. Notations: We define the following notations, used throughout the paper. DISPLAYFORM0 be the set of data samples with inputs x ∈ X and outputs y ∈ Y. Inputs x are a (possibly ordered) set of S components {x 1, x 2, ..., x S}, and outputs y are a set of L labels {y 1, y 2, ..., y L}. MLC involves predicting the set of binary labels {y 1, y 2, ..., y L}, y i ∈ {0, 1} given input x. Input features are represented as embedded vectors {c DISPLAYFORM1, where d is the embedding size, δ is the vocabulary size, and t represents the 'state' of the embedding after t updates. Similarly, labels are represented as an embedded vectors {h DISPLAYFORM2 where L is the number of labels, and t represents the 'state' of the embedding after t updates. In MLC, each output label is determined by a joint probability of other labels and the input features. Our goal is to achieve the performance of explicit joint probability methods such as PCCs (Eqs. 22 and 23), at the test speed of BR methods (Eq. 21).We introduce Message Passing Encoder-Decoder (MPED) networks, where we formulate MLC using an encoder-decoder architecture. In MPED Networks, input components are represented as nodes in encoder graph G ENC using embedding vectors {c t 1:S}, and labels are represented as nodes in decoder graph G DEC using embedding vectors {h t 1:L}. MPED networks use three MPNN modules with attention to pass messages within G ENC, from G ENC to G DEC, and within G DEC to model the joint prediction of labels. The first module, MPNN xx, is used to update input component nodes {c t 1:S} by passing messages within the encoder (between input nodes). The second module, MPNN xy, is used to update output label nodes {h t 1:L} by passing messages from the encoder to decoder (from input nodes {c t 1:S} to output nodes {h t 1:L}). The third module, MPNN yy, is used to update output label nodes {h t 1:L} by passing messages within the decoder (between label nodes). Once messages have been passed to update input and label nodes, a readout function R is then used on the label nodes to make a binary classification prediction for each label, {ŷ 1,ŷ 2, ...,ŷ L}. An overview of our model is shown in Fig. 1. 
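The following is a single-head PyTorch-style sketch of one such attention-based message passing step. The scaled dot-product scoring function and the MLP sizes are our assumptions; the text only specifies that messages are attention-weighted sums with a residual connection on the current node state, followed by an update function U_mlp.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMessagePassing(nn.Module):
    """Sketch of one attention-based message passing step (single head)."""
    def __init__(self, d):
        super().__init__()
        self.W_q = nn.Linear(d, d, bias=False)   # projects the receiving node i
        self.W_k = nn.Linear(d, d, bias=False)   # projects the sending node j
        self.W_v = nn.Linear(d, d, bias=False)   # message/value projection
        self.update = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))
        self.d = d

    def forward(self, nodes, neighbours, mask=None):
        # nodes:      (B, N, d)  states being updated (e.g. label embeddings h_i)
        # neighbours: (B, M, d)  states being attended over (e.g. input embeddings c_j)
        scores = self.W_q(nodes) @ self.W_k(neighbours).transpose(1, 2) / math.sqrt(self.d)
        if mask is not None:                      # optional adjacency mask for a known graph
            scores = scores.masked_fill(~mask, float('-inf'))
        alpha = F.softmax(scores, dim=-1)         # normalise over the neighbours of each node
        messages = alpha @ self.W_v(neighbours)   # attention-weighted sum of messages
        nodes = nodes + messages                  # residual connection on the current state
        return self.update(nodes)                 # U_mlp (assumed to be a two-layer MLP)
```

Calling this module with nodes = neighbours corresponds to self-attention within a single graph, while calling it with two different node sets gives a directed message pass from one graph to another; both uses appear in the modules described next.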
COMPONENT MESSAGE PASSING For a particular input x, we first assume that the input features {x 1:S} are nodes on a graph, we call G ENC. G ENC = (V, E), V = {x 1:S}, and E includes all undirected pairwise edges connecting node c i and node c j. MPNN xx, parameterized by W xx, is used to pass messages between the input embeddings in order to update their states. x i can be any component of a particular input (e.g. words in a sentence, patches of an image, nodes of a known graph, or tabular features).Nodes on G ENC are represented as embedding vectors {c : DISPLAYFORM0 DISPLAYFORM1 If there exists a known G ENC graph, message m t i for node i is computed using its neighboring nodes j ∈ N (i), where the neighbors N (i) are defined by the graph. If there is no known graph, we assume a fully connected G ENC graph, which means N (i) = {j = i}. Inputs with a sequential ordering can be modelled as a fully connected graph using positional embeddings BID9. Similar to the input components in the encoder, we assume that the labels {y 1:L} are nodes on a decoder graph called G DEC. Nodes on G DEC are represented as embedding vectors {h DISPLAYFORM0 where the initial states {h 0 1:L} are obtained using label embedding matrix W y. The decoder MPNNs update the label embeddings {h t 1:L} by passing messages from the encoder to the decoder, and then pass messages within the decoder. MPNN xy, is used to pass messages from input embeddings {c T 1:S} to label embeddings, and then MPNN yy is used to pass messages between label embeddings. In order to update the label nodes given a particular input x, the decoder uses MPNN xy, parameterized by W xy, to pass messages from input x to labels y. At the equation level, this module is identical to MPNN xx except that it updates the i th label node's embedding h i using the embeddings of all the components of an input. That is, we update each h t i by using a weighted sum of all input embeddings {c T 1:S}, in which the weights represent how important an input component is to the i th label node and the weights are learned via attention. Messages are only passed from the encoder nodes to the decoder nodes, and not vice versa (i.e. encoder to decoder message passing is directed). DISPLAYFORM0 DISPLAYFORM1 The key advantage of input-to-label message passing with attention is that each label node can attend to different input nodes (e.g. different words in the sentence). At this point, the decoder can make an independent prediction for each label conditioned on x. However, in order to make more accurate predictions, we model interactions between the label nodes {h t 1:L} using message passing and update them accordingly. To do this we use a a third message passing module, MPNN yy. At the equation level, this layer is identical to MPNN xx except that it replaces the input embeddings with label embeddings. In other words, label embedding h t i is updated by a weighted combination through attention of all its neighbor label nodes {h DISPLAYFORM0 To update each label embedding h : DISPLAYFORM1 DISPLAYFORM2 If there exists a known G DEC graph, message m t i for node i is computed using its neighboring nodes j ∈ N (i), where the neighbors N (i) are defined by the graph. 
If there is no known G DEC graph, we assume a fully connected graph, which means N (i) = {j = i}.In our implementation, the label embeddings are updated by MPNN xy and MPNN xx for T time steps to produce {h DISPLAYFORM3 The last module of the decoder predicts each label DISPLAYFORM0 is the learned output vector for label i. The calculated vector of size L × 1 is then fed through an element-wise sigmoid function to produce probabilities of each label being positive: DISPLAYFORM1 In MPED networks we use binary the mean cross entropy on the individual label predictions to train the model. p(y i |{y j =i}, c T 1:S; W) is approximated in MPED networks by jointly representing {y 1:L} using message passing from {c T 1:S} and from the embeddings of all neighboring labels {y j∈N (i) }. Multi-head Attention In order to allow a particular node to attend to multiple other nodes (or multiple groups of nodes) at once, MPED uses multiple attention heads. Inspired by BID8, we use K independent attention heads for each W · matrix during the message computation, where each matrix column W ·,k j is of dimension d/K. The generated representations are concatenated (denoted by) and linearly transformed by matrix W z ∈ R d×d. Multi-head attention changes message passing function M atn, but update function U mlp stays the same. DISPLAYFORM0 DISPLAYFORM1 Graph Time Steps To learn more complex relations among nodes, we compute T time steps of embedding updates. This is essentially a stack of T MPNN layers. Matrices BID10. DISPLAYFORM2 Speed. In MPED models, the joint probability of labels isn't explicitly estimated using the chain rule. This enables making predictions in parallel and decreases test time drastically, especially when the number of labels is large. We model the joint probability implicitly using the MPED decoder, at the benefit of a substantial speedup. Time complexities of different types of models are compared in Handling dense label predictions. Motivated by the drawbacks of autoregressive models for MLC (Section 5.6), the proposed MPED model removes the dependencies on a chosen label ordering and beam search. This is particularly beneficial when the number of positive output labels is large (i.e. dense). MPED networks predict the output set of labels all at once, which is made possible by the fact that inference doesn't use a probabilistic chain, but there is still a representation of label dependencies via label to label attention. As an additional benefit, as noted by, it may be useful to maintain'soft' predictions for each label in MLC. This is a major drawback of the PCC models which make'hard' predictions of the positive labels, defaulting all other labels to 0. DISPLAYFORM0 Flexibility Many input or output types are instances where the relational structure is not made explicit, and must be inferred or assumed (e.g., text corpora, or MLC labels) BID9. MPED networks allow for greater flexibility of input structures (known structure such as sequence or graph, or unknown such as tabular), or output structures (e.g., known graph vs unknown structure).Interpretability. One advantage of MPED models is that interpretability is "built in" via neural attention. Specifically, we can visualize 3 aspects: input-to-input attention (input dependencies), input-to-label attention (input/label dependencies), and label-to-label attention (label dependencies). Structured Output Predictions The use of graph attention in MPED models is closely connected to the literature of structured output prediction for MLC. 
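Before turning to related work, the following sketch shows one illustrative way the three modules, the dot-product readout, and the binary cross-entropy objective described above could fit together. It reuses the AttentionMessagePassing class from the earlier sketch; the initialisation, the ordering of updates within a time step, and the shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MPEDSketch(nn.Module):
    """Illustrative end-to-end flow of MPNN_xx, MPNN_xy, MPNN_yy and the readout."""
    def __init__(self, d, n_labels, T=3):
        super().__init__()
        self.T = T
        self.mpnn_xx = AttentionMessagePassing(d)   # input -> input (encoder graph)
        self.mpnn_xy = AttentionMessagePassing(d)   # input -> label (directed)
        self.mpnn_yy = AttentionMessagePassing(d)   # label -> label (decoder graph)
        self.label_embed = nn.Parameter(torch.randn(n_labels, d) * 0.02)  # h^0_{1:L}
        self.out_vectors = nn.Parameter(torch.randn(n_labels, d) * 0.02)  # readout vectors

    def forward(self, c):                        # c: (B, S, d) embedded input components
        h = self.label_embed.unsqueeze(0).expand(c.size(0), -1, -1)
        for _ in range(self.T):
            c = self.mpnn_xx(c, c)               # update input nodes on G_ENC
        for _ in range(self.T):                  # decoder updates, one plausible ordering
            h = self.mpnn_xy(h, c)               # pass messages from inputs to labels
            h = self.mpnn_yy(h, h)               # pass messages between labels on G_DEC
        logits = (h * self.out_vectors).sum(-1)  # per-label dot-product readout, (B, L)
        return torch.sigmoid(logits)             # probability of each label being positive

def mped_loss(probs, targets):
    # mean binary cross entropy over all labels, as described in the text
    return F.binary_cross_entropy(probs, targets)
```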
BID12 used conditional random fields BID13 ) to model dependencies among labels and features for MLC by learning a distribution over pairs of labels to input features. In another research direction, recently proposed SPENs (structured prediction energy network BID14) and Deep Value Networks BID15 tackled MLC by optimizing different variants of structured loss formulations. In contrast to SPEN and related methods which use an iterative refinement of the output label predictions, our method is a simpler feed forward block to make predictions in one step, yet still models dependencies through attention mechanisms on embeddings. However, we plan to expand MPED models by adding a structured loss formulation. Graph Neural Networks (GNNs) Passing embedding messages from node to neighbor nodes connects to a large body of literature on graph neural networks BID9 ) and embedding models for structures BID16. The key idea is that instead of conducting probabilistic operations (e.g., product or re-normalization), the proposed models perform nonlinear function mappings in each step to learn feature representations of structured components. Neural message passing networks BID3, graph attention networks BID6 and neural relation models BID17 follow similar ideas to pass the embedding from node to neighbor nodes or neighbor edges. There have been many recent works extending the basic GNN framework to update nodes using various message passing, update, and readout functions BID18 BID19 BID20 BID21 BID3 BID17 BID22 BID23. We refer the readers to BID9 ) for a survey. However, none of these have used GNNs for MLC. In the appendix, we have added the details of training/hyperparameters (5.2), datasets (5.1), evaluation metrics (5.3), and explained how we selected baseline models from previous work (5.8). We explain our own MPED variations in 5.7 and previous work baselines in section 5.8.In short, we compare MPED to MPED Prior G DEC (by using a known label graph), MPED Edgeless G DEC (by removing label-to-label message passing), MPED Autoregressive (by predicting labels sequentially), MP BR (binary relevance MLP output used on the mean of MPNN xx embeddings), and the baselines reported in related works. We compare our models on seven real world datasets, which vary in the number of samples, number of labels, input type (sequential, tabular, graph), and output type (unknown, known label graph), as summarized in TAB8.Across all datasets, MPED outperforms or achieves similar as the baseline models. Most importantly, we show that autoregressive models are not crucial in MLC for most metrics, and non-autoregressive models in a significant speedup at test time. Table 2 shows the performance of different models across the 7 datasets. Significance across models is shown in Appendix TAB9. For subset accuracy (ACC), autoregressive models perform the best, but at a small margin of increase. However, autoregressive models that predict only positive labels are targeted at maximizing subset accuracy, but they perform poorly on other metrics. For all other metrics, autoregressive models are not essential. One important observation is that for most datasets, MPED outperforms the autoregressive models in both miF1 (frequent labels) and more importantly, maF1 (rare labels). Since maF1 favors models which can predict the rare labels, this shows that autoregressive models with beam search often make the wrong predictions on the rare labels (which are ordered last in the sequence during training). 
MPED is a solid choice across all metrics as it comes closest to the subset accuracy of the autoregressive models, but also performs well on the other metrics. While MPED does not explicitly model label dependencies as autoregressive or structured prediction models do, it seems as though the attention weights do learn some dependencies among labels (Visualizations, TAB8). This is indicated by the fact that MPED, which uses label-to-label attention, mostly outperforms the variants which don't, indicating that it is learning label dependencies. Table 2 shows results for models with 3 time steps, but a comparison of time steps is shown in Figure 2. Speed. Table 2 shows the per-epoch train and test times for each model. All models are trained and tested on the same GPU using a batch size of 32. At test time, since the autoregressive model cannot be parallelized, MPED and other non-autoregressive models are significantly faster. During training, the autoregressive model can be parallelized because the true labels are fed as the previous label. Since the autoregressive models only predict the ρ positive labels, they can be faster at training time, whereas the MPED model predicts the probability for all labels. MPED results in a mean of 1.7x and 5.0x training and testing speedups, respectively, over Seq2Seq autoregressive models. Table 2: Results. Across all 7 datasets, MPED produces similar or better average metric scores compared to baseline models. MPED results in a mean of 1.7x and 5.0x training and testing speedups, respectively, over the previous state-of-the-art probabilistic MLC method, RNN Seq2Seq. Speedups over the RNN Seq2Seq model are shown in minutes per epoch in parentheses for the MPED model. Bold numbers show the best performing method(s). Interpretability. We present visualizations of the input-to-label and label-to-label attention weights (averaged across the 4 attention heads) in the Appendix. In the visualizations, we show the positive labels only, and the darker lines show higher attention weights to the corresponding label or word. The attention weights clearly learn certain relationships between input-label pairs as well as label-label pairs, which is all done in an unsupervised manner. In future work, we plan to add a structured prediction loss function which will likely improve the attention mechanisms and the model's ability to estimate the joint probability. In this work we present Message Passing Encoder-Decoder (MPED) Networks which achieve a significant speedup at close to the same performance as autoregressive models for MLC. We open a new avenue of using neural message passing to model label dependencies in MLC tasks. In addition, we show that our method is able to handle various input data types (sequence, tabular, graph), as well as various output label structures (known vs unknown). One of our future extensions is to adapt the current model to predict more dynamic outputs. We test our method against baseline methods on seven different multi-label classification datasets (BID1; BID24). The datasets are summarized in TAB8, and include Reuters-21578, Bibtex BID30, and SIDER, which contains side effects of drug molecules. As shown in the table, each dataset has a varying number of samples, number of labels, positive labels per sample, and samples per label. For BibTex and Delicious, we use 10% of the provided training set for validation.
For the TFBS dataset, we use 1 layer of convolution at the first layer to extract "words" from the DNA characters (A,C,G,T), as commonly done in deep learning models for DNA.For datasets which have sequential ordering of the input components (Reuters, RCV1), we add a positional encoding to the word embedding as used in BID8 (sine and cosine functions of different frequencies) to encode the location of each word in the sentence. For datasets with no ordering or graph stucture (Bibtex, Delicious, Bookmarks, which use bag-of-word input representations) we do not use positional encodings. For inputs with an explicit graph representation (SIDER), we use the known graph structer. We validate our model on seven MLC datasets. These datasets cover a wide spectrum of input data types, including: raw English text (sequential form), bag-of-words (tabular form), and drug molecules (graph form).For all 6 datasets except SIDER, we use the same MPED model with T =3 time steps, d = 512, and K=4 attention heads. Since SIDER is significantly smaller, we use T =1 time step, d = 64, and K=4 attention heads. We trained our models on an NVIDIA TITAN X Pascal with a batch size of 32. We used Adam BID31 with betas= (0.9, 0.999), eps=1e-08, and a learning rate of 0.0002 for each dataset. We used dropout of p = 0.2 for all models. The MPED models also use layer normalization BID32 around each of the attention and feedforward layers. The non-autoregressive models are trained with binary cross-entropy on each label and the autoregressive models are trained with cross entropy across all possible labels at each position. Multi-label classification methods can be evaluated with many different metrics which each evaluate different strengths or weaknesses. We use the same 5 evaluation metrics from.All of our autoregressive models predict only the positive labels before outputting a stop signal. This is a special case of PCC models (explained as PCC+ in section 5.4), which have been shown to outperform the binary prediction of each label in terms of performance and speed. These models use beam search at inference time with a beam size of 5. For the non-autoregressive models, to convert the labels to {0, 1} we chose the best threshold on the validation set from the same set of thresholds used in BID14.Example-based measures are defined by comparing the target vector y to the prediction vectorŷ. Subset Accuracy (ACC) requires an exact match of the predicted labels and the true labels: ACC(y,ŷ) = I[y =ŷ].Hamming Accuracy (HA) evaluates how many labels are correctly predicted inŷ: DISPLAYFORM0 Example-based F1 (ebF1) measures the ratio of correctly predicted labels to the sum of the total true and predicted labels: DISPLAYFORM1 Label-based measures treat each label yj as a separate two-class prediction problem, and compute the number of true positives (tpj), false positives (f pj), and false negatives (f nj) for a label. Macro-averaged F1 (maF1) measures the label-based F1 averaged over all labels: DISPLAYFORM2 measures the label-based F1 averaged over each sample: DISPLAYFORM3. High maF1 scores usually indicate high performance on less frequent labels. High miF1 scores usually indicate high performance on more frequent labels. MLC has a rich history in text (McCallum; BID34, images BID0 BID35, bioinformatics BID0 BID35, and many other domains. MLC methods can roughly be broken into several groups, which are explained as follows. 
Label powerset models (LP) BID36 BID37 classify each input into one label combination from the set of all possible combinations Y = {{1}, {2},..., {1, 2, ..., L}}. LP explicitly models the joint distribution by predicting the one subset of all positive labels. Since the label set Y grows exponentially in the number of total labels (2 L), classifying each possible label set is intractable for a modest L. In addition, even in small-L tasks, LP suffers from the "subset scarcity problem" where only a small amount of the label subsets are seen during training, leading to bad generalization. Binary relevance (BR) methods predict each label separately with a logistic regression classifier for each label BID38 BID39. The naïve approach to BR prediction is to predict all labels independently of one another, assuming no dependencies among labels. That is, BR uses the following conditional probability parameterized by learned weights W: p(y|x) = ∏ i=1..L p(y i | x; W) (Eq. 21). Probabilistic classifier chain (PCC) methods BID40 BID1 are autoregressive models that estimate the true joint probability of output labels given the input by using the chain rule, predicting one label at a time: p(y|x) = ∏ i=1..L p(y i | y 1, ..., y i−1, x; W) (Eq. 22). Two issues with PCC models are that inference is very slow if L is large, and the errors propagate as L increases BID41. To mitigate the problems with both LP and PCC methods, one solution is to only predict the true labels in the LP subset. In other words, only the positive labels (a total of ρ for a particular sample) are predicted and all other labels are ignored, which we call PCC+. Similar to PCC, the joint probability of PCC+ can be computed as a product of conditional probabilities, but unlike PCC, only ρ < L terms are predicted as positive: p(y|x) = ∏ i=1..ρ p(y i | y 1, ..., y i−1, x; W) (Eq. 23). This can be beneficial when the number of possible labels L is large, reducing the total number of prediction steps. However, in both PCC and PCC+, inference is done using beam search, which is a costly dynamic programming step to find the optimal prediction. MPED methods approximate the following factored formulation, where N(y i) denotes the neighboring nodes of y i: p(y|x) ≈ ∏ i=1..L p(y i | y N(y i), x; W). In machine translation (MT), sequence-to-sequence (Seq2Seq) models have proven to be the superior method, where an encoder RNN reads the source language sentence into an encoder hidden state, and a decoder RNN translates the hidden state into a target sentence, predicting each word autoregressively BID42. BID7 improved this model by introducing "neural attention", which allows the decoder RNN to "attend" to every encoder word at each step of the autoregressive translation. Figure 2: Average across metrics for T = 1, T = 2, and T = 3 G DEC time steps. In these experiments, the encoder G ENC is processed with a fixed T = 3 time steps, and the decoder time steps are varied. We do not compare time steps for the SIDER dataset since it is too small, and we only evaluate it using T = 1. Recent work showed that, across several metrics, state-of-the-art MLC could be achieved by using a recurrent neural network (RNN) based encoder-to-decoder framework for Equation 23 (PCC+). They use a Seq2Seq RNN model (Seq2Seq Autoregressive) which uses one RNN to encode x, and a second RNN to predict each positive label sequentially, until it predicts a 'stop' signal. This type of model seeks to maximize the 'subset accuracy', i.e. to correctly predict every label as its exact 0/1 value. BID8 eliminated the need for the recurrent network in MT by introducing the Transformer.
Instead of using an RNN to model dependencies, the Transformer explicitly models pairwise dependencies among all of the features by using attention BID7 BID43 between signals. This speeds up training time because RNNs can't be fully parallelized but, the transformer still uses an autoregressive decoder. Autoregressive models have been proven effective for machine translation and MLC BID42 BID7. However, predictions must be made sequentially, eliminating parallelization. Also, beam search is typically used at test time to find optimal predictions. But beam search is limited by the time cost of large beams sizes, making it difficult to optimally predict many output labels BID44.In addition to speed constraints, beam search for autoregressive inference introduces a second drawback: initial wrong predictions will propagate when using a modest beam size (e.g. most models use a beam size of 5). This can lead to significant decreases in performance when the number of positive labels is large. For example, the Delicious dataset has a median of 19 positive labels per sample, and it can be very difficult to correctly predict the labels at the end of the prediction chain. Autoregressive models are well suited for machine translation because these models mimic the sequential decoding process of real translation. However, for MLC, the output labels have no intrinsic ordering. While the joint probability of the output labels is independent of the label ordering via autoregressive based inference, the chosen ordering can make a difference in practice BID45. Some ordering of labels must be used during training, and this chosen ordering can lead to unstable predictions at test time. Our non-autoregressive version connects to BID46 who removed the autoregressive decoder in MT with the Non-Autoregressive Transformer. In this model, the encoder makes a proxy prediction, called "fertilities", which are used by the decoder to predict all translated words at once. The difference between their model and ours is that we have a constant label at each position, so we don't need to marginalize over all possible labels at each position. In the full MPED model, we use 3 encoder time steps and 3 decoder time steps with node to node attention in both the encoder and decoder graphs, and K=4 attention heads.
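To illustrate the structural difference discussed above, a PCC-style autoregressive decoder must emit labels one at a time (typically refined with beam search), whereas BR-style and MPED-style predictors score all L labels in a single forward pass. The toy sketch below makes this explicit; step_fn, stop_id, and the greedy (beam size 1) decoding loop are illustrative placeholders, not any particular model's interface.

```python
import torch

@torch.no_grad()
def predict_parallel(model, x, threshold=0.5):
    """BR / MPED-style inference: one forward pass scores every label at once."""
    return (model(x) > threshold).int()          # (B, L) from a single call

@torch.no_grad()
def predict_autoregressive(step_fn, x, max_labels, stop_id):
    """PCC+-style greedy inference: positive labels are emitted one at a time until a
    stop symbol, so the cost grows with the number of positive labels and the steps
    cannot be parallelised."""
    predicted = []
    for _ in range(max_labels):
        label = step_fn(x, predicted)            # one forward pass per emitted label
        if label == stop_id:
            break
        predicted.append(label)
    return predicted
```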
We propose Message Passing Encoder-Decode networks for a fast and accurate way of modelling label dependencies for multi-label classification.
589
scitldr
Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples. Previous few-shot learning works have mainly focused on classification and reinforcement learning. In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks. Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions. This enables a few labeled samples to approximate the function. We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task. We show that our model outperforms the current state of the art meta-learning methods in various regression tasks. Regression deals with the problem of learning a model relating a set of inputs to a set of outputs. The learned model can be thought as function y = F (x) that gives a prediction y ∈ R dy given input x ∈ R dx where d y and d x are dimensions of the output and input respectively. Typically, a regression model is trained on a large number of data points to be able to provide accurate predictions for new inputs. Recently, there have been a surge in popularity on few-shot learning methods (; ;). Few-shot learning methods require only a few examples from each task to be able to quickly adapt and perform well on a new task. These few-shot learning methods in essence are learning to learn i.e. the model learns to quickly adapt itself to new tasks rather than just learning to give the correct prediction for a particular input sample. In this work, we propose a few shot learning model that targets few-shot regression tasks. Our model takes inspiration from the idea that the degree of freedom of F (x) can be significantly reduced when it is modeled a linear combination of sparsifying basis functions. Thus, with a few samples, we can estimate F (x). The two primary components of our model are (i) the Basis Function Learner network which encodes the basis functions for the distribution of tasks, and (ii) the Weights Generator network which produces the appropriate weights given a few labelled samples. We evaluate our model on the sinusoidal regression tasks and compare the performance to several meta-learning algorithms. We also evaluate our model on other regression tasks, namely the 1D heat equation tasks modeled by partial differential equations and the 2D Gaussian distribution tasks. Furthermore, we evaluate our model on image completion as a 2D regression problem on the MNIST and CelebA data-sets, using only a small subset of known pixel values. To summarize, our contributions for this paper are: • We propose to address few shot regression by linear combination of a set of sparsifying basis functions. • We propose to learn these (continuous) sparsifying basis functions from data. Traditionally, basis functions are hand-crafted (e.g. Fourier basis). • We perform experiments to evaluate our approach using sinusoidal, heat equation, 2D Gaussian tasks and MNIST/CelebA image completion tasks. An overview of our model as in meta-training. Our system learns the basis functions Φ that can in sparse representation for any task drawn from a certain task distribution. The basis functions are encoded in the Basis Function Learner network. 
The system produces predictions for a regression task by generating a weight vector, w for a novel task, using the Weights Generator network. The prediction is obtained by taking a dot-product between the weight vector and learned basis functions. Regression problems has long been a topic of study in the machine learning and signal processing community . Though similar to classification, regression estimates one or multiple scalar values and is usually thought of as a single task problem. A single model is trained to only perform regression on only one task. Our model instead reformulates the regression problem as a few-shot learning problem, allowing for our model to be able to perform regressions of tasks sampled from the same task distribution. The success achieved by deep neural networks heavily relies on a large amount of data, especially labelled ones. As labelling data is time-consuming and labor-intensive, learning from limited labelled data is drawing more and more attention. A prominent approach is meta learning. Meta learning, also referred as learning to learn, aims at learning an adaptive model across different tasks. Meta learning has shown potential in style transfer , visual navigation , etc. Meta learning has also been applied to few-shot learning problems, which concerns models that can learn from prior experiences to adapt to new tasks. proposed the one-shot classification problem and introduced the Omniglot data set as a few-shot classification data set, similar to MNIST for traditional classification. Since then, there has been a surge of meta learning methods striving to solve few-shot problems. Some meta learning approaches learn a similarity metric (; ;) between new test examples with few-shot training samples to make the prediction. The similarity metric used here can be Euclidean distance, cosine similarity or more expressive metric learned by relation networks . On the other hand, optimization-based approaches learn how to optimize the model directly. learned an optimal initialization of models for different tasks in the same distribution, which is able to achieve good performance by simple gradient descent. learned how to perform gradient descent in the latent space to adapt the model parameters more effectively. employed an LSTM to learn an optimization algorithm. Generative models are also proposed to overcome the limitations ed from few-shot setting; ). Few-shot regression tasks are used among various few-shot leaning methods (; ;). In most existing works, these experiment usually does not extend beyond the sinusoidal and linear regression tasks. A prominent family of algorithms that tackles a similar problem as few-shot regression is Neural Processes (b; a;). Neural Processes algorithms model the distributions of the outputs of regression functions using Deep Neural Networks given pairs of input-output pairs. Similar to Variational Autoencoders , Neural Processes employ a Bayesian approach in modelling the output distribution of regression function using an encoder-decoder architecture. Our model on the other hand employs a deterministic approach where we directly learn a set of basis functions to model the output distribution. Our model also does not produce any latent vectors but instead produces predictions via a dot product between the learned basis functions and weight vector. Our experiment show that our model (based on sparse linear combination of basis functions) compares favorably to Neural Processes (based on conditional stochastic processes). 
Our proposed sparse linear representation framework for few-shot regression makes the few-shot regression problem appear similar to another research problem called dictionary learning (DL), which focuses on learning dictionaries of atoms that provide efficient representations of some class of signals. However, the differences between DL and our problem are significant: our problems are continuous rather than discrete as in DL, and we only observe a very small percentage of samples. A detailed comparison with DL is discussed in the appendix. 3 PROPOSED METHOD. We first provide the problem definition for few-shot regression. We aim at developing a model that can rapidly regress to a variety of equations and functions based on only a few training samples. We assume that each equation we would like to regress is a task T i sampled from a distribution p(T). We train our model on a set of training tasks, S train, and evaluate it on a separate set of testing tasks, S test. Unlike few-shot classification tasks, the task distribution p(T) is continuous for regression tasks in general. Each regression task is comprised of training samples D train and validation samples D val, for both the training set S train and testing set S test; D train is comprised of K training samples and labels, D train = {(x k t, y k t)|k = 1...K}. Here we discuss our main idea. We would like to model the unknown function y = F (x) given only D train = {(x k t, y k t)|k = 1...K}. With small K, e.g. K = 10, this is an ill-posed task, as F (x) can take any form. As stated before, we assume that each function we would like to regress is a task T i drawn from an unknown distribution p(T). To simplify discussion, we assume scalar input and scalar output. Our idea is to learn a sparse representation of the unknown function F (x), so that a few samples {(x k t, y k t)|k = 1...K} can provide adequate information to approximate the entire F (x). Specifically, we model the unknown function F (x) as a linear combination of a set of basis functions {φ i (x)}: F (x) = Σ i=1..M w i φ i (x) (Eq. 1). Many handcrafted basis functions have been developed to expand F (x). For example, the Maclaurin series expansion (Taylor series expansion at x = 0) uses {φ i (x)} = {1, x, x 2, x 3, ...}: F (x) = w 1 + w 2 x + w 3 x 2 + w 4 x 3 + ... (Eq. 2). If F (x) is a polynomial, (2) can be a sparse representation, i.e. only a few non-zero, significant w i, and most w i are zero or near zero. However, if F (x) is a sinusoid, it would require many terms of (2) to represent F (x) adequately. In that case, M is large and M >> K. Given only K samples {(x k t, y k t)|k = 1...K}, it is not adequate to determine {w i} and model the unknown function. On the other hand, if we use the Fourier basis instead, i.e., {φ i (x)} = {1, sin(x), sin(2x),..., cos(x), cos(2x),...}, clearly, we can obtain a sparse representation: we can adequately approximate the sinusoid with only a few terms. Under the Fourier basis, there are only a few non-zero significant weights w i, and K samples are sufficient to estimate the significant w i and approximate the function. Essentially, with a sparsifying basis {φ i (x)}, the degree of freedom of F (x) can be significantly reduced when it is modeled using (1), so that K samples can well estimate F (x). Our approach is to use the set of training tasks drawn from p(T) to learn {φ i (x)} that result in sparse representations for any task drawn from p(T). The set of {φ i (x)} is encoded in the Basis Function Learner Network that takes in x and outputs Φ(x) = [φ 1 (x), φ 2 (x), ..., φ M (x)] T.
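As a concrete illustration of this linear-combination model, the following PyTorch-style sketch shows the two networks and the dot-product prediction. The number of basis functions M, the hidden sizes, and especially the mean-pooling weights generator are illustrative assumptions (the Weights Generator described later uses self-attention blocks); this is a sketch of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BasisFunctionLearner(nn.Module):
    """Maps an input x to the vector of learned basis values Phi(x)."""
    def __init__(self, d_in=1, hidden=40, M=40):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, M))
    def forward(self, x):
        return self.net(x)                       # (N, M)

class WeightsGenerator(nn.Module):
    """Maps the K labelled support samples of a task to a single weight vector w.
    A permutation-invariant mean pooling stands in for the attention-based generator."""
    def __init__(self, d_in=2, hidden=40, M=40):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Linear(hidden, M)
    def forward(self, x_support, y_support):
        pairs = torch.cat([x_support, y_support], dim=-1)   # (K, d_x + d_y)
        return self.rho(self.phi(pairs).mean(dim=0))        # task-specific w, shape (M,)

def predict(basis, w, x_query):
    # y_hat(x) = w^T Phi(x), evaluated for every query input
    return basis(x_query) @ w
```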
In our framework, Φ(x) is the same for any task drawn from p(T), as it encodes the set of {φ_i(x)} that can sparsely represent any task from p(T). We further learn a Weights Generator Network to map the K training samples of a novel task to a constant vector w = [w_1, w_2, ..., w_M]^T. The unknown function is modeled as w^T Φ(x). An overview of our model is depicted in Figure 1. Given a regression task T with its K training samples, the Weights Generator produces w, and the model is then applied to make predictions for any input x. During meta-training, the validation set D_val = {(x_p^n, y_p^n) | n = 1...N} for a task T is given. The prediction is produced by taking a dot product between the task-specific weights vector w and the set of learned basis functions: y_pred^n = w^T Φ(x_p^n). To train our model, we design a loss function L that consists of three terms. The first term is a mean-squared error between the validation set labels y_p^n ∈ D_val and the predicted y_pred^n. We also add two penalty terms on the weights vector w generated for each task. The first penalty term is on the L1 norm of the generated weight vectors. This is to encourage the learned weight vectors to be sparse in order to approximate the unknown function with a few significant basis functions. The second penalty term is on the L2 norm of the generated weights vector. This is used to reduce the variance of the estimated weights, as commonly used in regression. The full loss function L is the sum of these terms: L = (1/N) Σ_n (y_p^n − y_pred^n)^2 + λ_1 ||w||_1 + λ_2 ||w||_2, where λ_1 and λ_2 represent the weightage of the L1 and L2 terms respectively. Note that our loss function for meta learning turns out to be similar to that of Elastic Net Regression with both L1 and L2 regularization terms. However, the difference is significant: instead of focusing on a single regression task as in the Elastic Net, we use this loss function to learn (i) the parameter θ for the Basis Function Learner network, which encodes the sparsifying basis functions for any task drawn from a task distribution, and (ii) the parameter ψ for the Weights Generator network, which produces the weights for any novel task drawn from the same task distribution.

Table 1:
               1.13 ± 0.18    0.77 ± 0.11    0.48 ± 0.08
Meta-SGD       0.90 ± 0.16    0.53 ± 0.09    0.31 ± 0.05
EMAML (small)  0.885 ± 0.117  0.615 ± 0.091  0.371 ± 0.048
EMAML (large)  0.783 ± 0.101  0.537 ± 0.079  0.307 ± 0.040
BMAML (small)  0.927 ± 0.116  0.735 ± 0.104  0.459 ± 0.058
BMAML (large)  0.878 ± 0.108  0.675 ± 0.094  0.442 ± 0.055
NP (b)         0.640 ± 0.205  0.561 ± 0.234  0.421 ± 0.088
CNP (a)        0.910 ± 0.234  0.630 ± 0.222  0.393 ± 0.145
ANP            0.488 ± 0.188  0.216 ± 0.082  0.095 ± 0.068
Ours (small)   0.363 ± 0.018  0.169 ± 0.007  0.076 ± 0.004
Ours (large)   0.199 ± 0.010  0.062 ± 0.003  0.027 ± 0.002

In this section we describe the experiments we ran and introduce the types of regression tasks used to evaluate our method. For all of our experiments, we set the learning rate to 0.001 and use the Adam optimizer to perform stochastic gradient descent on our model. We implement all our models using the TensorFlow library. In the following subsections, we describe each of the experiments in more detail. We include the experiments on the 1D Heat Equation and 2D Gaussian regression tasks in the appendix. For all 1D regression tasks, the Basis Function Learner consists of two fully connected layers with 40 hidden units. For the loss function we set λ_1 = 0.001 and λ_2 = 0.0001. Sinusoidal Regression.
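Before turning to the individual experiments, the following is a minimal PyTorch-style sketch of the setup just described, with λ1 = 0.001 and λ2 = 0.0001 as stated. The Weights Generator is reduced to a mean-pooled MLP placeholder here, since its attention-based architecture is detailed in the appendix; the pooling choice and the placeholder's hidden sizes are assumptions of this sketch.

```python
# Minimal sketch (PyTorch) of the objective described above. The Weights
# Generator is reduced to a mean-pooled MLP placeholder; the real model uses
# self-attention blocks (see appendix). lambda_1 = 1e-3 and lambda_2 = 1e-4 as
# stated in the text; the pooling choice and placeholder sizes are assumptions.
import torch
import torch.nn as nn

M = 40                                          # number of basis functions

basis_learner = nn.Sequential(                  # theta: shared across all tasks
    nn.Linear(1, 40), nn.ReLU(),
    nn.Linear(40, M),
)
weights_generator = nn.Sequential(              # psi: placeholder for the attention model
    nn.Linear(M + 1, 80), nn.ReLU(),
    nn.Linear(80, M),
)

def generate_task_weights(x_train, y_train):
    """Map the K (Phi(x), y) pairs of one task to a single weight vector w."""
    phi = basis_learner(x_train)                               # (K, M)
    per_sample = weights_generator(torch.cat([phi, y_train], dim=-1))
    return per_sample.mean(dim=0)                              # (M,)

def task_loss(x_train, y_train, x_val, y_val, lam1=1e-3, lam2=1e-4):
    w = generate_task_weights(x_train, y_train)
    y_pred = basis_learner(x_val) @ w                          # prediction = w . Phi(x)
    mse = torch.mean((y_pred - y_val) ** 2)
    return mse + lam1 * w.abs().sum() + lam2 * w.norm(p=2)

# One 10-shot task with 10 validation points:
K, N = 10, 10
x_tr, x_va = torch.rand(K, 1) * 10 - 5, torch.rand(N, 1) * 10 - 5
y_tr, y_va = torch.sin(x_tr), torch.sin(x_va).squeeze(-1)
loss = task_loss(x_tr, y_tr, x_va, y_va)
loss.backward()                                 # gradients flow into both theta and psi
```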
We first evaluate our model on the sinusoidal regression task which is a few-shot regression task that is widely used by other few-shot learning methods as a few-shot learning task to evaluate their methods on (; ;). The target function is defined as y(x) = Asin(ωx + b), where amplitude A, phase b, frequency ω are the parameters of the function. We follow the setup exactly as in . We sample the each parameters uniformly from range. We train our model on tasks of batch size 4 and 60000 iterations for 5,10 and, 20 shot cases, where each training task contains K ∈ {5, 10, 20} training samples and 10 validation samples. We compare our method against recent few-shot learning methods including Meta-SGD , MAML , EMAML,BMAML and the Neural Processes family of methods including Neural Processes (b) Conditional Neural Processes (a) and Attentive Neural Processes . We use the officially released code for these three methods 1. We show the in Table 1. We provide two variants our model in this experimental setup. The two models differ only in the size of the Weights Generator. For the "small" model the Weights Generator consist of B = 1 self-attention blocks followed by a fully connected layer of 40 hidden units. The self-attention block consists of three parallel weight projections of 40 dimensions followed by fully connected layers of 80 and 40 hidden units respectively. The "large" model consists of B = 3 self-attention blocks also followed by a fully connected layer of 40 hidden units. Each self-attention block has weight projections of 64 dimensions followed by fully connected layers of 128 and 64 hidden units respectively. Both MAML and Meta-SGD uses an architecture of 2 fully connected layers with 40 hidden units which is similar to the architecture of the Basis Learner network, though both Meta-SGD and MAML both have additional optimization for individual tasks. The Neural Process family of methods uses encoder archtecture of 4 fully connected layers with 128 hidden units and decoder architecture of 2 fully connected layers of 128 hidden units respectively which is more similar in architecture our larger model. Similarly, we also compare our methods against two variants of EMAML and BMAML. The "small" model consist of 2 fully connected layers with 40 hidden units each while the "large" model consists of 5 fully connected layers with 40 hidden units each. This is to ensure fair comparison as both BMAML and EMAML lack a separate network to generate weight vectors but are ensemble methods that aggregate from M p number of model instances. We set the number of model instances in BMAML and EMAML to 10. Alternative Sinusoidal Regression We also evaluate our method on another version of the sinusoidal task as introduced by. The range of A remain the same while the range of b is increased to [0, 2π] and the range of ω is increased to [0.5, 2.0]. An extra noise term, is also added the function y(x). For noise, we sample it from distribution N ∼ (0, (0.01A) 2 ). We also fix the total number of our tasks used during training to 1000 as in . For this experimental setup we also include an ensemble version of our model where we train 10 separate instance of our model on the same 1000 tasks and aggregate their by taking a mean of the predictions. We evaluate our model for both 10 shot and 5 shot cases and show the mean-squared error in Table 2. For this experimental setup, we calculate the mean-squared error from 10 randomly points from 1000 advanced sinusoidal tasks. 
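The sketch below shows how the two sinusoidal task distributions described above can be sampled. The exact sampling ranges for the standard setup are not restated in the text, so A_RANGE, B_RANGE, W_RANGE, and X_RANGE below are placeholders; the alternative setup follows the stated ranges and noise model.

```python
# Sketch of the sinusoid task samplers described above. The standard setup's
# ranges are placeholders; the alternative setup uses b in [0, 2*pi],
# omega in [0.5, 2.0], and additive noise ~ N(0, (0.01*A)^2) as described.
import numpy as np

rng = np.random.default_rng(0)
A_RANGE, B_RANGE, W_RANGE = (0.1, 5.0), (0.0, np.pi), (0.8, 1.2)   # placeholders
X_RANGE = (-5.0, 5.0)                                              # placeholder

def sample_task(K=10, N=10, alternative=False):
    """Return (D_train, D_val) for one sinusoid task y = A*sin(w*x + b)."""
    A = rng.uniform(*A_RANGE)
    if alternative:
        b = rng.uniform(0.0, 2 * np.pi)
        w = rng.uniform(0.5, 2.0)
    else:
        b = rng.uniform(*B_RANGE)
        w = rng.uniform(*W_RANGE)

    def f(x):
        y = A * np.sin(w * x + b)
        if alternative:                        # noise with std = 0.01 * A
            y = y + rng.normal(0.0, 0.01 * A, size=x.shape)
        return y

    x_tr = rng.uniform(*X_RANGE, size=(K, 1))
    x_va = rng.uniform(*X_RANGE, size=(N, 1))
    return (x_tr, f(x_tr)), (x_va, f(x_va))

# e.g. a meta-batch of 4 tasks for the 10-shot case
batch = [sample_task(K=10, N=10) for _ in range(4)]
```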
Our show that our method outperforms all recent few-shot regression methods in sinusoidal tasks. We also tested our method on more challenging image data, as done in (a; b;). We use MNIST and CelebA datasets here for qualitative and quantitative comparison. Each image can be regarded as a continuous function f: R 2 → R dy, where d y = 1 if the image is gray-scale or or d y = 3 if it is RGB. The input x ∈ R 2 to f is the normalized coordinates of pixels and the output y ∈ R dy is the normalized pixel value. The size of the images is 28 × 28 in MNIST and rescaled to 32 × 32 in CelebA. During meta-training, we randomly sample K points from 784 pixels in one image as D train and another K points as D val to form a regression task. In the meta-testing stage, the MSE is evaluated on 784−K pixels. 60,000 images are used for meta-training and 10,000 for meta-testing for MNIST(CelebA) dataset. We compare our methods with NP family: CNP (a), NP (b) and ANP for K = 50 and K = 100. Deeper network structure is adopted due to the complexity of regression on image data. Namely, we use 5 fully connected layers with 128 hidden units in Basis Function Learner and 3 attention blocks in Weight Generator for our method. The encoders and decoders in NP family are all MLPs including 4 fully connected layers with 128 hidden units. Thus, the comparison is fair in terms of network capacity. All the models are trained for 500 epochs with batch size 80. The MSE on 10,000 tasks from meta-testing set is reported with 95% confidence interval, shown in Table 3. The top are highlighted. It can be observed that our method outperforms two of three NP methods and achieves MSE very close to most recent ANP. The outputs of regression on CelebA image data are high-dimension predictions, which demonstrates the effectiveness of our method in such challenging tasks. Note that ANP significantly improves upon NP and CNP using cross-attention, which can potentially be applied to our method as well. In this subsection we provide some deeper analysis on the basis functions that are learned by our method. In particular, we provide some further evidence to our claim that our method learns a set of sparsifying basis functions that correspond to the regression tasks that we would like to model. To demonstrate the sparsity of basis functions, we take only the S largest weights in terms of |w| and their corresponding basis functions and illustrate the predicted regression function with the combination of only the S weights and basis functions. We conduct this experiment on both the sinusoidal regression task and the more difficult image completion task and show these S-weights predictions in Figures 3 and 2b respectively. The figures illustrate that our method is able to produce a good prediction of the regression function with only a fraction of the full set learned basis function (40 for the sinusoidal task, 128 for the MNIST image completion task). This demonstrates the sparsity of Φ(x) as most of the prediction is carried out by just a small number of basis functions. This also demonstrates that our method is able to force most of the information of F (x) to be contained in a few terms. Therefore, using K samples to estimate the weights of these few important terms could achieve a good approximation of F (x). In this subsection we detail some ablation studies on our model to test the validity of certain design choices of our model. 
In particular, we focus on the effects of the addition of self-attention operations in the Weights Generator and also the effects of using different penalty terms in our loss function. To test the effects of adding the self-attention operations to our model, we conduct a simple experiment where we replace the self-attention operations in the self-attention block with just a single fully connected layer of the same dimensions as the self-attention weight projection. Essentially, this reduces the Weights Generator to just a series of fully connected layers with residual connections and layer normalization. We compare the simpler model's performance on the sinusoidal regression task as specified in Table 1 with our original model and show the results in Table 4. The results show that adding the self-attention operations does improve our method's performance on the 1D sinusoidal regression task. We also conducted experiments to test the effects of the different penalty terms on the generated weights vector. In this ablation study, we compared our models trained using different variants of the loss function. Similar to the previous study, we evaluate them on their performance on the sinusoidal regression task as specified in Table 1. The variants we tested are: (i) a loss function with only the L1-norm penalty term; (ii) a loss function with only the L2-norm penalty term; and (iii) a loss function with both L1 and L2-norm penalty terms. To demonstrate the sparsity of the weight vectors of each variant, we also show a breakdown of the magnitude of the learned weight vectors over 100 sinusoidal tasks. We group the weights into three groups: |w| less than 0.02 to indicate weights that are near zero, |w| between 0.02 and 1, and weights with magnitude more than 1. We show the results of the different variants in Table 5. We also present histograms of the magnitude of the learned weight vectors in Figure 4. The results show that the combination of both L1 and L2 penalty terms ultimately gives the best performance for the sinusoidal regression task. In terms of sparsity, the model trained with only the L1 penalty term gives the highest percentage of sparse weights, though we found that the model with both L1 and L2 terms gives better performance while still maintaining a relatively high percentage of near-zero weights. We propose a few-shot meta learning system that focuses exclusively on regression tasks. Our model is based on the idea of linear representation of basis functions. We design a Basis Function Learner network to encode the basis functions for the entire task distribution. We also design a Weights Generator network to generate the weights from the K training samples of a novel task drawn from the same task distribution. We show that our model has competitive performance on various few-shot regression tasks. In this section we illustrate all of the individual non-zero basis functions learned by our model for the sinusoidal task. These functions are shown in Figure 5. Note that out of the 40 basis functions, only 22 of the learned basis functions are non-zero, further demonstrating that our method indeed forces the model to learn a set of sparse functions to represent the tasks.
Furthermore, it can be seen that the learned basis functions all correspond to different "components" of the sinusoidal function: most of the learned functions seem to represent possible peaks (or troughs, if multiplied by a negative weight) at various regions of the input range, whereas the top four basis functions seem to model the more complicated periodic nature of the sinusoidal functions. Adding on to the experiments in Section 4.3, we also illustrate what happens when we do the exact opposite. We take the prediction using the full set of weight vectors/basis functions and study the effect on the prediction when we remove certain basis functions from the prediction. Similar to the previous experiment, we remove the basis functions in order of magnitude, starting with the basis function with the largest corresponding |w|. We conduct this experiment on the sinusoidal regression task and illustrate the results in Figure 6. Similarly, this study also demonstrates the importance of certain basis functions, as removing them causes the prediction to change drastically. In particular, notice that for the sinusoidal task, removing just 4 of the most important basis functions resulted in a less accurate prediction than using just 10 of the most important basis functions. Here we provide more details on the architecture of the Weights Generator Network. As mentioned previously in Section 3.3, the Weights Generator Network consists of a series of self-attention blocks followed by a final fully connected layer. We define a self-attention block as follows: an attention block consists of a self-attention operation on the input of the block. Following the self-attention operation, the resultant embedding is further passed through two fully connected layers. A residual connection is added from the output of the self-attention operation to the output of the second fully connected layer. Finally, the resultant embedding of the residual connection is passed through a layer normalization operation. Note that the input of the first self-attention block is always the input to the Weights Generator network, (Φ(x_t^k), y_t^k), whereas the inputs to subsequent attention blocks are the outputs of the previous attention block. For the self-attention operation, the input is transformed into query, key and value vectors through their respective weight projections. These query, key and value vectors, Q, K and V, then go through a scaled dot-product self-attention operation, as in the Transformer: Attention(Q, K, V) = softmax(QK^T / √d_k) V, where d_k is the dimension of the key vectors. We also evaluate our method on another 1D regression task, the 1D heat equation task, which we define as follows: consider a 1-dimensional rod of length L with both of its ends connected to heat sinks, i.e. the temperature of the ends will always be fixed at 0K unless a heat source is applied at that end. A constant point heat source is then applied to a random point s on the rod such that the heat point source always has a temperature of 1.0K. We would like to model the temperature u(x, t) at each point of the rod at a certain time t after applying the heat source, until the temperature achieves equilibrium throughout the rod. The temperature at each point x after time t is governed by the heat equation ∂u/∂t = c ∂²u/∂x², where c is the thermal diffusivity of the rod. For our experiments, we set L to 5 and randomly sample K points on the heat equation curve. We fix the total number of tasks used during training to 1000 and evaluate our model on both 10-shot and 5-shot cases, similar to the experimental setup for the Advanced Sinusoidal tasks. A rough sketch of one way to generate such tasks is given below.
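The following sketch generates one such heat-equation regression task. The explicit finite-difference solver, the thermal diffusivity, the grid resolution, and the evaluation time are all assumptions of this sketch, since the text above only fixes L = 5, the boundary and source conditions, and the number of shots.

```python
# Sketch of generating a 1D heat-equation regression task as described above:
# a rod of length L = 5 with ends held at 0, a point source of temperature 1.0
# at a random location, and the task is to regress u(x, t) at some time t.
# The solver, diffusivity, grid size, and t below are illustrative assumptions.
import numpy as np

def heat_equation_task(K=10, L=5.0, n_grid=201, alpha=1.0, t_eval=0.5, rng=None):
    rng = rng or np.random.default_rng()
    dx = L / (n_grid - 1)
    dt = 0.4 * dx ** 2 / alpha                   # satisfies the explicit-scheme stability bound
    x = np.linspace(0.0, L, n_grid)

    u = np.zeros(n_grid)
    src = rng.integers(1, n_grid - 1)            # random interior location of the point source
    for _ in range(int(t_eval / dt)):            # u_t = alpha * u_xx, forward Euler in time
        u[src] = 1.0                             # constant point heat source at 1.0K
        u[0], u[-1] = 0.0, 0.0                   # ends connected to heat sinks
        u = u + alpha * dt / dx ** 2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    u[src], u[0], u[-1] = 1.0, 0.0, 0.0

    idx = rng.choice(n_grid, size=K, replace=False)
    return (x[idx], u[idx]), (x, u)              # K-shot D_train and the full curve

(train_x, train_u), (full_x, full_u) = heat_equation_task(K=10)
```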
We also compare our results to both EMAML and BMAML on this regression task and add an ensemble version of our method for comparison. The results of our evaluation are presented in Table 6. 2D Gaussian. We also evaluated our method on 2D Gaussian regression tasks. For this task, we train our model to predict the probability density function of a two-dimensional Gaussian distribution. We train our model on Gaussian distribution tasks with means sampled from a fixed square region whose lower corner is (−2, −2), and standard deviations in the range [0.1, 2]. We fix the standard deviation to be the same value in both directions. Similar to the heat equation, we use the same setup as the Advanced Sinusoidal task and compare our method to EMAML and BMAML. We evaluate our model on the 10, 20 and 50 shot cases. The results of our evaluation are presented in Table 7. Qualitative results on the CelebA dataset. We provide the qualitative results on the CelebA dataset in Figure 7. We note that the RGB images are complex 2D functions. We choose them for evaluation so that we can inspect the results more directly, not to compare with image inpainting methods, as also noted in prior work. The results in Figure 7a are consistent with Figure 2a. The regression results from our method are visually better than those of NP and CNP. The predictions using the first S largest weights are shown in Figure 7b. The 2D image function is usually predicted with fewer than 50 weights, which suggests that the information of the 2D function is kept in several terms. Our proposed sparse linear representation framework for few-shot regression makes the few-shot regression problem appear similar to another research problem called dictionary learning (DL), which focuses on learning dictionaries of atoms that provide efficient representations of some class of signals. However, the differences between DL and our problem are significant: our problems are continuous rather than discrete as in DL, and we only observe a very small percentage of samples. Specifically, for a given y ∈ R^n, the goal of DL is to learn an (n by M) dictionary Φ such that y ≈ Φw for some sparse w. In typical DL, the entire y is given. Also, M > n for an overcomplete dictionary (Figure 8). In few-shot regression, the goal is to predict the entire continuous function y = F(x). Therefore, viewing this in the setup above, n is infinite. Moreover, only a few (K) samples of y are given: y_t^k = F(x_t^k). The locations of the given samples (x_t^k) are different for different y (different tasks). Therefore, our problem is significantly different from, and more difficult than, DL. Typical DL algorithms solve this problem and return Φ, which is an n by M matrix of finite dimensions (the dictionary). In our setup, the basis matrix Φ has infinitely many entries, and Φ is encoded by the proposed Basis Function Learner network.
(Figure 7 panel labels: GT, NP, CNP, ANP, Ours; K = 100.)
We propose a method of doing few-shot regression by learning a set of basis functions to represent the function distribution.
590
scitldr
Large-scale pre-trained language model, such as BERT, has recently achieved great success in a wide range of language understanding tasks. However, it remains an open question how to utilize BERT for text generation tasks. In this paper, we present a novel approach to addressing this challenge in a generic sequence-to-sequence (Seq2Seq) setting. We first propose a new task, Conditional Masked Language Modeling (C-MLM), to enable fine-tuning of BERT on target text-generation dataset. The fine-tuned BERT (i.e., teacher) is then exploited as extra supervision to improve conventional Seq2Seq models (i.e., student) for text generation. By leveraging BERT's idiosyncratic bidirectional nature, distilling the knowledge learned from BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong baselines of Transformer on multiple text generation tasks, including machine translation (MT) and text summarization. Our proposed model also achieves new state-of-the-art on the IWSLT German-English and English-Vietnamese MT datasets. Large-scale pre-trained language model, such as ELMo , GPT and BERT , has become the de facto first encoding step for many natural language processing (NLP) tasks. For example, BERT, pre-trained with deep bidirectional Transformer via masked language modeling and next sentence prediction, has revolutionized the state of the art in many language understanding tasks, such as natural language inference and question answering . However, beyond common practice of fine-tuning BERT for language understanding, applying BERT to language generation still remains an open question. Text generation aims to generate natural language sentences conditioned on certain input, with applications ranging from machine translation , text summarization (; ;) ), to image captioning; ). In this paper, we study how to use BERT for better text generation, which to the best of our knowledge is still a relatively unexplored territory. Intuitively, as BERT is learned with a generative objective via Masked Language Modeling (MLM) during the pre-training stage, a natural assumption is that this training objective should have learned essential, bidirectional, contextual knowledge that can help enhance text generation. Unfortunately, this MLM objective is not auto-regressive, which encumbers its direct application to auto-regressive text generation in practice. In this paper, we tackle this challenge by proposing a novel and generalizable approach to distilling knowledge learned in BERT for text generation tasks. We first propose a new Conditional Masked Language Modeling (C-MLM) task, inspired by MLM but requiring additional conditional input, which enables fine-tuning pre-trained BERT on a target dataset. In order to extract knowledge from the fine-tuned BERT and apply it to a text generation model, we leverage the fine-tuned BERT as a teacher model that generates sequences of word probability logits for the training samples, and treat the text generation model as a student network, which can effectively learn from the teacher's outputs for imitation. The proposed approach improves text generation by providing a good estimation on the word probability distribution for each token in a sentence, consuming both the left and the right context, the exploitation of which encourages conventional text generation models to plan ahead. 
Text generation models are usually trained via Maximum Likelihood Estimation (MLE), or teacher forcing: at each time step, it maximizes the likelihood of the next word conditioned on its previous ground-truth words. This corresponds to optimizing one-step-ahead prediction. As there is no explicit signal towards global planning in the training objective, the generation model may incline to focusing on local structure rather than global coherence. With our proposed approach, BERT's looking into the future ability can act as an effective regularization method, capturing subtle long-term dependencies that ensure global coherence and in consequence boost model performance on text generation. An alternative way to leverage BERT for text generation is to initialize the parameters of the encoder or decoder of Seq2Seq with pre-trained BERT, and then fine-tuning on the target dataset. However, this approach requires the encoder/decoder to have the same size as BERT, inevitably making the final text generation model too large. Our approach, on the other hand, is modular and compatible to any text-generation model, and has no restriction on the model size (e.g., large or small) or model architecture (e.g., LSTM or Transformer). The main contributions of this work are three-fold. (i) We present a novel approach to utilizing BERT for text generation. The proposed method induces sequence-level knowledge into the conventional one-step-ahead and teacher-forcing training paradigm, by introducing an effective regularization term to MLE training loss. (ii) We conduct comprehensive evaluation on multiple text generation tasks, including machine translation, text summarization and image captioning. Experiments show that our proposed approach significantly outperforms strong Transformer baselines and is generalizable to different tasks. (iii) The proposed model achieves new state-of-the-art on both IWSLT14 German-English and IWSLT15 English-Vietnamese datasets. Pre-trained Language Models Prior to pre-trained language model, word embeddings (; ;) were widely used for NLP tasks. Recently, CoVe introduced (conditional) language models pre-trained on paired machine translation corpus. ELMo learned a contextual language model on a large corpus with bidirectional RNN. GPT used unidirectional Transformer to achieve better contextualized word representation. By fine-tuning pre-trained language models, ULMFit also achieved promising on text classification. In our study, we focus on BERT due to its superior performance on multiple language understanding tasks. However, different from previous work exploiting BERT for language understanding tasks, here we aim to apply BERT to text generation. To the best of our knowledge, this is still a relatively unexplored space. The proposed approach is also model-agnostic and can be applied to other pretrained language models as well. There has been some recent attempt on applying BERT to text generation. trained cross-lingual MLM and demonstrated promising for cross-lingual natural language inference and unsupervised neural machine translation (NMT). formulated BERT as a Markov Random Field LM and showed preliminary on unsupervised text generation with improved diversity. Zhang et al. (2019a) utilized an encoder with BERT and a two-stage decoder for text summarization. proposed Masked Seq2Seq (MASS) pre-training, demonstrating promising on unsupervised NMT, text summarization and conversational response generation. 
Concurrent with our work, proposed a similar conditional MLM for constant-time translation, and studied how to fine-tune BERT for NMT. Our approach is novel in the sense that we do not directly use the parameters of BERT in the Seq2Seq model. Instead, BERT acts as an effective regularization to the MLE training loss, by proactively injecting future information for predicting the present. Right-to-Left Generation Our work also shares a high-level intuition with those approaches that try to regularize left-to-right generative models with a right-to-left counterpart. Specifically, trained a separate reverse NMT and performed joint decoding at inference time to enforce agreement between forward and reverse models. Twin Networks used a backward RNN jointly trained with a forward RNN decoder by matching their hidden states. Zhang et al. (2019b) further extended the idea to Transformer with joint training, so that the forward and the backward models iteratively improve each other. Our proposed approach stems from a similar intuition. However, we focus on using pre-trained language model such as BERT to regularize an auto-regressive generation model. Knowledge Distillation Our method shares the same loss formulation as Knowledge Distillation (KD) proposed in;; , where a smaller student model is trained on soft labels provided by a larger teacher model. More recently, applied KD to multilingual proposed patient KD for BERT model compression. Compared with these previous studies, where both the teacher and the student are trained on the same task, our approach is different in the sense that the BERT teacher is not designed to perform the student's generation task. We focus on using KD to leverage the learned knowledge of BERT for text generation, while previous work mostly focused on model compression. In this section, we present our proposed approach to distilling the knowledge in BERT for text generation in generic sequence-to-sequence (Seq2Seq) setting. We first review Seq2Seq learning in Section 3.1, and then describe the proposed approach in Section 3.2 and 3.3. Seq2Seq learning aims to generate a sequence of discrete output Y = (y 1, . . ., y N) of length N, conditioned on a sequence of discrete input X = (x 1, . . ., x M) of length M. A Seq2Seq model learns parameters θ to estimate the conditional likelihood P θ (Y |X), typically trained via Maximum Likelihood Estimation (MLE), or equivalently, minimizing the cross-entropy loss as follows: where each conditional probability can be calculated via an attention-based recurrent neural network (RNN) (;, Transformer , or any other neural sequence-generation models. This generic Seq2Seq learning framework is the state of the art on a wide range of text generation tasks. Using modern deep neural networks, the conditional probabilities can be readily modeled as a sequence of classifications over the word vocabulary. However, during training, in order to generate the t-th token y t, the model only sees a partial sentence y 1:t−1 from the ground-truth training data. Intuitively, it is reasonable to assume that a bidirectional model can be more informative than a leftto-right generation model, since additional context from the right (or future) is also incorporated to predict the current word. Unfortunately, this additional information is not utilized in a standard Seq2Seq model, since it can only be trained in a left-to-right manner, where the future context is masked out to prevent each word from indirectly "seeing itself ". 
To compensate this singledirectional limitation of Seq2Seq setting, we propose a new conditional language model (C-MLM) to enable the fine-tuning of BERT on target generation task, in hope that the fine-tuned bidirectional BERT can be utilized for better text generation. BERT is a deep bidirectional Transformer trained via Masked Language Modeling (MLM). 2 In a similar setting, where the input is a sequence pair (X, Y), 3 15% of the tokens are randomly masked. Formally, we denote the masked token sets as X m and Y m, and the disjoint counterpart (i.e., the unmasked tokens) as X u and Y u, respectively. The trained BERT model aims to estimate the joint probability: where i and j denote the number of masked tokens in X and Y, respectively. Each x m ∈ X m, and each y m ∈ Y m. Eqn. can be trained with the standard word-level cross-entropy loss. We aim to marry MLM pre-training with Seq2Seq learning, to leverage bidirectional language model for text generation. To this end, we propose a conditional-MLM, a variant of MLM that allows further fine-tuning of pre-trained BERT on target dataset. For example, for machine translation, X and Y represent the source and the target sentence, respectively. We first concatenate them together and randomly mask 15% of the tokens only in Y, then train the network to model the joint probability The above C-MLM objective is similar to the conditional language modeling (LM) objective in Eqn., but conditional LM only permits predicting a word based on its left context. C-MLM is also related to Masked Seq2Seq (MASS) pre-training. However, in MASS, the encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and the decoder tries to predict this masked fragment, which is different from our model design. The final goal is also different: MASS focuses on Seq2Seq pre-training, while we focus on leveraging BERT for text generation. In our experiments, we observe that the C-MLM task can obtain high accuracy and good generalization on word prediction. However, it is not feasible to generate sequential output directly from C-MLM. Instead, we use knowledge distillation to distill the knowledge learned from the fine-tuned BERT into a Seq2Seq model for direct text generation, which will be explained in the next sub-section. Our inspiration springs from the observation that the probability distribution of the masked word y m t is estimated using both y u 1:t−1 and y u t+1:N from Y u. In other words, the distribution for a given word P (y m t |X, Y u) contains information from both backward and forward contexts, which is a desirable benefit for providing sequence-level global guidance. This probability distribution can be considered as soft targets for a text generation model to mimic from, which potentially contains more useful and fine-grained information than the usual hard-assigned, one-hot label, therefore enhancing conventional left-to-right generation models to look into the future. In a knowledge distillation setting, the BERT model can be considered as a teacher, while the Seq2Seq model acts as a student. Specifically, the Seq2Seq model can be trained with the following objective function: where P φ (y t) is the soft target estimated by the fine-tuned BERT with learned parameters φ, and V denotes the output vocabulary. Note that φ is fixed during the distillation process. 
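To make the two ingredients concrete, the sketch below shows (i) building a C-MLM training example by concatenating (X, Y) and masking 15% of the tokens in Y only, and (ii) the soft-target distillation term in which the student's per-position distribution is matched against the frozen teacher's. The mask-token id, the omission of BERT's 80/10/10 replacement rule, and the way the soft and hard terms are weighted are simplifications or assumptions of this sketch.

```python
# Sketch (PyTorch-style) of C-MLM example construction and the soft-target
# distillation loss described above. Token ids are placeholders; how alpha
# splits the two loss terms is an assumption of this sketch.
import torch
import torch.nn.functional as F

MASK_ID = 103                     # placeholder id for the [MASK] token

def make_cmlm_example(src_ids, tgt_ids, mask_prob=0.15):
    """Concatenate (X, Y) and mask a random 15% of the Y tokens only."""
    tgt = tgt_ids.clone()
    mask = torch.rand(tgt.shape) < mask_prob
    tgt[mask] = MASK_ID
    inputs = torch.cat([src_ids, tgt], dim=-1)        # fed to BERT
    labels = torch.full_like(inputs, -100)            # -100: positions ignored by the MLM loss
    labels[..., src_ids.shape[-1]:][mask] = tgt_ids[mask]
    return inputs, labels

def distillation_loss(student_logits, teacher_logits, hard_labels, alpha=0.1):
    """Combine the soft-target term L_bidi with the usual hard-label L_xe."""
    teacher_probs = F.softmax(teacher_logits, dim=-1).detach()   # P_phi, teacher is frozen
    log_student = F.log_softmax(student_logits, dim=-1)          # log P_theta
    l_bidi = -(teacher_probs * log_student).sum(dim=-1).mean()
    l_xe = F.cross_entropy(student_logits.flatten(0, 1), hard_labels.flatten())
    return alpha * l_bidi + (1.0 - alpha) * l_xe

# Example shapes: batch of 2, target length 6, vocab size 32k
student_logits = torch.randn(2, 6, 32000, requires_grad=True)
teacher_logits = torch.randn(2, 6, 32000)
hard_labels = torch.randint(0, 32000, (2, 6))
loss = distillation_loss(student_logits, teacher_logits, hard_labels)
loss.backward()
```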
An illustration of this learning process is provided in Figure 1, which aims to match the word probability distribution P θ (y t) provided by the student with P φ (y t) provided by the teacher (i.e., distillation). To further improve the Seq2Seq student model, hard-assigned labels are also utilized. the final model is trained with the following compound objective: where α is a hyper-parameter for tuning the relative importance of the two training targets: soft estimation from fine-tuned BERT, and ground-truth hard label. Note that our proposed approach only has a minimal requirement on the architecture of the incorporated Seq2Seq model. As long as the model is trained to estimate word-level probability as in Eqn., it can be trained jointly with the proposed objective function Eqn.. At a higher level, the additional loss term L bidi can be interpreted as a sequence-level objective function. Our auto-regressive (or causal) model θ tries to predict the probability distribution that matches the estimation the bidirectional teacher model predicts, hence encouraging the planning of future (right context) for generation. In this section, we describe our experiments on two well-studied text generation tasks: machine translation, and abstractive text summarization. Machine Translation We consider two relatively small-scale datasets, IWSLT15 EnglishVietnamese (En-Vi, 113k training samples) and IWSLT14 German-English (De-En, 160k training samples), and one medium-scale dataset, WMT14 English-German (En-De, 4.5M training samples). For IWSLT15 En-Vi, we use the pre-processed dataset provided by. We use tst2012 as dev set and test on tst2013. For IWSLT14 De-En, we follow the pre-processing steps and the same train/dev/test split as in. For WMT14 En-De, we follow the preprocessing steps in for fair comparison. We use newstest2013 as the dev set and newstest2014 as the test set. We report BLEU scores for evaluation of MT performance following the Moses script. Abstractive Summarization For summarization, we conduct experiments on the Gigaword summarization dataset . Note that the original train/valid/test split of Gigaword is 3.8M/190k/2k. In our experiments, we observed severe distribution mismatch between the validation and test data. See Table 4, 5, and Sec. 4.3 for detailed discussion. Therefore, we further sampled 5k/5k dev/test-dev splits from the validation set and tuned hyper-parameters on the dev set only. We report ROUGE scores on test-dev for the evaluation of our proposed approach, and include on the standard test split for the comparison with prior work. Training and Hyper-parameters Our implementation is based on the PyTorch version of OpenNMT seq2seq toolkit. We use the'base' model of 6-layer Transformer with 512-hidden 8-head attention blocks and 2048-hidden feed-forward layer for all experiments, with label smoothing regularization (LSR) of 0.1. We batch examples with similar sequence length, and count batch size by the number of tokens. For MT we use the pre-trained BERT-base-multilingual-cased model, and for summarization we use BERTbase-uncased as the starting point of BERT fine-tuning. 5 We use the corresponding pre-trained byte-pair-encoding shipped together with the BERT model for tokenization. For all training methods of all Transformer models, the learning rate schedule is set to lr = η · d −0.5 model · min(step −0.5, step · warmup steps −1.5), where d model = 512 is the attention representation size . 
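For reference, the learning-rate schedule quoted above can be written out as a small function, lr(step) = η · d_model^(-0.5) · min(step^(-0.5), step · warmup_steps^(-1.5)), with d_model = 512 as stated; the warmup steps and η vary per experiment (e.g. 4k warmup steps with η = 1 for one of the baselines listed in the appendix).

```python
# The Transformer learning-rate schedule quoted above, written out explicitly.
def transformer_lr(step, eta, warmup_steps, d_model=512):
    step = max(step, 1)                       # avoid step = 0
    return eta * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# e.g. eta = 1 with 4k warmup steps, one of the baseline settings used later
for s in (1, 1000, 4000, 16000, 64000):
    print(s, round(transformer_lr(s, eta=1.0, warmup_steps=4000), 6))
```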
For all BERT fine-tuning, we follow use a triangular learning rate schedule with maximum learning rate η. The parameters are updated with the Adam optimizer . In the distillation stage, we pre-compute BERT's prediction logits of the training data using top-K distillation to reduce computation overhead and memory footprint, where K is set to 8 across all the experiments. We also tune the temperature T for the sof tmax applied at the teacher's logits. For the detailed values of the hyper-parameters for each experiment, please refer to the supplementary material. We found it necessary to train longer with L bidi, since it is still improving after the step at which the baseline Transformer starts to plateau. At inference time, we use beam search with beam size 4 and length penalty of 0.6 across all the models. All the hyperparameters are tuned on the development set. Note that we tuned our Transformer baseline to achieve higher scores than the reference implementation on each dataset with default hyper-parameters (in most cases comparable to the state-of-the-art). We first validate our proposed text generation approach on machine translation task. Experimental are summarized in Table 1, 2 and 3, which show that our model significantly improves over the strong Transformer baseline across all three datasets. Note that our baseline is the'base' model of Transformer, which has 44M trainable parameters, and the reference implementation by is Transformer (big) with 176M trainable parameters. For IWSLT German-English translation, our method improves over the Transformer baseline by 1.54 BLEU points, and achieves new state of the art. Our approach outperforms previously-reported such as ConvS2S+MRT, a convolutional-based model with minimum risk training, and Lightweight and Dynamic Convolution . Note that also tuned checkpoint averaging, which creates a soft ensemble effect. And their model has roughly the same amount of parameters as Transformer (big). For IWSLT English-Vietnamese translation, since most prior work experimented with RNN models, we also report RNN-based here. This also suggests that our method is model-agnostic. Our best model outperforms Seq2Seq-OT that utilizes optimal transport for sequencelevel training, as well as the ELMo and CVT reported in. 8 For WMT14 6 Different from the original KD, we do not apply the same temperature on the student. In our preliminary experiment we found high T of Seq2Seq in much worse performance. We hypothesize the low-entropy nature of conditioned text generation is not suitable for temperature scaling. 7 Parameter counts exclude word embedding and final linear projection, which mostly depends on the vocabulary size. BERT-base has 86M trainable parameters. 8 The CVT used a much larger RNN and CNN-based character embedding, as well as a customized structure. Therefore, we did not try to use RNN to match their . English-German translation, our method still improves over the well-tuned Transformer baseline. We also report the scores of Transformer (big) and state-of-the-art Dynamic Convolution model for reference. Table 4 and Table 5 show the of our approach on abstractive summarization task, where R-1, R-2, and R-L denote F 1 scores of ROUGE-1, ROUGE-2, and ROUGE-L, respectively. Our method shows improvement on all the metrics, as shown in Table 4. We observe that the performance on test set is much lower, which suggests that the distribution in the test set is very different from that in the validation set, as mentioned in Section 4.1. 
When we manually checked the test set data, we found many corrupted examples such as short input articles, meaningless text, and dominating unknown words. Given that the official test split contains only 1,951 noisy examples, we believe that our on the dev/test-dev sets are more reliable. R-1 R-2 R-L Dev On the test split, our best model is comparable to state-of-the-art models that use much more complex architectures specifically designed for summarization. CGU augmented convolutional gating units. FTSum g (b) leveraged extra information extraction and dependency parsing features. E2T cnn utilized entities provided by an external entity linking system. Re 3 Sum (a) carefully designed a retrieve-and-rerank pipeline with human-written soft templates. Despite that our model has no summarization-specific model design, we still achieve comparable performance to these models on all the metrics. There are several possible factors that could contribute to the performance gain: additional parameters of BERT, extra data (pretraining corpus) of BERT, and the bidirectional nature. To better understand the key contributions of our method, we conduct an ablation study described in the following. We finetune 2 extra teachers: BERT sm and BERT l2r. For BERT sm, we use a smaller BERT (6 layers) for C-MLM finetuning, which has approximately the same number of parameters as Transformer-base 9. For BERT l2r, we use the full BERT model but finetune it using left-to-right LM as in the conventional Seq2Seq model. Next, we apply the proposed KD method to train the Transformer on En-Vi and De-En MT tasks. Results are shown in Table 6. BERT sm still works well though the full BERT provides further improvement. On the other hand, BERT l2r slightly hurts the performance. We hypothesize that it generates noisy learning targets for the student, hence the performance drop. Empirically, we show that the bidirectional knowledge could be more important than the extra parameters, while the pre-trained weights remain useful for more stable C-MLM training. We next analyze the effect of our proposed approach on different output lengths. We plot the BLEU scores on MT w.r.t. different output generation lengths N on the development set. 10 Results are provided in Figure 2. For IWSLT German-English dataset (Figure 2 : Left), we can see a shared trend that the proposed L bidi objective gains higher BLEU points on longer translation pairs. For WMT English-German (Figure 2 : Middle), we can see that although the proposed method performs much worse when the output sentences are very short, it achieves relatively consistent improvement on longer cases, hence ing in overall BLEU improvement. For IWSLT English-Vietnamese (Figure 2 : Right), we see a similar trend when the length N > 24. In Table 7, we show some translation examples on IWSLT German-English dataset. In the first example, the baseline Transformer cannot recover from'with' and'of', which renders the full sentence not making much sense. " I started reading with..." would make sense from the left context; however, if the model also considers the right context "the age of two", the word'with' would be assigned with lower probability by the soft labels provided by the BERT teacher. Even though at test-time the model cannot'look ahead', the soft-targets at training-time prevents the over-confidence of the model on one-hot label; hence the better generalization at the test-time. Similarly, other examples show that our model can generate text more coherently w.r.t. 
the context on the right (underlined in Table 7), thus making more accurate and natural translation. In this work, we propose a novel and generic approach to utilizing pre-trained language models to improve text generation without explicit parameter sharing, feature extraction, or augmenting with auxiliary tasks. Our proposed Conditional MLM mechanism leverages unsupervised language models pre-trained on large corpus, and then adapts to supervised sequence-to-sequence tasks. Our distillation approach indirectly influences the text generation model by providing soft-label distributions only, hence is model-agnostic. Experiments show that our model improves over strong Transformer baselines on multiple text generation tasks such as machine translation and abstractive summarization, and achieves new state-of-the-art on some of the translation tasks. For future work, we will explore the extension of Conditional MLM to multimodal input such as image captioning. We run all experiments on single GPU of NVIDIA Titan RTX or V100 except for WMT En-De we use 4 V100s for training. Note that for large batch sizes that do not fit in GPU memory, we use the gradient accumulation tricks as in. Batch sizes are counted in number of tokens. Note that all the hyper-parameters are tuned on the development set only. IWSLT De-En For C-MLM fine-tuning, we train for 100k steps with 5k warmup steps, η = 5 · 10 −5, and batch size of 16k tokens. For baseline model, we train for 50k steps with 4k warmup steps and batch size of 6k tokens. The learning rate η is set to 1. For the proposed model, we train for 100k steps with 8k warmup steps and batch size of 6k tokens. The learning rate η is set to 2, α = 0.5, and T = 10. Seq2Seq model uses dropout of 0.3 in both cases. IWSLT En-Vi For C-MLM fine-tuning and baseline Transformer, the hyper-parameters are identical to that of IWSLT De-En. For the proposed model, we train for 100k steps with 8k warmup steps and batch size of 6k tokens. The learning rate η is set to 2, α = 0.1, and T = 5. Dropout is still 0.1. WMT En-De For C-MLM fine-tuning, we train for 100k steps with 5k warmup steps, η = 5 · 10 −5, and batch size of 512k tokens. For baseline model, we train for 30k steps with 4k warmup steps and batch size of 384k tokens. The learning rate η is set to 4. Since this is our largest dataset and training is slow, for the proposed model we use the baseline Transformer to initialize the Seq2Seq student. For the proposed model, we continue training for 50k steps with 4k warmup steps and batch size of 64k tokens. The learning rate η is set to 2, α = 0.1, and T = 5. Seq2Seq model uses dropout of 0.1 in both cases. Gigaword For C-MLM fine-tuning, we train for 100k steps with 5k warmup steps, η = 5·10 −5, and batch size of 64k tokens. For baseline model, we train for 50k steps with 4k warmup steps and batch size of 40k tokens. The learning rate η is set to 1. For the proposed model, we train for 70k steps with 4k warmup steps and batch size of 36k tokens. The learning rate η is set to 2, α = 0.1, and T = 10. Seq2Seq model uses dropout of 0.1 in both cases. We show Gigaword summarization examples in Table 9 and extra En-DE generation examples in Reference it would be immoral to leave these young people with a climate system spiraling out of control. Transformer it would be immoral to let these young people leave a climate system that was out of control. (44.6) Ours it would be immoral to leave these young people with a climate system out of control. 
Table 8: Qualitative examples from IWSLT German-English translation. Numbers inside the parenthesis are sentence-level BLEU scores. Red word is where the baseline Transformer makes a mistake without considering the possible future phrase and fails to recover. On the other hand, our model makes the right decision at the blue word, hence generates more coherent sentence. Please refer to Section 4.5 in the main paper for detailed explanation. Reference china offers tax exemptions for laid-off workers Transformer china encourages laid-off workers to seek employment Ours china offers tax exemptions to laid-off workers Reference swiss police arrest britons who allegedly ran rental car racket Transformer three britons arrested in swiss luxury hotel Ours swiss police arrest three britons in rental car racket case Reference south korea stocks extend declines as kia concerns intensify Transformer south korean stocks fall for #th time in # days; kia leads Ours south korean stocks fall as kia troubles intensify Table 9: Qualitative examples from the Gigaword summarization dataset. Baseline model suffers from early mistakes. Our model generates more coherent summaries.
We propose a model-agnostic way to leverage BERT for text generation and achieve improvements over Transformer on 2 tasks over 4 datasets.
591
scitldr
Humans have the remarkable ability to correctly classify images despite possible degradation. Many studies have suggested that this hallmark of human vision from the interaction between feedforward signals from bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways. Motivated by such interaction, we propose a new neuro-inspired model, namely Convolutional Neural Networks with Feedback (CNN-F). CNN-F extends CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation. We show that CNN-F's iterative inference allows for disentanglement of latent variables across layers. We validate the advantages of CNN-F over the baseline CNN. Our experimental suggest that the CNN-F is more robust to image degradation such as pixel noise, occlusion, and blur. Furthermore, we show that the CNN-F is capable of restoring original images from the degraded ones with high reconstruction accuracy while introducing negligible artifacts. Convolutional neural networks (CNNs) have been widely adopted for image classification and achieved impressive prediction accuracy. While state-of-the-art CNNs can achieve near-or super-human classification performance, these networks are susceptible to accuracy drops in the presence of image degradation such as blur and noise, or adversarial attacks, to which human vision is much more robust. This weakness suggests that CNNs are not able to fully capture the complexity of human vision. Unlike the CNN, the human's visual cortex contains not only feedforward but also feedback connections which propagate the information from higher to lower order visual cortical areas as suggested by the predictive coding model. Additionally, recent studies suggest that recurrent circuits are crucial for core object recognition. A recently proposed model extends CNN with a feedback generative network, moving a step forward towards more brain-like CNNs. The inference of the model is carried out by the feedforward only CNN. We term convolutional neural networks with feedback whose inference uses no iterations as CNN-F (0 iterations). The generative feedback models the joint distribution of the data and latent variables. This methodology is similar to how human brain works: building an internal model of the world. Despite the success of CNN-F (0 iterations) in semi-supervised learning and out-of-distribution detection, the feedforward only CNN can be a noisy inference in practice and the power of the rendering top-down path is not fully utilized. A neuro-inspired model that carries out more accurate inference is therefore desired for robust vision. Our work is motivated by the interaction of feedforward and feedback signals in the brain, and our contributions are: We propose the Convolutional Neural Network with Feedback (CNN-F) with more accurate inference. We perform approximated loopy belief propagation to infer latent variables. We introduce recurrent structure into our network by feeding the generated image from the feedback process back into the feedforward process. We term the model with k-iteration inference as CNN-F (k iterations). In the context without confusion, we will use the name CNN-F for short in the rest of the paper. We demonstrate that the CNN-F is more robust to image degradation including noise, blur, and occlusion than the CNN. In particular, our experiments show that CNN-F experiences smaller accuracy drop compared to the corresponding CNN on degraded images. 
We verify that CNN-F is capable of restoring degraded images. When trained on clean data, the CNN-F can recover the original image from the degraded images at test time with high reconstruction accuracy. Convolutional Neural Network with Feedback (CNN-F) is a generative model that generates images by coarse-to-fine rendering using the features computed by the corresponding CNN. Latent variables in CNN-F account for the uncertainty of the rendering process. The prior distribution of those latent variables is designed to capture the dependencies between them across layers. Inference for the optimal latent variables given image x and label y matches a feedforward CNN in CNN-F (0 iterations) (see Fig. 1). We provide mathematical description of CNN-F (0 iterations) below. Let h be the generated image, y ∈ {1, ..., K} be object category. z = {t, s}, = 1,..., L are the latent variables at layer, where t defines translation of rendering templates based on the position of local maximum from Maxpool, and s decides whether to render a pixel or not based on whether it is activated (ReLU) in the feed-forward CNN. T (t) denotes the translation matrix corresponding to the translation latent variable t. W are rendering templates, where W is the weight matrix at layer in the corresponding CNN. h is the intermediate rendered image at layer. The generation process in CNN-F (0 iterations) is given by: The dependencies among latent variables {z} 1:L across different layers are captured by the structured prior π z|y Softmax η exp(η), and b corresponds the bias after convolutions in CNN. Under the assumption that the intermediate rendered images {h} 1:L are nonnegative, the joint maximum a posteriori (JMAP) inference of latent variable z in CNN-F (0 iterations) is a CNN. Convolutional Neural Networks with Feedback using k-iteration inference [CNN-F (k iterations)] performs approximated loopy belief propagation on CNN-F for k times (see Fig. 1). Inference of latent variables is performed by propagating along both directions of the model. In the following of this session, we will use CNN-F to denote CNN-F (k iterations) for short. Inheriting the notation for the formulation in the CNN-F (0 iterations), we formulate CNN-F as follows. The generation process of the top-down pathway in CNN-F is the same as in the CNN-F (0 iterations), i.e. h(− 1) = T (t)W (s h). Different from the CNN-F (0 iterations), the generated image h in the CNN-F is fed back to the bottomup pathway for approximated loopy belief propagation. In other words, the CNN-F performs bottom-up followed by top-down inference such that the information at later layers in the CNNs can be used to update the noisy estimations at the early layers in the same network. Specifically, the feedforward process in the CNN-F is g = W AdaPool{AdaRelu(g( − 1))} + b, where g denotes the network activations at layer. The top-down messages correct for the noisy bottom-up inference by the adaptive operators Input: Input image x. and object class y *. where W is the rendering template at layer, and b is the parameters of the structured prior π z|y at layer. T (t) is the translation matrix corresponding to the translation latent variable t. Repeat step 2 -3 until convergence or early stopping. (see Algorithm 1): We study the robustness and image restoration performance of CNN-F (10 iterations). Additionally, we observe the disentanglement of information stored in latent variables. In this section, we will refer to CNN-F (10 iterations) as CNN-F. 
We train a 4 layer CNN and CNN-F (10 iterations) of corresponding architecture on the clean MNIST train set. For the architecture, we use 3 convolutional layers followed by 1 fully connected layer. We use 5x5 convolutional kernel for each convolutional layer with 8 channels in the first layer followed by 16 channels in the second layer followed by 8 channels in the third layer. We use instance norm between layers to normalize the input. We test the models on degraded test set images. The CNN trained has test accuracy 99.1% while CNN-F has test accuracy 95.26%. Our experimental study shows that the iterative inference in CNN-F promotes disentanglement of latent factors across layers. In particular, we observe that the latent variables at each layer in CNN-F captures different essences of the reconstructed image. For example, in the case of MNIST digits, those essences are different strokes that form the digits. Those strokes differ from each other in their location, styles, or angles. In our experiment, we trained a CNN-F with 3 convolutional layers on MNIST. Then, we sent an MNIST image of digit 0 and an MNIST image of digit 1 into the trained networks and collected their corresponding sets of latent variables. We denote z k to be the estimated latent variables from the image of digit 1 at layer k = 1, 2, 3 in CNN-F. Figure 2 illustrates that each set of latent variables z k captures strokes at a particular location in digit 1. In the first column of Figure 2, we use latent variables z 3 at the top layer in CNN-F to reconstruct the image. Similarly, in the second column of Figure 2, in addition to z 3, we add z 2 into the reconstruction. We observe that the latent variables z 3 capture the center of the digit 1 while the latent variables z 2 try to extend the digits to both ends. Finally, we include z 1 into the reconstruction and observe that it completes the digit by filling in the two ends. This observation suggests that CNN-F and its iterative inference algorithm lead to effective disentanglement of latent factors across the layers. Robustness Table 1 shows the accuracy and percent accuracy drop on noisy, blurry and occluded input. The accuracy of CNN-F drops less compared to CNN of same architecture, indicating that CNN-F is more robust. Image Restoration Table 2 shows CNN-F's reconstruction of images with added gaussian noise, blur, and occlusion. CNN-F is able to denoise, deblur, and do some degree of inpainting in on the degraded images. We note that with more iterations of feedback, the reconstructed image becomes more clean. The ability of CNN-F to restore images is consistent with studies in neuroscience which suggest that feedback signals contribute to automatic sharpening of images. For example, Abdelhack and Kamitani showed that the neural representation of blurry images is more similar to the latent representation of the clean version from a deep neural network than the latent representation of the blurry image. CNN-F is able to sharpen blurry images, which is consistent with this study. to understand better the role of feedback in robust vision. To compare CNN-F with neuronal/psychological data, we will scale up the training to ImageNet. A more challenging scenario for robust vision is adversarial attack. We will study the robustness of the proposed CNN-F under various types of adversarial attacks. 
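For concreteness, the 4-layer baseline described at the start of this section can be written as the following PyTorch module; the padding, stride, and the flattened size feeding the classifier are assumptions of this sketch, since the text only fixes the kernel sizes, channel counts, and the use of instance normalization.

```python
# The 4-layer MNIST baseline described above: three 5x5 convolutions with
# 8/16/8 channels, instance norm between layers, and one fully connected layer.
# Padding/stride and the flattened classifier input size are assumptions.
import torch
import torch.nn as nn

class BaselineCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.InstanceNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.InstanceNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=5, padding=2), nn.InstanceNorm2d(8), nn.ReLU(),
        )
        self.classifier = nn.Linear(8 * 28 * 28, num_classes)   # MNIST: 28x28 inputs

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = BaselineCNN()
logits = model(torch.randn(4, 1, 28, 28))     # batch of 4 MNIST-sized images
print(logits.shape)                            # torch.Size([4, 10])
```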
We also plan to compare the latent representations of CNN-F with neural activity recorded from the brain in order to assess whether CNN-F is a good model for human vision. In summary, we propose the Convolutional Neural Network with Feedback (CNN-F), which consists of both a classification pathway and a generation pathway, analogous to the feedforward and feedback connections in human vision. Our model uses approximate loopy belief propagation to infer latent variables, allowing messages to be propagated along both directions of the model. We also introduce recurrence by passing the reconstructed image and predicted label back into the network. We show that CNN-F is more robust than a CNN of the same architecture on corrupted (noisy, blurry, and occluded) images and can restore degraded images despite being trained only on clean images.
CNN-F extends CNN with a feedback generative network for robust vision.
592
scitldr
We develop new approximation and statistical learning theories of convolutional neural networks (CNNs) via the ResNet-type structure where the channel size, filter size, and width are fixed. It is shown that a ResNet-type CNN is a universal approximator and its expression ability is no worse than fully-connected neural networks (FNNs) with a \textit{block-sparse} structure even if the size of each layer in the CNN is fixed. Our is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into that by CNNs. Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation of several important function classes. As applications, we consider two types of function classes to be estimated: the Barron class and H\"older class. We prove the clipped empirical risk minimization (ERM) estimator can achieve the same rate as FNNs even the channel size, filter size, and width of CNNs are constant with respect to the sample size. This is minimax optimal (up to logarithmic factors) for the H\"older class. Our proof is based on sophisticated evaluations of the covering number of CNNs and the non-trivial parameter rescaling technique to control the Lipschitz constant of CNNs to be constructed. Convolutional Neural Network (CNN) is one of the most popular architectures in deep learning research, with various applications such as computer vision , natural language processing , and sequence analysis in bioinformatics (,). Despite practical popularity, theoretical justification for the power of CNNs is still scarce from the viewpoint of statistical learning theory. For fully-connected neural networks (FNNs), there is a lot of existing work, dating back to the 80's, for theoretical explanation regarding their approximation ability (, , , , and) and generalization power (, , and). See also and for surveys of earlier works. Although less common compared to FNNs, recently, statistical learning theory for CNNs has been studied, both about approximation ability (, ,) and about generalization power . One of the standard approaches is to relate the approximation ability of CNNs with that of FNNs, either deep or shallow. For example, proved that CNNs are a universal approximator of the Barron class (,), which is a historically important function class in the approximation theory. Their approach is to approximate the function using a 2-layered FNN (i.e., an FNN with a single hidden layer) with the ReLU activation function and transform the FNN into a CNN. Very recently independent of ours, showed any function realizable with an FNN can extend to an equivariant function realizable by a CNN that has the same order of parameters. However, to the best of our knowledge, no CNNs that achieves the minimax optimal rate (, Giné &) in important function classes, including the Hölder class, can keep the number of units in each layer constant with respect to the sample size. Architectures that have extremely large depth, while moderate channel size and width have become feasible, thanks to recent methods such as identity mappings (,), sophisticated initialization schemes (,), and normalization techniques (,). Therefore, we would argue that there are growing demands for theories which can accommodate such constant-size architectures. In this paper, we analyze the learning ability of ResNet-type ReLU CNNs which have identity mappings and constant-width residual blocks with fixed-size filters. 
There are mainly two reasons that motivate us to study this type of CNNs. First, although ResNet is the de facto architecture in various practical applications, the approximation theory for ResNet has not been explored extensively, especially from the viewpoint of the relationship between FNNs and CNNs. Second, constant-width CNNs are critical building blocks not only in ResNet but also in various modern CNNs such as Inception , DenseNet , and U-Net , to name a few. Our strategy is to replicate the learning ability of FNNs by constructing tailored ResNet-type CNNs. To do so, we pay attention to the block-sparse structure of an FNN, which roughly means that it consists of a linear combination of multiple (possibly dense) FNNs (we define it rigorously in the subsequent sections). Block-sparseness decreases the model complexity coming from the combinatorial sparsity patterns and promotes better bounds. Therefore, it is often utilized, both implicitly or explicitly, in the approximation and learning theory of FNNs (e.g., Bölcskei et al., ). We first prove that if an FNN is block-sparse with M blocks (M -way block-sparse FNN), we can realize the FNN with a ResNet-type CNN with O(M) additional parameters, which are often negligible since the original FNN already has Ω(M) parameters. Using this approximation, we give the upper bound of the estimation error of CNNs in terms of the approximation errors of block sparse FNNs and the model complexity of CNNs. Our is general in the sense that it is not restricted to a specific function class, as long as we can approximate it using block-sparse FNNs. To demonstrate the wide applicability of our methods, we derive the approximation and estimation errors for two types of function classes with the same strategy: the Barron class (of parameter s = 2) and Hölder class. We prove, as corollaries, that our CNNs can achieve the approximation error of orderÕ(M) for the β-Hölder class, where M is the number of parameters (we used M here, same as the number of blocks because it will turn out that CNNs have O(M) blocks for these cases), N is the sample size, and D is the input dimension. These rates are same as the ones for FNNs ever known in the existing literature. An important consequence of our theory is that the ResNet-type CNN can achieve the minimax optimal estimation error (up to logarithmic factors) for β-Hölder class even if its filter size, channel size and width are constant with respect to the sample size, as opposed to existing works such as and , where optimal FNNs or CNNs could have a width or a channel size goes to infinity as N → ∞.In summary, the contributions of our work are as follows:• We develop the approximation theory for CNNs via ResNet-type architectures with constant-width residual blocks. We prove any M -way block-sparse FNN is realizable such a CNN with O(M) additional parameters. That means if FNNs can approximate a function with O(M) parameters, we can approximate the function with CNNs at the same rate (Theorem 1).• We derive the upper bound of the estimation error in terms of the approximation error of FNNs and the model complexity of CNNs (Theorem 2). 
This gives the sufficient conditions to derive the same estimation error as that of FNNs (Corollary 1).• We apply our general theory to the Barron class and Hölder class and derive the approximation (Corollary 2 and 4) and estimation (Corollary 3 and 5) error rates, which are identical to those for FNNs, even if the CNNs have constant channel and filter size with respect to the sample size. In particular, this is minimax optimal for the Hölder case. We summarize in Table 1 the differences in the CNN architectures between our work and and , which established the approximation theory of CNNs via FNNs. First and foremost, only considered a specific function class -the Barron class -as a target function class, although their method is applicable to any function class that can be realized by a 2-layered ReLU FNN. Regarding the architecture, they considered CNNs with a single channel and whose width is "linearly increasing" layer by layer. For regression or classification problems, it is rare to use such an architecture. In addition, since they did not bound the norm of parameters in the approximating CNNs, we cannot derive the estimation error from this method. fully utilized the group invariance structure of underlying input spaces to construct CNNs. Such a structure makes theoretical analysis easier, especially for investigating the equivariance properties of CNNs since it enables us to incorporate mathematical tools such as group theory, Fourier analysis, and representation theory. Although their are quite strong in that it is applicable to any function that can be approximated by FNNs, their assumption on the group structure excludes the padding convolution layer, an important and popular type of convolution operations. Another point is that if we simply apply their construction method to derive the estimation error for (equivariant) Hölder functions, combined with the approximation of , the ing CNN that achieves the minimax optimal rate hasÕ(ε − D β) channels where ε is the approximation error threshold. It is partly because their construction is not aware of the internal sparse structure of approximating FNNs. Finally, the filter size of their CNN is as large as the input dimension. As opposed to these two works, we employ padding-and ResNettype CNNs which have multiple channels, fixed-size filters, and constant widths. , our is applicable to any function, as long as the FNNs to be approximated are block sparse, including the Barron and Hölder cases. If we apply our theorem to these classes, we can show that the optimal CNNs can achieve the same approximation and estimation rate as FNNs, while the number of channels is independent of the sample size. Further, this is minimax optimal up to the logarithmic factors for the Hölder class. Due to its practical success, theoretical analysis for ResNet has been explored recently (e.g., , , , and). From the viewpoint of statistical learning theory, and investigated the generalization power of ResNet from the perspective of the boosting interpretation. However, they did not discuss the function approximation ability of ResNet. To the best of our knowledge, our theory is the first work to provide the approximation ability of the CNN class that can accommodate the ResNet-type ones. We import the approximation theories for FNNs, especially ones for the Barron class and Hölder class. The approximation theory for the Barron class has been investigated in e.g., , , and. 
considered the parameter s = 1 (see Definition 3) and the activation function σ satisfying σ(z) → 1 as z → ∞ and σ(z) → 0 as z → −∞. Using this bound, proved that the estimation error of the ERM estimator is O(N − 2β 2β+D), which is minimax optimal up to logarithmic factors (see, e.g.,). We consider a regression task in this paper. Let X be a [−1, 1] D -valued random variable with unknown probability distribution P X and ξ be an independent random noise drawn from the Gaussian distribution with an unknown variance DISPLAYFORM0 • rigorously in the theorems later). We define a random variable Y by Y:= f• (X) + ξ. We denote the joint distribution of (X, Y) by P. Suppose we are given a dataset D = ((x 1, y 1),..., (x N, y N)) independently and identically sampled from the distribution P, we want to estimate the true function f• from the finite dataset D.We evaluate the performance of an estimator by the squared error. For a measurable function f: DISPLAYFORM1 Here, clip is the clipping operator defined DISPLAYFORM2 we define the L 2 -norm (weighted by P X) and the sup norm of f by DISPLAYFORM3 The task is to estimate the approximation error min f ∈F f − f • ∞ and the estimation error of the clipped ERM estimator: R(f) − R(f •). Note that the estimation error is a random variable with respect the choice of the training dataset D. By the definition of R and the independence of X and ξ, the estimation error equals to f − f DISPLAYFORM4 In this section, we define CNNs used in this paper. For this purpose, it is convenient to introduce 0, the set of real-valued sequences whose finitely many elements are non-zero: 0:= {w = (w n) n∈N>0 | ∃N ∈ N >0 s.t. w n = 0, ∀n ≥ N }. w = (w 1, . . ., w K) ∈ R K can be regarded as an element of 0 by setting w n = 0 for all n > K. Likewise, for C, C ∈ N >0, which will be the input and output channel sizes, respectively, we can think of (w k,j,i DISPLAYFORM0, we define the one-sided padding and stride-one convolution by w as an order-4 tensor DISPLAYFORM1 Here, i (resp. j) runs through 1 to C (resp. C) and α and β runs through 1 to D. Since we fix the input dimension D throughout the paper, we will omit the subscript D and write as L w if it is obvious from context. DISPLAYFORM2 Using this equality, we can expand a size-K filter to size-K.We can interpret L w as a linear mapping from DISPLAYFORM3 Next, we define the building block of CNNs: convolutional layers and fully-connected layers. Let C, C, K ∈ N >0 be the input channel size, output channel size, and filter size, respectively. For a weight tensor w ∈ R K×C ×C, a bias vector b ∈ R C, and an activation function σ: R → R, we define the convolutional layer Conv DISPLAYFORM4 where, ⊗ is the outer product of vectors and σ is applied in element-wise manner. Similarly, let W ∈ R D×C, b ∈ R, and σ: R → R, we define the fully-connected layer FC DISPLAYFORM5 is the vectorization operator that flattens a matrix into a vector. Finally, we define the ResNet-type CNN as a sequential concatenation of one convolution block, M residual blocks, and one fully-connected layer. FIG2 is the schematic view of the CNN we adopt in this paper. Definition 1 (Convolutional Neural Networks (CNNs)). Let M ∈ N >0 and L m ∈ N >0, which will be the number of residual blocks and the depth of m-th block, respectively. Let C DISPLAYFORM6 m ∈ R be the weight tensors and biases of l-th layer of the m-th block in the convolution part, respectively. 
Finally, let W ∈ R D×C (L 0) 0 and b ∈ R be the weight matrix and the bias for the fully-connected layer part, respectively. For θ:= ((w DISPLAYFORM7 is the identity function. Although CNN σ θ in this definition has a fully-connected layer, we refer to the stack of convolutional layers both with or without the final fully-connect layer as a CNN in this paper. We say a linear convolutional layer or a linear CNN when the activation function σ is the identity function and a ReLU convolution layer or a ReLU CNN when σ is ReLU defined by ReLU(x) = x ∨ 0. We borrow the term from ResNet and call Conv For architecture parameters C = (C DISPLAYFORM8, and norm parameters for convolution layers B (conv) > 0 and for fully-connected layers DISPLAYFORM9, the hypothesis class consisting of ReLU CNNs, as follows: DISPLAYFORM10 Here, the domain of CNNs is restricted to [−1, 1] D. Note that we impose norm constraints to the convolution part and fully-connected part separately. We emphasize that we do not impose any sparse constraints (e.g., restricting the number of non-zero parameters in a CNN to some fixed value) to F (CNN), as opposed to previous literature such as Yarotsky FORMULA24, , and. Since the notation is cluttered, we sometimes omit the subscripts as we do in the above. Remark 2. In this paper, we adopted one-sided padding, which is not often used practically, in order to make proofs simple. However, with slight modifications, all statements are true for equallypadded convolutions, the widely employed padding style which adds (approximately) same numbers of zeros to both ends of an input signal, with the exception that the filter size K is restricted to DISPLAYFORM11 We also discuss our design choice, especially the comparison with the original ResNet proposed in in Section G of the appendix. In this section, we mathematically define FNNs we consider in this paper, in parallel with the CNN case. Our FNN, which we coin a block-sparse FNN, consists of M possibly dense FNNs (blocks) concatenated in parallel, followed by a single fully-connected layer. We sketch the architecture of a block-sparse FNN in Figure 2. Let DISPLAYFORM0 m ∈ R be the weight matrix and the bias of the l-th layer of mth block (with the convention DISPLAYFORM1 be the weight (sub)vector of the final fully-connected layer corresponding to the m-th block and b ∈ R be the bias for the last layer. DISPLAYFORM2 We call a block-sparse FNN with M blocks a M -way block-sparse FNN. We say θ is compatible with (D [Lm] and norm parameters for the block part B (bs) > 0 and for the final layer DISPLAYFORM3 DISPLAYFORM4, the set of function realizable by FNNs: DISPLAYFORM5 Again, the domain is restricted to [−1, 1] D. Similar to the CNN case, we sometimes remove subscripts of the function class for simplicity. With the preparation in the previous sections, we state our main of this paper. We only describe statements of theorems and corollaries and key ideas in the main article. All complete proofs are deferred to the appendix. Our first main theorem claims that any M -way block-sparse FNN is realizable by a ResNet-type CNN with fixed-sized channels and filters by adding O(M) parameters, if we treat the widths D (l) m of the FNN as constants with respect to M. DISPLAYFORM0 m, and DISPLAYFORM1 that is, any FNN in DISPLAYFORM2 An immediate consequence of this theorem is that if we can approximate a function f • with a blocksparse FNN, we can also approximate f• with a CNN. 
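To make Definitions 1 and 2 easier to parse, the sketch below instantiates both function classes in PyTorch. It is an illustration of the architectures (one-sided padding, stride-one convolutions, constant-width residual blocks with identity mappings, a final fully-connected layer for the CNN, and M parallel dense blocks for the FNN), not the explicit construction used in the proof of Theorem 1; all sizes are placeholders, and the scalar output matches the regression setting of this paper.

```python
# Minimal PyTorch sketch (not the authors' construction) of the two function
# classes compared in this section: a ResNet-type CNN with one-sided padding,
# stride-one convolutions and constant-width residual blocks, and an M-way
# block-sparse FNN.  All sizes below (D, C, K, M, L, widths) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneSidedConv1d(nn.Module):
    """Stride-one convolution with one-sided zero padding, so length D is kept."""
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.k = k
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=k, stride=1, padding=0)

    def forward(self, x):                     # x: (batch, channels, D)
        return self.conv(F.pad(x, (0, self.k - 1)))

class ResNetTypeCNN(nn.Module):
    """Conv block, M constant-width residual blocks, then one fully-connected layer."""
    def __init__(self, D, C=4, K=3, M=8, L=3):
        super().__init__()
        self.first = OneSidedConv1d(1, C, K)  # the m = 0 convolution block
        self.blocks = nn.ModuleList([
            nn.Sequential(*[nn.Sequential(OneSidedConv1d(C, C, K), nn.ReLU())
                            for _ in range(L)])
            for _ in range(M)])
        self.fc = nn.Linear(C * D, 1)         # final fully-connected layer

    def forward(self, x):                     # x: (batch, D)
        h = self.first(x.unsqueeze(1))
        for block in self.blocks:
            h = h + block(h)                  # identity mapping around each block
        return self.fc(h.flatten(1))

class BlockSparseFNN(nn.Module):
    """M dense blocks applied to the input in parallel, combined by one linear layer."""
    def __init__(self, D, M=8, width=16, L=3):
        super().__init__()
        def block():
            layers, d_in = [], D
            for _ in range(L):
                layers += [nn.Linear(d_in, width), nn.ReLU()]
                d_in = width
            return nn.Sequential(*layers)
        self.blocks = nn.ModuleList([block() for _ in range(M)])
        self.final = nn.Linear(M * width, 1)

    def forward(self, x):                     # x: (batch, D)
        return self.final(torch.cat([b(x) for b in self.blocks], dim=1))

x = torch.randn(2, 32)                        # batch of 2 inputs with D = 32
print(ResNetTypeCNN(D=32)(x).shape, BlockSparseFNN(D=32)(x).shape)
```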
Our second main theorem bounds the estimation error of the clipped ERM estimatorf. DISPLAYFORM0 (conv) and B (fc) as in Theorem 1. Suppose L m, C, K satisfies the equation FORMULA24 DISPLAYFORM1 Here, DISPLAYFORM2. M 1 and M 2 are defined by DISPLAYFORM3 The first term of FORMULA27 is the approximation error achieved by F (FNN). On the other hand, M 1 and M 2 are determined by the architectural parameters of F (CNN) -M 1 corresponds to the Lipschitz constant of a function realized by a CNN and M 2 is the number of parameters, including zeros, of a CNN. Therefore, the second term of represents the model complexity of F (CNN). There is a trade-off between the two terms. Using appropriately chosen M to balance them, we can evaluate the order of estimation error with respect to the sample size N. Corollary 1. Under the same assumptions as Theorem 2, suppose further log DISPLAYFORM4 The Barron class is an example of the function class that can be approximated by block-sparse FNNs. We employ the definition of Barron functions used in.Definition 3 (Barron class). We say a measurable function f DISPLAYFORM0 Here, F andF are the Fourier transformation and the inverse Fourier transformation, respectively. studied the approximation of the Barron function f• with the parameter s = 2 by a linear combination of M ridge functions (i.e., a 2-layered ReLU FNN). Specifically, they showed that there exists a function f M of the form DISPLAYFORM1 with |b m | ≤ 1, a m 1 = 1 and |t m | ≤ 1, such that f DISPLAYFORM2. Using this approximator f M, we can derive the same approximation order using CNNs by applying Theorem 1 with DISPLAYFORM3 Barron function with the parameter s = 2 such that f• = 0 and ∇f DISPLAYFORM4 with M residual blocks, each of which has depth O and at most 4 channels, and whose filter size is at DISPLAYFORM5 We have one design choice when we apply Corollary 1 to derive the estimation error: how to set B (bs) and B (fin). Looking at, the naive choice would be B (1 + ρ m) whose logarithm is O(M). We want its logarithm to beÕ. In order to do that, we change the relative scale between parameters in the block-sparse part and the fully-connected part using the homogeneous property of the ReLU function: ReLU(ax) = aReLU(x) for a > 0. The rescaling operation enables us to choose B, depth of each residual block L = O, channel size C = O, filter size K ∈ {2, . . ., D}, and norm bounds for the con- DISPLAYFORM6, and for the fully-connected part DISPLAYFORM7 such that for sufficiently large N, the clipped ERM estimatorf of F: DISPLAYFORM8. Here, DISPLAYFORM9 We next consider the approximation and error rates of CNNs when the true function is a β-Hölder function. Definition 4 (Hölder class). Let β > 0, f DISPLAYFORM0 Yarotsky FORMULA24 showed that FNNs with O(S) non-zero parameters can approximate any D variate β-Hölder function (β > 0) with the order ofÕ(S − β D). also proved a similar statement using a different construction method. They only specified their width (only), depth, and non-zero parameter counts of the approximating FNN and did not write in detail how non-zero parameters are distributed explicitly in the statements (see Theorem 1 of and Theorem 5 of Schmidt-Hieber FORMULA24). However, if we carefully look at their proofs, we find that we can transform the FNNs they constructed into the block-sparse ones. Therefore, we can utilize these FNNs and apply Theorem 1. 
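The trade-off noted after Theorem 2 can be spelled out with a short, hedged calculation. The generic exponents γ1 and γ2 below are the ones instantiated for the Hölder (and analogously the Barron) class in the surrounding corollaries, and logarithmic factors are absorbed into the Õ notation.

```latex
% Hedged sketch of the bias--complexity balance behind Corollary 1 (logs absorbed).
\begin{align*}
  \text{(estimation error)} &\;\lesssim\;
     \underbrace{\inf_{f\in\mathcal F}\|f-f^\circ\|_\infty^2}_{\text{approximation}}
     \;+\; \underbrace{\frac{M_2\log\!\big((B^{(\mathrm{conv})}\vee B^{(\mathrm{fc})})M_1\big)}{N}}_{\text{model complexity}},\\
  \text{assume}\quad
     \inf_{f\in\mathcal F}\|f-f^\circ\|_\infty = \tilde O(M^{-\gamma_1}),
     &\qquad M_2 = O(M^{\gamma_2}), \qquad \log M_1 = \tilde O(1),\\
  \min_{M}\; M^{-2\gamma_1} + \frac{M^{\gamma_2}}{N}
     &\;\Longrightarrow\; M \asymp N^{\frac{1}{2\gamma_1+\gamma_2}},
     \qquad \text{rate } \tilde O\!\big(N^{-\frac{2\gamma_1}{2\gamma_1+\gamma_2}}\big),\\
  \text{H\"older class } \big(\gamma_1 = \tfrac{\beta}{D},\; \gamma_2 = 1\big)
     &\;\Longrightarrow\; \tilde O\!\big(N^{-\frac{2\beta}{2\beta+D}}\big)
     \quad \text{(the minimax rate up to log factors).}
\end{align*}
```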
To meet the assumption of Corollary 1, we again rescale the parameters of the FNNs, as we did in the Barron class case, so that log M 1 =Õ. We can derive the approximation and estimation errors by setting γ 1 = β D and γ 2 = 1. and O channels, and whose filter size is at most K, such that f DISPLAYFORM1 Corollary 5. There exist the number of residual blocks DISPLAYFORM2, depth of each residual block L =Õ, channel size C = O, filter size K ∈ {2, . . ., D}, norm bounds for the convolution part B (conv) = O, and for the fully-connected part B (fc) > 0 (log B (fc) = O(log N)) such that for sufficiently large N, the clipped ERM estimatorf of F: DISPLAYFORM3. Here, DISPLAYFORM4 Since the estimation error rate of the β-Hölder class is DISPLAYFORM5 (see, e.g.,), Corollary 5 implies that our CNN can achieve the minimax optimal rate up to logarithmic factors even the width D, the channel size C, and the filter size K are constant with respect to the sample size N. In this paper, we established new approximation and statistical learning theories for CNNs by utilizing the ResNet-type architecture of CNNs and the block-sparse structure of FNNs. We proved that any M -way block-sparse FNN is realizable using CNNs with O(M) additional parameters, when the width of the FNN is fixed. Using this , we derived the approximation and estimation errors for CNNs from those for block-sparse FNNs. Our theory is general because it does not depend on a specific function class, as long as we can approximate it with block-sparse FNNs. To demonstrate the wide applicability of our , we derived the approximation and error rates for the Barron class and Hölder class in almost same manner and showed that the estimation error of CNNs is same as that of FNNs, even if the CNNs have a constant channel size, filter size, and width with respect to the sample size. The key techniques were careful evaluations of the Lipschitz constant of CNNs and non-trivial weight parameter rescaling of FNNs. One of the interesting open questions is the role of the weight rescaling. We critically use the homogeneous property of the ReLU activation function to change the relative scale between the block-sparse part and the fully-connected part, if it were not for this property, the estimation error rate would be worse. The general theory for rescaling, not restricted to the Barron nor Hölder class would be beneficial for deeper understanding of the relationship between the approximation and estimation capabilities of FNNs and CNNs. Another question is when the approximation and estimation error rates of CNNs can exceed that of FNNs. We can derive the same rates as FNNs essentially because we can realize block-sparse FNNs using CNNs that have the same order of parameters (see Theorem 1). Therefore, if we dig into the internal structure of FNNs, like repetition, more carefully, the CNNs might need fewer parameters and can achieve better estimation error rate. Note that there is no hope to enhance this rate for the Hölder case (up to logarithmic factors) because the estimation rate using FNNs is already minimax optimal. It is left for future research which function classes and constraints of FNNs, like block-sparseness, we should choose. For tensor a, a +:= a∨0 where maximum operation is performed in element-wise manner. Similarly a −:= −(−a ∨ 0). Note that a = a + − a − holds for any tensor a. For normed spaces (V, · V), (W, · W) and linear operator T: V → W we denote the operator norm of T by T op:= sup v V =1 T v W. 
For a sequence w = (w,..., w (L) ) and l ≤ l, we denote its subsequence from the l-th to l -th elements by w[l : l]:= (w (l),..., w (l) ). 1 P equals to 1 if the statement P is true, equals to 0 otherwise. DISPLAYFORM0, we realize a CNN f (CNN) using M residual blocks by "serializing" blocks in the FNN and converting them into convolution layers. First, we double the channel size using the m = 0 part of CNN (i.e., D (L0) 0 = 2). We will use the first channel for storing the original input signal for feeding to downstream (i.e., m ≥ 1) blocks and the second one for accumulating the output of each blocks, that is, m size-1 filters made from the weight parameters of the corresponding layer of the FNN. Observing that the convolution operation with size-1 filter is equivalent to a dimension-wise affine transformation, the first coordinate of the output of l-th layer of the CNN is inductively same as that of the m-th block of the FNN. After computing the m-th block FNN using convolutions, we add its output to the accumulating channel in the identity mapping. Finally, we pick the first coordinate of the accumulating channel and subtract the bias term using the final affine transformation. We relate the approximation error of Theorem 2 with the estimation error using the covering number of the hypothesis class F (CNN). Although there are several theorems of this type, we employ the one in due to its convenient form (Lemma 5). We can prove that the logarithm of the covering number is upper bounded by M 2 log((B (conv) ∨ B (fc) )M 1 /ε) (Lemma 4) using the similar techniques to the one in. Theorem 2 is the immediate consequence of these two lemmas. To prove Cororraly 1, we set M = O(N α) for some α ≥ 0. Then, under the assumption of the corolarry, we have DISPLAYFORM0 from Theorem 2. The order of the right hand side with respect to N is minimized when α = DISPLAYFORM1 and b ∈ R such that 1. DISPLAYFORM2 Proof. First, observe that the convolutional layer constructed from u = [u 1 . . . u K] ∈ R K×1×1 takes the inner product with the first K elements of the input signal: DISPLAYFORM3 K×1×1 works as the "left-translation" by K − 1. Therefore, we should define w so that it takes the inner product with the K left-most elements in the first channel and shift the input signal by K − 1 with the second channel. Specifically, we define DISPLAYFORM4 We set b:= (0, . . ., 0 L0 − 1 times, t). Then w and b satisfy the condition of the lemma. The following lemma shows that we can convert any linear CNN to a ReLU CNN that has approximately 4 times larger parameters. This type of lemma is also found in (Lemma 2.3). DISPLAYFORM0 be filter sizes. Let (l). Consider the linear convolution layers constructed from w and b: DISPLAYFORM1 DISPLAYFORM2 ∞, and DISPLAYFORM3 Proof. We definew andb as follows: DISPLAYFORM4 By definition, a pair (w,b) satisfies the conditions FORMULA24 and FORMULA27. For any x ∈ R D, we set y DISPLAYFORM5 for any α, β ∈ [D]. Summing them up and using the definition ofb DISPLAYFORM6 (y DISPLAYFORM7 (y DISPLAYFORM8 (y DISPLAYFORM9 −(w (l+1) ) α,:,: (y DISPLAYFORM10 for any α, β ∈ [D]. Again, by taking the summation and using the definition ofb (l+1), we get DISPLAYFORM11.By applying ReLU, we get DISPLAYFORM12 By using the induction hypothesis, we get DISPLAYFORM13 Therefore, the claim holds for l + 1. By induction, the claim holds for L, which is what we want to prove. We can concatenate two CNNs with the same depths and filter sizes in parallel. 
Although it is almost trivial, we state it formally as a proposition. In the following proposition, C and C is not necessarily 1. DISPLAYFORM0 We define w and b in the same way, with the exception that C (l) is replaced with C (l). We definew = (w,...,w (L) ) and DISPLAYFORM1. Then, we have, DISPLAYFORM2 for any x, x ∈ R D×C and any σ: R → R.Note that by the definition of · 0 and · ∞, we have DISPLAYFORM3 and DISPLAYFORM4. We will construct the desired CNN consisting of M residual blocks, whose m-th residual block is made from the ingredients of the corresponding m-th block in f Lm], and w m ). DISPLAYFORM5 [The m = 0 Block]: We prepare a single convolutional layer with 2 output channels and 2 size-1 filters suth that the first filter works as the identity function and the second filter inserts zeros to the second channel. Weight parameters of this convolutional layer are all zeros except single one. We denote this block by Conv 0. DISPLAYFORM6 1×D is the d-th row of the matrix DISPLAYFORM7 m ×D. We apply Lemma 1 and Lemma 2 and obtain ReLU CNNs realizing the hinge functions. By combining them in parallel using Proposition 1, we have a learnable parameter θ DISPLAYFORM8 Since we double the channel size in the m = 0 part, the identity mapping has 2 channels. Therefore, we made Conv ReLU θ m so that it has 2 input channels and neglects the input signals coming from the second one. This is possible by adding filters consisting of zeros appropriately. Next, for l-th layer (l = 2, . . ., L m), we prepare size-1 filters w DISPLAYFORM9 where ⊗ is the Kronecker product of matrices. Intuitively, the l = 2 layer will pick all odd indices of the output of Conv DISPLAYFORM10. By the inductive calculation, we have DISPLAYFORM11 By definition, Conv m has the depth of L 0 + L m − 1, at most 4D DISPLAYFORM12 DISPLAYFORM13 Then, we have DISPLAYFORM14 (the subscript 1 represents the first coordinate).[Final Fully-connected Layer] Finally, we set w:= B We assume w, w ∈ R K×J×I, b, b ∈ R, and x ∈ R D×I unless specified. We have in mind that the activation function σ is either the ReLU function or the identity function id. But the following proposition holds for any 1-Lipschitz function such that σ = 0. Remember that we can treat L w as a linear operator from R D×I to R D×J. We endow R D×I and R D×J with the sup norm and denote the operator norm DISPLAYFORM0 is evaluated as follows: DISPLAYFORM1 Note that the first inequality holds because the ReLU function is 1-Lipschitz. DISPLAYFORM2 In the following propositions in this subsection, we assume W, W ∈ R D×C, b, b ∈ R, and x ∈ R D×C. Again, these propositions hold for any 1-Lipschitz function σ: R → R such that σ = 0. But σ = ReLU or id is enough for us. DISPLAYFORM0 The number of non-zero summand in the summation is at most W 0 and each summand is bounded by W ∞ x ∞ Therefore, we have |FC DISPLAYFORM1 In this section, we denote the architecture of CNNs by DISPLAYFORM2 and the norm constraint on the convolution part by DISPLAYFORM3 and (conv). Then, for any DISPLAYFORM4 DISPLAYFORM5 Proof. We write in shorthand as C DISPLAYFORM6 By Proposition 2 and assumptions w DISPLAYFORM7, it is further bounded by DISPLAYFORM8 by Proposition 2 and 4) DISPLAYFORM9 by Proposition 2 and 5) DISPLAYFORM10 We denote the l-th convolution layer of the m-th block by C (l) m and the m-th residual block of by C m: Proof. By using Proposition 9 inductively, we have DISPLAYFORM11 DISPLAYFORM12 Lemma 3. Let ε > 0. 
Suppose θ and θ are within distance ε, that is, max m,l w DISPLAYFORM13 We will bound each term of. By Proposition 8 and Proposition 11, DISPLAYFORM14 On the other hand, for m = 0,..., M, DISPLAYFORM15 DISPLAYFORM16 By applying and FORMULA37 to FORMULA37, we have DISPLAYFORM17 For a metric space (M 0, d) and ε > 0, we denote the (external) covering number of DISPLAYFORM0 Proof. The idea of the proof is same as that of Lemma 12 of Schmidt-Hieber (2017 (i.e., 2B DISPLAYFORM1 can be realized by parameters such that every pair of corresponding parameters are in a same bin, then, f − f ∞ ≤ ε by Lemma 3. We make a subset F 0 of F (CNN) by picking up every combination of bins for M 2 parameters. Then, for each f ∈ F (CNN), there exists f 0 ∈ F 0 such that f − f 0 ∞ ≤ ε. There are at most 2BM 1 ε −1 choices of bins for each parameter. Therefore, the cardinality of F 0 is at most DISPLAYFORM2 D.2 PROOF OF THEOREM 2 AND COROLLARY 1We use the lemma in to bound the estimation error of the clipped ERM estimatorf. Since our problem setting is slightly different from one in the paper, we restate the statement. Lemma 5 (cf. Lemma 10). Let F be a family of measurable functions from [−1, 1] D to R. Letf be the clipped ERM estimator of the regression problem described in Section 3.1. Suppose the covering number of F satisfies N (ε, F, · ∞) ≥ 3. Then, Proof. Basically, we convert our problem setting so that it fits to the assumptions of Lemma 10 of • (x n)). Then, the probability that D is drawn from P ⊗N is same as the probability that D is drawn from P ⊗N where P is the joint distribution of (X, Y). Also, we can show thatf is the ERM estimator of the regression problem Y = f DISPLAYFORM3 • +ξ using the dataset D:f 1 ∈ arg min f ∈F R D (f). We apply the Lemma 10 of with n ← N, d ← D, ε ← 1, δ ← 1 N, ∆ n ← 0, F ← F, F ← 2F,f ←f 1 and use the fact that the estimation error of the clipped ERM estimator is no worse than that of the ERM estimator, that is, Next, we prove the existence of a block-sparse FNN with constant-width blocks that optimally approximates a given β-Hölder function. It is almost same as the proof of Theorem 5 of. However, we need to construct the FNN so that it has a block-sparse structure. FORMULA24 ) layers. It is left for future research whether our can extend to the ResNet-type CNNs with pooling or Batch Normalization layers. Second, our CNN does not have ReLU activation after the junction points and the final layer of the 0-th block, while they have in the original ResNet. We choose this design to make proofs simpler. We can easily extend our to the architecture that adds the ReLU activations to those points with slight modifications using similar techniques appeared in Lemma 2 of the appendix. DISPLAYFORM4 DISPLAYFORM5
It is shown that ResNet-type CNNs are universal approximators and their expression ability is no worse than that of fully-connected neural networks (FNNs) with a \textit{block-sparse} structure, even if the size of each layer in the CNN is fixed.
593
scitldr
Few shot image classification aims at learning a classifier from limited labeled data. Generating the classification weights has been applied in many meta-learning approaches for few shot image classification due to its simplicity and effectiveness. However, we argue that it is difficult to generate the exact and universal classification weights for all the diverse query samples from very few training samples. In this work, we introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM), which addresses current issues by two novel contributions. i) AWGIM generates different classification weights for different query samples by letting each of query samples attends to the whole support set. ii) To guarantee the generated weights adaptive to different query sample, we re-formulate the problem to maximize the lower bound of mutual information between generated weights and query as well as support data. As far as we can see, this is the first attempt to unify information maximization into few shot learning. Both two contributions are proved to be effective in the extensive experiments and we show that AWGIM is able to achieve state-of-the-art performance on benchmark datasets. While deep learning methods achieve great success in domains such as computer vision , natural language processing , reinforcement learning , their hunger for large amount of labeled data limits the application scenarios where only a few data are available for training. Humans, in contrast, are able to learn from limited data, which is desirable for deep learning methods. Few shot learning is thus proposed to enable deep models to learn from very few samples . Meta learning is by far the most popular and promising approach for few shot problems (; ; ; ;). In meta learning approaches, the model extracts high level knowledge across different tasks so that it can adapt itself quickly to a new-coming task . There are several kinds of meta learning methods for few shot learning, such as gradient-based and metric-based . Weights generation, among these different methods, has shown effectiveness with simple formulation (; ; ; . In general, weights generation methods learn to generate the classification weights for different tasks conditioned on the limited labeled data. However, fixed classification weights for different query samples within one task might be sub-optimal, due to the few shot challenge. We introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM) in this work to address these limitations. In AWGIM, the classification weights are generated for each query sample specifically. This is done by two encoding paths where the query sample attends to the task context. However, we show in experiments that simple cross attention between query samples and support set fails to guarantee classification weights fitted to diverse query data since the query-specific information is lost during weights generation. Therefore, we propose to maximize the lower bound of mutual information between generated weights and query, support data. As far as we know, AWGIM is the first work introducing Variational Information Maximization in few shot learning. The induced computational overhead is minimal due to the nature of few shot problems. Furthermore, by maximizing the lower bound of mutual information, AWGIM gets rid of inner update without compromising performance. AWGIM is evaluated on two benchmark datasets and shows state-of-the-art performance. 
We also conducted detailed analysis to validate the contribution of each component in AWGIM. 2 RELATED WORKS 2.1 FEW SHOT LEARNING Learning from few labeled training data has received growing attentions recently. Most successful existing methods apply meta learning to solve this problem and can be divided into several categories. In the gradient-based approaches, an optimal initialization for all tasks is learned . learned a meta-learner LSTM directly to optimize the given fewshot classification task. learned the transformation for activations of each layer by gradients to better suit the current task. In the metric-based methods, a similarity metric between query and support samples is learned. (; ; ; ; a). Spatial information or local image descriptors are also considered in some works to compute richer similarities (; b;). Generating the classification weights directly has been explored by some works. generated classification weights as linear combinations of weights for base and novel classes. and both generated the classification weights from activations of a trained feature extractor. Graph neural network denoising autoencoders are used in . proposed to generate "fast weights" from the loss gradient for each task. All these methods do not consider generating different weights for different query examples, nor maximizing the mutual information. There are some other methods for few-shot classification. Generative models are used to generate or hallucinate more data in; ). and used the closed-form solutions directly for few shot classification. integrated label propagation on a transductive graph to predict the query class label. Attention mechanism shows great success in computer vision and natural language processing . It is effective in modeling the interaction between queries and key-value pairs from certain context. Based on the fact that keys and queries point to the same entities or not, people refer to attention as self attention or cross attention. In this work, we use both types of attention to encode the task and query-task information. The work most similar to ours is Attentive Neural Processes, which also employs self and cross attention. However, we are using attention for few-shot image classification via maximizing the mutual information. In stark contrast, worked on regression from the perspective of a stochastic process and the variational objective is optimized. Given two random variables x and y, mutual information I(x; y) measures the decrease of uncertainty in one random variable when another is known. It is defined as the Kullback-Leibler divergence between joint distribution p(x, y) and product of marginal distributions p(x) ⊗ p(y), When x and y are independent, p(x, y) = p(x) ⊗ p(y) so that I(x, y) = 0, indicating that knowing x does not reveal any information about y. When y is a deterministic function of x, I(x, y) achieves its maximum value. Mutual information has been widely applied in applications such as Generative Adversarial Networks, self-supervised learning , visual question generation and so on. Similarly, the attentive path enables the query samplex to be equipped with task knowledge. Both paths are achieved by attention mechanism.x ap is repeated to concatenate with X cp. The weight generator g takes these concatenated representations as input to generate classification weights W specific forx, denoted by the colorful matrix with slash. It can be used to predict the class label forx and X. 
W is also used to reconstruct the inputs of the generator g by two networks r 1 and r 2. In this way, the lower bound of mutual information is maximized and g is forced to generate classification weights sensitive to different query samples. In this section, we provide the problem formulation first. Then the proposed model is described in Sec. 3.3. The objective function, which maximizes the mutual information between certain variables, and theoretical analysis are provided in Sec. 3.4. Following many popular meta-learning methods for few shot classification, we formulate the problem under episodic training paradigm . One N -way K-shot task sampled from an unknown task distribution P (T) includes support set and query set: where S = {(x cn;k, y cn;k)|k = 1,..., K; n = 1,..., N }, Q = {(x 1, ...,x |Q|)}. Support set S contains N K labeled samples. Query set Q includesx and we need to predict labelŷ forx based on S. During meta-training, the meta-loss is estimated on Q to optimize the model. During metatesting, the performance of meta-learning method is evaluated on Q, provided the labeled S. The classes used in meta-training and meta-testing are disjoint so that the meta-learned model needs to learn the knowledge transferable across tasks and adapt itself quickly to novel tasks. Our proposed approach follows the general framework to generate the classification weights (; ; ; ; . In this framework, there is a feature extractor to output image feature embeddings. The meta-learner needs to generate the classification weights for different tasks. Latent Embedding Optimization (LEO) is one of the weights generation methods that is most related to our work. In LEO, the latent code z is generated by h conditioned on support set S, described as z = h(S). h is instantiated as relation networks . Classification weights w can be decoded from z with l, w = l(z). In the inner loop, we use w to compute the loss (usually cross entropy) on the support set and then update z: where L S indicates that the loss is evaluated on S only. The updated latent code z is used to decode new classification weights w with generating function l. w is adopted in the outer loop for query set Q and the objective function of LEO then can be written as min Here θ stands for the parameters of h and l and we omit the regularization terms for clarity. LEO avoids updating high-dimensional w in the inner loop by learning a lower-dimensional latent space, from which sampled z can be used to generate w. The most significant difference between LEO and AWGIM is that we do not need inner updates to adapt the model. Instead, AWGIM is a feedforward network trained to maximize the mutual information so that it fits to different tasks well. On the other hand, AWGIM learns to generate optimal classification weights for each query sample while LEO generates fixed weights conditioned on the support set within one task. In Section 3.4 we will show LEO can be casted as a special case of AWGIM under certain conditions. The framework of our proposed method is shown in Figure 1. Assume that we have a feature extractor, which can be a simple 4-layer Convnet or a deeper Resnet. All the images included in the sampled task T are processed by this feature extractor and represented as d-dimensional vectors afterwards, i.e., x cn;k,x ∈ R d. There are two paths to encode the task context and the individual query sample respectively, which are called contextual path and attentive path. 
The outputs of both paths are concatenated together as input to the generator for classification weights. Generated classification weights are used to not only predict the label ofx, but also maximize the lower bound of mutual information between itself and other variables, which will be discussed in the following section 3.4. The encoding process includes two paths, namely the contextual path and attentive path. The contextual path aims at learning representations for only the support set with a multi-head self-attention network f cp sa . The outputs of contextual path X cp ∈ R N K×d h 1 thus contain richer information about the task and can be used later for weights generation. Existing weights generation methods generate the classification weights conditioned on the support set only, which is equivalent to using contextual path. However, the classification weights generated in this way might be sub-optimal. This is because estimating the exact and universal classification weights from very few labeled data in the support set is difficult and sometimes impossible. The generated weights are usually in lack of adaptation to different query samples. We address this issue by introducing attentive path, where the individual query example attends to the task context and then is used to generate the classification weights. Therefore, the classification weights are adaptive to different query samples and aware of the task context as well. In the attentive path, a new multi-head self-attention network f ap sa on the support set is employed to encode the global task information. f ap sa is different from f cp sa in contextual path because the selfattention network in contextual path emphasizes on generating the classification weights. On the contrary, outputs of self-attention here plays the role of providing the V alue context for different query samples to attend in the following cross attention. Sharing the same self-attention networks might limit the expressiveness of learned representations in both paths. The cross attention network f ap ca applied on each query sample and task-aware support set is followed to produceX ap ∈ R |Q|×d h. We use multi-head attention with h heads in both paths. In one attention block, we produce h different sets of queries, keys and values. Multi-head attention is claimed to be able to learn more comprehensive and expressive representations from h different subspaces . More details of these two paths can be found in A.2. We replicate X cp ∈ R N K×d h andX ap ∈ R |Q|×d h for |Q| and N K times respectively and reshape them afterwards. Then we have X cp ∈ R |Q|×N K×d h andX ap ∈ R |Q|×N K×d h. These two tensors are concatenated to become X cp⊕ap ∈ R |Q|×N K×2d h. X cp⊕ap can be interpreted that each query sample has its own latent representations for support set to generate specific classification weights, which are both aware of the task-context and adaptive to individual query sample. cp⊕ap is decoded by the weights generator g: R 2d h → R 2d. We assume that the classification weights follow Gaussian distribution with diagonal covariance. g outputs the distribution parameters and we sample the weights from learned distribution during meta-training. The sampled classification weights are represented as W ∈ R |Q|×N K×d. To reduce complexity, we compute the mean value on K classification weights for each class to have W f inal ∈ R |Q|×N ×d. Therefore, ith query sample has its specific classification weight matrix W f inal i,:,: ∈ R N ×d. 
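A compact sketch of the two encoding paths and the weight generator is given below. It is written in PyTorch purely for illustration (the paper's implementation is in TensorFlow), and the projection MLPs proj_s/proj_q, the use of nn.MultiheadAttention, and the class-major ordering assumed for the support set are simplifying assumptions rather than the exact layers of AWGIM.

```python
# Minimal PyTorch sketch of AWGIM's two encoding paths and weight generator.
# Layer names, the projection MLPs and nn.MultiheadAttention are illustrative
# assumptions; sizes follow the paper's setting (d = 640, d_h = 128, 4 heads).
import torch
import torch.nn as nn

N, K, Q, d, d_h, heads = 5, 1, 15 * 5, 640, 128, 4    # 5-way 1-shot, 15 queries/class

proj_s = nn.Sequential(nn.Linear(d, d_h), nn.ReLU())  # embed support features
proj_q = nn.Sequential(nn.Linear(d, d_h), nn.ReLU())  # embed query features
sa_cp  = nn.MultiheadAttention(d_h, heads, batch_first=True)  # contextual path
sa_ap  = nn.MultiheadAttention(d_h, heads, batch_first=True)  # attentive path (self)
ca_ap  = nn.MultiheadAttention(d_h, heads, batch_first=True)  # attentive path (cross)
gen_g  = nn.Sequential(nn.Linear(2 * d_h, 256), nn.ReLU(), nn.Linear(256, 2 * d))

def generate_weights(x_support, x_query):
    """x_support: (N*K, d) support features in class-major order, x_query: (Q, d)."""
    nk, nq = x_support.shape[0], x_query.shape[0]
    s = proj_s(x_support).unsqueeze(0)                 # (1, N*K, d_h)
    q = proj_q(x_query).unsqueeze(0)                   # (1, Q, d_h)

    x_cp, _ = sa_cp(s, s, s)                           # task context, (1, N*K, d_h)
    s_ap, _ = sa_ap(s, s, s)                           # task-aware support
    x_ap, _ = ca_ap(q, s_ap, s_ap)                     # each query attends to support

    # Pair every query with every support representation and concatenate.
    cp = x_cp.expand(nq, -1, -1)                       # (Q, N*K, d_h)
    ap = x_ap.squeeze(0).unsqueeze(1).expand(-1, nk, -1)
    stats = gen_g(torch.cat([cp, ap], dim=-1))         # (Q, N*K, 2d)
    mu, log_var = stats.chunk(2, dim=-1)
    w = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterised sample
    return w.view(nq, N, K, d).mean(dim=2)             # (Q, N, d): average K shots

w_final = generate_weights(torch.randn(N * K, d), torch.randn(Q, d))
logits = torch.einsum('qd,qnd->qn', torch.randn(Q, d), w_final)
print(w_final.shape, logits.shape)
```

The final einsum corresponds to scoring each query sample with its own generated weight matrix, which is how the query predictions described next are computed.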
The prediction for query data can be computed byXW f inalT. The support data X is replicated for |Q| times and reshaped as X s ∈ R |Q|×N K×d. So the prediction for support data can also be computed as X s W f inalT. Besides the weights generator g, we have another two decoders r 1: They both take the generated weights W as inputs and learn to reconstruct X cp andX ap respectively. The outputs of r 1 and r 2 are denoted as X cp re,X ap re ∈ R |Q|×N K×d h. The reason we are using reconstruction as auxiliary tasks will be discussed in following Sec. 3.4. In this section, we perform the analysis for one query sample without loss of generality. The subscripts for classification weights are omitted for clarity. In general, we use (x, y) and (x,ŷ) to represent support and query samples respectively. Since the classification weights w generated from g are encoded with attentive path and contextual path, it is expected that we can directly have the query-specific weights. However, we show in the experiments that simply doing this does not outperform a weight generator conditioned only on the S significantly, which implies that the generated classification weights from two paths are not sensitive to different query samples. In other words, the information from attentive path is not kept well during the weights generation. To address this limitation, we propose to maximize the mutual information between generated weights w and support as well as query data. The objective function can be described as maxI((x,ŷ); w) + (x,y)∈S I((x, y); w) According to the chain rule of mutual information, we have I((x,ŷ); w) = I(x; w) + I(ŷ; w|x). Equation 6 Directly computing the mutual information in Equation 7 is intractable since the true posteriori distributions like p(ŷ|x, w), p(x|w) are still unknown. Therefore, we use Variational Information Maximization (; to compute the lower bound of Equation 5. We use p θ (x|w) to approximate the true posteriori distribution, where θ represents the model parameters. As a , we have H(·) is the entropy of a random variable. H(x) is a constant value for given data. We can maximize this lower bound as the proxy for the true mutual information. Similar to I(x; w), p θ (x|w), p θ (x, y|w) are used to approximate the true posteriori distribution p(x|w) and p(x, y|w). Put the lower bounds back into Equation 7. Omit the constant entropy terms and the expectation subscripts for clarity, we have the new objective function as The first two terms are maximizing the log likelihood of label for both support and query data with respective to the network parameters, given the generated classification weights. This is equivalent to minimizing the cross entropy between prediction and ground-truth. We assume that p θ (x|w) and p θ (x|w) are Gaussian distributions. r 1 and r 2 are used to approximate the mean of these two Gaussian distributions. Therefore maximizing the log likelihood is equivalent to reconstruct x cp and x ap with L2 loss. Thus the loss function to train the network can be written as is not equal to real log likelihood and we have to decide the weightage for each one. λ 1, λ 2, λ 3 are thus hyper-parameters for trade-off of different terms. With the help of last three terms, the generated classification weights are forced to carry information about the support data and the specific query sample. In LEO , the inner update loss is computed as cross entropy on support data. 
If we merge the inner update into outer loop, then the loss becomes the summation of first two terms in Equation 15. However, the weight generation in LEO does not involve specific query samples, thus making reconstructingx ap impossible. In this sense, LEO can be regarded as a special case of our proposed method, where only contextual path exits and λ 2 = λ 3 = 0. The encoding process in contextual path in computational complexity O((N K) 2 ) due to self-attention. Similarly, the computational complexity of attentive path is O((N K) 2 + |Q|(N K)). In total, the complexity is O((N K) 2 + |Q|(N K)). However, because of the nature of few-shot learning problem, the value of (N K) 2 is usually negligible. The value of |Q| depends on the setting and the cross attention can be implemented parallelly via matrix multiplication. Therefore, the induced computational overhead will be negligible. AWGIM avoids the inner update without compromising the performance, which furthers reduces both training and inference time significantly. The empirical evaluation is presented in A.3.4. We conduct experiments on miniImageNet and tieredImageNet , two commonly used benchmark datasets, to compare with other methods and analyze our model. Both datasets are subsets of ILSVRC-12 dataset . miniImageNet contains 100 randomly sampled classes with 600 images per class. We follow the train/test split in , where 64 classes are used for meta-training, 16 for meta-validation and 20 for meta-testing. tieredImageNet is a larger dataset compared to miniImageNet. There are 608 classes and 779,165 images in total. They are selected from 34 higher level nodes in ImageNet hierarchy. 351 classes from 20 high level nodes are used for meta-training, 97 from 6 nodes for meta-validation and 160 from 8 nodes for meta-testing. We use the image features in LEO provided by the authors 2. They trained a 28-layer Wide Residual Network on the meta-training set. Each image then is represented by a 640 dimensional vector, which is used as the input to our model. For N -way K-shot experiments, we randomly sample N classes from meta-training set and each of them contains K samples as the support set and 15 as query set. Similar to other works, we train 5-way 1-shot and 5-shot models on two dataset. During meta-testing, 600 N -way K-shot tasks are sampled from meta-testing set and the average accuracy for query set is reported with 95% confidence interval, as done in recent works (; ;). We use TensorFlow to implement our method and the code will be made available. d = 640 is the dimension of feature embeddings. d h is set to be 128. The number of heads h in attention module is set to be 4. g, r 1 and r 2 are 2-layer MLPs with 256 hidden units. We decide λ 1 = 1, λ 2 = λ 3 = 0.001 by meta-validation performance. Conv-4 48.70 ± 1.84% 63.11 ± 0.92% Meta LSTM Conv-4 43.44 ± 0.77% 60.60 ± 0.71% Prototypical Nets Conv-4 49.42 ± 0.78% 68.20 ± 0.66% Relation Nets Conv-4 50.44 ± 0.82% 65.32 ± 0.70% SNAIL Resnets-12 55.71 ± 0.99% 68.88 ± 0.92% TPN Resnets-12 59.46 75.65 MTL Resnets-12 61.20 ± 1.80% 75.50 ± 0.80 Dynamic WRN-28-10 60.06 ± 0.14% 76.39 ± 0.11% Prediction WRN-28-10 59.60 ± 0.41% 73.74 ± 0.19% DAE-GNN WRN-28-10 62.96 ± 0.15% 78.85 ± 0.10% LEO WRN-28-10 61.76 ± 0.08% 77.59 ± 0.12% AWGIM (ours) WRN-28-10 63.12 ± 0.08% 78.40 ± 0.11% Table 2: Accuracy comparison with other approaches on tieredImageNet. The are averaged on 600 tasks from meta-testing set with 95% confidence interval. Best are highlighted. 
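Given these hyper-parameter values, the training objective of Equation 15 can be sketched as below. The mapping of λ1, λ2, λ3 onto the support cross-entropy and the two reconstruction terms is inferred from the ablation discussion and is an assumption, not something the text states explicitly.

```python
# Hedged sketch of the training objective in Equation 15; the assignment of
# lambda_1 to the support cross-entropy and lambda_2 / lambda_3 to the
# reconstructions of the contextual- and attentive-path inputs is an assumption.
import torch
import torch.nn.functional as F

def awgim_loss(logits_q, y_q, logits_s, y_s, x_cp, x_cp_re, x_ap, x_ap_re,
               lam1=1.0, lam2=1e-3, lam3=1e-3):
    ce_query   = F.cross_entropy(logits_q, y_q)   # query classification
    ce_support = F.cross_entropy(logits_s, y_s)   # support classification
    rec_cp     = F.mse_loss(x_cp_re, x_cp)        # reconstruct contextual-path input
    rec_ap     = F.mse_loss(x_ap_re, x_ap)        # reconstruct attentive-path input
    return ce_query + lam1 * ce_support + lam2 * rec_cp + lam3 * rec_ap
```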
Model Feature Extractor 5-way 1-shot 5-way 5-shot MAML Conv-4 51.67 ± 1.81% 70.30 ± 1.75% Prototypical Nets Conv-4 53.31± 0.89% 72.69 ± 0.74% Relation Nets Conv-4 54.48 ± 0.93% 71.32 ± 0.78% TPN Conv-4 59.91 ± 0.96% 72.85 ± 0.74% is used to optimize the network with weight decay 1 × 10 −6. The initial learning rate is set to 0.0002 for 5-way 1-shot and 0.001 for 5-way 5-shot, which is decayed by 0.2 for every 15,000 iterations. We train the model for 50,000 iterations. Batch size is 64 for 5-way 1-shot and 32 for 5-way 5-shot. Similar to LEO , we first train the model on meta-training set and choose the optimal hyper-parameters by validation . Then we train the model on meta-training and meta-validation sets together using fixed hyper-parameters. We compare the performance of our approach AWGIM on two datasets with several state-of-theart methods proposed in recent years. The of MAML, Prototypical Nets, Relation Nets on tieredImageNet are evaluated by. The of Dynamic on miniImageNet with WRN-28-10 as the feature extractor is reported in . The other are reported in the corresponding original papers. We also include the backbone network structure of the used feature extractor for reference. The on miniImageNet and tieredImageNet are shown in Table 1 and 2 respectively. The top half parts of Table 1 and 2 display the methods belonging with different meta learning categories, such as metric-based(Matching Networks, Prototypical Nets), gradient-based (MAML, MTL), graph-based (TPN). The bottom part shows the classification weights generation approaches including Dynamic, Prediction, DAE-GNN, LEO and our proposed AWGIM. AWGIM can outperform all the methods in top parts of two table. Comparing with other classification weights generation methods in the bottom part, AWGIM still shows very competitive performance, namely the best on tieredImageNet and close to the state-of-the-art on miniImageNet. We note that all the classification weights generation methods are using WRN-28-10 as backbone network, which makes the comparison fair. In particular, AWGIM can outperform LEO in all settings. Table 3: Analysis of our proposed AWGIM. In the top half, the attentive path is removed to compare with LEO. In the bottom part, ablation analysis with respective to different components is provided. We also shuffle the generated classification weights randomly to show that they are indeed optimal for different query samples. We perform detailed analysis on AWGIM, shown in Table 3. We include the of for reference. "Generator in LEO" means that there is no inner update in LEO. In the upper part of the table, we first studied the effect of attentive path. We implemented two generators including only the contextual path during encoding. "Generator conditioned on S with IM" indicates that we add the cross entropy loss and reconstruction loss for support set. It can be observed that "Generator conditioned on S only" is trained with cross entropy on query set, which is similar to "Generator in LEO" without inner update. It is able to achieve similar or slightly better than "Generator in LEO", which implies that self-attention is no worse than relation networks used in LEO to model task-context. With information maximization, our generator is able to obtain slightly better performance than LEO. The effect of attention is investigated by replacing the attention modules with 2-layer MLPs, which is shown as "MLP encoding". 
More specifically, one MLP in contextual path is used for support set and another MLP in attentive path for query samples. We can see that even without attention to encode the task-contextual information, "MLP encoding" can achieve accuracy close to LEO, for the sake of information maximization. However, if we let λ 1 = λ 2 = λ 3 = 0 for MLP encoding, the performance drops significantly, which demonstrates the importance of maximizing the information. We conducted ablation analysis with respective to λ 1, λ 2 and λ 3 to investigate the effect of information maximization. First, λ 1, λ 2 and λ 3 are all set to be 0. In this case, the accuracy is similar to "generator conditioned on S only", showing that the generated classification weights are not fitted for different query samples, even with the attentive path. It can also be observed that maximizing the mutual information between weights and support is more crucial since λ 1 = λ 2 = 0 degrades accuracy significantly, comparing with λ 3 = 0. We further investigate the relative importance of the classification on support as well as reconstruction. λ 1 = 0 affects the performance noticeably. We conjecture that the support label prediction is more critical for information maximization. The classification weights are generated specifically for each query sample in AWGIM. To this point, we shuffle the classification weights between query samples within the same classes and between different classes as well to study whether the classification weights are adapted for different query samples. Assume there are T query samples per class in one task. W f inal ∈ R |Q|×N ×d can be reshaped into W f inal ∈ R N ×T ×N ×d. Then we shuffle this weight tensor along the first and second axis randomly. The are shown as "random shuffle between classes" and "random shuffle in class" in Table 3. For 5-way 1-shot experiments, the random shuffle between classes degrades the accuracy noticeably while the random shuffle in class dose not affect too much. This indicates that when the support data are very limited, the generated weights for query samples from the same class are very similar to each other while distinct for different classes. When there are more labeled data in support set, two kinds of random shuffle show very close or even the same in 5-way 5-shot experiments, which are both worse than the original ones. This implies that the generated classification weights are more diverse and specific for each query sample in 5-way 5-shot setting. The possible reason is that larger support set provides more knowledge to estimate the optimal classification weights for each query example. More analysis is provided in Appendix A.3. In this work, we introduce Attentive Weights Generation via Information Maximization (AWGIM) for few shot image classification. AWGIM learns to generate optimal classification weights for each query sample within the task by two encoding paths. To guarantee this, the lower bound of mutual information between generated weights and query, support data is maximized. As far as we know, AWGIM is the first work utilizing mutual information techniques for few shot learning. The effectiveness of AWGIM is demonstrated by state-of-the-art performance on two benchmark datasets and extensive analysis. The multi-head attention can be described as X h1 ∈ R N K×d h is the matrix where each row stands for one support sample x cn;k h1. For one N -way K-shot task, the outputs of f sa cp are represented by a matrix X cp ∈ R N K×d h. 
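To make the contextual path described above concrete, the following PyTorch sketch embeds the N*K support features and runs one block of multi-head self-attention over them, so every support sample is encoded with awareness of the whole task. Layer sizes follow the reported hyperparameters (d = 640, d_h = 128, h = 4 heads), but the single-layer embedding, module layout and variable names are our own simplification of f_sa_cp, not the authors' code.

import torch
import torch.nn as nn

N, K, d, d_h, heads = 5, 1, 640, 128, 4

support = torch.randn(N * K, d)              # frozen WRN-28-10 features of the support set
f1 = nn.Linear(d, d_h)                       # per-sample embedding (1-layer stand-in for the MLP)
self_attn = nn.MultiheadAttention(embed_dim=d_h, num_heads=heads, batch_first=True)

x_h1 = f1(support).unsqueeze(0)              # (1, N*K, d_h): one task treated as a batch of one
x_cp, _ = self_attn(x_h1, x_h1, x_h1)        # each support sample attends to all others: O((NK)^2)
x_cp = x_cp.squeeze(0)                       # (N*K, d_h): task-context-aware support encoding

Because N*K is at most 25 in the reported 5-way settings, the quadratic attention cost noted in the complexity discussion above is negligible in practice.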
The attentive path is instantiated by attention, similar to contextual path. First, a MLP f 2: andx h2. Then we employ another H-head selfattention network f cn;k h2 to encode the global task information to each support sample, The cross attention between query and context-aware support samples are computed aŝ HereX ap ∈ R |Q|×d h is the matrix form ofx q, where each query sample is context-aware. ∈ R 2d h, where i, j stands for ith query sample and jth support sample. x cp⊕ap is decoded by the weights generator g: R 2d h → R 2d. We assume that the classification weights follow Gaussian distribution with diagonal covariance and we sample the weights from this distribution during meta-training, shown in Equation 23 and 24. A.3 EXPERIMENTAL ANALYSIS A.3.1 FEW SHOT REGRESSION AWGIM can be applied to few shot regression task by slight modification. During meta-training, we set the number of classes N equal to 1 and adapt the cross entropy loss to mean square error. We use the data points (x, y) as inputs to AWGIM and generate weight as well as bias parameters for a three layer MLP with hidden dimension 40. This is consistent with few shot regression experimental setting in LEO. The few shot regression tasks are constructed as either sinusoidal or linear regression tasks. For sinusoidal regression tasks, the amplitude range is We replace the multi-head attention in the two paths with single-head attention and conduct the 5-way 1-shot and 5-way 5-shot experiments on miniImageNet dataset. The are shown in Table 4. We can see clearly that multi-head attention improve the performance. In particular, for 5-way 1-shot experiment, single head attention gives close to MLP encoding, which indicates that single head attention struggles when data are extremely scarce. We compare AWGIM with LEO in terms of convergence speed. The batch size is set to be 16 for both methods. We use the hyper-parameters tuned by authors to train LEO. The accuracy of metavalidation set during meta-training on 5-way 1-shot miniImageNet is plotted, shown in Figure 3. we can see clearly that AWGIM converges faster than LEO and outperforms LEO except for the first few iterations. We measure the inference time of AWGIM to show that it induces minimal computational overhead. In comparison, we use "MLP encoding" in two paths, which has time complexity O(N K + |Q|). We use two set-ups on miniImageNet and the batch size is set to be 64. 100 batches are processed and we report the average consumed time for one batch. All these experiments on done with the same GPU and workstation. The are shown in Table 5. It can be observed that the usage of self-attention and cross attention in AWGIM occurs negligible overhead, compared with MLP encoding. This is because the values of N, K, |Q| are all relatively small and matrix multiplication further can be processed very fast by GPU. We visualize the generated classification weights by t-SNE . First we sample 400 tasks from meta-validation set of 5-way 1-shot miniImageNet experiment. Each task contains 5 query samples from 5 different classes. Thus in total there are 400 × 5 × 5 = 10, 000 weight vectors to visualize. As comparison, inputs to the generator g are also plotted. The visualization are shown in Figure 4. The inputs to g are displayed in (a, b) and the generated classification weights in (c, d). From the comparison between (a) and (c), we can see the decoded weights for each class in (c) are clustered closer than (a) in general. 
Red and blue dots in (b, d) denote the classification weights for two query samples from two classes within one task. It can be observed that g can generate adapted weights for different query samples. This is consistent with Table 3, where the results of "random shuffle between classes" suggest that query samples from different classes have distinct classification weights. Figure 4: t-SNE visualization of the inputs to g in (a, b) and the generated classification weights in (c, d). Blue and red dots in (b) and (d) are the classification weights for two query samples in the same task.
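Putting the attentive path and the weight decoder of the appendix together, the sketch below computes cross-attention from query samples to context-aware support encodings, pairs the two paths, and samples query-specific classification weights from the diagonal-Gaussian decoder via the reparameterization trick (the step described by Equations 23-24). The output shape matches W_final in R^{|Q| x N x d}; the single-layer decoder, the class-averaged contextual encodings and all variable names are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn

N, K, Q, d, d_h = 5, 1, 75, 640, 128

query = torch.randn(Q, d)                     # query features
x_cp = torch.randn(N, d_h)                    # contextual-path encodings, one per class (shots averaged)
support_kv = torch.randn(N * K, d_h)          # context-aware support encodings used as keys/values

f2 = nn.Linear(d, d_h)
cross_attn = nn.MultiheadAttention(embed_dim=d_h, num_heads=4, batch_first=True)
x_ap, _ = cross_attn(f2(query).unsqueeze(0), support_kv.unsqueeze(0), support_kv.unsqueeze(0))
x_ap = x_ap.squeeze(0)                        # (Q, d_h): each query is now task-aware

# pair every query with every class and decode a Gaussian over classification weights
pair = torch.cat([x_cp.unsqueeze(0).expand(Q, N, d_h),
                  x_ap.unsqueeze(1).expand(Q, N, d_h)], dim=-1)      # (Q, N, 2*d_h)
g = nn.Linear(2 * d_h, 2 * d)                 # stand-in for the 2-layer MLP decoder
mu, log_var = g(pair).chunk(2, dim=-1)
w_final = mu + torch.randn_like(mu) * (0.5 * log_var).exp()          # (Q, N, d)

logits = torch.einsum("qd,qnd->qn", query, w_final)                  # query-specific class scores

The random-shuffle experiments in Table 3 probe exactly whether w_final really differs along the query axis, i.e. whether the attentive path contributes anything beyond a single shared weight matrix.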
A novel few shot learning method to generate query-specific classification weights via information maximization.
594
scitldr
Conversational question answering (CQA) is a novel QA task that requires the understanding of dialogue context. Different from traditional single-turn machine reading comprehension (MRC), CQA is a comprehensive task comprised of passage reading, coreference resolution, and contextual understanding. In this paper, we propose an innovative contextualized attention-based deep neural network, SDNet, to fuse context into traditional MRC models. Our model leverages both inter-attention and self-attention to comprehend the conversation and passage. Furthermore, we demonstrate a novel method to integrate the BERT contextual model as a sub-module in our network. Empirical show the effectiveness of SDNet. On the CoQA leaderboard, it outperforms the previous best model's F1 score by 1.6%. Our ensemble model further improves the F1 score by 2.7%. Machine reading comprehension (MRC) is a core NLP task in which a machine reads a passage and then answers related questions. It requires a deep understanding of both the article and the question, as well as the ability to reason about the passage and make inferences. These capabilities are essential in applications like search engines and conversational agents. In recent years, there have been numerous studies in this field (; ;), with various innovations in text encoding, attention mechanisms and answer verification. However, traditional MRC tasks often take the form of single-turn question answering. In other words, there is no connection between different questions and answers to the same passage. This oversimplifies the conversational manner humans naturally take when probing a passage, where question turns are assumed to be remembered as context to subsequent queries. Figure 1 demonstrates an example of conversational question answering in which one needs to correctly refer "she" in the last two rounds of questions to its antecedent in the first question, "Cotton." To accomplish this kind of task, the machine must comprehend both the current round's question and previous rounds of utterances in order to perform coreference resolution, pragmatic reasoning and semantic implication. To facilitate research in conversation question answering (CQA), several public datasets have been published that evaluate a model's efficacy in this field, such as CoQA , QuAC and QBLink . In these datasets, to generate correct responses, models need to fully understand the given passage as well as the dialogue context. Thus, traditional MRC models are not suitable to be directly applied to this scenario. Therefore, a number of models have been proposed to tackle the conversational QA task. DrQA+PGNet combines evidence finding and answer generation to produce answers. BiDAF++ achieves better by employing answer marking and contextualized word embeddings on the MRC model BiDAF . FlowQA leverages a recurrent neural network over previous rounds of questions and answers to absorb information from its history context. Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer's horses slept. But Cotton wasn't alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters... What color was Cotton? A1: white Q2: Where did she live? A2: in a barn Q3: Did she live alone? A3: no Figure 1: Example passage and first three rounds of question and answers from CoQA dataset . Pronouns requiring coreference resolution is marked in bold. 
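To make the conversational setting of Figure 1 concrete, the short sketch below represents the turns as data and folds the history back into a single-turn query by prepending the previous rounds — the reduction introduced by DrQA+PGNet and also adopted by SDNet, as described in the next section. The function name and the exact string format are our own simplification.

history = [("What color was Cotton?", "white"),
           ("Where did she live?", "in a barn")]
current_question = "Did she live alone?"

def reformulate(history, question, n_rounds=2):
    # prepend the latest n_rounds of question-answer pairs to the current question,
    # turning conversational QA into ordinary single-turn reading comprehension
    flat = []
    for q, a in history[-n_rounds:]:
        flat.extend([q, a])
    return " ".join(flat + [question])

print(reformulate(history, current_question))
# -> "What color was Cotton? white Where did she live? in a barn Did she live alone?"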
In this paper, we propose SDNet, a contextual attention-based deep neural network for the conversational question answering task. Our network stems from machine reading comprehension models, but it has several unique characteristics to tackle context understanding. First, we apply both interattention and self-attention on the passage and question to obtain a more effective understanding of the passage and dialogue history. Second, we prepend previous rounds of questions and answers to the current question to incorporate contextual information. Third, SDNet leverages the latest breakthrough in NLP: BERT contextual embeddings . Different from the canonical way of employing BERT as a monolithic structure with a thin linear task-specific layer, we utilize BERT as a contextualized embedder and absorb its structure into our network. To accomplish this, we align the traditional tokenizer with the Byte Pair Encoding (BPE) tokenizer in BERT. Furthermore, instead of using only the last layer's output from BERT , we employ a weighted sum of BERT layer outputs to take advantage of all levels of semantic abstraction. Finally, we lock the internal parameters of BERT during training, which saves considerable computational cost. These techniques are also applicable to other NLP tasks. We evaluate SDNet on the CoQA dataset, and it improves on the previous state-of-the-art F 1 score by 1.6% (from 75.0% to 76.6%). The ensemble model further increases the F 1 score to 79.3%. In this section, we propose our neural model, SDNet, for the conversational question answering task. We first formulate the problem and then present an overview of the model before delving into the details of the model structure. Given a passage/context C, and question-answer pairs from previous rounds of conversation Q 1, A 1, Q 2, A 2,..., Q k−1, A k−1, the task is to generate response A k given the latest question Q k. The response is dependent on both the passage and historic questions and answers. To incorporate conversation history into response generation, we employ the idea from DrQA+PGNet to prepend the latest N rounds of QAs to the current question Q k. The problem is then converted into a single-turn machine reading comprehension task, where the reformulated question is Encoding layer encodes each token in passage and question into a fixed-length vector, which includes both word embeddings and contextualized embeddings. For contextualized embedding, we utilize the pretrained language understanding model BERT . Different from previous work, we fix the parameters in BERT model and use the linear combination of embeddings from different layers in BERT. Integration layer uses multi-layer recurrent neural networks (RNN) to capture contextual information within passage and question. To characterize the relationship between passage and question, we conduct word-level attention from question to passage both before and after the RNNs. We employ the idea of history-of-word from FusionNet to reduce the dimension of output hidden vectors. Furthermore, we conduct self-attention to extract relationship between words at different positions of context and question. Output layer computes the final answer span. It uses attention to condense the question into a fixedlength vector, which is then used in a bilinear projection to obtain the probability that the answer should start and end at each position. An illustration of our model SDNet is in Figure 2. We first use GloVe embedding for each word in the context and question. 
Additionally, we compute a feature vector f w for each context word, following the approach in DrQA. This feature vector contains a 12-dim POS embedding, an 8-dim NER embedding, a 3-dim exact matching vector em i indicating whether this word, its lower-case form or its stem appears in the question, and a 1-dim normalized term frequency. BERT as Contextual Embedder. We design a number of methods to leverage BERT as a contextualized embedder in our model. First, because BERT uses Byte Pair Encoding (BPE) as the tokenizer, the generated tokens are sub-words and may not align with traditional tokenizer . To incorporate BERT into our network, we first use a conventional tokenizer (e.g. spaCy) to get word sequences, and then apply the BPE tokenizer from BERT to partition each word w in the sequence into subwords w = (b 1, ..., b s). This alignment makes it possible to concurrently use BERT embeddings and other word-level features. The contextual embedding of w is defined to be the averaged BERT embedding of all sub-words b j, 1 ≤ j ≤ s. proposes the method to append thin task-specific linear layers to BERT, which takes the from the last transformer layer as input. However, as BERT contains multiple layers, we employ a weighted sum of these layer outputs to take advantage of information from all levels of semantic abstraction. This can help boost the performance compared with using only the last transformer's output. Third, as BERT contains hundreds of millions of parameters, it takes a lot of time and space to compute and store their gradients during optimization. To tackle this problem, we lock the internal weights of BERT during training, only updating the linear combination weights. This can significantly increase the efficiency during training, which can be especially useful when computing resource is limited. To summarize, suppose a word w is tokenized to s BPE tokens w = (b 1, b 2, ..., b s), and BERT has L layers that generate L embedding outputs for each BPE token, h The contextual embedding BERT w for word w is computed as: where α 1,..., α L are trainable parameters. Word-level Inter-Attention. We conduct attention from question to context (passage) based on GloVe word embeddings. Suppose the context word embeddings are {h where D ∈ R k×k is a diagonal matrix and U ∈ R d×k, k is the attention hidden size. To simplify notation, we denote the above attention function as Attn(A, B, C), which linearly combines the vector set C using attention scores computed from vector sets A and B. This resembles the definition of attention in transformer . It follows that the word-level interattention can be rewritten as Attn({h . Therefore, the input vector for each context word and question word is: RNN. In this component, we use two separate bidirectional LSTMs to form the contextualized understanding for C and Q: where h and K is the number of RNN layers. We use variational dropout for the input vector to each layer of RNN, i.e. the dropout mask is shared over different timesteps. Question Understanding. For each question word in Q, we employ one more RNN layer to generate a higher level of understanding of the question. Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate the previously mentioned concept with the current question for context understanding. Therefore we employ self-attention on question: is the final representation of question words. Multilevel Inter-Attention. 
After multiple RNN layers extract different levels of semantic abstraction, we conduct inter-attention from question to context based on these representations. However, the cumulative output dimensions from all previous layers can be very large and computationally inefficient. Here we leverage the history-of-word idea from FusionNet : the attention uses all previous layers to compute scores, but only linearly combines one RNN layer output. In detail, we conduct K + 1 times of multilevel inter-attention from each RNN layer output of question to context {m where HoW is the history-of-word vector: An additional RNN layer is added to context C: Self Attention on the Context. Similar to questions, SDNet applies self-attention to the context. Again, it uses the history-of-word concept to reduce the output dimensionality: The self-attention is followed by an additional RNN layer to generate the final representation of context words: {u 2.5 OUTPUT LAYER Question Condensation. The question is condensed into a single representation vector: where w is a trainable vector. Generating answer span. As SDNet outputs answers of interval forms, the output layer generates the probability that the answer starts and ends at the i-th context word, 1 ≤ i ≤ m: where W S, W E are parameters. The use of GRU is to transfer information from start position to end position computation. Special answer types. SDNet can also output special types of answer, such as affirmation "yes", negation "no" or no answer "unknown". We separately generate the probabilities of these three answers: P Y, P N, P U . For instance, the probability that the answer is "yes", P Y, is computed as: where W Y and w Y are parametrized matrix and vector, respectively. During training, all rounds of questions and answers for the same passage form a batch. The goal is to maximize the probability of the ground-truth answer, including span start/end position, affirmation, negation and no-answer situations. Therefore, we minimize the cross-entropy loss function L: indicate whether the k-th ground-truth answer is a passage span, "yes", "no" and "unknown", respectively. During inference, we pick the largest span/yes/no/unknown probability. The span is constrained to have a maximum length of 15. We evaluated our model on CoQA , a large-scale conversational question answering dataset. In CoQA, many questions require understanding of both the passage and previous rounds of questions and answers, which poses challenge to conventional machine reading models. Table 1 summarizes the domain distribution in CoQA. As shown, CoQA contains passages from multiple domains, and the average number of question answering turns is more than 15 per passage. For each in-domain dataset, 100 passages are in the development set, and 100 passages are in the test set. The rest in-domain dataset are in the training set. The test set also includes all of the out-of-domain passages. We use spaCy for word tokenization and employ the uncased BERT-large model to generate contextual embedding. During training, we use a dropout rate of 0.4 for BERT layer outputs and 0.3 for other layers. We use Adamax as the optimizer, with a learning rate of α = 0.002, β = (0.9, 0.999) and = 10 −8. We train the model for 30 epochs. The gradient is clipped at 10. The word-level attention has a hidden size of 300. The self attention layer for question words has a hidden size of 300. The RNNs for question and context have K = 2 layers and each layer has a hidden size of 125. 
The multilevel attention from question to context has a hidden size of 250. The self attention layer for context has a hidden size of 250. The final RNN layer for context words has a hidden size of 125. We compare SDNet 2 with the following baseline models: DrQA+PGNet , BiDAF++ (and FlowQA . Aligned with the official leaderboard, we use F 1 as the evaluation metric, which is the harmonic mean of precision and recall at word level between the predicted answer and ground truth. Table 2 shows the performance of SDNet and baseline models. 4 As shown, SDNet achieves significantly better than baseline models. In detail, the single SDNet model improves overall F 1 by 1.6%, compared with previous state-of-art model on CoQA, FlowQA. We also trained an ensemble model consisting of 12 SDNet models with the same structure but different random seeds for initialization. The ensemble model uses the answer from the most number of models as its predicted answer. Ensemble SDNet model further improves overall F 1 score by 2.7%. Figure 3 shows the F 1 score of SDNet on development set during training. As seen, SDNet overpasses all but one baseline models after the second epoch, and achieves state-of-the-art after 8 epochs. Ablation Studies. We conduct ablation studies on SDNet to verify the effectiveness of different parts of the model. As Table 3 shows, our proposed weighted sum of per-layer output from BERT is crucial, boosting the performance by 1.75% compared with the canonical method of using only the last layer's output. This shows that the output from each layer in BERT is useful in downstream tasks. Using BERT-base instead of the BERT-large pretrained model hurts the F 1 score by 2.61%.Variational dropout and self attention can each improve the performance by 0.24% and 0.75% respectively. Contextual history. In SDNet, we utilize conversation history via prepending the current question with previous N rounds of questions and ground-truth answers. We experimented with the effect of N and present the in Table 4. As shown, excluding dialogue history (N = 0) can reduce the F 1 score by as much as 8.56%, manifesting the importance of contextual information in conversational QA task. The performance of our model peaks when N = 2, which was used in the final SDNet model. In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle the conversational question answering task. By leveraging inter-attention and self-attention on passage and conversation history, the model is able to comprehend dialogue flow and the passage. Furthermore, we leverage the latest breakthrough in NLP, BERT, as a contextual embedder. We design the alignment of tokenizers, linear combination and weight-locking techniques to adapt BERT into our model in a computation-efficient way. SDNet achieves superior over previous approaches. On the public dataset CoQA, SDNet outperforms previous state-of-the-art model by 1.6% in overall F 1 score and the ensemble model further improves the F 1 by 2.7%. Our future work is to apply this model to open-domain multiturn QA problem with large corpus or knowledge base, where the target passage may not be directly available. This will be a more realistic setting to human question answering.
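One detail of SDNet described above that is easy to get wrong in a reimplementation is the BERT usage: sub-word vectors are averaged per word, the per-layer outputs are mixed with trainable weights, and BERT itself stays frozen. The sketch below shows just that step on placeholder hidden states (shapes for BERT-large); the initialization of alpha and the variable names are our own assumptions.

import torch
import torch.nn as nn

L, s, hidden = 24, 3, 1024                    # BERT-large layers; a word w split into s BPE pieces
bpe_layers = torch.randn(L, s, hidden)        # frozen BERT outputs for the pieces of w (no gradient)

alpha = nn.Parameter(torch.full((L,), 1.0 / L))   # the only trainable part of the embedder

per_layer_word = bpe_layers.mean(dim=1)       # average sub-words within each layer: (L, hidden)
bert_w = (alpha.unsqueeze(1) * per_layer_word).sum(dim=0)   # weighted sum over layers: (hidden,)

Since gradients only reach alpha, this keeps memory and compute close to that of a fixed feature extractor while still letting the model choose which level of semantic abstraction it needs.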
A neural method for conversational question answering with attention mechanism and a novel usage of BERT as contextual embedder
595
scitldr
Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training. In recent years, Generative adversarial networks (GANs) have been becoming the state-of-the-art in several generative modeling tasks, ranging from image generation to imitation learning . They are based on an idea of a two-player game, in which a discriminator tries to distinguish between real and generated data samples, while a generator tries to fool the discriminator, learning to produce realistic samples on the long run. Wasserstein GAN (WGAN) was proposed as a solution to the issues present in the original GAN formulation. Replacing the discriminator, WGAN trains a critic to approximate the Wasserstein distance between the real and generated distributions. This introduced a new challenge, since Wasserstein distance estimation requires the function space of the critic to only consist of 1-Lipschitz functions. To enforce the Lipschitz constraint on the WGAN critic, originally used weight clipping, which was soon replaced by the much more effective method of Gradient Penalty (GP) , which consists of penalizing the deviation of the critic's gradient norm from 1 at certain input points. Since then, several variants of gradient norm penalization have been introduced (; ; ; b). Virtual Adversarial Training (VAT) is a semi-supervised learning method for improving robustness against local perturbations of the input. Using an iterative method based on power iteration, it approximates the adversarial direction corresponding to certain input points. Perturbing an input towards its adversarial direction changes the network's output the most. Inspired by VAT, we propose a method called Adversarial Lipschitz Regularization (ALR), enabling the training of neural networks with regularization terms penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient. It provides means to generate a pair for each input point, for which the Lipschitz constraint is likely to be violated with high probability. In general, enforcing Lipschitz continuity of complex models can be useful for a lot of applications. In this work, we focus on applying ALR to Wasserstein GANs, as regularizing or constraining Lipschitz continuity has proven to have a high impact on training stability and reducing mode collapse. Source code to reproduce the presented experiments is available at https://github.com/dterjek/adversarial_lipschitz_regularization. 
• We propose Adversarial Lipschitz Regularization (ALR) and apply it to penalize the violation of the Lipschitz constraint directly, ing in Adversarial Lipschitz Penalty (ALP). • Applying ALP on the critic in WGAN (WGAN-ALP), we show state-of-the-art performance in terms of Inception Score and Fréchet Inception Distance among non-progressive growing methods trained on CIFAR-10, and competitive performance in the high-dimensional setting when applied to the critic in Progressive Growing GAN trained on CelebA-HQ. Generative adversarial networks (GANs) provide generative modeling by a generator network g that transforms samples of a low-dimensional latent space Z into samples from the data space X, transporting mass from a fixed noise distribution P Z to the generated distribution P g. The generator is trained simultaneously with another network f called the discriminator, which is trained to distinguish between fake samples drawn from P g and real samples drawn from the real distribution P r, which is often represented by a fixed dataset. This network provides the learning signal to the generator, which is trained to generate samples that the discriminator considers real. This iterative process implements the minimax game played by the networks f and g. This training procedure minimizes the approximate Jensen-Shannon divergence (JSD) between P r and P g . However, during training these two distributions might differ strongly or even have non-overlapping supports, which might in gradients received by the generator that are unstable or zero. Wasserstein GAN (WGAN) was proposed as a solution to this instability. Originating from Optimal Transport theory , the Wasserstein metric provides a distance between probability distributions with much better theoretical and practical properties than the JSD. It provides a smooth optimizable distance even if the two distributions have non-overlapping supports, which is not the case for JSD. It raises a metric d X from the space X of the supports of the probability distributions P 1 and P 2 to the space of the probability distributions itself. For these purposes, the Wasserstein-p distance requires the probability distributions to be defined on a metric space and is defined as where Π(P 1, P 2) is the set of distributions on the product space X × X whose marginals are P 1 and P 2, respectively. The optimal π achieving the infimum in is called the optimal coupling of P 1 and P 2, and is denoted by π *. The case of p = 1 has an equivalent formulation called the Kantorovich-Rubinstein formula , where f: X → R is called the potential function, f L ≤ 1 is the set of all functions that are 1-Lipschitz with respect to the ground metric d X, and the Wasserstein-1 distance corresponds to the supremum over all 1-Lipschitz potential functions. The smallest Lipschitz constant for a real-valued function f with the metric space (X, d X) as its domain is given by Based on, the critic in WGAN implements an approximation of the Wasserstein-1 distance between P g and P r. The minimax game played by the critic f and the generator g becomes min a formulation that proved to be superior to the standard GAN in practice, with substantially more stable training behaviour and improved sample quality, although recent GAN variants do not always use this objective . With WGAN, the challenge became effectively restricting the smallest Lipschitz constant of the critic f, sparking the birth of a plethora of Lipschitz regularization techniques for neural networks. 
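For reference in the sections that follow, this is how the Kantorovich-Rubinstein objective is typically estimated from minibatches when training the critic. The tiny critic and sample shapes are placeholders, and the estimate is only meaningful when f is kept (approximately) 1-Lipschitz by one of the regularization techniques discussed next.

import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def critic_loss(f, x_real, x_fake):
    # negative Kantorovich-Rubinstein estimate: minimizing it maximizes E_real[f] - E_fake[f]
    return f(x_fake).mean() - f(x_real).mean()

def generator_loss(f, x_fake):
    # the generator tries to increase the critic's value on its samples
    return -f(x_fake).mean()

x_real, x_fake = torch.randn(64, 2), torch.randn(64, 2)
critic_loss(critic, x_real, x_fake).backward()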
A general definition of the smallest Lipschitz constant of a function where the metric spaces (X, d X) and (Y, d Y) are the domain and codomain of the function f, respectively. The function f is called Lipschitz continuous if there exists a real constant for any x, y ∈ X. Then, the function f is also called K-Lipschitz. Theoretical properties of neural networks with low Lipschitz constants were explored in , and , showing that it induces better generalization. Learning mappings with Lipschitz constraints became prevalent in the field of deep learning with the introduction of WGAN. Enforcing the Lipschitz property on the critic was first done by clipping the weights of the network. This approach achieved superior compared to the standard GAN formulation, but still sometimes yielded poor quality samples or even failed to converge. While clipping the weights enforces a global Lipschitz constant, it also reduces the function space, which might not include the optimal critic any more. Soon this method has been replaced by a softened one called Gradient Penalty (GP) . Motivated by the fact that the optimal critic should have unit gradient norm on lines connecting the coupled points (x 1, x 2) ∼ π * according to, they proposed a regularizer that enforces unit gradient norm along these lines, which not only enforces the Lipschitz constraint, but other properties of the optimal solution as well. However, π * is not known in practice, which is why proposed to apply GP on samples of the induced distribution P i, by interpolating samples from the marginals P 1 and P 2. The critic in the WGAN-GP formulation is regularized with the loss where P i denotes the distribution of samples obtained by interpolating pairs of samples drawn from P r and P g, and λ is a hyperparameter acting as a Lagrange multiplier. Theoretical arguments against GP were pointed out by and , arguing that unit gradient norm on samples of the distribution P i is not valid, as the pairs of samples being interpolated are generally not from the optimal coupling π *, and thus do not necessarily need to match gradient norm 1. Furthermore, they point out that differentiability assumptions of the optimal critic are not met. Therefore, the regularizing effect of GP might be too strong. As a solution, suggested using a loss penalizing the violation of the Lipschitz constraint either explicitly with or implicitly with where in both cases (a) + denotes max(0, a). The first method has only proved viable when used on toy datasets, and led to considerably worse on relatively more complex datasets like CIFAR-10, which is why used the second one, which they termed Lipschitz Penalty (LP). Compared to GP, this term only penalizes the gradient norm when it exceeds 1. As P τ they evaluated the interpolation method described above, and also sampling random local perturbations of real and generated samples, but found no significant improvement compared to P i. proposed dropout in the critic as a way for creating perturbed input pairs to evaluate the explicit Lipschitz penalty, which led to improvements, but still relied on using GP simultaneously. A second family of Lipschitz regularization methods is based on weight normalization, restricting the Lipschitz constant of a network globally instead of only at points of the input space. 
One such technique is called spectral normalization (SN) proposed in , which is a very efficient and simple method for enforcing a Lipschitz constraint with respect to the 2-norm on a per-layer basis, applicable to neural networks consisting of affine layers and K-Lipschitz activation functions. proposed a similar approach, which can be used to enforce a Lipschitz constraint with respect to the 1-norm and ∞-norm in addition to the 2-norm, while also being compatible with batch normalization and dropout. argued that any Lipschitzconstrained neural network must preserve the norm of the gradient during backpropagation, and to this end proposed another weight normalization technique (showing that it compares favorably to SN, which is not gradient norm preserving), and an activation function based on sorting. VAT is a semi-supervised learning method that is able to regularize networks to be robust to local adversarial perturbation. Virtual adversarial perturbation means perturbing input sample points in such a way that the change in the output of the network induced by the perturbation is maximal in terms of a distance between distributions. This defines a direction for each sample point called the virtual adversarial direction, in which the perturbation is performed. It is called virtual to make the distinction with the adversarial direction introduced in clear, as VAT uses unlabeled data with virtual labels, assigned to the sample points by the network being trained. The regularization term of VAT is called Local Distributional Smoothness (LDS). It is defined as where p is a conditional distribution implemented by a neural network, D(p, p) is a divergence between two distributions p and p, for which chose the Kullback-Leibler divergence (KLD), and is the virtual adversarial perturbation, where is a hyperparameter. VAT is defined as a training method with the regularizer applied to labeled and unlabeled examples. An important detail is that is minimized by keeping p(y|x) fixed and optimizing p(y|x + r vadv) to be close to it. The adversarial perturbation is approximated by the power iteration r vadv ≈ r k, where r 0 is a randomly sampled unit vector and ξ is another hyperparameter. This iterative scheme is an approximation of the direction at x that induces the greatest change in the output of p in terms of the divergence D. found that k = 1 iteration is sufficient in practical situations. 3 argued that penalizing the norm of the gradient as in is more effective than penalizing the Lipschitz quotient directly as in, as the former penalizes the slope of f in all spatial directions around x, unlike the latter, which does so only along (x − y). We hypothesize that using the explicit Lipschitz penalty in itself is insufficient because if one takes pairs of samples x, y randomly from P r, P g or P i (or just one sample and generates a pair for it with random perturbation), the violation of the Lipschitz penalty evaluated at these sample pairs will be far from its maximum, hence a more sophisticated strategy for sampling pairs is required. As we will show, a carefully chosen sampling strategy can in fact make the explicit penalty favorable over the implicit one. Consider the network f as a mapping from the metric space (X, d X) to the metric space (Y, d Y). Let us rewrite with y = x + r to get A given mapping f is K-Lipschitz if and only if for any given x ∈ X, taking the supremum over r in in a value K or smaller. 
Assuming that this supremum is always achieved for some r, we can define a notion of adversarial perturbation with respect to the Lipschitz continuity for a given x ∈ X as r adv = arg max and the corresponding maximal violation of the K-Lipschitz constraint as We define Adversarial Lipschitz Regularization (ALR) as the method of adding as a regularization term to the training objective that penalizes the violation of the Lipschitz constraint evaluated at sample pairs obtained by adversarial perturbation. We call this term Adversarial Lipschitz Penalty (ALP). To put it in words, ALP measures the deviation of f from being K-Lipschitz evaluated at pairs of sample points where one is the adversarial perturbation of the other. If added to the training objective, it makes the learned mapping approximately K-Lipschitz around the sample points it is applied at. We found that in the case of the WGAN critic it is best to minimize without keeping f (x) fixed. See Appendix A.1 for the semi-supervised case and Appendix A.2 for how VAT can be seen as a special case of Lipschitz regularization. In general, computing the adversarial perturbation is a nonlinear optimization problem. A crude and cheap approximation is r adv ≈ r k, where is the approximated adversarial direction with r 0 being a randomly sampled unit vector. The derivation of this formula is essentially the same as the one described in , but is included in Appendix A.3 for completeness. Unlike in VAT, we do not fix, but draw it randomly from a predefined distribution P over R + to apply the penalty at different scales. Theoretically, ALR can be used with all kinds of metrics d X and d Y, and any kind of model f, but the approximation of r adv imposes a practical restriction. It approximates the adversarial perturbation of x as a translation with length with respect to the 2-norm in the adversarial direction, which is only a perfect approximation if the ratio in is constant for any > 0. This idealized setting is hardly ever the case, which is why we see the search for other approximation schemes as an important future direction. There is a large number of methods for generating adversarial examples besides the one proposed in VAT (; ;), which could possibly be combined with ALR either to improve the approximation performance or to make it possible with new kinds of metrics. The latter is important since one of the strengths of the Wasserstein distance is that it can be defined with any metric d X, a fact that and built on by extending GP to work with metrics other than the Euclidean distance. emphasized the fact that through explicit Lipschitz penalties one could extend WGANs to more general metric spaces as well. In practice, one adds the Monte Carlo approximation of the expectation (averaged over a minibatch of samples) of either or the square of (or both) to the training objective, multiplied by a Lagrange multiplier λ. While VAT adds the expectation of to the training objective, for WGAN we have added the square of the expectation of. To train the Progressive GAN, we have added both the expectation and its square. In the semi-supervised setting, we added only the expectation similarly to VAT. We have found these choices to work best in these scenarios, but a principled answer to this question is beyond the scope of this paper. The target Lipschitz constant K can be tuned by hand, or in the presence of labeled data it is possible to calculate the Lipschitz constant of the dataset . 
The hyperparameters of the approximation scheme are k, ξ and those of P. Choosing the right hyperparameters can be done by monitoring the number of adversarial perturbations found by the algorithm for which the Lipschitz constraint is violated (and hence contribute a nonzero value to the expectation of), and tuning the hyperparameters in order to keep this number balanced between its maximum (which is the minibatch size) and its minimum (which is 0). If it is too high, it means that either K is too small and should be increased, or the regularization effect is too weak, so one should increase λ. If it is too low, then either the regularization effect is too strong, or ALR is parametrized in a way that it cannot find Lipschitz constraint violations efficiently. In the former case, one should decrease λ. In the latter, one should either decrease K, tune the parameters of P, or increase the number of power iterations k for the price of increased runtime. We have not observed any significant effect when changing the value of ξ in any of the tasks considered. In terms of efficiency when applied to WGANs, ALR compares favorably to the implicit methods penalizing the gradient norm, and to weight normalization techniques as well, as demonstrated in the experiments section. See Appendix A.4 for a showcase of the differences between weight normalization methods, implicit penalty methods and explicit penalty methods, represented by SN, LP and ALR, respectively. The key takeaways are that • penalty methods in a softer regularization effect than SN, • ALR is preferable when the regularized network contains batch normalization (BN) layers, and • ALR gives more control over the regularization effect, which also means there are more hyperparameters to tune. The performance of ALR mostly depends on the speed of the approximation of r adv. The current method requires 1 step of backpropagation for each power iteration step, which means that running time will be similar to that of LP and GP with k = 1. SN is much cheaper computationally than each penalty method, although we believe ALR has the potential to become relatively cheap as well by adopting new techniques for obtaining adversarial examples . We specialize the ALP formula with f being the critic, d X (x, y) = x − y 2, d Y (x, y) = |x − y| and K = 1, and apply it to the WGAN objective to arrive at a version with the explicit penalty, which uses adversarial perturbations as a sampling strategy. It is formulated as where P r,g is a combination of the real and generated distributions (meaning that a sample x can come from both), λ is the Lagrange multiplier, and the adversarial perturbation is defined as This formulation of WGAN in a stable explicit Lipschitz penalty, overcoming the difficulties experienced when one tries to apply it to random sample pairs as shown in. To evaluate the performance of WGAN-ALP, we trained one on CIFAR-10, consisting of 32 × 32 RGB images, using the residual architecture from , implemented in TensorFlow. Closely following , we used the Adam optimizer with parameters β 1 = 0, β 2 = 0.9 and an initial learning rate of 2 × 10 −4 decaying linearly to 0 over 100000 iterations, training the critic for 5 steps and the generator for 1 per iteration with minibatches of size 64 (doubled for the generator). We used as a loss function to optimize the critic. K = 1 was an obvious choice, and we found λ = 100 to be optimal (the training diverged for λ = 0.1, and was stable but performed worse for λ = 10 and 1000). 
The hyperparameters of the approximation of r adv were set to ξ = 10, P being the uniform distribution over [0.1, 10], and k = 1 power iteration. Both batches from P r and P g were used for regularization. We used Inception Score and FID as our evaluation metrics. The former correlates well with human judgment of image quality and is the most widely used among GAN models, and the latter has been shown to capture model issues such as mode collapse, mode dropping and overfitting, while being a robust and efficient metric . We monitored the Inception Score and FID during training using 10000 samples every 1000 iteration, and evaluated them at the end of training using 50000 samples. We ran the training setting described above 10 times with different random seeds, and calculated the mean and standard deviation of the final Inception Scores and FIDs, while also recording the maximal Inception Score observed during training. We report these values for WGAN-ALP and other relevant GANs (; ; a; ; ; ;) in Table 1. We did not run experiments to evaluate competing models, but included the values reported in the corresponding papers (with the exception of the FID for WGAN-GP, which was taken from Zhou et al. (2019a) ). They used different methods to arrive at the cited , from which that of is the one closest to ours. We show some generated samples in Figure 1a. We also trained WGAN-LP in our implementation. During training, the best observed Inception Score and FID were 8.13 and 18.49, while at the end of training the best final Inception Score and FID were 8.01 and 15.42. To see that ALR indeed restricts the Lipschitz constant of the critic, we monitored the gradient norms during training, which converged to ≈ 5 with λ = 100. This was also the case using LP with λ = 0.1, but the number of Lipschitz constraint violations found by the algorithm were much higher in this case than with ALR. Our toy example in Appendix A.4 showed that when the regularized network contains BN layers, ALR seems to work better than competing methods. In order to see if this still applies in more complex settings, we have trained a variant of WGAN in which the critic contains BN layers (WGAN-BN). did not use BN in the critic as they argued that GP is not valid in that setting, and indeed when we trained WGAN-BN with GP, the best Inception Score observed during training was only 6.29. When we applied ALP to WGAN-BN, the were nearly on par with the original setting without BN, producing an even better maximal Inception Score of 8.71. We leave the question of how BN affects Lipschitz continuity for future work. Generated samples are shown in Figure 1b. made the distinction between one-sided and two-sided penalties, represented by and. The latter is based on the fact that in WGAN, the optimal critic has unit gradient norm on lines connecting points from the optimal coupling π *. showed that since π * is not known in practice, one should use the one-sided penalty, while proposed a method to approximate π * with an auto-encoding scheme. In the limit r 2 → 0 the expression inside the arg max operator in is equivalent to the directional derivative of f along r, and the vector r adv corresponding to the maximum value of the directional derivative at x is equivalent to ∇ x f (x). Since the critic f corresponds to the potential function in the dual formulation of the optimal transport problem, at optimality its gradient at x points towards its coupling y, where (x, y) ∼ π *. 
From this perspective, sampling pairs (x, x + r adv) using can be seen as an approximation of the optimal coupling π *. To test how reasonable this approximation is, we have trained a WGAN variant with the two-sided explicit penalty formulated as which performed similarly to the one-sided case with λ = 10, but was less stable for other values of λ. The findings of were similar for the case of the implicit penalty. Improving the approximation scheme of r adv might render the formulation using the two-sided penalty preferable in the future. To show that ALR works in a high-dimensional setting as well, we trained a Progressive GAN on the CelebA-HQ dataset , consisting of 1024 × 1024 RGB images. We took the official TensorFlow implementation and replaced the loss function of the critic, which originally used GP, with a version of ALP. Using as the training objective was stable until the last stage of progressive growing, but to make it work on the highest resolution, we had to replace it with meaning that we used the sum of the absolute and squared values of the Lipschitz constraint violation as the penalty. The optimal hyperparameters were λ = 0.1, P being the uniform distribution over [0.1, 100], ξ = 10 and k = 1 step of power iteration. The best FID seen during training with the original GP version was 8.69, while for the modified ALP version it was 14.65. The example shows that while ALP did not beat GP in this case (possibly because the implementation was fine-tuned using GP), it does work in the high-dimensional setting as well. For samples generated by the best performing ALR and GP variants see Appendix A.5. Inspired by VAT, we proposed ALR and shown that it is an efficient and powerful method for learning Lipschitz constrained mappings implemented by neural networks. Resulting in competitive performance when applied to the training of WGANs, ALR is a generally applicable regularization method. It draws an important parallel between Lipschitz regularization and adversarial training, which we believe can prove to be a fruitful line of future research. Since VAT is a semi-supervised learning method, it is important to see how ALR fares in that regime. To show this, we have replicated one of the experiments from. We trained the ConvLarge architecture to classify images from CIFAR-10 with the same setting as described in , except that we did not decay the learning rate, but kept it fixed at 3 × 10 −4. We split the 50000 training examples into 4000 samples for the classification loss, 45000 samples for regularization and 1000 for validation, with equally distributed classes. Test performance was evaluated on the 10000 test examples. We have found that unlike in the unsupervised setting, here it was important to assume f (x) fixed when minimizing the regularization loss, and also to complement the smoothing effect with entropy minimization . The baseline VAT method was ALR specialized with K = 0, d X being the Euclidean metric, d Y being the KL divergence, fixed = 8 and λ = 1. This setting achieved maximal validation performance of 84.2% and test performance 82.46%. After some experimentation, the best performing choice was K = 0, d X being the l 2 metric, d Y the mean squared difference over the logit space (which parametrize the categorical output distribution over which the KL divergence is computed in the case of VAT), P being the uniform distribution over and λ = 1. This way the maximal validation performance was 85.3% and test performance 83.54%. 
Although this ≈ 1% is improvement is not very significant, it shows that ALR can be a competitive choice as a semi-supervised learning method as well. VAT was defined by considering neural networks implementing conditional distributions p(y|x), where the distribution over discrete labels y was conditioned on the input image x. To see why LDS, the regularization term of VAT, can be seen as special kind of Lipschitz continuity, we will use a different perspective. Consider a mapping f: X → Y with domain X and codomain Y, where X is the space of images and Y is the probability simplex (the space of distributions over the finite set of labels). Since a divergence is in general a premetric (prametric, quasi-distance) on the space of probability measures , and Lipschitz continuity is defined for mappings between metric spaces, let us restrict the divergence D from the VAT formulation to be a metric d Y. used KLD in their experiments, which is not a metric, but one can use e.g. the square root of JSD or the Hellinger distance, which are metrics. Let us metrize the space of images X with d X being the Euclidean metric. From this perspective, the network f is a mapping from the metric space (X, d X) to the metric space (Y, d Y). Let us also assume that we aim to learn a mapping f with the smallest possible f L by setting K to 0. To enforce the condition x + r ∈ X in, we bound the Euclidean norm of r from above by some predefined > 0. If we make the additional assumption that the supremum is always achieved with an r of maximal norm, the denominator in will be constant, hence the formulas with and without it will be equivalent up to a scaling factor. With these simplifications, and reduce to and which are equivalent to and, respectively. Let us consider the question of keeping f (x) fixed when minimizing an implementation detail. With this discrepancy aside, we have recovered VAT as a special case of Lipschitz regularization. We assume that f and d Y are both twice differentiable with respect to their arguments almost everywhere, the latter specifically at x = y. Note that one can easily find a d Y for which the last assumption does not hold, for example the l1 distance. If d Y is translation invariant, meaning that for each u ∈ Y, then its subderivatives at x = y will be independent of x, hence the method described below will still work. Otherwise, one can resort to using a proxy metric in place of d Y for the approximation, for example the l2 distance. so that the second-order Taylor approximation of d(r, x) is d(r, x) ≈ 1 2 r T H(x)r, where H(x) = ∇∇ r d(r, x) r=0 is the Hessian matrix. The eigenvector u of H(x) corresponding to its eigenvalue with the greatest absolute value is the direction of greatest curvature, which is approximately the adversarial direction that we are looking for. The power iteration defined by where r 0 is a randomly sampled unit vector, converges to u if u and r 0 are not perpendicular. Calculating H(x) is computationally heavy, which is why H(x)r i is approximated using the finite differences method as where the equality follows from. The hyperparameter ξ = 0 is introduced here. In summary, the adversarial direction is approximated by the iterative scheme of which one iteration is found to be sufficient and necessary in practice. To showcase the differences between weight normalization methods, implicit penalty methods and explicit penalty methods, represented by SN, LP and ALR, respectively, we devised the following toy example. 
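The derivation above translates directly into code: each power-iteration step needs one extra forward/backward pass to form the finite-difference approximation of H(x)r. The sketch below is the generic, divergence-based variant for a single input sample, with a squared L2 distance on the outputs standing in for d_Y (as noted, a proxy only needs to be twice differentiable at x = y); it mirrors the scalar-critic version sketched earlier and is an illustration rather than the reference implementation.

import torch

def vat_style_direction(p, x, k=1, xi=1e-6):
    def d(a, b):                                   # stand-in output-space divergence
        return ((a - b) ** 2).sum()
    with torch.no_grad():
        out = p(x)                                 # the output at x is held fixed while perturbing
    r = torch.randn_like(x)
    r = r / r.norm().clamp_min(1e-12)
    for _ in range(k):                             # r_{i+1} ∝ ∇_r d(p(x), p(x + xi*r_i)) ≈ H(x) r_i
        r.requires_grad_(True)
        grad = torch.autograd.grad(d(out, p(x + xi * r)), r)[0]
        r = grad / grad.norm().clamp_min(1e-12)
    return r.detach()

With k = 1 this costs one additional backpropagation per regularized sample, which is where the runtime comparison with GP and LP in the main text comes from.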
Suppose that we want to approximate the following real-valued mapping on the 2-dimensional interval [−4, 4] 2: for −4 ≤ x, y ≤ 4. In addition, we want the approximation to be 1-Lipschitz. It is easy to see that the optimal approximation with respect to the mean squared error iŝ This example has connections to WGAN, as the optimal critic is 1-Lipschitz, and its approximation will provide the learning signal to the generator in the form of gradients. Therefore, it is important to closely approximate the gradient of the optimal critic, which is achieved indirectly by Lipschitz regularization. In this example, we will see how closely the different Lipschitz regularization methods can match the gradient of the optimal approximationf opt. We implemented the example in PyTorch. For the approximationf, we use an MLP with 3 hidden layers containing 20, 40 and 20 neurons, respectively, with ReLU activations after the hidden layers, and a variant which also has batch normalization (BN) before the activations, since it has been found that BN hurts adversarial robustness , and hence it should also hurt Lipschitz continuity. We trained the networks for 2 14 iterations, with batches consisting of an input, a corresponding output, and an additional input for regularization. The inputs are drawn uniformly at random from [−4, 4] 2 and the output is defined by. The minibatch size was 64 for input-output pairs, and 1024 for regularization inputs. We used heatmaps to visualize the gradient norm surfaces of the optimal and learned mappings, with the color gradient going from black at 0 to white at 1, see Figure 2. This example is not intended to rank the competing Lipschitz regularization methods, as it always depends on the particular application which one is the best suited, but to show that they are fundamentally different and competent in their own way. (h) ∇xf ALP,λ=0.1,k=0 2 (i) ∇xf ALP,λ=1,k=0 2 (j) ∇xf ALP,λ=10,k=0 2 (k) ∇xf ALP,λ=0.1,k=1 2 (l) ∇xf ALP,λ=1,k=1 2 (m) ∇xf ALP,λ=10,k=1 2 (n) ∇xf ALP,λ=0.1,k=5 2 (o) ∇xf ALP,λ=1,k=5 2 (p) ∇xf ALP,λ=10,k=5 2 Figure 2: Gradient norm surfaces of optimal and learned approximations of f Without any kind of regularization, the network learned to approximate the target function very well, but its gradients look nothing like that off opt, although somehow it is a better match with BN. When we apply SN to the MLP layers, the without BN will be a very smooth mapping with maximum gradient norm far below 1. SN is not compatible with BN, the being only slightly better than the unregularized case. A detail not visible here is that because SN considers weight matrices as linear maps from R n to R m and normalizes them layer-wise, it regularizes globally instead of around actual data samples. In this case, on the whole of R 2 instead of just [−4, 4] 2. For WGANs trained on CIFAR-10, the input space consists of 32 × 32 RGB images with pixel values in [−1, 1], but the trained mapping is regularized on R 32×32×3 instead of just [−1, 1] 32×32×3 (which contains the supports of the real and fake distributions). This can hurt performance if the optimal mapping implemented by a particular network architecture is K-Lipschitz inside these supports, but not in some other parts of R 32×32×3. When the network is regularized using LP, the regularization strength can be controlled by tuning the value of λ. We trained with λ = 0.1, 1 and 10. Without BN, the highest of these values seems to work the best. With BN, the ing mapping is visibly highly irregular. 
With ALR, in addition to λ, we have additional control over the regularization by the hyperparameters of the approximation scheme of r_adv. After some experimentation, we have found the best P for this case was the uniform distribution over [10^-6, 10^-5]. We trained with λ = 0.1, 1 and 10, and k = 0, 1 and 5 power iterations. Arguably, both with and without BN the λ = 1 and k = 5 case seems like the best choice. Without BN, the results are quite similar to the LP case, but when BN is introduced, the resulting mappings are much smoother than the ones obtained with LP.
Figure 4: Images generated using Progressive GAN trained with GP
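For contrast with the ALP term above, the following is a sketch of the explicit gradient-norm penalty that LP is assumed to denote in this comparison, namely the one-sided form E[max(0, ||∇_x f(x)|| − K)^2]; this exact form is an assumption, not stated in the text above. It would be added to the regression loss with weight λ on the 1024 regularization inputs per step.

```python
import torch

def lipschitz_penalty(f, x, K=1.0):
    """One-sided gradient-norm penalty (assumed LP form): E[max(0, ||grad f(x)|| - K)^2]."""
    x = x.clone().requires_grad_(True)
    y = f(x).sum()                                   # scalar; samples are independent,
    grads = torch.autograd.grad(y, x, create_graph=True)[0]  # so this yields per-sample grads
    grad_norm = grads.flatten(1).norm(dim=1)
    return torch.clamp(grad_norm - K, min=0.0).pow(2).mean()
```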
alternative to gradient penalty
596
scitldr
Multi-task learning promises to use less data, parameters, and time than training separate single-task models. But realizing these benefits in practice is challenging. In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task. There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks. To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints. We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures. We also present a method for quick evaluation of such architectures with feature distillation. Together these contributions allow us to quickly optimize for parameter-efficient multi-task models. We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance. Multi-task learning allows models to leverage similarities across tasks and avoid overfitting to the particular features of any one task . This can in better generalization and more robust feature representations. While this makes multi-task learning appealing for its potential performance improvements, there are also benefits in terms of resource efficiency. Training a multi-task model should require less data, fewer training iterations, and fewer total parameters than training an equivalent set of task-specific models. In this work we investigate how to automatically search over high performing multi-task architectures while taking such resource constraints into account. Finding architectures that offer the best accuracy possible given particular resource constraints is nontrivial. There are subtle trade-offs in performance when increasing or reducing use of parameters and operations. Furthermore, with multiple tasks, one must take into account the impact of shared operations. There is a large space of options for tweaking such architectures, in fact so large that it is difficult to tune an optimal configuration manually. Neural architecture search (NAS) allows researchers to automatically search for models that offer the best performance trade-offs relative to some metric of efficiency. Here we define a multi-task architecture as a single network that supports separate outputs for multiple tasks. These outputs are produced by unique execution paths through the model. In a neural network, such a path is made up of a subset of the total nodes and operations in the model. This subset may or may not overlap with those of other tasks. During inference, unused parts of the network can be ignored by either pruning out nodes or zeroing out their activations (Figure 1). Such architectures mean improved parameter efficiency because redundant operations and features can be consolidated and shared across a set of tasks. We seek to optimize for the computational efficiency of multi-task architectures by finding models that perform as well as possible while reducing average node use per task. Different tasks will require different capacities to do well, so reducing average use requires effectively identifying which tasks will ask more of the model and which tasks can perform well with less. In addition, performance is affected by how nodes are shared across tasks. 
It is unclear when allocating resources whether sets of tasks would benefit from sharing parameters or would instead interfere. Figure 1: Feature partitioning can be used to control how much network capacity is used by tasks, and how much sharing is done across tasks. In this work we identify effective partitioning strategies to maximize performance while reducing average computation per task. When searching over architectures, differences in resource use can be compared at different levels of granularity. Most existing work in NAS and multi-task learning searches over the allocation and use of entire layers (; ;), we instead partition out individual feature channels within a layer. This offers a greater degree of control over both the computation required by each task and the sharing that takes place between tasks. The main obstacle to address in searching for effective multi-task architectures is the vast number of possibilities for performing feature partitioning as well as the significant amount of computation required to evaluate and compare arrangements. A naive brute search over different partitioning strategies is prohibitively expensive. We leverage our knowledge of the search space to explore it more effectively. We propose a parameterization of partitioning strategies to reduce the size of the search space by eliminating unnecessary redundancies and more compactly expressing the key features that distinguish different architectures. In addition, the main source of overhead in NAS is evaluation of sampled architectures. It is common to define a surrogate operation that can be used in place of training a full model to convergence. Often a smaller model will be trained for a much shorter number of iterations with the hope that the differences in accuracy that emerge early on correlate with the final performance of the full model. We propose a strategy for evaluating multi-task architectures using feature distillation which provides much faster feedback on the effectiveness of a proposed partitioning strategy while correlating well with final validation accuracy. In this work we provide: • a parameterization that aids automatic architecture search by providing a direct and compact representation of the space of sharing strategies in multi-task architectures. • an efficient method for evaluating proposed parameterizations using feature distillation to further accelerate the search process. • on Visual Decathlon to demonstrate that our search strategy allows us to effectively identify trade-offs between parameter use and performance on diverse and challenging image classification datasets. Multi-Task Learning: There is a wide body of work on multi-task learning spanning vision, language, and reinforcement learning. The following discussion will center around designing multi-task architectures for deep learning in vision (; ;). There are many obstacles to overcome in multi-task architecture design, but the most pressing concerns depend largely on how the problem setting has been defined. Two distinguishing factors include: • Task Ordering: Are all tasks available at all times or are they presented one after the other? • Fixed vs Learned Strategies: Is a uniform strategy applied across tasks or is a task-specific solution learned? The former is important as work in which tasks are presented sequentially must address catastrophic forgetting . This is less of a concern in our work as we train on all tasks at once. 
As for the latter, finding a solution tuned to a specific set of tasks requires the same sort of outer-loop optimization seen in neural architecture search which is time-consuming and expensive. The contributions presented in this work seek to make this process more manageable. A strong baseline for multi-task architectures is the use of a single shared network . Deep networks are overparameterized in such a way that the same layers can be applied across different domains while producing features that are useful to different ends. Using a shared architecture is common in reinforcement learning to train a single agent to perform many tasks with a uniform observation and action space . A common technique to train single shared models well in both reinforcement learning and vision is distillation of multiple models into one (; ; ; ;). In work where tasks are presented sequentially, the focus is often to build on top of an existing network while not disrupting its ability to solve its original task (; ;). Currently, many methods freeze the network's weights so they are not changed while learning a new task. Examples include masking out specific filter weights or introducing auxiliary layers . Another approach is to dynamically expand a network with additional capacity for each new task . All of these methods build on top of a fixed model, meaning that new tasks must perform the computation required for the original task as well as take additional steps to be task-specific. It is also common to build multi-task architectures from sets of layers that are run in parallel (; ; ;). Cross-stitch networks compute activations for each task as a learned weighted sum across these layers . This sort of soft attention over features can be seen in other multi-task architecture work as well (; . There are approaches to search over paths through these layers such that each task has a unique, optimal execution path . Similar to work in single-task NAS, the best path is found by either reinforcement learning or evolutionary algorithms . The optimal trade-offs in parameter sharing may occur at a more fine-grained level than entire layers, so instead of working with parallel blocks of layers we divide up individual feature channels. Neural Architecture Search: There are three main areas in which contributions are made for more effective architecture search: search space, optimization, and sample evaluation. Search space: With a well-designed search space, it is possible to randomly sample and arrive at high performing solutions (; b). In general, NAS operates in a discrete space where entire layers are included or not. We instead propose a continuous search space where slight changes can be made in how resources are allocated across tasks. This allows alternatives for optimization that would not apply in other NAS work. Optimization: Leading approaches either use reinforcement learning or genetic algorithms for NAS (; ;). This search is difficult and the tradeoffs between approaches are unclear . We test the effectiveness of random sampling and evolutionary strategies optimization . Evaluating Samples: Training a model to convergence is time-consuming and resource intensive. It is not realistic to sample thousands of architectures and train them all. Instead one must use a cheaper form of evaluation. 
Some options include preserving weights across samples for faster training , successive halving , progressive steps to increase complexity (a), as well as techniques to model the expected performance of sampled architectures (; ;). It is unclear how well surrogate functions correlate with final model performance . We investigate the use of distillation for performing this evaluation. Concretely, given a feature tensor F ∈ R c×h×w we define a binary mask m f ∈ {0, 1} c for each task, and during a forward pass of the network multiply F by m f to zero out all channels not associated with a particular task. We further define a mask for the backward pass, m b ∈ {0, 1} c, whose non-zero elements are a subset of the non-zero elements in m f. Gradients are calculated as usual through standard backpropagation, but any weights we wish to leave unchanged will have their gradients zeroed out according to m b. Together, these masks can capture the training dynamics seen in many existing multi-task architecture designs . For example, one can devote an outsized proportion of features to a task like ImageNet classification, then make these features available during forward inference on a new smaller dataset. A backward mask, m b, can then be defined to ensure that the ImageNet weights remain untouched when finetuning on the new task. There are a number of advantages to allocating resources at the channel level. There is enough flexibility to allow fine-grained control allotting specific weights to particular subsets of tasks. And after training, it is straightforward to prune the network according to any mask configuration. Meaning for simple tasks that only require a small subset of channels we can use a fraction of compute at test time while still leveraging the advantages of joint training with other tasks. An important implementation detail is that masks are applied every other layer. Consider, for example, making a task use half the model. You might think to set half the values of m f to one and apply it to each layer. But that would mean c 2 inputs and c 2 outputs at each layer which only uses one quarter of the original model. Instead, applying a mask at every other layer produces the desired behavior of allocating half the model to the task. Now that we have decided to partition up feature channels, how do we go about finding the best masks for each task? Consider defining a binary matrix that specifies all partitioning masks: c×n, where c is the number of feature channels and n is the total number of tasks. A direct search over this matrix is problematic. It is not straightforward to optimize over a space of many discrete values, and one must account for significant redundancy given that all permutations of channels are equivalent. Moreover, naive random sampling would never cover the full space of partitioning strategies (consider the probability of randomly sampling two masks m that were mutually exclusive). In order to see diverse degrees of feature sharing, the overlap of channels between masks must be explicitly accounted for. Thus instead of searching over M, we propose searching directly over the features of M that determine performance: 1) the number of feature channels used by each task, and 2) the amount of sharing between each pair of tasks. The former decides the overall capacity available for the task while the latter shapes how tasks help or interfere with each other. We explicitly parameterize these factors in a matrix P. 
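Before describing the parameterization P, here is a minimal PyTorch-style sketch of the channel masking just introduced. All names are illustrative, and realizing the backward mask as a gradient hook on the producing layer's weights is one possible implementation, not necessarily the authors' own.

```python
import torch

def mask_channels(features, m_f):
    # features: (batch, c, h, w); m_f: binary (c,) forward mask for the current task.
    # Channels not allocated to the task are zeroed; as noted above, the mask is
    # applied at every other layer so "half the channels" really uses half the model.
    return features * m_f.view(1, -1, 1, 1)

def freeze_masked_weights(conv, m_b):
    # m_b: binary (c_out,) backward mask (a subset of m_f). Gradients for output
    # channels outside m_b are zeroed, so those filters stay untouched by this task.
    handle = conv.weight.register_hook(lambda g: g * m_b.view(-1, 1, 1, 1))
    return handle   # call handle.remove() before switching to another task's mini-batch
```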
We can compute P = 1 c M T M, where the diagonal elements of P provide the percentage of feature channels used by each task and the off-diagonal elements define the percentage of overlapping features between task pairs. When sampling new partitioning strategies, we sample directly from P and identify a corresponding mask M to match it. To remove some ill posed parts of the space we take additional steps to adjust P. More details of this process as well as how we derive M from P can be found in the appendix. This representation has a number of distinct advantages. It is not tied to the number of channels in a given layer, so a single parameterization can be used for layers of different sizes. It is low dimensional, particularly since n is typically much smaller than c. And it is interpretable, providing a clear impression of which tasks require more or less network capacity and which tasks train well together. Moreover we get an immediate and direct measurement of average node usage per task, this is simply the mean of the diagonal of P. We will use this metric to compare the resource efficiency of different proposed partitioning strategies. In order to optimize over different parameterizations P, there are two key ideas to cover: how we choose samples from the space and how we evaluate and compare different samples. We treat our search setting as a black box optimization problem where given a particular parameterization we have a function which returns a score assessing its quality. Based on this score we can then choose how to further sample new parameterizations. We investigate two strategies for finding good constraint matrices. Random sampling: The first is to simply randomly sample values. This has already been demonstrated to serve as a strong baseline in some architecture search work (; a). With the low dimensionality of the matrix as well the additional steps taken to preprocess constraints, it is not unreasonable that much of the space can be covered with random samples. Random samples serve well to map out large swaths of the search space and identify the principle choices that affect final performance. Concretely, a random matrix P can be sampled with values taken uniformly from 0 to 1. If a particular resource target is desired, it is trivial to bias or restrict samples to a specific range of parameter usage. Evolutionary strategies: Because P is continuous, it is possible to search over parameterizations with gradient-based optimization. We run experiments using evolutionary strategies 1. More specifically, we use a simple implementation with the modifications as described by. A gradient direction is approximated by sampling several random directions in the parameter space and computing finite differences to see which directions seem most promising. A weighted average is then computed across all directions to calculate the gradient to update the current parameters. A key feature of our approach is that we modify the algorithm to prioritize parameterizations that use as few channels as necessary per task. An additional L2 weight regularization term is added to the parameters on the diagonal of P. This serves to reduce the number of channels used by each task, in particular those that can be pulled down without affecting the overall accuracy and performance of the model. By controlling the strength of this regularization we can tune the importance of resource efficiency in the search process. 
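As a concrete illustration, the evolutionary-strategies update described above can be sketched in NumPy as follows. The perturbation scale sigma and the form of evaluate(P) (a score to maximize, e.g. the distillation-based score) are assumptions; the 16 random directions with paired +/- samples and the L2 penalty on diag(P) follow the description above.

```python
import numpy as np

def es_step(P, evaluate, n_dirs=16, sigma=0.05, lr=0.1, l2=0.01):
    """One ES update of the partitioning parameters P (n x n), maximizing evaluate(P)."""
    n = P.shape[0]
    grad = np.zeros_like(P)
    for _ in range(n_dirs):
        eps = np.random.randn(*P.shape)
        # Finite differences along a random direction (the +/- antithetic pair).
        delta = (evaluate(np.clip(P + sigma * eps, 0, 1))
                 - evaluate(np.clip(P - sigma * eps, 0, 1)))
        grad += delta * eps
    grad /= 2.0 * sigma * n_dirs
    # L2 penalty on diag(P) pushes per-task channel use down when it does not help the score.
    grad[np.diag_indices(n)] -= 2.0 * l2 * np.diag(P)
    return np.clip(P + lr * grad, 0.0, 1.0)
```

With n_dirs = 16 this corresponds to 32 evaluations per step; increasing l2 tilts the search toward lower average feature use.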
Using this optimization strategy is only possible because of the parameterization defined in 3.1. Approximating gradients make sense in the context of the continuous constraints defined in P, and we can more effectively explore the space of multi-task architectures using this signal. This is different from existing architecture search work where search decisions correspond to the coarse selection of entire computational blocks and their connections to each other. Finally, we must evaluate different partitioning schemes. But as discussed, determining the relative effectiveness of one partitioning over another by training models to convergence is expensive. One possible strategy is to train models for a short period of time assuming that the relative differences in performance that appear early in training should correlate well with differences in performance when trained for longer. We instead propose to use feature distillation to observe the representational capacity of a partitioned layer. We test how well shared multi-task layers can reproduce the activations of corresponding single-task layers. By only focusing on a few layers, we reduce total computation and the number of weights that need to be tuned. In addition, directly distilling to intermediate layer activations provides a more direct training signal than a final classification loss. Given a proposed partitioning mask, we initialize new layers to be distilled and load reference models for each target task. Input to the layer is generated by passing through the task-specific pretrained model up to a target depth. The ing features are then passed through the subsequent layers of the pretrained model as well as the new shared layers. In the new layers, intermediate features are masked according to the proposed partitioning. This procedure is illustrated in Figure 2. We use a mean-squared error loss to supervise the shared layers such that their output features match those produced by the reference teacher models. We can measure the effectiveness of this distillation by replacing the original pretrained layers with the new shared layers and measuring the updated model accuracy. Figure 2: Multi-task distillation: At a given step, a teacher network pretrained on a single task is chosen from the available tasks. Features at a target depth are produced then passed through the next residual block of both the teacher and student (where a mask is applied). Features are then compared using a standard MSE loss. This process can be applied to multiple layers at once. It is important to emphasize that we are not using distillation to get a final multi-task model as we do not want to be limited by the performance of individual pre-trained models. Instead, distillation serves as a proxy task for quickly evaluating partitioning strategies. We do not run the distillation process to convergence, only for a brief interval, and it serves as a sufficient signal to provide feedback on different parameterizations. This leads to a dramatic reduction in the time required to evaluate a particular masking strategy. We run a number of experiments to investigate the role our proposed parameterization and distillation play in finding multi-task architectures that minimize task computation and parameter use while achieving high accuracy. All experiments are performed using the Visual Decathlon dataset . Visual Decathlon is composed of many well-established computer vision classification datasets of various sizes and respective difficulties. 
There is sufficient diversity that it is difficult to determine which datasets would benefit from more or less network capacity and parameter sharing. We investigate how a model performs when trained on nine Decathlon tasks at once (all datasets except for ImageNet). We initialize a shared ResNet model with a separate fully connected layer output for each task. To simplify experiments, we freeze the first two-thirds of the model and only apply feature partioning to the last third. For training, we alternate mini-batches sampled from each dataset and apply a standard cross-entropy classification loss at the appropriate task specific output. More thorough implementation and experiment details can be found in the appendix. It is unclear that performance will necessarily be better with any feature restriction as opposed to using the full model for all tasks. One question is whether partitioning well leads to a reduction in interference across tasks and perhaps improved performance. In addition, we wish to see the overall relationship between performance and feature restriction in this multi-task setting. What's the best performance possible as average feature use is reduced further and further? Before performing our search we need to know that given a sampled set of feature masks M, distillation performance correlates well with the final accuracy of a model trained to convergence. This will determine whether our proposed distillation is a reasonable surrogate in lieu of full training. The higher the correlation between the two, the more confidence we can place in our search process. When performing distillation we initialize the child layers with pretrained layers from an ImageNet model, since the parent single-task networks have also been initialized from the same model. This accelerates the distillation process. The whole process takes just one minute on a P100 GPU. Further details are available in the appendix. We sample many random partitioning masks and run both the distillation procedure and full training to convergence. As a baseline, we also see how well final validation accuracy compares to accuracies seen earlier in training. We compare to the accuracies after 5k and 10k iterations (corresponding to 5 and 10 minutes of training). As seen in Table 1, the distillation procedure (which takes a fraction This allows us to sample and compare many more parameterizations during our search process and have more confidence that the top performing parameterizations will do well when training a full model. Randomly sampling parameterizations: To map out the performance of different partitioning strategies we sample random parameterizations and plot distillation performance against the average percentage of allocated features (the mean of the diagonal of P) in Figure 4 (left). From the distribution of random samples we get an impression of the best performance possible at different degrees of resource use. The combination of fast feedback with distillation plus effective search space coverage with our proposed parameterization produces this information with less samples and in less time. At high levels of average feature use, choice of partitioning can only make so much of a difference. We are interested in the opposite -how well we can do when restricting task computation as much as possible. Here, partitioning well is necessary to achieve high performance. Also it is important to note that explicitly specifying the degree of sharing between tasks is critical. 
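For reference, one step of the distillation proxy described in the previous section can be sketched as follows. Module names are placeholders, and exactly where the mask is applied inside the student block is simplified here.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher_trunk, teacher_block, student_block, m_f, x, optimizer):
    """Train the shared (masked) student block to reproduce the single-task
    teacher block's activations at the target depth."""
    with torch.no_grad():
        feats = teacher_trunk(x)                       # features at the target depth (frozen)
        target = teacher_block(feats)                  # teacher's next-block activations
    student_out = student_block(feats) * m_f.view(1, -1, 1, 1)   # masked shared features
    loss = F.mse_loss(student_out, target)             # match the teacher's activations
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```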
This is shown in the middle plot of Figure 4. We evaluate different partitioning strategies with three fixed sets of values for the diagonal of P, only adjusting the amount of sharing that takes place between tasks. There can be a significant difference in performance in each of these cases, and as expected, sharing affects performance more and more as average parameter use goes down as there is more flexibility when choosing how to overlap features. It is important that feature sharing is parameterized when doing any sort of optimization. Finally, we look at per-task (Figure 5). In this particular setting, every task benefits from using as many features as possible. This may have to do with using a model pretrained on ImageNet, but it makes sense that tasks benefit from using as many features as they can. An important facet to this is how much of those features are shared. As average parameter usage increases across tasks (indicated by a lighter color), individual tasks suffer as they now share more of their features and must deal with the interference of other tasks. Table 2: Validation on Visual Decathlon. Evolutionary strategies: As mentioned above, the distribution of random samples gives an immediate impression of the best level of performance possible as a function of average parameter use. Evolutionary strategies provides a means to more directly push this edge of performance even further. We visualize the search process by plotting samples over the course of optimization overlaid over the distribution of samples found by random sampling (Figure 4 (right) ). ES optimization quickly identifies samples that provide the best accuracy given their current level of parameter use and densely samples in this space, making slight changes for any last available improvements to performance. Furthermore, adjusting the weight decay penalty used during optimization controls the resource use of the final partitioning strategy. This allows us to easily tune the optimization to reach the best architecture that meets specific resource needs. The best parameterization found with evolutionary strategies outperforms a number of baselines for performing partitioning as seen in Table 2. We compare across several strategies with different degrees of feature use and sharing. We measure validation accuracy of models trained to convergence (averaged over five trials). These baselines include: independent partitions that split features evenly across tasks, sharing half of available feature channels and splitting the rest, and finally, sharing all feature channels. In line with our random sampling, the more channels given across tasks, the better performance. Sharing everything does the best amongst these baselines. However, using the parameterization found from our optimization both reduces average channel use and achieves better performance overall. We see that there exist partitioning strategies that cut down average feature dramatically while still maintaining the same overall performance. This is in large part due to simple tasks that only need a small fraction of channels (DPed for example in Fig 5). By taking away the interference caused by these simpler tasks, harder tasks stand to gain more and that can be seen in Table 2 with tasks like CIFAR100, Flowers, and Omniglot seeing the largest gains from an effective partitioning strategy. In this work we investigate efficient multi-task architecture search to quickly find models that achieve high performance under a limited per-task budget. 
We propose a novel strategy for searching over feature partitioning that automatically determines how much network capacity should be used by each task and how many parameters should be shared between tasks. We design a compact representation to serve as a search space, and show that we can quickly estimate the performance of different partitioning schemes by using feature distillation. Refining P: We define a simple set of operations that convert from the raw search space P to a constraint matrix P that is more likely to correspond to feasible masks. Given the knowledge that pairwise values of P are conditioned on its diagonal terms, it is not possible for there to be more overlap between two tasks than the channels used by any one task. That is, no off-diagonal element M ij should be greater than the corresponding diagonal elements M ii and M jj. We remap all off-diagonal elements to appropriate values determined by the diagonal of the matrix. This means that for any off-diagonal element in P, 0 now maps to the minimum possible overlap and 1 to the maximum possible overlap of the two tasks. The procedure is defined as follows: Figure 6: Measuring correlation of feature activations across three blocks of task-specific ResNet models given a shared input image. Even though no restriction is made when finetuning individual models, the features produced are highly correlated through the first two-thirds of the model, and greater task differentiation is not seen until the end. Differentiation in early blocks would normally occur due to different batch normalization statistics across datasets, but that is controlled for here. Multi-task training: For full model training, we use a batchsize of 64 with SGD and momentum at a learning rate of 0.05 for 100k iterations, dropping to a learning rate of 0.005 at iteration 75k. All training was done on a single Nvidia P100 GPU. We followed the exact training, validation, and test splits provided by Visual Decathlon. Several steps are taken to ensure that a model trained simultaneously on multiple tasks converges well: • Batch normalization: We maintain separate batch normalization statistics per task as done in . This adds minimal parameter overhead and accelerates training. • Momentum: We maintain separate gradient statistics when using momentum with SGD. This is important given our use of feature partitioning. At any given training step, we do not want the unused weights associated with other tasks to be updated. • Training curriculum: Rather than uniformly sampling across tasks we apply a simple strategy to choose mini-batches from tasks inversely proportional to their current training accuracy. Tasks that lag behind get sampled more often. Evidence for this approach has been demonstrated in a multi-task reinforcement learning setting . We find a curriculum over tasks more effective than a curriculum over individual samples . • Pretrained ImageNet model: All experiments are performed with a pretrained ImageNet model. The model is trained using the PyTorch implementation made available by. Because ImageNet is orders of magnitude larger than any of the other datasets in Decathlon and takes much longer to train we exclude it in our partitioning experiments to focus on the interactions of other datasets. The main hyperparameters that determine performance were learning rate and a temperature term that controlled the task sampling curriculum. 
This temperature term determines whether minibatches are sampled uniformly across tasks or whether tasks with low training accuracy are weighted more heavily. For both hyperparameters we arrive at the final value by a simple grid search. Final validation accuracy reported in Table 2 (in the main paper) is averaged across 5 trials. Frozen layers: To further simplify experiments, we freeze and share the first two-thirds of the network. Partitioning is thus only performed on the last third of the model. By only updating the weights of the last block, we focus attention on the layers where task-specific features are most important without restricting the model's representational capacity to fit each task. In all of our experiments we use an ImageNet-pretrained ResNet model made up of three computational blocks with four layers each. We freeze the first two computational blocks and only perform partitioning on the last set of layers. The justification for this stems from analysis performed with individual single task models. We compare feature differences across finetuned task-specific models. These models were trained with no restrictions initialized from an ImageNet model until converging to high accuracy on some target task. Because we start with a pretrained model we can make mean-ingful comparisons of each model's channel activations to see how task feature use diverges after finetuning. We compare intermediate task features after passing in a shared image into every model. An important detail here is that we control for the batch normalization statistics associated with the dataset that the image is sampled from. The subsequent features produced by each model are almost identical all the way up through the first two-thirds of the model. Aside from subtle differences, task-specific differentiation did not occur until the final third of the model where features were still somewhat correlated but differed dramatically model to model. This is visualized in Figure 6. Because of this we decided the task-specific differentiation afforded by feature partitioning would not be as important in earlier stages of the model, and experiments would be more informative and also faster to run while focusing only on the last set of layers. Distillation details: We do not use the accuracy-based curriculum used in normal training during distillation and instead alternate mini-batches uniformly across each task. Distillation training is done for a brief 3000 iterations with a batch size of 4 and a learning rate of 1 which is dropped by a factor of 10 at iteration 2000. Distillation is done on the last four ResNet layers at once to match the final training setting as closely as possible. All scores reported when performing distillation are averaged across three trials. The optimization curves shown in the paper are from runs that have each taken 1000 samples, these were performed on machines with 4 P100 GPUs. Given that sample evaluation takes roughly a minute, the whole procedure takes just over four hours. At each step, 16 random parameter directions are sampled and these are both added and subtracted from the current parameterization P to produce 32 new samples to evaluate. A gradient is calculated based on the of these samples, and a gradient descent step is applied to the current parameters with a learning rate of 0.1. Both clipping and a sigmoid operation were tested to ensure that values remain between 0 and 1 with no discernible difference in optimization performance.
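A small sketch of the accuracy-based task curriculum and its temperature term described above; the exact weighting used here (weights inversely proportional to current training accuracy, flattened by the temperature) is an assumption consistent with the description, not the authors' stated formula.

```python
import numpy as np

def sample_task(train_acc, temperature=1.0, eps=1e-3):
    """Pick the task whose mini-batch is used next; tasks with low current
    training accuracy are sampled more often. train_acc: task -> accuracy in [0, 1]."""
    tasks = list(train_acc)
    weights = np.array([1.0 / (train_acc[t] + eps) for t in tasks]) ** (1.0 / temperature)
    probs = weights / weights.sum()
    # As temperature grows, sampling approaches uniform across tasks.
    return np.random.choice(tasks, p=probs)
```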
automatic search for multi-task architectures that reduce per-task feature use
597
scitldr
As distributed approaches to natural language semantics have developed and diversified, embedders for linguistic units larger than words (e.g., sentences) have come to play an increasingly important role. To date, such embedders have been evaluated using benchmark tasks (e.g., GLUE) and linguistic probes. We propose a comparative approach, nearest neighbor overlap (N2O), that quantifies similarity between embedders in a task-agnostic manner. N2O requires only a collection of examples and is simple to understand: two embedders are more similar if, for the same set of inputs, there is greater overlap between the inputs' nearest neighbors. We use N2O to compare 21 sentence embedders and show the effects of different design choices and architectures. Continuous embeddings-of words and of larger linguistic units-are now ubiquitious in NLP. The success of self-supervised pretraining methods that deliver embeddings from raw corpora has led to a proliferation of embedding methods, with an eye toward "universality" across NLP tasks. Our focus here is on sentence embedders, and specifically their evaluation. As with most NLP components, intrinsic (e.g., and extrinsic (e.g., GLUE;) evaluations have emerged for sentence embedders. Our approach, nearest neighbor overlap (N2O), is different: it compares a pair of embedders in a linguistics-and task-agnostic manner, using only a large unannotated corpus. The central idea is that two embedders are more similar if, for a fixed query sentence, they tend to find nearest neighbor sets that overlap to a large degree. By drawing a random sample of queries from the corpus itself, N2O can be computed on in-domain data without additional annotation, and therefore can help inform embedder choices in applications such as text clustering , information retrieval , and open-domain question answering , among others. After motivating and explaining the N2O method (§2), we apply it to 21 sentence embedders (§3-4). Our findings (§5) reveal relatively high functional similarity among averaged static (noncontextual) word type embeddings, a strong effect of the use of subword information, and that BERT and GPT are distant outliers. In §6, we demonstrate the robustness of N2O across different query samples and probe sizes. We also illustrate additional analyses made possible by N2O: identifying embeddingspace neighbors of a query sentence that are stable across embedders, and those that are not (§7); and probing the abilities of embedders to find a known paraphrase (§8). The latter reveals considerable variance across embedders' ability to identify semantically similar sentences from a broader corpus. We first motivate and introduce our nearest neighbor overlap (N2O) procedure for comparing embedders (maps from objects to vectors). Although we experiment with sentence embedders in this paper, we note that this comparison procedure can be applied to other types of embedders (e.g., phrase-level or document-level). Computation of N2O for two embedders, e A and e B, using a corpus C; the number of nearest neighbors is given by k. n is the number of queries (q 1 . . . q n), which are sampled uniformly from the corpus without replacement. The output is in, where 0 indicates no overlap between nearest neighbors for all queries, and 1 indicates perfect overlap. We would like to quantify the extent to which sentence embedders vary in their treatment of "similarity." 
For example, given the sentence Mary gave the book to John, embedders based on bag-of-words will treat John gave the book to Mary as being maximally similar to the first sentence, whereas other embedders may treat Mary gave the dictionary to John as more similar; our comparison should reflect this intuition. We would also like to focus on using naturally-occuring text for our comparison. Although there is merit in expert-constructed examples (see linguistic probing tasks referenced in §9), we have little understanding of how these models will generalize to text from real documents; many application settings involve computing similarity across texts in a corpus. Finally, we would like our evaluation to be task-agnostic, since we expect embeddings learned from large unannotated corpora in a self-supervised (and task-agnostic) manner to continue to play an important role in NLP. As a , we base our comparison on nearest neighbors: first, because similarity is often assumed to correspond to nearness in embedding space (e.g., Figure 1); second, because nearest neighbor methods are used directly for retrieval and other applications; and finally, because the nearest neighbors of a sentence can be computed for any embedder on any corpus without additional annotation. Suppose we want to compare two sentence embedders, e A (·) and e B (·), where each embedding method takes as input a natural language sentence s and outputs a d-dimensional vector. For our purposes, we consider variants trained on different data or using different hyperparameters, even with the same parameter estimation procedure, to be different sentence embedders. Take a corpus C, which is likely to have some semantic overlap in its sentences, and segment it into sentences s 1,..., s |C|. Randomly select a small subset of the sentences in C as "queries" (q 1, . . ., q n). To see how similar e A and e B are, we compute the overlap in nearest neighbor sentences, averaged across multiple queries; the algorithm is in Figure 2. nearest(e i, q j, C, k) returns the k nearest neighbor sentences in corpus C to the query sentence q j, where all sentences are embedded with e i. 2 There are different ways to define nearness and distance in embedding spaces (e.g., using cosine similarity or Euclidean distance); in this paper we use cosine similarity. We can think about this procedure as randomly probing the sentence vector space (through the n query sentences) from the larger space of the embedded corpus, under a sentence embedder e i; in some sense, k controls the depth of the probe. The N2O procedure then compares the sets of sentences recovered by the probes. In the previous section, we noted that we consider a "sentence embedder" to encompass how it was trained, which data it was trained on, and any other hyperparameters involved in its creation. In this section, we first review the broader methods behind these embedders, turning to implementation decisions in §4. We consider tf-idf, which has been clasically used in information retrieval settings. The tf-idf of a word token is based off two statistics: term frequency (how often a term appears in a document) and inverse document frequency (how rare the term is across all documents). The vector representation of the document is the idf-scaled term frequencies of its words; in this work we treat each sentence as a "document" and the vocabulary-length tf-idf vector as its embedding. 
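To make the procedure in Figure 2 concrete, here is a minimal NumPy sketch using exact cosine-similarity search. Embeddings are assumed to be precomputed matrices; for a corpus of millions of sentences one would batch the similarity computation or use approximate search.

```python
import numpy as np

def knn_cosine(query_vecs, corpus_vecs, k):
    # Indices of the k nearest corpus sentences to each query under cosine similarity.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return np.argsort(-(q @ c.T), axis=1)[:, :k]

def n2o(queries_a, corpus_a, queries_b, corpus_b, k=50):
    """Nearest neighbor overlap between embedders A and B.
    queries_*: (n, d_*) embeddings of the same n query sentences;
    corpus_*:  (|C|, d_*) embeddings of the same corpus sentences."""
    nn_a = knn_cosine(queries_a, corpus_a, k)
    nn_b = knn_cosine(queries_b, corpus_b, k)
    overlap = sum(len(set(a) & set(b)) for a, b in zip(nn_a, nn_b))
    return overlap / (k * len(nn_a))    # in [0, 1]
```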
Because sentence embeddings are often built from word embeddings (through initialization when training or other composition functions), we briefly review notable word embedding methods. Static embeddings. We define "static embeddings" to be fixed representations of every word type in the vocabulary, regardless of its context. We consider three popular methods: word2vec embeddings optimized to be predictive of a word given its context (continuous bag of words) or vice versa (skipgram); GloVe embeddings learned based on global cooccurrence counts; and FastText , an extension of word2vec which includes character n-grams (for computing representations of out-of-vocabulary words). Contextual embeddings. Contextual word embeddings, where a word token's representation is dependent on its context, have become popular due to improvements over state-of-the-art on a wide variety of tasks. We consider: • ELMo embeddings are generated from a multi-layer, bidirectional recurrent language model that incorporates character-level information. • GPT embeddings are generated from a unidirectional language model with multi-layer transformer decoder; subword information is included via byte-pair encoding (BPE;). • BERT embeddings are generated from a transformer-based model trained to predict (a) a word given both left and right context, and (b) whether a sentence is the "next sentence" given a previous sentence. Subword information is incorporated using the WordPiece model . Composition of word embeddings. The simplest way to obtain a sentence's embedding from its sequence of words is to average the word embeddings. 3 Despite the fact that averaging discards word order, it performs surprisingly well on sentence similarity, NLI, and other downstream tasks . 4 In the case of contextual embeddings, there may be other conventions for obtaining the sentence embedding, such as using the embedding for a special token or position in the sequence. With BERT, the [CLS] token representation (normally used as input for classification) is also sometimes used as a sentence representation; similarly, the last token's representation may be used for GPT. Table 1: Details of the pretrained sentence embedders we test in this paper. For methods which produce word embeddings, "composition" denotes how a single embedding was obtained from the sentence's word embeddings. ELMo embeddings are averaged across the three bi-LSTM layers; BERT and GPT embeddings come from the final hidden layer. All of the models besides tf-idf and the fine-tuned version of BERT are common pretrained versions; further details are in Appendix A. A more direct way to obtain sentence embeddings is to learn an encoding function that takes in a sequence of tokens and outputs a single embedding; often this is trained using a relevant supervised task. We consider two encoder-based methods: • InferSent : supervised training on the Stanford Natural Language Inference (SNLI;) dataset; the sentence encoder provides representations for the premise and hypothesis sentences, which are then fed into a clasifier. • Universal Sentence Encoder (USE;): supervised, multi-task training on several semantic tasks (including semantic textual similarity); sentences are encoded either with a deep averaging network or a transformer. Our main experiment is a broad comparison, using N2O, of the embedders discussed above and listed in Table 1. Despite the vast differences in methods, N2O allows us to situate each in terms of its functional similarity to the others. N2O computation. 
We describe a N2O sample as, for a given random sample of n queries, the computation of N2O(e A, e B, C, k) for every pair of sentence embedders as described in §2, using cosine similarity to determine nearest neighbors. The in §5 are with k (the number of sentences retrieved) set to 50, averaged across five samples of n = 100 queries. We illustrate the effects of different k and N2O samples in §6. Corpus. For our corpus, we draw from the English Gigaword , which contains newswire text from seven news sources. For computational feasibility, we use the articles from 2010, for a total of approximately 8 million unique sentences. 5 We note preprocessing details (segmentation, tokenization) in Appendix A. Queries. For each N2O sample, we randomly select 100 ledes (opening sentences) from the news articles of our corpus, and use the same ones across all embedders. Because the Gigaword corpus contains text from multiple news sources covering events over the same time period, it is likely that the corpus will contain semantically similar sentences for a given lede. The average query length is 30.7 tokens (s.d. 10.2); an example query is: "Sandra Kiriasis and brakewoman Stephanie Schneider of Germany have won the World Cup bobsled race at Lake Placid." Sentence embedders. Table 1 details the sentence embedders we use in our experiments. In general, we use popular pretrained versions of the methods described in §3. We also select pretrained variations of the same method (e.g., FastText embeddings trained from different corpora) to permit more controlled comparisons. In a couple of cases, we train/finetune models of our own: for tf-idf, we compute frequency statistics using our corpus, with each sentence as its own "document"; for BERT, we use the Hugging Face implementation with default hyperparameters, 6 and finetune using the matched subset of MultiNLI for three epochs (dev. accuracy 84.1%). We note that additional embedders are easily situated among the ones tested in this paper by first computing nearest neighbors of the same query sentences, and then computing overlap with the nearest neighbors obtained in this paper. To enable this, the code, query sentences, and nearest neighbors per embedder and query will be publicly available. In this section, we present the from the experiment described in §4. Fig. 3 shows N2O between each pair of sentence embedders listed in Table 1; the values range from 0.04 to 0.62. While even the maximum observed value may not seem large, we reiterate that overlap is computed over two draws of k = 50 sentences (nearest neighbors) from approximately 8 million sentences, and even an N2O of 0.04 is unlikely from random chance alone. Averages of static word embeddings. We first observe that there is generally high N2O among this set of embedders in comparison to other categories (Fig. 4, left). Some cases where N2O is high for variations of the same embedder: glove-6b-100d and glove-6b-300d, which have different dimensionality but are otherwise trained with the same method and corpus (and to a lesser extent glove-840b-300d, which retains casing and is trained on a different corpus); fasttext-cc and fasttext-wiki, which again are trained with the same method, but different corpora. The use of subword information, unique to fasttext-cc-sub and fasttext-wiki-sub, has a large effect on N2O; there is a high (0.52) N2O value for these two and much lower N2O with other embedders, including their analogues without subword information. 
This effect is also illustrated by measuring, for a given embedder, the average token overlap between the query and its neighbors (see Fig. 5 in Appendix B). As we would expect, subword methods find near neighbors with lower token overlap, because they embed surface-similar strings near to each other. tf-idf. Unsurprisingly, tf-idf has low N2O with other embedders (even those based on static word embeddings). Like the subword case, we can also use token overlap to understand why this is the case: its nearest neighbors have by far the largest token overlap with the query (0.43). Averages of ELMo embeddings. We test three ELMo pretrained models across different capacities (elmo-small, elmo-orig) but the same training data, and across different training data but the same model capacity (elmo-orig, elmo-orig-5.5b). These two embedder pairs have high N2O (0.42 and 0.55 respectively); the mismatched pair, with both different training data and capacities, has slightly lower N2O (0.38). Figure 3: Heatmap of N2O for every pair of sentence embedders in Table 1 for k = 50, averaged across five samples of n = 100 queries; darker colors indicate higher overlap. A larger version of this plot (annotated with N2O values) is in Appendix B. BERT and GPT. We first find that specific-token representations for BERT or GPT (bert-base-cls, gpt-last) are outliers compared to other embedders (i.e., low N2O; see Fig. 4). This itself is not unexpected, as the training objectives for both of the pretrained models (without finetuning) are not geared towards semantic similarity the way other embedders are. What is surprising is that this effect seems to hold even for the MultiNLI-finetuned version of BERT (bert-ft-cls); if anything, this decreases N2O with other embedders further. 7 Notably, taking averaged BERT and GPT embeddings yields higher N2O with other embedders, especially ELMobased ones. Fig. 6 (Appendix B) plots the N2O values for each embedder compared to all others. Encoder-based embedders. We find that InferSent has highest N2O (∼0.2-0.3) with the embeddings based on averaging, despite InferSent being trained using a NLI task; that said, this is not wholly surprising as the model was initialized using GloVe vectors (glove-840b-300d) during training. The USE variants (DAN and Transformer) have fairly distinct nearest neighbors compared to other methods, with highest N2O between each other (0.24). Varying k. One possible concern is how sensitive our procedure is to k (the number of nearest neighbors from which overlap is computed): we would not want conflicting judgments of how similar two sentence embedders are due to different k. To confirm this, we first compute the ranked lists of N2O output for each k ∈ {5, 10, . . ., 45, 50}, where each list consists of all embedder pairs ordered by N2O for that k. We then compute Spearman's rank correlation coefficient (ρ) between each pair of ranked lists, where 1 indicates perfect positive correlation. We find that the average 7 In preliminary experiments, we also saw similar with BERT finetuned on the Microsoft Research Paraphrase Corpus ; that is, the effect does not seem specific to MultiNLI. Spearman's ρ is very high (0.996; min. 0.986) -i.e., the rankings of embedder similarity by N2O are reasonably stable across different values of k, even as far as k = 5 and k = 50. Query sampling. We also examine how the may vary across different query samples; as noted previously, the presented are averaged across five samples of n = 100 queries each. 
Standard deviations for N2O values across the five samples range from 0.005 to 0.019 (avg. 0.011). That is, given the range of N2O values being compared, the differences due to different query samples is small. We compute Spearman's ρ across different N2O samples in the same manner as above (k = 50) and find an average ρ of 0.994 (min. 0.991). Runtime. A theoretical concern with N2O is that, naively, its computation is linear in the size of the corpus, and to have reasonable semantic overlap within a diverse set of sentences, the corpus should be large. While our implementation of exact nearest neighbor search is sufficiently fast in practice, 8 we provide comments on use of approximate nearest neighbor methods in Appendix C. In the previous section, we performed a basic comparison between sentence embedders using N2O. Here, we show one kind of analysis enabled by N2O: given a query, which sentences from the corpus C are consistently its neighbors across different embedders? We might expect, for example, that a nearly identical paraphrase of the query will be a "popular" neighbor chosen by most embedders. Table 2 shows an example query with a sentence that is in the 5-nearest neighborhood for all sentence embedders, as well as sentences that are highly ranked for some embedder but not in the nearest neighbor sets for any other embedder (for larger k = 50). Qualitatively, what we find with this example's outlier sentences is that they are often thematically similar in some way (such as fiscal matters in Table 2), but with different participants. We also observe that extremely "popular" neighbors tend to have high lexical overlap with the query. Attempts to derive sentence embeddings that capture semantic similarity are inspired by the phenomenon of paraphrase; in this section, we use nearest neighbors to probe how sentence embedders capture paraphrase. More specifically, we carry out a "needle-in-a-haystack" experiment using the Semantic Textual Similarity Benchmark (STS;). STS contains sentence pairs with human judgments of semantic similarity on a 1-5 continuous scale (least to most similar). Query: Britain's biggest mortgage lender says that average house prices fell 3.6 percent in September, but analysts believe the market isn't that weak. Embedder Rank Sentence all embedders ≤ 5 Average house prices in Britain fell 3.6 percent in September from a month earlier, the country's biggest mortgage lender said Thursday, although analysts believe the market isn't that weak. bert-base-cls 6 Some analysts say that the December data indicate that consumer spending remains weak, making it harder for the economy to keep a sustained rebound. bert-ft-avg 5 An industry group says German machinery orders were down 3 percent on the year in January but foreign demand is improving. gpt-last 8 The economy has since rebound and grew 8.9 percent year-on-year in the second quarter, the central bank said last month, with growth expected to exceed six percent in the full year. Table 2: Popular and outlier near neighbors for the given query (top). The first sentence is in the 5-nearest neighborhood for all embedders; the remaining sentences are highly-ranked by the given embedder and outside the 50-nearest neighborhood for all other embedders. See Table 3 (Appendix B) for additional examples. We take 75 pairs in the 4-5 range from the STS development and test sets where the sentence pair has word-level overlap ratio < 0.6 -i.e., near paraphrases with moderately different surface semantics. 
We also constrain the sentence pairs to come from the newstext-based parts of the dataset. The first sentence in each sentence pair is the "query," and the second sentence is (temporarily) added to our Gigaword corpus. An example sentence pair, scored as 4.6, is: (A) Arkansas Supreme Court strikes down execution law and (B) Arkansas justices strike down death penalty. We then compute the rank of the sentence added to the corpus (i.e., the value of k such that the added sentence is part of the query's nearest neighbors). An embedder that "perfectly" correlates semantic similarity and distance should yield a rank of 1 for the sentence added to the corpus, since that sentence would be nearest to the query. Results. Using mean reciprocal rank (MRR), we find that the larger ELMo models and Infersent do particularly well at placing paraphrase pairs near each other. We also find that averaged BERT and GPT embeddings consistently perform better than the [CLS]/final token ones 9; this is consistent with our earlier observation (§5) that their training objectives may not yield specific-token embeddings that directly encode semantic similarity, hence why they are outliers by N2O. The full table of is in Table 4 (Appendix B). Recent comparisons of sentence embedders have been primarily either linguistic probing tasks or downstream evaluations. Linguistic probing tasks test whether embeddings can distinguish surface level properties, like sentence length; syntactic properties, like tree depth; and semantic properties, like coordination inversion. , ,, and , among others. Downstream evaluations are often classification tasks for which good sentence representations are helpful (e.g., NLI). Evaluations like the RepEval 2017 shared task , SentEval toolkit , and GLUE benchmark seek to standardize comparisons across sentence embedding methods. N2O is complementary to these, providing a task-agnostic way to compare embedders' functionality. In this paper, we introduce nearest neighbor overlap (N2O), a comparative approach to quantifying similarity between sentence embedders. Using N2O, we draw comparisons across 21 embedders. We also provide additional analyses made possible with N2O, from which we find high variation in embedders' treatment of semantic similarity. GloVe. We use three sets of standard pretrained GloVe embeddings: 100D and 300D embeddings trained on Wikipedia and Gigaword (6B tokens), and 300D embeddings trained on Common Crawl (840B tokens). 13 We handle tokenization and embedding lookup identically to word2vec; for the Wikipedia/Gigaword embeddings, which are uncased, we lower case all tokens as well. FastText. We use four sets of pretrained FastText embeddings: two trained on Wikipedia and other news corpora, and two trained on Common Crawl (each with an original version and one trained on subword information). 14 We use the Python port of the FastText implementation to handle tokenization, embedding lookup, and OOV embedding computation. ELMo. We use three pretrained models made available by AllenNLP: small, original, and original (5.5B). 16 We use spacy to perform word tokenization, consistent with the allennlp library; we also use allennlp (0.7.2) to compute the ELMo embeddings. We average the embeddings over all three bidirectional LSTM layers. BERT. We use Hugging Face's pytorch-transformers (0.6.2) implementation and pretrained BERT base cased model. 
17 To tokenize, we use the provided BertTokenizer, which handles WordPiece (subword) tokenization, and in general follow the library's recommendations for feature extraction. For finetuning BERT on MultiNLI (matched subset), we generally use the default parameters provided in the library's run classifier.py (batch size = 32, learning rate = 5e-5, etc.). We finetune for three epochs, and obtain 84.1% dev accuracy (reasonably consistent with the original work). GPT. We use the same Hugging Face library and associated pretrained model for GPT; we use their BPE tokenizer and spacy for subword and word tokenization respectively. InferSent. We use the authors' implementation of InferSent, as well as their pretrained V1 model based on GloVe. 18 (Unfortunately, the FastText-based V2 model was not available while performing the experiments in this paper; see issues #108 and #124 in the linked Github.) As per their README, we use the nltk tokenizer (3.2.5). Universal Sentence Encoder. We use pretrained models available on TensorFlow Hub for both the DAN and Transformer variants. 19 The modules handle text preprocessing on their own. Experiments for ELMo, BERT, GPT, and the Transformer version of USE were run on a NVIDIA Titan XP GPU with CUDA 9.2. All other experiments were performed on CPUs. In §5, we noted that the FastText subword variants had much lower N2O compared to other embedders (including analogues without subword information). Fig. 5 shows average token overlap between a query and its nearest neighbors, averaged over all queries. Unsurprisingly, tfidf has by far the highest overlap, and fasttext-wiki-sub and fasttext-cc-sub the lowest. In §5, we found that the BERT and GPT based embedders had low N2O with all other embedders, and averaging (rather than taking the [CLS] or last embedding) generally raised N2O. Fig. 6 shows boxplots of N2O values between each embedder and all other embedders. Table 3 shows additional outlier nearest neighbors from Table 2. Query: Britain's biggest mortgage lender says that average house prices fell 3.6 percent in September, but analysts believe the market isn't that weak. Embedder Rank Sentence all embedders ≤ 5 Average house prices in Britain fell 3.6 percent in September from a month earlier, the country's biggest mortgage lender said Thursday, although analysts believe the market isn't that weak. bert-ft-cls 2 Japanese consumer prices fell for 13th straight month in March, though the GDP data suggests that deflationary pressures are starting to ease. fasttext-cc-sub 6 It cautioned however that the economic situation abroad could still slow Sweden's recovery, and said the country's gross domestic product (GDP) would grow just 3.6 percent in 2011, down from its May estimate of 3.7 percent growth. glove-840b-300d 12 Meanwhile, Australia's central bank left its key interest rate unchanged at 3.75 percent on Tuesday, surprising investors and analysts who had predicted the bank would continue raising the rate as the nation's economy rebounds. Table 3: Additional outlier near neighbors for the given query (top; same as Table 2). The first sentence is in the 5-nearest neighborhood for all embedders; the remaining sentences are highlyranked by the given embedder and outside the 50-nearest neighborhood for all other embedders. 
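For readers who want to recompute the overlap statistics just described, the quantities behind Fig. 5 and Fig. 6 reduce to a few lines of Python. The sketch below assumes that N2O is the fraction of shared k-nearest neighbors for a query, averaged over all queries (our reading of the definition given earlier in the paper), and that the neighbor lists per embedder have already been retrieved; all function names are ours, not the authors' implementation.

def token_overlap(query_tokens, neighbor_tokens):
    # Fraction of query tokens that also appear in a neighbor (cf. Fig. 5).
    query_set = set(query_tokens)
    return len(query_set & set(neighbor_tokens)) / max(len(query_set), 1)

def mean_neighbor_overlap(query_tokens, neighbors_tokens):
    # Average token overlap between one query and its k nearest neighbors.
    return sum(token_overlap(query_tokens, n) for n in neighbors_tokens) / len(neighbors_tokens)

def n2o(neighbors_a, neighbors_b, k):
    # Nearest neighbor overlap between two embedders, assuming neighbors_a and
    # neighbors_b map each query id to its k nearest corpus sentence ids.
    overlaps = [len(set(neighbors_a[q][:k]) & set(neighbors_b[q][:k])) / k
                for q in neighbors_a]
    return sum(overlaps) / len(overlaps)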
Table 4 shows for the query-paraphrase experiment in §8: mean reciprocal rank (MRR), the number of queries for which its paraphrase was its nearest neighbor, and the number of queries for which the paraphrase was in its 5-nearest neighborhood. Table 4: Results for the query-paraphrase experiment (§8), sorted by decreasing MRR. # top and # top-5 are the number of queries for which the paraphrase was the nearest neighbor and in the 5-nearest neighborhood (max. 75), respectively.
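The numbers reported in Table 4 follow directly from the paraphrase ranks described in §8. A minimal sketch, assuming the rank of each added paraphrase among its query's nearest neighbors has already been computed (names are illustrative, not the authors' code):

def paraphrase_metrics(ranks):
    # ranks: 1-based rank of the added paraphrase for each STS query (75 queries).
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    n_top = sum(1 for r in ranks if r == 1)    # paraphrase was the nearest neighbor
    n_top5 = sum(1 for r in ranks if r <= 5)   # paraphrase in the 5-nearest neighborhood
    return {"MRR": mrr, "# top": n_top, "# top-5": n_top5}

# Example: paraphrases ranked 1st, 3rd, and 12th for three queries.
print(paraphrase_metrics([1, 3, 12]))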
We propose nearest neighbor overlap, a procedure which quantifies similarity between embedders in a task-agnostic manner, and use it to compare 21 sentence embedders.
598
scitldr
Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem. Variational Autoencoders (VAE) on the other hand explicitly maximize a reconstruction-based data log-likelihood forcing it to cover all modes, but suffer from poorer sample quality. Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood to the VAE objective to address both the mode collapse and sample quality issues, with limited success. This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior. The synthetic likelihood ratio term also shows instability during training. We propose a novel objective with a ``"Best-of-Many-Samples" reconstruction cost and a stable direct estimate of the synthetic likelihood. This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANS and plain GANs in mode coverage and quality. Generative Adversarial Networks (GANs) have achieved state-of-the-art sample quality in generative modeling tasks. However, GANs do not explicitly estimate the data likelihood. Instead, it aims to "fool" an adversary, so that the adversary is unable to distinguish between samples from the true distribution and the generated samples. This leads to the generation of high quality samples . However, there is no incentive to cover the whole data distribution. Entire modes of the true data distribution can be missedcommonly referred to as the mode collapse problem. In contrast, the Variational Auto-Encoders (VAEs) explicitly maximize data likelihood and can be forced to cover all modes . VAEs enable sampling by constraining the latent space to a unit Gaussian and sampling through the latent space. However, VAEs maximize a data likelihood estimate based on the L 1 /L 2 reconstruction cost which leads to lower overall sample quality -blurriness in case of image distributions. Therefore, there has been a spur of recent work (; ;) which aims integrate GANs in a VAE framework to improve VAE generation quality while covering all the modes. Notably in , GANs are integrated in a VAE framework by augmenting the L 1 /L 2 data likelihood term in the VAE objective with a GAN discriminator based synthetic likelihood ratio term. reports that in case of hybrid VAE-GANs, the latent space does not usually match the Gaussian prior. This is because, the reconstruction log-likelihood in the VAE objective is at odds with the divergence to the latent prior (also in case of alternatives proposed by ;). This problem is further exacerbated with the addition of the synthetic likelihood term in the hybrid VAE-GAN objective -it is necessary for sample quality but it introduces additional constraints on the encoder/decoder. This leads to the degradation in the quality and diversity of samples. Moreover, the synthetic likelihood ratio term is unstable during training -as it is the ratio of outputs of a classifier, any instability in the output of the classifier is magnified. We directly estimate the ratio using a network with a controlled Lipschitz constant, which leads to significantly improved stability. Our contributions in detail are, 1. 
We propose a novel objective for training hybrid VAE-GAN frameworks, which relaxes the constraints on the encoder by giving the encoder multiple chances to draw samples with high likelihood enabling it to generate realistic images while covering all modes of the data distribution, 2. Our novel objective directly estimates the synthetic likelihood term with a controlled Lipschitz constant for stability, 3. Finally, we demonstrate significant improvement over prior hybrid VAE-GANs and plain GANs on highly muti-modal synthetic data, CIFAR-10 and CelebA. Generative Autoencoders. VAEs allow for generation by maintaining a Gaussian latent space. , the Gaussian constraint in applied point-wise and latent representation of each point is forced towards zero. Adversarial Auto-encoders (AAE) and Wasserstein Auto-encoders (WAE) tackle this problem by an approximate estimate of the divergence which only requires the latent space to be Gaussian as a whole. But, the Gaussian constraint in;; ) is still at odds with the data log-likelihood. In this work, we enable the encoder to maintain both the latent representation constraint and high data log-likelihood using a novel objective. Furthermore, we integrate a GAN-based synthetic likelihood term to the objective to enhance the sharpness of generated images. Mode Collapse in Classical GANs. The classic GAN formulation has several shortcomings -importantly mode collapse. Denoising Feature Matching deals with the mode collapse by regularizing the discriminator using an auto-encoder. MDGAN uses two separate discriminators and regularizes using a auto-encoder. In EBGAN (a), the discriminator is interpreted as an energy functional and is also cast in an auto-encoder framework, leading to improvements in semi-supervised learning tasks. BEGAN proposes a Wasserstein distance based objective to train such GANs with auto-encoder based discriminators. The proposed approach leads to smoother convergence. InfoGAN maximizes the mutual information between a small subset of latent variables and observations in a Information Theoretic framework. This leads to disentangled and more interpretable latent representations. PacGAN proposes to deal with the mode collapse problem by using the discriminator to distinguish between product distributions. D2GAN proposes to use two discriminators -one for the forward KL divergence between the true and generated distributions and one for the reverse. BourGAN proposes to learn the distribution of the latent space (instead of assuming Gaussian) which reflects the distribution of the data. In , a inverse mapping from from latent to data space is learned and the generator is penalized based on the inverted distribution to cover all modes. proposes a moment matching paradigm different from VAEs or GANs. However, as the presented moment matching network involves an order of magnitude more parameters compared to VAEs or GANs, we do not consider them here. As we propose a hybrid VAE-GAN framework these techniques can be applied on top to potentially improve . However, in hybrid VAE-GANs the reconstruction loss already incentivizes the coverage of all modes. Wasserstein Loss based Formulations.; proposes GANs which minimize the Wasserstein distance between true and generated distributions. demonstrates improved by applying Spectral Normalization on the weights. , distance constraints are applied on top. WGANs were extended to Banach Spaces to emphasize edges or large scale behavior. 
focus on progressively learning to use more complex model architectures to improve performance. We use the regularization techniques developed for WGANs to improve stability of our hybrid VAE-GAN framework. shows very high quality generations at high resolutions but these are class conditional. However, diverse class conditional generation is considerably easier as intra-class variability is generally much lower than inter-class variability. Here, we focus on the more complex unconditional image generation task. Hybrid VAE-GANs. a VAE-GAN hybrid is proposed with discriminator feature matching -the VAE decoder is trained to match discriminator features instead of a L 1 /L 2 reconstruction loss. ALI proposes to instead match the encoder and decoder joint distributions -with limited success on diverse datasets. BiGAN , builds upon ALI to learn inverse mappings from the data to the latent space and demonstrate effectiveness on various discriminative tasks. extends standard VAEs by replacing the loglikelihood term with a hybrid version based on synthetic likelihoods. The KL-divergence constraint to the prior is also recast to a synthetic likelihood form, which can be enforced by a discriminator (as in ;). The second improvement is crucial in generating realistic images at par with classic/Wasserstein GANs. We further improve upon by allowing the encoder multiple chances to draw desired samples and enforcing stability -enabling it to maintain low divergence to the prior while generating realistic images. We begin with a brief overview of hybrid VAE-GANs followed by details of our novel objective. Overview. Hybrid VAE-GANs (Figure 1) are generative models for data distributions x ∼ p(x) that transform a latent distribution z ∼ p(z) to a learned distributionx ∼ p θ (x) approximating p(x). The GAN (G θ,D I alone can generate realistic samples, but has trouble covering all modes. The VAE (R φ,G θ,D L) can cover all modes of the distribution, but generates lower quality samples overall. VAE-GANs leverage the strengths of both VAEs and GANs to generate high quality samples while capturing all modes. We begin with a discussion of the prior hybrid VAE-GAN objectives and its shortcomings, followed by our novel "Best-of-Many-Samples" objective with a novel reconstruction term and regularized stable direct estimate of the synthetic likelihood. Hybrid VAE-GANs (; ; ; b) maximizes the log-likelihood of the data (x ∼ p(x)) akin to VAEs. The log-likelihood, assuming the latent space to be distributed according to p(z), Here, p(z) is usually Gaussian. This requires the generator G θ to generate samples that assign high likelihood to every example x in the data distribution for a likely z ∼ p(z). Thus, the decoder θ can be forced to cover all modes of the data distribution x ∼ p(x). In contrast, GANs never directly maximize the data likelihood and there is no direct incentive to cover all modes. However, the integral in is intractable. VAEs and Hybrid VAE-GANs use amortized variational inference using a recognition network q φ (z|x) (R φ). The final hybrid VAE-GAN objective of the state-of-the-art α-GAN which integrates a synthetic likelihood ratio term is, This objective has two important shortcomings. Firstly, as pointed in , this objective severely constrains the recognition network as the average likelihood of the samples generated from the posterior q φ (z|x) is maximized. 
This forces all samples from q φ (z|x) to explain x equally well, penalizing any variance in q φ (z|x) and thus forcing it away from the Gaussian prior p(z). Therefore, this makes it difficult to match the prior in the latent space and the encoder is forced to trade-off between a good estimate of the data log-likelihood and the divergence to the latent prior. Secondly, the synthetic likelihood ratio term is the ratio of the output of D I, any instability (nonsmoothness) in the output of the classifier is magnified. Moreover, there is no incentive for D I to be smooth (stable). For two similar images, {x 1, x 2} with |x 1 − x 2 | ≤, the change of output |D I (x 1 |z 1) − D I (x 1 |z 2)| can be arbitrarily large. This means that a small change in the generator output (e.g. after a gradient descent step) can have a large change in the discriminator output. Next, we describe how we can effectively leverage multiple samples from q φ (z|x) to deal with the first issue. Finally, we derive a stable synthetic likelihood term to deal with the second issue. Figure 1: Overview of our BMS-VAE-GAN framework. The terms of our novel objective are highlighted at the right. We consider only the best sample from the generator G θ while computing the reconstruction loss. Building upon , we derive an alternative variational approximation of, which uses multiple samples to relax the constrains on the recognition network (full derivation in Appendix A), In comparison to the α-GAN objective where the expected likelihood assigned by each sample to the data point x was considered, we see that in the likelihood is computed considering all generated samples. The recognition network gets multiple chances to draw samples which assign high likelihood to x. This allows q φ (z|x) to have higher variance, helping it better match the prior and significantly reducing the trade-off with the data log-likelihood. Next, we describe how we can integrate a synthetic likelihood term in to help us generate sharper images. Considering only L 1 /L 2 reconstruction based likelihoods p θ (x|z) (as in ; ;) might not be sufficient in case of complex high dimensional distributions e.g. in case of image data this leads to blurry samples. Synthetic estimates of the likelihood leverages a neural network (usually a classifer) which is jointly trained to distinguish between real and generated samples. The network is traiend to assign low likelihood to generated samples and higher likelihood to real data samples. Starting from, we integrate a synthetic likelihood term with weight β to encourage our generator to generate realistic samples. The L 1 /L 2 reconstruction likelihood (with weight α) forces the coverage of all modes. However, unlike prior work , our synthetic likelihood estimator D I is not a classifier. We first convert the likelihood term to a likelihood ratio form which allows for synthetic estimates, To enable the estimation of the likelihood ratio p θ (x|z) /p(x) using a neural network, we introduce the auxiliary variable y where, y = 1 denotes that the sample was generated and y = 0 denotes that the sample is from the true distribution. We can now express (using Bayes theorem, see Appendix A), The ratio p θ (y=1|z,x) /1−p(y=1|x) should be high for generated samples which are indistinguishable from real samples and low otherwise. In case of image distributions, we find that direct estimation of the numerator/denominator (as in) exacerbates instabilities (non-smoothness) of the estimate. 
Therefore, we estimate this ratio directly using the neural network D I (x) -trained to produce high values for images indistinguishable from real images and low otherwise, To further unsure smoothness, we directly control the Lipschitz constant K of D I. This ensures, | -the function is strictly smooth everywhere. Small changes in generator output cannot arbitrarily change the synthetic likelihood estimate, hence allowing the generator to smoothly improve sample quality. We constrain the Lipschitz constant K to 1 using. Note that the likelihood p θ (x|z) takes the form e −λ x−x n in -a log-sum-exp which is numerically unstable. As we perform stochastic gradient descent, we can deal with this after stochastic (MC) sampling of the data points. We can well estimate the log-sum-exp using the max -the "Best-of-Many-Samples" , In practice, we observe that we can improve sharpness of generated images by penalizing generator G θ, using the least realistic of the T samples, Our final "Best-of-Many"-VAE-GAN objective takes the form (ignoring the constant log(T) term), We use the same optimization scheme as in. We provide the algorithm in detail in Appendix B. Approximation Errors. The "Best-of-Many-Samples" scheme introduces the log(T) error term. However, this error term is dominated by the low data likelihood term in the beginning of optimization . Later, as generated samples become more diverse, the log likelihood term is dominated by the Best of T samples -"Best of Many-Samples" is equivalent. Classifier based estimate of the prior term. Recent work has shown that point-wise minimization of the KL-divergence using its analytical form leads to degradation in image quality. Instead, KL-divergence term is recast in a synthetic likelihood ratio form minimized "globally" using a classifier instead of point-wise. Therefore, unlike , here we employ a classifier based estimate of the KL-divergence to the prior. However, as pointed out by prior work on hybrid VAE-GANs , a classifier based estimate with still leads to mismatch to the prior as the trade-off with the data log-likelihood still persists without the use of the "Best-of-Many-Samples". Therefore, as we shall demonstrate next, the benefits of using the "Best-of-Many-Samples" extends to case when a classifier based estimate of the KL-divergence is employed. Next, we evaluate on multi-modal synthetic data as well as CIFAR-10 and CelebA. We perform all experiments on a single Nvidia V100 GPU with 16GB memory. We use as many samples during training as would fit in GPU memory so that we make the same number of forward/backward passes as other approaches and minimize the computational overhead of sampling multiple samples. Table 2: Visualization of samples. The standard WAE and α-GAN objectives leads to mismatch to the prior in the latent space. We show samples z (in red) which are highly likely under the standard Gaussian prior (blue) but have low probability under the learnt marginal posterior q φ (z). Bottom Row: We show that such points z lead to low quality data samples (in red), which do correspond to any of the modes. We evaluate in Tables 1 and 2 on the standard 2D Grid and Ring datasets, which are highly challenging due to their multi-modality. The metrics considered are the number of modes captured and % of high quality samples (within 3 standard deviations of a mode). The generator/discriminator architecture is same as in. We see that our BMS-VAE-GAN (using the best of T = 10 samples) outperforms state of the art GANs e.g. 
and the WAE and α-GAN baselines. The explicit maximization of the data log-likelihood enables our BMS-VAE-GAN and the WAE and α-GAN baselines to capture all modes in both the grid and ring datasets. The significantly increased proportion of high quality samples with respect to WAE and α-GAN baselines is due to our novel "Best-of-Many-Samples" objective. We illustrate this in Table 3. we analyze the learnt latent spaces in detail, in particular we check for points (in red) which are likely under the Gaussian prior p(z) (blue) but have low probability under the marginal posterior q φ (z) = q φ (z|x)dx. We use tSNE to project points from our 32-dimensional latent space to 2D. In Table 3 (Top Row) we clearly see that there are many such points in case of the WAE and α-GAN baselines (note that this low probability threshold is common across all methods). In Table 3 (Bottom Row) we see that these points lead to the generation of low quality samples (in red) in the data space. Therefore, we see that our "Best-of-Many-Samples" samples objective helps us match the prior in the latent space and thus this leads to the generation of high quality samples and outperforming both state of the art GANs and hybrid VAE-GAN baselines. 0.0055±0.0006 α-GAN + SN (Ours) T = 1 0.0048±0.0005 BMS-VAE-GAN (Ours) T = 30 0.0037±0.0005 Table 5: Closest generated images found using IvOM. Next, we evaluate on the CIFAR-10 dataset. Auto-encoding based approaches do not perform well on this dataset, as a simple Gaussian reconstruction based likelihood is insufficient for such highly multi-modal image data. This makes CIFAR-10 a very challenging dataset for hybrid VAE-GANs. Architecture Details. We use two different types of architectures for the generator/discriminator pair G θ, D I: DCGAN based as used in and the Standard CNN used in;. Experimental Details and Baselines. We use the ADAM optimizer and use learning rate of 2 × 10 −4, β 1 = 0.0 and β 2 = 0.9 for all components. We use the same architecture of the latent space discriminator D L as in α- We consider the following baselines for comparison against our BMS-VAE-GAN with a DCGAN generator/discriminator, 1. A standard DCGAN , 2. The α-GAN model of . Furthermore, we compare our BMS-GAN with the Standard CNN generator/discriminator to, 1. SN-GAN , 2. BW-GAN , 3. Dist-GAN , 4. Our α-GAN + SN is an improved version of the α-GAN which includes Spectral Normalization for stable estimation of synthetic likelihoods. Again, the α-GAN and α-GAN + SN baselines are identical to the corresponding BMS-VAE-GAN except for the "Best-of-Many-Samples" reconstruction likelihood. Method FID ↓ DCGAN Architecture WAE 89.3±0.3 BMS-VAE (Ours) T = 10 87.9±0.4 DCGAN 30.7±0.2 α-GAN 29.4±0.3 BMS-GAN (ours) T = 10 28.8±0.4 Standard CNN Architecture SN-GAN 25.5 BW-GAN 25.1 α-GAN + SN (Ours) T = 1 24.6±0.3 BMS-VAE-GAN (Ours) T = 10 23.8±0.2 BMS-VAE-GAN (Ours) T = 30 23.4±0.2 Dist-GAN 22.9 BMS-VAE-GAN (Ours) T = 10 21.8±0.2 Table 6: FID on CIFAR-10. Table 6. Please note that the higher latent space dimensionality makes the latent spaces much harder to reliably analyze. Therefore, we rely on the FID and IoVM metrics. We follow evaluation protocol of; and use 10k/5k real/generated samples to compute the FID score. The α-GAN model with (DCGAN architecture) demonstrates better fit to the true data distribution (29.3 vs 30.7 FID) compared to a plain DCGAN. This again shows the ability of hybrid VAE-GANs in improving the performance of plain GANs. 
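For reference, the FID scores reported here follow the standard Fréchet Inception Distance of Heusel et al. (2017): Gaussians are fitted to Inception features of real and generated samples and compared by

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right),

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the feature means and covariances of the real and generated samples; lower is better. This is the standard definition, not anything specific to this paper.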
We observe that our novel "Best-of-ManySamples" optimization scheme outperforms both the plain DCGAN and hybrid α-GAN(28.8 vs 29.4 FID), confirming the advantage of using "Best-of-Many-Samples". Furthermore, we see that our BMS-VAE outperforms the state-of-the-art plain auto-encoding WAE . We further compare our BMS-VAE-GAN to state-of-the-art GANs using the Standard CNN architecture in Table 6 with 100k generator iterations. Our α-GAN + SN ablation significantly outperforms the state-of-the-art plain GANs , showing the effectiveness of hybrid VAE-GANs with a stable direct estimate of the synthetic likelihood on this highly diverse dataset. Furthermore, our BMS-VAE-GAN model trained using the best of T = 30 samples significantly improves over the α-GAN + SN baseline (23.4 vs 24.6 FID), showing the effectiveness of our "Best-of-Many-Samples". We also compare to using 300k generator iterations, again outperforming by a significant margin (21.8 vs 22.9 FID). The IoVM metric of (Tables 4 and 5), illustrates that we are also able to better reconstruct the image distribution. The improvement in both sample quality as measured by the FID metric and data reconstruction as measured by the IoVM metric shows that our novel "Best-of-Many-Samples" objective helps us both match the prior in the latent space and achieve high data log-likelihood at the same time. Next, we evaluate on CelebA at resolutions 64×64 and 128×128. Training and Architecture Details. As the focus is to evaluate objectives for hyrid VAE-GANs, we use simple DCGAN based generators and discriminators for generation at both 64×64 and 128×128. Approaches like progressive growing are orthogonal and can be applied on top. Baselines and Experimental Details. We consider the following baselines for comparison with our BMS-GAN with T = {10, 30} samples, 1. WAE the state-of-the-art plain auto-encoding generative model, 2. α-GAN the state-of-the-art hybrid VAE-GAN, 3. Our α-GAN + SN is an improved version of the α-GAN which includes Spectral Normalization for stable estimation of synthetic likelihoods. Note, the α-GAN baseline is identical to our BMS-GAN except for the "Best-of-Many" reconstruction likelihood. Moreover, we include several plain GAN baselines, 1. Wasserstein GAN with gradient penalty (WGAN-GP) , 2. Spectral Normalization GAN (SN-GAN) , 3. Dist-GAN . To train our BMS-VAE-GAN and αR-GAN models we use the two time-scale update rule with learning rate of 1 × 10 −4 for the generator and 4 × 10 −4 for the discriminator. We use the Adam optimizer with β 1 = 0.0 and β 2 = 0.9. We use a three layer MLP with 750 neurons as the latent space discriminator D L (as in) and a DCGAN based recognition network R φ. We use the hinge loss to train D I to produce high values for real images and low values for generated images, log(D I) ∈ [−0.5, 0.5] works well. 26.3±0.9 Dist-GAN 23.7±0.3 SN-GAN 21.9±0.8 α-GAN 19.2±0.8 α-GAN + SN (Ours) T = 1 15.1±0.2 BMS-VAE-GAN (Ours) T = 10 14.3±0.4 BMS-VAE-GAN (Ours) T = 30 13.6±0.4 Resolution: 128×128 SN-GAN 60.5±1.5 αR-GAN (Ours) T = 1 45.8±1.4 BMS-GAN (Ours) T = 10 42.7±1.2 Table 7: FID on CelebA. We train all models for 200k iterations and report the FID scores for all models using 10k/10k real/generated samples in Table 7. The pure auto-encoding based WAE has the weakest performance due to blurriness. Our pure autoencoding BMS-VAE (without synthetic likelihoods) improves upon the WAE (39.8 vs 41.2 FID), already demonstrating the effectiveness of using "Best-of-Many-Samples". 
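As a side note, the hinge loss used to train D_I in the training details above is commonly written as

\mathcal{L}_{D_I} = \mathbb{E}_{x \sim p(x)}\big[\max(0,\, 1 - D_I(x))\big] + \mathbb{E}_{\hat{x} \sim p_\theta}\big[\max(0,\, 1 + D_I(\hat{x}))\big],

i.e., real images are pushed above a margin and generated images below it. We state the standard formulation here; the exact thresholds and output scaling used by the authors may differ.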
We see that the base DCGAN has the weakest performance among the GANs. BEGAN suffers from partial mode collapse. The SN-GAN improves upon WGAN-GP, showing the effectiveness of Spectral Normalization. However, there exists considerable artifacts in its generations. The α-GAN of , which integrates the base DCGAN in its framework performs significantly better (31.1 vs 19.2 FID). This shows the effectiveness of VAE-GAN frameworks in increasing quality and diversity of generations. Our enhanced α-GAN + SN regularized with Spectral Normalization performs significantly better (15.1 vs 19.2 FID). This shows the effectiveness of a regularized direct estimate of the synthetic likelihood. Using the gradient penalty regularizer of lead to drop of 0.4 FID. Our BMS-VAE-GAN improves significantly over the α-GAN + SN baseline using the "Best-of-Many-Samples" (13.6 vs 15.1 FID). The at 128×128 resolution mirror the at 64×64. We additionally evaluate using the IoVM metric in Appendix C. We see that by using the "Best-of-Many-Samples" we obtain sharper (Figure 4d) that cover more of the data distribution as shown by both the FID and IoVM. We propose a new objective for training hybrid VAE-GAN frameworks which overcomes key limitations of current hybrid VAE-GANs. We integrate, 1. A "Best-of-Many-Samples" reconstruction likelihood which helps in covering all the modes of the data distribution while maintaining a latent space as close to Gaussian as possible, 2. A stable estimate of the synthetic likelihood ratio.. Our hybrid VAE-GAN framework outperforms state-of-the-art hybrid VAE-GANs and plain GANs in generative modelling on CelebA and CIFAR-10, demonstrating the effectiveness of our approach. We begin with a derivation of the multi-sample objective. We maximize the log-likelihood of the data (x ∼ p(x)). The log-likelihood, assuming the latent space to be distributed according to p(z), Here, p(z) is usually Gaussian. However, the integral in is intractable. VAEs and Hybrid VAEGANs use amortized variational inference using an (approximate) variational distribution q φ (z|x) (jointly learned), To arrive at a tractable objective, the standard VAE objective applies the Jensen inequality at this stage, but this forces the final objective to consider the average data-likelihood. , we apply the Mean Value theorem of Integration to leverage multiple samples, We can lower bound with the minimum value of z, As the term on the right is difficult to estimate, we approximate it using the KL divergence (as in). Intuitively, as the KL divergence heavily penalizes q φ (z|x) if it is high for low values p(z), this ensures that the ratio p(z) /q φ (z |x) is maximized. Similar to , this leads to the "many-sample" objective of the main paper, Next, we provide a detailed derivation of. Again, to enable the estimation of the likelihood ratio p θ (x|z) /p(x) using a neural network, we introduce the auxiliary variable y where, y = 1 denotes that the sample was generated and y = 0 denotes that the sample is from the true distribution. We can now express as (using Bayes theorem), This is because, (assuming independence p(z, and, Assuming, p(y = 0) = p(y = 1) (equally likely to be true or generated), Update D I using hinge loss to produce high values (≥ a) for real images and low (≤ b) otherwise: Update D L using the standard cross-entropy loss: We detail in algorithm 1, how the components R φ, G θ, D I, D L of our BMS-VAE-GAN (see Figure Figure 1) are trained. We follow in designing algorithm 1. 
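Because the loss expressions inside Algorithm 1 did not survive extraction, the following PyTorch-style sketch spells out the generator/encoder update as we read the description above: draw T posterior samples per image, penalize only the best reconstruction, push the least realistic sample through the synthetic likelihood term, and add a classifier-based estimate of the divergence to the prior. Every module and variable name is hypothetical, the weights alpha, beta, lam are placeholders, and the D_I / D_L updates (hinge and cross-entropy losses, as described above) are omitted; this illustrates the objective, it is not the authors' released code.

import torch

def bms_generator_step(x, R_phi, G_theta, D_I, D_L, T=10, alpha=1.0, beta=1.0, lam=1.0):
    # Assumed interfaces: R_phi(x) -> (mu, logvar) of q_phi(z|x);
    # G_theta(z) -> decoded image; D_I(x) -> realism score (high = realistic);
    # D_L(z) -> probability that z was drawn from q_phi rather than the prior.
    B = x.size(0)
    mu, logvar = R_phi(x)
    std = (0.5 * logvar).exp()

    # T posterior samples per image, decoded in one batch.
    eps = torch.randn(T, B, mu.size(1), device=x.device)
    z = mu.unsqueeze(0) + std.unsqueeze(0) * eps                       # (T, B, d_z)
    x_hat = G_theta(z.reshape(T * B, -1)).reshape(T, B, *x.shape[1:])  # (T, B, C, H, W)

    # "Best-of-Many-Samples": only the best of the T reconstructions is penalized.
    rec_err = (x_hat - x.unsqueeze(0)).abs().flatten(2).sum(-1)        # (T, B) L1 errors
    best_rec = rec_err.min(dim=0).values.mean()

    # Synthetic likelihood: make even the least realistic of the T samples look real.
    realism = D_I(x_hat.reshape(T * B, *x.shape[1:])).reshape(T, B)
    synth_ll = realism.min(dim=0).values.mean()

    # Classifier-based estimate of KL(q_phi(z|x) || p(z)) via the density-ratio trick.
    d_l = D_L(z.reshape(T * B, -1)).reshape(T, B)
    kl_est = (torch.log(d_l + 1e-8) - torch.log(1.0 - d_l + 1e-8)).mean()

    return alpha * lam * best_rec - beta * synth_ll + kl_est

In an actual run this step would alternate with the D_I and D_L updates under the two time-scale learning rates mentioned in the training details above.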
However, unlike , we train R φ, G θ jointly as we found it to be computationally cheaper without any loss of performance. Also unlike , we use the hinge loss to update D I as it leads to improved stability (as discussed in the main paper). We additionally evaluate using the IoVM on CelebA in Table 8, using the base DCGAN architecture at 64×64 resolution. We observe that our BMS-VAE-GAN performs better. The improvement is smaller compared to CIFAR-10 because CelebA is less multi-modal compared to CIFAR-10. However, we still observe better overall sample quality from our BMS-VAE-GAN. This means that although difference in data reconstruction is smaller, our BMS-VAE-GAN enables better match the prior in the latent space. Finally, we provide additional examples of closest matches found using IoVM in Figure 3, illustrating regions of the data distribution captured by BMS-VAE-GAN but not captured by SN-GAN or α-GAN + SN. IoVM ↓ SN-GAN 0.0221±0.0003 α-GAN + SN (Ours) T = 1 0.0036±0.0001 BMS-VAE-GAN (Ours) T = 10 0.0034±0.0001 In Figure 4, we compare qualitatively our BMS-VAE-GAN against other state-of-the-art GANs. We use the same settings as in the main paper and use the same DCGAN architecture across methods (as the aim is to evaluate training objectives). Again note that, approaches like use more larger generator/discriminator architectures and can be applied on top. We see that BEGAN produces sharp images (with only a few very minuscule artifacts), but lack diversity -also reflected by the lower FID score in Table 2 of the main paper. In comparison, both SN-GAN and Dist-GAN produce sharp and diverse images (again reflected by the FID score in Table 2 of the main paper) but also introduce artifacts. Dist-GAN introduces relatively fewer artifacts in comparison to SN-GAN . In comparison, our BMS-VAE-GAN strikes the best balance -generating sharp and diverse images with few if any artifacts (also again reflected by the FID scores in the main paper). We also provide additional qualitative examples on CIFAR-10 in Figure 5, highlighting sharper images compared to α-GAN +SN. In Table 9 we include diversity using the LPIPS metric. To compute the LPIPS diversity score 5k samples were randomly generated and the similarity within the batch was computed. We see that our BMS-VAE-GAN generates the most diverse examples on both datasets, further highlighting the effectiveness of our "Best-of-Many-Samples" objective. Table 9: Evaluation using the LPIPS metric.
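The LPIPS diversity score used in Table 9 (pairwise perceptual distance within a batch of generated images) can be sketched with the publicly available lpips package. The pairing strategy below (all pairs within a small batch) is our guess; for the 5k samples mentioned above one would subsample pairs. This is an illustration, not the evaluation script used for Table 9.

import torch
import lpips  # pip install lpips

def lpips_diversity(samples, net="alex"):
    # samples: (N, 3, H, W) tensor scaled to [-1, 1]; higher score = more diverse.
    metric = lpips.LPIPS(net=net)
    dists = []
    with torch.no_grad():
        for i in range(len(samples)):
            for j in range(i + 1, len(samples)):
                dists.append(metric(samples[i:i + 1], samples[j:j + 1]).item())
    return sum(dists) / len(dists)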
We propose a new objective for training hybrid VAE-GANs which leads to significant improvements in mode coverage and quality.
599
scitldr